CN111210465A - Image registration method and device, computer equipment and readable storage medium

Info

Publication number: CN111210465A
Application number: CN201911424759.1A
Authority: CN (China)
Prior art keywords: image, sample, registered, features, value
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111210465B (en)
Inventors: 曲国祥, 马姗姗, 曹晓欢, 薛忠
Assignee: Shanghai United Imaging Intelligent Healthcare Co Ltd

Legal events:
    • Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
    • Priority to CN201911424759.1A
    • Publication of CN111210465A
    • Application granted; publication of CN111210465B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Abstract

The invention relates to an image registration method and apparatus, a computer device, and a readable storage medium. The method comprises the following steps: acquiring an image to be registered and a reference image, the two being images of different modalities; respectively inputting the image to be registered and the reference image into a preset feature extraction model to obtain the features of the image to be registered and the features of the reference image, these features lying in the same feature space; inputting the features of the image to be registered and the features of the reference image into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image; and carrying out image registration on the image to be registered according to the deformation field. The method can register the image to be registered accurately and effectively, improving both the efficiency and the accuracy of registration.

Description

Image registration method and device, computer equipment and readable storage medium
Technical Field
The present invention relates to the field of medical imaging, and in particular, to an image registration method, apparatus, computer device, and readable storage medium.
Background
Image registration is a process of spatially matching two images of different modalities. For example, in registering an image A to an image B, B is the reference image and A is the floating image, and the result is the spatial transformation relationship from image A to image B. Image registration is the basis of medical image processing and plays an important role in image information fusion, auxiliary diagnosis, surgical planning, surgical navigation, basic medical research, and other fields. The essence of image registration is to match the contents of the two images one to one.
In the conventional technology, when images of different modalities are registered, for example an image of modality A and an image of modality B, the image of modality A is first converted into an image of modality B, and the two are then registered using a same-modality registration method.
However, the conventional image registration method has the problem that the registration result is inaccurate.
Disclosure of Invention
Based on this, it is necessary to provide an image registration method, an apparatus, a computer device and a readable storage medium for solving the problem that the conventional image registration method has inaccurate registration result.
In a first aspect, an embodiment of the present invention provides an image registration method, where the method includes:
acquiring an image to be registered and a reference image; the image to be registered and the reference image are images of different modalities;
inputting the image to be registered and the reference image into a preset feature extraction model respectively to obtain the features of the image to be registered and the features of the reference image; the features of the image to be registered and the features of the reference image are features in the same feature space;
inputting the characteristics of the image to be registered and the characteristics of the reference image into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image;
and carrying out image registration on the image to be registered according to the deformation field.
In a second aspect, an embodiment of the present invention provides an image registration apparatus, including:
the first acquisition module is used for acquiring an image to be registered and a reference image; the image to be registered and the reference image are images of different modalities;
the feature extraction module is used for respectively inputting the image to be registered and the reference image into a preset feature extraction model to obtain the features of the image to be registered and the features of the reference image; the features of the image to be registered and the features of the reference image are features in the same feature space;
the second acquisition module is used for inputting the characteristics of the image to be registered and the characteristics of the reference image into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image;
and the registration module is used for carrying out image registration on the image to be registered according to the deformation field.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring an image to be registered and a reference image; the image to be registered and the reference image are images of different modalities;
inputting the image to be registered and the reference image into a preset feature extraction model respectively to obtain the features of the image to be registered and the features of the reference image; the features of the image to be registered and the features of the reference image are features in the same feature space;
inputting the characteristics of the image to be registered and the characteristics of the reference image into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image;
and carrying out image registration on the image to be registered according to the deformation field.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring an image to be registered and a reference image; the image to be registered and the reference image are images of different modalities;
inputting the image to be registered and the reference image into a preset feature extraction model respectively to obtain the features of the image to be registered and the features of the reference image; the features of the image to be registered and the features of the reference image are features in the same feature space;
inputting the characteristics of the image to be registered and the characteristics of the reference image into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image;
and carrying out image registration on the image to be registered according to the deformation field.
In the image registration method and apparatus, the computer device, and the readable storage medium provided in the above embodiments, the computer device acquires an image to be registered and a reference image of different modalities, respectively inputs them into a preset feature extraction model to obtain the features of the image to be registered and the features of the reference image, which lie in the same feature space, inputs these features into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image, and performs image registration on the image to be registered according to the deformation field. Because the features of the image to be registered and the features of the reference image lie in the same feature space, regions of the two images with equal feature values contain the same image content. Therefore, inputting these features into the preset deformation field prediction model yields the deformation field between the image to be registered and the reference image quickly and accurately, which improves the efficiency and accuracy of obtaining the deformation field and, in turn, the efficiency and accuracy of registering the image according to it.
Drawings
FIG. 1 is a schematic diagram of an internal structure of a computer device according to an embodiment;
FIG. 2 is a flowchart illustrating an image registration method according to an embodiment;
FIG. 2a is a schematic flow chart of image registration according to an embodiment;
FIG. 3 is a schematic flowchart of an image registration method according to another embodiment;
FIG. 4 is a schematic flowchart of an image registration method according to another embodiment;
FIG. 4a is a schematic diagram illustrating a training process of a preset feature extraction model according to an embodiment;
FIG. 5 is a schematic flowchart of an image registration method according to another embodiment;
FIG. 6 is a schematic flowchart of an image registration method according to another embodiment;
FIG. 7 is a schematic structural diagram of an image registration apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image registration method provided by the embodiment of the application can be applied to computer equipment shown in fig. 1. The computer device comprises a processor and a memory connected by a system bus, wherein a computer program is stored in the memory, and the steps of the method embodiments described below can be executed when the processor executes the computer program. Optionally, the computer device may further comprise a network interface, a display screen and an input device. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a nonvolatile storage medium storing an operating system and a computer program, and an internal memory. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. Optionally, the computer device may be a server, a personal computer, a personal digital assistant, other terminal devices such as a tablet computer, a mobile phone, and the like, or a cloud or a remote server, and the specific form of the computer device is not limited in the embodiment of the present application.
In the conventional technology, when images of different modalities are registered, for example an image of modality A and an image of modality B, the image of modality A is first converted into an image of modality B, and the two are then registered using a same-modality registration method. However, the conventional image registration method has the problem that the registration result is inaccurate. To this end, an embodiment of the present application provides an image registration method that maps images of different modalities into the same feature space based on a modality-conversion feature-space mapping algorithm, in which corresponding channels of two feature maps derived from different modalities contain the same content information. Images of different modalities but the same content therefore yield the same feature map, and regions where the feature maps take equal values contain the same image content; after the images of different modalities are mapped into the same feature space, the two groups of features are registered with a basic same-modality registration method, thereby realizing registration between images of different modalities. In the present application, images of different modalities are mapped into the same feature space through a preset feature extraction model, and the idea of training this model is as follows. A method based on a generative adversarial network is used to search for the mapping to the feature space. First, mutual reconstruction between images of the two modalities is realized, unsupervised, with the assistance of discriminators. Then, to ensure that no detail is lost while images are generated, that is, that when an image of the CT modality is transferred to the MR modality and then transferred back to a CT image its content remains completely consistent with the original CT image, a cycle consistency loss is introduced, which guarantees that the two different modalities can be converted into each other. Further, to ensure that the feature maps of the images of the two modalities correspond one to one, a feature similarity loss is introduced, which guarantees that the channel numbers and sizes of the features extracted from the images of the different modalities are completely consistent. Images of different modalities are thereby mapped into the same feature space, and the preset feature extraction model is obtained.
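As a concrete illustration of this training setup, the following is a minimal sketch, assuming PyTorch, 2D single-channel images, and toy network depths; the pairing of G_MR/G_CT with D_CT/D_MR follows FIG. 4a, but the class names and layer choices are assumptions, not the patent's architecture:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder generator: the encoder output is the shared-space
    feature map; the decoder turns it into an image of the other modality."""
    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_ch, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        feat = self.encoder(x)     # sample feature in the shared space
        fake = self.decoder(feat)  # predicted image of the other modality
        return feat, fake

class Discriminator(nn.Module):
    """Outputs a value in (0, 1): the probability that the input is real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

g_mr, g_ct = Generator(), Generator()          # G_MR: MR -> CT, G_CT: CT -> MR
d_ct, d_mr = Discriminator(), Discriminator()  # D_CT, D_MR in FIG. 4a
```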
It should be noted that, in the image registration method provided in the embodiment of the present application, the execution subject may be an image registration apparatus, and the image registration apparatus may be implemented as part or all of a computer device by software, hardware, or a combination of software and hardware. In the following method embodiments, the description takes a computer device as the execution subject.
The following describes the technical solution of the present invention and how to solve the above technical problems with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart illustrating an image registration method according to an embodiment. Fig. 2a is a schematic flowchart of image registration according to an embodiment. This embodiment relates to a specific implementation process in which the computer device acquires the features of the image to be registered and of the reference image in the same feature space, inputs these features into a preset deformation field prediction model to obtain the deformation field registering the image to be registered to the reference image, and performs image registration on the image to be registered accordingly. As shown in fig. 2, the method may include:
s201, acquiring an image to be registered and a reference image; the image to be registered and the reference image are images of different modalities.
Specifically, the computer device obtains an image to be registered and a reference image of different modalities. Images of different modalities are images obtained with different imaging principles and different imaging apparatuses; for example, images obtained by apparatuses such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and functional magnetic resonance imaging (fMRI) are images of different modalities. In this embodiment, for example, the image to be registered may be an MRI image and the reference image a CT image. Optionally, the computer device may obtain the image to be registered and the reference image from a Picture Archiving and Communication System (PACS) server, or directly from different medical imaging devices. Optionally, after acquiring the image to be registered and the reference image, the computer device may perform preprocessing operations such as pixel-value normalization and rigid registration on them, where rigid registration refers to simple global movement of the image to be registered, such as translation and rotation.
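The patent does not fix a particular normalization formula; the sketch below shows one common choice (min-max scaling), assuming NumPy volumes, with the rigid pre-alignment left to an external tool:

```python
import numpy as np

def normalize(volume: np.ndarray) -> np.ndarray:
    """Min-max normalisation of pixel/voxel values to [0, 1]."""
    lo, hi = float(volume.min()), float(volume.max())
    return (volume - lo) / (hi - lo + 1e-8)

# A rigid pre-alignment (global translation/rotation only) would be applied
# next, e.g. with a general-purpose toolkit, before the deformable step.
```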
S202, respectively inputting the image to be registered and the reference image into a preset feature extraction model to obtain the features of the image to be registered and the features of the reference image; the features of the image to be registered and the features of the reference image are features in the same feature space.
Specifically, the computer device inputs the image to be registered and the reference image into a preset feature extraction model respectively to obtain the features of the image to be registered and the features of the reference image, which lie in the same feature space. In this feature space, corresponding channels of the feature maps derived from two images of different modalities contain the same content information: images of different modalities showing the same tissue and the same organ yield the same feature map, and regions where the feature maps of the different-modality images take equal values contain the same image content. It can be understood that extracting the features of the image to be registered is a process of mapping its original data into a higher-dimensional feature space, the features in that space being higher-dimensional abstractions of the original data; similarly, extracting the features of the reference image maps its original data into the same higher-dimensional feature space. The features of the image to be registered and of the reference image in the same feature space are thus obtained by mapping both images into the same higher-dimensional feature space through the feature extraction model. Optionally, after the computer device inputs the image to be registered and the reference image into the preset feature extraction model respectively, the feature extraction model may map both images into the same feature space through a series of operations such as convolution and pooling, so as to obtain the features of the image to be registered and the features of the reference image.
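As an illustration of S202, a minimal sketch assuming PyTorch and 2D single-channel images; Encoder, cnn_mr, and cnn_ct are hypothetical names for the two modality-specific extractors, and the convolution-plus-pooling stack is an assumption:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Convolution and pooling stack mapping an image into the shared feature space."""
    def __init__(self, in_ch: int = 1, feat_ch: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, feat_ch, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

cnn_mr, cnn_ct = Encoder(), Encoder()  # one trained encoder per modality
mr = torch.rand(1, 1, 128, 128)        # image to be registered
ct = torch.rand(1, 1, 128, 128)        # reference image
f_mr, f_ct = cnn_mr(mr), cnn_ct(ct)    # features in the same feature space
```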
And S203, inputting the characteristics of the image to be registered and the characteristics of the reference image into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image.
Specifically, the computer device inputs the features of the image to be registered and the features of the reference image into a preset deformation field prediction model to obtain a deformation field that registers the image to be registered to the reference image. It can be understood that, because the features of the image to be registered and the features of the reference image lie in the same feature space, the deformation field prediction model can process them with a registration method based on feature similarity, so as to obtain the deformation field registering the image to be registered to the reference image.
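A sketch of the prediction step, assuming PyTorch and feature maps shaped (N, 32, 64, 64); concatenating the two feature maps and regressing a per-pixel displacement is one common design, not necessarily the patent's exact model:

```python
import torch
import torch.nn as nn

class FieldPredictor(nn.Module):
    """Maps the concatenated feature maps of the two images to a dense
    2-channel displacement field (one (dx, dy) offset per pixel)."""
    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * feat_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1))  # (dx, dy) per pixel

    def forward(self, f_moving, f_fixed):
        return self.net(torch.cat([f_moving, f_fixed], dim=1))

predictor = FieldPredictor()
f_mr = torch.rand(1, 32, 64, 64)  # features of the image to be registered
f_ct = torch.rand(1, 32, 64, 64)  # features of the reference image
flow = predictor(f_mr, f_ct)      # deformation field, shape (1, 2, 64, 64)
```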
And S204, carrying out image registration on the image to be registered according to the deformation field.
Specifically, the computer device performs image registration on the image to be registered according to the deformation field registering it to the reference image, so that, after deformation, the features of the image to be registered achieve the maximum similarity with the features of the reference image, and a registration result is obtained. Exemplarily, taking the image to be registered as an MRI image and the reference image as a CT image, fig. 2a shows a schematic flowchart of the process by which the computer device registers the image to be registered.
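Applying a dense deformation field is typically done by resampling the floating image at displaced positions; a minimal 2D sketch assuming PyTorch, where the warp helper and the pixel-displacement convention are assumptions:

```python
import torch
import torch.nn.functional as F

def warp(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Resample `image` (N, C, H, W) at positions shifted by `flow`
    (N, 2, H, W), whose channels are per-pixel displacements in pixels."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32), indexing="ij")
    x = xs.unsqueeze(0) + flow[:, 0]  # absolute sampling x-coordinates
    y = ys.unsqueeze(0) + flow[:, 1]  # absolute sampling y-coordinates
    # normalise to [-1, 1] as required by grid_sample
    grid = torch.stack((2 * x / (w - 1) - 1, 2 * y / (h - 1) - 1), dim=-1)
    return F.grid_sample(image, grid, align_corners=True)

moving = torch.rand(1, 1, 64, 64)
flow = torch.zeros(1, 2, 64, 64)  # zero field: identity transform
registered = warp(moving, flow)
assert torch.allclose(registered, moving, atol=1e-5)
```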
In this embodiment, the computer device inputs the image to be registered and the reference image into a preset feature extraction model respectively, and the obtained features lie in the same feature space. Because the features of the image to be registered and the features of the reference image lie in the same feature space, regions of the two images with equal feature values contain the same image content. Therefore, inputting these features into the preset deformation field prediction model yields the deformation field between the image to be registered and the reference image quickly and accurately, improving the efficiency and accuracy of obtaining the deformation field and, in turn, the efficiency and accuracy of registering the image to be registered according to it.
Fig. 3 is a flowchart illustrating an image registration method according to another embodiment. The embodiment relates to a specific implementation process for respectively inputting a to-be-registered image and a reference image into a preset feature extraction model by computer equipment to obtain features of the to-be-registered image and features of the reference image. As shown in fig. 3, based on the above embodiment, as an optional implementation manner, the preset feature extraction model includes a first feature extraction model and a second feature extraction model, and the S202 includes:
s301, inputting the image to be registered into the first feature extraction model to obtain the features of the image to be registered.
Specifically, the computer device inputs the image to be registered into the first feature extraction model to obtain the features of the image to be registered. It can be understood that the process of inputting the image to be registered into the first feature extraction model to obtain the features of the image to be registered is to input the image to be registered into the first feature extraction model, and map the original data corresponding to the image to be registered into a higher-dimensional space, so as to obtain the features of the image to be registered. Optionally, the first feature extraction model may perform a series of operations such as convolution and pooling on the input image to be registered, and map the image to be registered to a preset feature space to obtain features of the image to be registered.
And S302, inputting the reference image into the second feature extraction model to obtain the features of the reference image.
Specifically, the computer device inputs the reference image into the second feature extraction model to obtain the features of the reference image. It can be understood that this is a process of inputting the reference image into the second feature extraction model and mapping the original data corresponding to the reference image into the higher-dimensional space described in S301, so as to obtain the features of the reference image. It should be noted that, when the first feature extraction model and the second feature extraction model are trained, a method based on a generative adversarial network is used to search for the feature-space mapping, which ensures that the features of the image to be registered obtained by the first feature extraction model and the features of the reference image obtained by the second feature extraction model lie in the same feature space. Optionally, the second feature extraction model may perform a series of operations such as convolution and pooling on the input reference image and map it to the preset feature space in S301, so as to obtain the features of the reference image.
Optionally, the computer device may input the image to be registered into the first feature extraction model to obtain features of the image to be registered, and then input the reference image into the second feature extraction model to obtain features of the reference image; the image to be registered and the reference image can be simultaneously and respectively input into the first feature extraction model and the second feature extraction model, so that the features of the image to be registered and the features of the reference image are obtained.
In this embodiment, the computer device inputs the image to be registered into the first feature extraction model, inputs the reference image into the second feature extraction model, and can map the original data corresponding to the image to be registered and the original data corresponding to the reference image into the same higher-dimensional feature space through the first feature extraction model and the second feature extraction model, so that the features of the image to be registered and the features of the reference image in the same feature space can be quickly and accurately obtained, and the efficiency and accuracy of obtaining the features of the image to be registered and the features of the reference image are improved.
Fig. 4 is a flowchart illustrating an image registration method according to another embodiment. Fig. 4a is a schematic diagram of a training process of a preset feature extraction model according to an embodiment. The embodiment relates to a specific implementation process for obtaining a preset feature extraction model by computer equipment. As shown in fig. 4, on the basis of the foregoing embodiment, as an optional implementation manner, the training process of the preset feature extraction model includes:
s401, acquiring a first sample image and a second sample image; the first sample image and the second sample image are two images of different modalities.
Specifically, the computer device acquires a first sample image and a second sample image of different modalities. Optionally, as shown in fig. 4a, the first sample image may be an MRI image and the second sample image a CT image. Optionally, the computer device may obtain the first sample image and the second sample image from a Picture Archiving and Communication System (PACS) server, or directly from different medical imaging devices.
S402, inputting the first sample image into a preset first initial neural network to obtain a first sample characteristic and a first predicted image; the modality of the first prediction image is the same as the modality of the second sample image.
Specifically, the computer device inputs the first sample image into a preset first initial neural network (e.g., G_MR in FIG. 4a) to obtain the first sample feature and a first predicted image. The modality of the first predicted image is the same as that of the second sample image; continuing the example in S401 where the first sample image is an MRI image and the second sample image is a CT image, the first predicted image obtained through the preset first initial neural network is also a CT image. The first sample feature is the sample feature corresponding to the first sample image, and the first predicted image is an image of the same modality as the second sample image, generated from the first sample feature.
S403, inputting the first prediction image and the second sample image into a preset first initial judgment network to obtain a first judgment result of the first prediction image; training the first initial discrimination network according to the first discrimination result and the authenticity attribute of the second sample image to obtain a first discrimination network; the first discrimination result is used to indicate an authenticity attribute of the first prediction image.
Specifically, the computer device inputs the first prediction image and the second sample image into a preset first initial discrimination network (e.g., D_CT in FIG. 4a) to obtain a first discrimination result used to indicate the authenticity attribute of the first prediction image; it then obtains the value of the first initial discrimination network loss function according to the first discrimination result and the authenticity attribute of the second sample image, trains the first initial discrimination network according to this value, and determines the corresponding first initial discrimination network as the first discrimination network when the value of the loss function reaches a stable value. Continuing with the example where the first sample image described in S401 is an MRI image and the second sample image is a CT image, optionally, the computer device may utilize the formula

L_D_CT = -log(D_CT(CT)) - log(1 - D_CT(G_MR(MR)))

to obtain the value of the first initial discrimination network loss function, where L_D_CT represents the value of the first initial discrimination network loss function and D_CT(CT) represents the authenticity attribute of the second sample image, with a value between 0 and 1: a larger value of D_CT(CT) indicates a higher probability that the second sample image is real, and a smaller value indicates a higher probability that it is a fake image. For example, if D_CT(CT) is 0.8, the second sample image can be determined to be real; if D_CT(CT) is 0.2, it can be determined to be a fake image. D_CT(G_MR(MR)) represents the first discrimination result, which also takes a value between 0 and 1: a larger value indicates a higher probability that the first prediction image is real, and a smaller value a higher probability that it is a fake image. Illustratively, if D_CT(G_MR(MR)) is 0.9, the first prediction image can be determined to be real; if it is 0.1, it can be determined to be a fake image.
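In code, the discriminator objective above can be written with binary cross-entropy, assuming PyTorch and a discriminator that outputs probabilities in (0, 1); discriminator_loss is a hypothetical helper, not a name from the patent:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_ct, ct_real: torch.Tensor, ct_fake: torch.Tensor) -> torch.Tensor:
    """BCE form of the adversarial loss: push D_CT(CT) toward 1 (real)
    and D_CT(G_MR(MR)) toward 0 (fake)."""
    real_score = d_ct(ct_real)           # D_CT(CT), in (0, 1)
    fake_score = d_ct(ct_fake.detach())  # D_CT(G_MR(MR)); detach so only D updates
    loss_real = F.binary_cross_entropy(real_score, torch.ones_like(real_score))
    loss_fake = F.binary_cross_entropy(fake_score, torch.zeros_like(fake_score))
    return loss_real + loss_fake
```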
S404, inputting a second sample image into a preset second initial neural network to obtain a second sample characteristic and a second predicted image; the modality of the second predicted image is the same as the modality of the first sample image.
Specifically, the computer device inputs the second sample image into a preset second initial neural network (e.g., G_CT in FIG. 4a) to obtain the second sample feature and a second predicted image. The modality of the second predicted image is the same as that of the first sample image; continuing the example in S401 where the first sample image is an MRI image and the second sample image is a CT image, the second predicted image obtained through the preset second initial neural network is an MRI image. The second sample feature is the sample feature corresponding to the second sample image, and the second predicted image is an image of the same modality as the first sample image, generated from the second sample feature.
S405, inputting the second predicted image and the first sample image into a preset second initial judgment network to obtain a second judgment result of the second predicted image; training the second initial discrimination network according to the second discrimination result and the authenticity attribute of the first sample image to obtain a second discrimination network; the second discrimination result is used to indicate the authenticity attribute of the second predicted image.
Specifically, the computer device inputs the second prediction image and the first sample image into a preset second initial discrimination network (e.g., D_MR in FIG. 4a) to obtain a second discrimination result used to indicate the authenticity attribute of the second prediction image; it then obtains the value of the second initial discrimination network loss function according to the second discrimination result and the authenticity attribute of the first sample image, trains the second initial discrimination network according to this value, and determines the corresponding second initial discrimination network as the second discrimination network when the value of the loss function reaches a stable value. Continuing with the example where the first sample image described in S401 is an MRI image and the second sample image is a CT image, optionally, the computer device may utilize the formula

L_D_MR = -log(D_MR(MR)) - log(1 - D_MR(G_CT(CT)))

to obtain the value of the second initial discrimination network loss function, where L_D_MR represents the value of the second initial discrimination network loss function and D_MR(MR) represents the authenticity attribute of the first sample image, with a value between 0 and 1: a larger value of D_MR(MR) indicates a higher probability that the first sample image is real, and a smaller value indicates a higher probability that it is a fake image. For example, if D_MR(MR) is 0.7, the first sample image can be determined to be real; if D_MR(MR) is 0.1, it can be determined to be a fake image. D_MR(G_CT(CT)) represents the second discrimination result, which also takes a value between 0 and 1: a larger value indicates a higher probability that the second prediction image is real, and a smaller value a higher probability that it is a fake image. Illustratively, if D_MR(G_CT(CT)) is 0.9, the second prediction image can be determined to be real; if it is 0.2, it can be determined to be a fake image.
S406, training the first initial neural network and the second initial neural network respectively according to the first sample image, the second sample image, the first prediction image, the second prediction image, the first sample characteristic, the second sample characteristic, the first judgment result and the second judgment result to obtain a first characteristic extraction model and a second characteristic extraction model.
Specifically, the computer device trains the first initial neural network and the second initial neural network respectively according to the first sample image, the second sample image, the first prediction image, the second prediction image, the first sample feature, the second sample feature, the first judgment result and the second judgment result to obtain a first feature extraction model and a second feature extraction model. Optionally, the computer device may obtain a value of a loss function of the first initial neural network and a value of a loss function of the second initial neural network according to the first sample image, the second sample image, the first predicted image, the second predicted image, the first sample feature, the second sample feature, the first discrimination result, and the second discrimination result, train the first initial neural network according to the value of the loss function of the first initial neural network to obtain a first feature extraction model, and train the second initial neural network according to the value of the loss function of the second initial neural network to obtain a second feature extraction model.
In the present embodiment, the computer device can obtain the first sample feature and a first prediction image of the same modality as the second sample image through the first initial neural network, and, by inputting the first prediction image and the second sample image into the preset first initial discrimination network, obtain a first discrimination result indicating the authenticity attribute of the first prediction image. Likewise, it can obtain the second sample feature and a second prediction image of the same modality as the first sample image through the second initial neural network, and, by inputting the second prediction image and the first sample image into the preset second initial discrimination network, obtain a second discrimination result indicating the authenticity attribute of the second prediction image. From the first sample image, the second sample image, the first prediction image, the second prediction image, the first sample feature, the second sample feature, the first discrimination result, and the second discrimination result, the method based on a generative adversarial network searches for the feature-space mapping and realizes mutual reconstruction between the first sample image and the second sample image, so the first initial neural network and the second initial neural network can be trained accurately, improving the accuracy of the obtained first and second feature extraction models. In addition, the computer device can accurately obtain the loss function value of the first initial discrimination network from the first discrimination result and the authenticity attribute of the second sample image, and thus train the first initial discrimination network accurately, improving the accuracy of the obtained first discrimination network; likewise, it can accurately obtain the loss function value of the second initial discrimination network from the second discrimination result and the authenticity attribute of the first sample image, and thus train the second initial discrimination network accurately, improving the accuracy of the obtained second discrimination network.
Fig. 5 is a flowchart illustrating an image registration method according to another embodiment. The embodiment relates to a specific implementation process for obtaining a first feature extraction model and a second feature extraction model by computer equipment. As shown in fig. 5, on the basis of the foregoing embodiment, as an optional implementation manner, the foregoing S406 includes:
s501, obtaining a value of a first loss function according to the first judgment result.
Specifically, the computer device obtains the value of the first loss function according to the first discrimination result. Exemplarily, this embodiment continues to take the first sample image described in S401 as an MRI image and the second sample image as a CT image; optionally, the computer device may utilize the formula

L_G_MR = -log(D_CT(G_MR(MR)))

to obtain the value of the first loss function, where L_G_MR represents the value of the first loss function and D_CT(G_MR(MR)) represents the first discrimination result.
And S502, obtaining a value of a second loss function according to the second judgment result.
Specifically, the computer device obtains the value of the second loss function according to the second discrimination result. Exemplarily, this embodiment continues to take the first sample image described in S401 as an MRI image and the second sample image as a CT image; optionally, the computer device may utilize the formula

L_G_CT = -log(D_MR(G_CT(CT)))

to obtain the value of the second loss function, where L_G_CT represents the value of the second loss function and D_MR(G_CT(CT)) represents the second discrimination result.
S503, inputting the second prediction image into the first initial neural network to obtain a third sample characteristic and a third prediction image; the modality of the third prediction image is the same as the modality of the second sample image.
Specifically, the computer device inputs the second prediction image into the first initial neural network to obtain the third sample feature and a third prediction image; the modality of the third prediction image is the same as that of the second sample image, and the third sample feature is the sample feature corresponding to the second prediction image. Illustratively, continuing to take the first sample image described in S401 as an MRI image and the second sample image as a CT image, the second prediction image is an MRI image; the computer device inputs the second prediction image into the first initial neural network, obtains the third sample feature corresponding to the second prediction image, and obtains a third prediction image of the same modality as the second sample image, i.e., a CT image. The third predicted image is an image of the same modality as the second sample image, generated from the third sample feature corresponding to the second predicted image.
S504, inputting the first prediction image into a second initial neural network to obtain a fourth sample characteristic and a fourth prediction image; the modality of the fourth predicted image is the same as that of the first sample image.
Specifically, the computer device inputs the first prediction image into the second initial neural network to obtain a fourth sample characteristic and a fourth prediction image; and the modality of the fourth prediction image is the same as that of the first sample image, and the fourth sample characteristic is the sample characteristic corresponding to the first prediction image. Illustratively, the present embodiment continues with taking the first sample image described in S401 as an MRI image and the second sample image as a CT image as an example, that is, the first prediction image is a CT image, the computer device inputs the first prediction image into the second initial neural network, obtains a fourth sample feature corresponding to the first prediction image, and obtains a fourth prediction image having the same modality as that of the first sample image, that is, obtains the fourth prediction image as an MRI image. The fourth predicted image is an image of the same modality as the first sample image generated based on the fourth sample feature corresponding to the first predicted image.
S505, a value of a third loss function is obtained according to the second sample image and the third predicted image, and the first sample image and the fourth predicted image.
Specifically, the computer device obtains the value of the third loss function according to the second sample image and the third prediction image, and the first sample image and the fourth prediction image. Optionally, the computer device may obtain the value of a fifth loss function according to the loss between the second sample image and the third prediction image, obtain the value of a sixth loss function according to the loss between the first sample image and the fourth prediction image, and determine the sum of the value of the fifth loss function and the value of the sixth loss function as the value of the third loss function. Optionally, continuing to take the first sample image described in S401 as the MRI image and the second sample image as the CT image, the computer device may use the formula

L_cyc_CT = MSE(CT, G_MR(G_CT(CT)))

to obtain the value of the fifth loss function, use the formula

L_cyc_MR = MSE(MR, G_CT(G_MR(MR)))

to obtain the value of the sixth loss function, and determine the sum of L_cyc_CT and L_cyc_MR as the value L_cycle of the third loss function. In the formulas, L_cyc_CT represents the value of the fifth loss function, MSE is the mean square error function, CT represents the second sample image, G_MR(G_CT(CT)) represents the third predicted image, L_cyc_MR represents the value of the sixth loss function, MR represents the first sample image, and G_CT(G_MR(MR)) represents the fourth predicted image.
And S506, acquiring a value of a fourth loss function according to the first sample characteristic and the fourth sample characteristic, and the second sample characteristic and the third sample characteristic.
Specifically, the computer device obtains the value of the fourth loss function according to the first sample feature and the fourth sample feature, and the second sample feature and the third sample feature. Optionally, the computer device may obtain the value of a seventh loss function according to the loss between the first sample feature and the fourth sample feature, obtain the value of an eighth loss function according to the loss between the second sample feature and the third sample feature, and determine the sum of the two as the value of the fourth loss function. Optionally, continuing to take the first sample image described in S401 as the MRI image and the second sample image as the CT image, the computer device may use the formula L_MR = MSE(CNN_MR(MR), CNN_CT(G_MR(MR))) to obtain the value of the seventh loss function, use the formula L_CT = MSE(CNN_CT(CT), CNN_MR(G_CT(CT))) to obtain the value of the eighth loss function, and determine the sum of L_MR and L_CT as the value L_sim of the fourth loss function. In the formulas, MSE is the mean square error function, CNN_MR(MR) represents the first sample feature, CNN_CT(G_MR(MR)) represents the fourth sample feature, CNN_CT(CT) represents the second sample feature, and CNN_MR(G_CT(CT)) represents the third sample feature. The channel numbers and sizes of the features extracted by CNN_MR and CNN_CT are completely consistent, and the role of the mean square error function MSE is to ensure that each channel of the features extracted from the MR image expresses the same meaning as the corresponding channel of the features extracted from the CT image; the mean square error function may also be replaced by other loss functions, including but not limited to the L1 loss function, mutual information, correlation coefficients, and the like.
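A sketch of this feature similarity loss, assuming PyTorch; cnn_mr and cnn_ct stand for the feature extraction parts of the two networks, fake_ct corresponds to G_MR(MR), fake_mr to G_CT(CT), and similarity_loss is a hypothetical helper:

```python
import torch
import torch.nn.functional as F

def similarity_loss(cnn_mr, cnn_ct, mr, ct, fake_ct, fake_mr) -> torch.Tensor:
    """L_sim: MSE between the shared-space features of each real image
    and the features of its cross-modality counterpart, as in S506."""
    l_mr = F.mse_loss(cnn_mr(mr), cnn_ct(fake_ct))  # first vs fourth sample feature
    l_ct = F.mse_loss(cnn_ct(ct), cnn_mr(fake_mr))  # second vs third sample feature
    return l_mr + l_ct
```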
And S507, training the first initial neural network according to the value of the first loss function, the value of the third loss function and the value of the fourth loss function to obtain a first feature extraction model.
Specifically, the computer device trains the first initial neural network according to the value of the first loss function, the value of the third loss function, and the value of the fourth loss function to obtain the first feature extraction model. Optionally, continuing with the example in the above embodiment, the computer device may determine the sum of the value L_G_MR of the first loss function, the value L_cycle of the third loss function, and the value L_sim of the fourth loss function as the value of the loss function of the first initial neural network, train the first initial neural network accordingly, and determine the first initial neural network whose loss function value reaches a stable value as the first feature extraction model.
And S508, training the second initial neural network according to the value of the second loss function, the value of the third loss function and the value of the fourth loss function to obtain a second feature extraction model.
Specifically, the computer device trains the second initial neural network according to the value of the second loss function, the value of the third loss function, and the value of the fourth loss function to obtain the second feature extraction model. Optionally, continuing with the example in the above embodiment, the computer device may determine the sum of the value L_G_CT of the second loss function, the value L_cycle of the third loss function, and the value L_sim of the fourth loss function as the value of the loss function of the second initial neural network, train the second initial neural network accordingly, and determine the second initial neural network whose loss function value reaches a stable value as the second feature extraction model. Through the first feature extraction model obtained by training the first initial neural network and the second feature extraction model obtained by training the second initial neural network, images of different modalities can be mapped into the same feature space, in which the meanings represented by the feature channels correspond one to one.
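Assembling the per-generator objective in code, assuming PyTorch and equal weighting of the three terms (the patent does not state weights); the -log form of the adversarial term is an assumption consistent with the formulas above:

```python
import torch

def generator_loss(d_out_fake: torch.Tensor,
                   l_cycle: torch.Tensor,
                   l_sim: torch.Tensor) -> torch.Tensor:
    """Total generator loss: adversarial term plus the shared cycle-consistency
    (L_cycle) and feature-similarity (L_sim) terms, equally weighted here."""
    adv = -torch.log(d_out_fake + 1e-8).mean()  # -log D(fake), pushes D(fake) -> 1
    return adv + l_cycle + l_sim
```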
In this embodiment, the computer device can accurately train the first initial neural network through the obtained values of the first, third, and fourth loss functions, improving the accuracy of the obtained first feature extraction model, and can accurately train the second initial neural network through the obtained values of the second, third, and fourth loss functions, improving the accuracy of the obtained second feature extraction model.
Fig. 6 is a flowchart illustrating an image registration method according to another embodiment. The embodiment relates to a specific implementation process for obtaining a preset deformation field prediction model by computer equipment. As shown in fig. 6, on the basis of the foregoing embodiment, as an alternative implementation, the training process of the preset deformation field prediction model includes:
s601, acquiring the characteristics of the first sample image and the characteristics of the second sample image; the features of the first sample image and the features of the second sample image are features of the same feature space.
Specifically, the computer device obtains the features of the first sample image and the features of the second sample image, which lie in the same feature space. Optionally, the computer device may obtain these features through the feature extraction model. It should be noted that, if the obtained sample images are two sample images of different modalities, the sample images may first be mapped into the same feature space through the feature extraction model and the features of the first sample image and of the second sample image then extracted; if the obtained sample images are two sample images of the same modality, the features of the first sample image and of the second sample image can be extracted directly through the feature extraction model.
S602, inputting the characteristics of the first sample image and the characteristics of the second sample image into a preset initial deformation field prediction model to obtain a sample deformation field for registering the first sample image to the second sample image.
Specifically, the computer device inputs the features of the first sample image and the features of the second sample image into a preset initial deformation field prediction model to obtain a sample deformation field for registering the first sample image to the second sample image. The sample deformation field is the spatial mapping relation between the first sample image and the second sample image, and the first sample image can be registered to the second sample image according to the sample deformation field.
S603, training the preset initial deformation field prediction model according to the sample deformation field and the features of the second sample image to obtain the preset deformation field prediction model.
Specifically, the computer device transforms the features of the first sample image according to the sample deformation field to obtain transformed features, then obtains the value of the loss function of the initial deformation field prediction model according to the transformed features and the features of the second sample image, trains the preset initial deformation field prediction model according to this value, and determines the corresponding initial deformation field prediction model as the preset deformation field prediction model when the value of the loss function reaches a stable value. Optionally, the loss function of the initial deformation field prediction model may be a Diff loss function, or may also be a mean square error function or an L1 loss function. Illustratively, taking the features of the first sample image as features of an MRI image and the features of the second sample image as features of a CT image, the computer device may use the formula L = Diff(F_CT, F_MR) to calculate the value of the loss function of the initial deformation field prediction model, where L represents the value of the loss function of the initial deformation field prediction model, F_CT represents the features of the second sample image, and F_MR represents the transformed features obtained by transforming the features of the first sample image.
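A sketch of one training step of the deformation field prediction model, assuming PyTorch, the MSE variant of the loss named above, and a warp helper like the one sketched after S204; field_training_loss is a hypothetical name:

```python
import torch
import torch.nn.functional as F

def field_training_loss(predictor, warp, f_mr: torch.Tensor,
                        f_ct: torch.Tensor) -> torch.Tensor:
    """Predict the sample deformation field, transform the features of the
    first sample image with it, and compare against the second image's features."""
    flow = predictor(f_mr, f_ct)          # sample deformation field
    f_mr_warped = warp(f_mr, flow)        # transformed features F_MR
    return F.mse_loss(f_mr_warped, f_ct)  # MSE stand-in for the Diff loss
```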
In this embodiment, the computer device inputs the features of the first sample image and the features of the second sample image, which lie in the same feature space, into a preset initial deformation field prediction model to obtain a sample deformation field for registering the first sample image to the second sample image. It then transforms the features of the first sample image according to the sample deformation field to obtain transformed features and trains the preset initial deformation field prediction model according to the transformed features and the features of the second sample image. Because the transformed features and the features of the second sample image directly reflect the quality of the predicted deformation field, the preset initial deformation field prediction model can be trained accurately, improving the accuracy of the obtained deformation field prediction model.
It should be understood that, although the steps in the flowcharts of fig. 2-6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 7 is a schematic structural diagram of an image registration apparatus according to an embodiment. As shown in fig. 7, the apparatus may include: a first acquisition module 10, a feature extraction module 11, a second acquisition module 12 and a registration module 13.
Specifically, the first obtaining module 10 is configured to obtain an image to be registered and a reference image; the image to be registered and the reference image are images of different modalities;
the feature extraction module 11 is configured to input the image to be registered and the reference image into a preset feature extraction model respectively, so as to obtain features of the image to be registered and features of the reference image; the features of the image to be registered and the features of the reference image are features in the same feature space;
the second obtaining module 12 is configured to input the features of the image to be registered and the features of the reference image into a preset deformation field prediction model, so as to obtain a deformation field in which the image to be registered is registered to the reference image;
and the registration module 13 is configured to perform image registration on the image to be registered according to the deformation field.
The image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
On the basis of the foregoing embodiment, optionally, the preset feature extraction model includes a first feature extraction model and a second feature extraction model, and the feature extraction module 11 includes a first feature extraction unit and a second feature extraction unit.
Specifically, the first feature extraction unit is configured to input the image to be registered into the first feature extraction model, so as to obtain features of the image to be registered;
and the second feature extraction unit is used for inputting the reference image into the second feature extraction model to obtain the features of the reference image.
The image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
On the basis of the foregoing embodiment, optionally, the apparatus further includes: the device comprises a third acquisition module, a fourth acquisition module, a fifth acquisition module, a sixth acquisition module, a seventh acquisition module and a first training module.
Specifically, the third obtaining module is configured to obtain a first sample image and a second sample image; the first sample image and the second sample image are two images with different modalities;
the fourth obtaining module is used for inputting the first sample image into a preset first initial neural network to obtain a first sample characteristic and a first predicted image; the modality of the first prediction image is the same as that of the second sample image;
the fifth obtaining module is used for inputting the first predicted image and the second sample image into a preset first initial discrimination network to obtain a first discrimination result of the first predicted image; training the first initial discrimination network according to the first discrimination result and the authenticity attribute of the second sample image to obtain a first discrimination network; the first discrimination result is used for indicating the authenticity attribute of the first predicted image;
the sixth acquisition module is used for inputting the second sample image into a preset second initial neural network to obtain a second sample characteristic and a second predicted image; the modality of the second predicted image is the same as that of the first sample image;
the seventh obtaining module is configured to input the second predicted image and the first sample image into a preset second initial discrimination network to obtain a second discrimination result of the second predicted image; training the second initial discrimination network according to the second discrimination result and the authenticity attribute of the first sample image to obtain a second discrimination network; the second discrimination result is used for indicating the authenticity attribute of the second predicted image;
the first training module is used for respectively training the first initial neural network and the second initial neural network according to the first sample image, the second sample image, the first predicted image, the second predicted image, the first sample characteristic, the second sample characteristic, the first judgment result and the second judgment result to obtain a first characteristic extraction model and a second characteristic extraction model.
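The training scheme described by these modules resembles adversarial, CycleGAN-style cross-modality synthesis. As a hedged sketch, assuming binary cross-entropy adversarial losses (the patent does not name the adversarial loss), the discrimination-network and generator objectives could look like this:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Real images of the target modality are labelled 1, predicted images 0."""
    real = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
    fake = F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    return real + fake

def generator_adversarial_loss(d_fake: torch.Tensor) -> torch.Tensor:
    """The generating network wants its prediction judged authentic."""
    return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))

# Stand-in logits; in training these come from the first discrimination
# network applied to the second sample image and the first predicted image.
d_real_ct, d_fake_ct = torch.randn(4, 1), torch.randn(4, 1)
loss_d1 = discriminator_loss(d_real_ct, d_fake_ct)  # trains the first discrimination network
loss_g1 = generator_adversarial_loss(d_fake_ct)     # feeds the first loss function
```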
The image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
On the basis of the foregoing embodiment, optionally, the first training module includes: the device comprises a first acquisition unit, a second acquisition unit, a third acquisition unit, a fourth acquisition unit, a fifth acquisition unit, a sixth acquisition unit, a first training unit and a second training unit.
Specifically, the first obtaining unit is configured to obtain a value of a first loss function according to the first discrimination result;
a second obtaining unit, configured to obtain a value of a second loss function according to the second discrimination result;
the third obtaining unit is used for inputting the second prediction image into the first initial neural network to obtain a third sample characteristic and a third prediction image; the modality of the third prediction image is the same as that of the second sample image;
the fourth obtaining unit is used for inputting the first prediction image into the second initial neural network to obtain a fourth sample characteristic and a fourth prediction image; the modality of the fourth predicted image is the same as that of the first sample image;
a fifth obtaining unit configured to obtain a value of a third loss function from the second sample image and the third prediction image, and from the first sample image and the fourth prediction image;
a sixth obtaining unit, configured to obtain a value of a fourth loss function according to the first sample characteristic and the fourth sample characteristic, and the second sample characteristic and the third sample characteristic;
the first training unit is used for training the first initial neural network according to the value of the first loss function, the value of the third loss function and the value of the fourth loss function to obtain a first feature extraction model;
and the second training unit is used for training the second initial neural network according to the value of the second loss function, the value of the third loss function and the value of the fourth loss function to obtain a second feature extraction model.
The image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
On the basis of the foregoing embodiment, optionally, the fifth obtaining unit is specifically configured to obtain a value of a fifth loss function according to a loss between the second sample image and the third predicted image; obtaining a value of a sixth loss function according to the loss between the first sample image and the fourth prediction image; the sum of the value of the fifth loss function and the value of the sixth loss function is determined as the value of the third loss function.
The image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
On the basis of the foregoing embodiment, optionally, the sixth obtaining unit is specifically configured to obtain a value of a seventh loss function according to a loss between the first sample feature and the fourth sample feature; obtaining a value of an eighth loss function according to the loss between the second sample characteristic and the third sample characteristic; the sum of the value of the seventh loss function and the value of the eighth loss function is determined as the value of the fourth loss function.
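Following the unit descriptions above, the third and fourth loss values are each the sum of two per-pair losses. A minimal sketch is given below; using L1 as the per-term loss is an assumed choice, since the patent only speaks of "the loss between" each pair:

```python
import torch
import torch.nn.functional as F

def third_loss(second_sample, third_pred, first_sample, fourth_pred):
    fifth = F.l1_loss(third_pred, second_sample)   # cycle back to the second modality
    sixth = F.l1_loss(fourth_pred, first_sample)   # cycle back to the first modality
    return fifth + sixth

def fourth_loss(first_feat, fourth_feat, second_feat, third_feat):
    seventh = F.l1_loss(first_feat, fourth_feat)   # shared-space feature consistency
    eighth = F.l1_loss(second_feat, third_feat)
    return seventh + eighth

imgs = [torch.randn(1, 1, 16, 32, 32) for _ in range(4)]
l3 = third_loss(*imgs)    # value of the third loss function
feats = [torch.randn(1, 32, 16, 32, 32) for _ in range(4)]
l4 = fourth_loss(*feats)  # value of the fourth loss function
```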
The image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
On the basis of the foregoing embodiment, optionally, the apparatus further includes: the device comprises an eighth acquisition module, a ninth acquisition module, a transformation module and a second training module.
Specifically, the eighth obtaining module is configured to obtain the features of the first sample image and the features of the second sample image; the features of the first sample image and the features of the second sample image are features in the same feature space;
a ninth obtaining module, configured to input the features of the first sample image and the features of the second sample image into a preset initial deformation field prediction model, so as to obtain a sample deformation field in which the first sample image is registered to the second sample image;
and the second training module is used for training the preset initial deformation field prediction model according to the sample deformation field and the features of the second sample image to obtain the preset deformation field prediction model.
The image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
For the specific definition of the image registration apparatus, reference may be made to the definition of the image registration method above, which is not repeated here. Each module in the image registration apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke them and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an image to be registered and a reference image; the image to be registered and the reference image are images of different modalities;
respectively inputting the image to be registered and the reference image into a preset feature extraction model to obtain the features of the image to be registered and the features of the reference image; the features of the image to be registered and the features of the reference image are features in the same feature space;
inputting the characteristics of the image to be registered and the characteristics of the reference image into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image;
and carrying out image registration on the image to be registered according to the deformation field.
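Tying the four processor steps together, a hedged end-to-end inference sketch is shown below; Encoder, DeformationFieldNet and warp refer to the illustrative helpers sketched earlier and are assumptions rather than the patent's actual code:

```python
import torch

@torch.no_grad()
def register(moving, fixed, enc_moving, enc_fixed, field_net, warp_fn):
    f_moving = enc_moving(moving)         # features of the image to be registered
    f_fixed = enc_fixed(fixed)            # features of the reference image
    field = field_net(f_moving, f_fixed)  # deformation field
    return warp_fn(moving, field)         # image registered to the reference

# Example wiring with the earlier sketches:
# registered = register(mr, ct, encoder_mr, encoder_ct,
#                       DeformationFieldNet(), warp)
```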
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the following steps are implemented:
acquiring an image to be registered and a reference image; the image to be registered and the reference image are images of different modalities;
respectively inputting the image to be registered and the reference image into a preset feature extraction model to obtain the features of the image to be registered and the features of the reference image; the features of the image to be registered and the features of the reference image are features in the same feature space;
inputting the characteristics of the image to be registered and the characteristics of the reference image into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image;
and carrying out image registration on the image to be registered according to the deformation field.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of image registration, the method comprising:
acquiring an image to be registered and a reference image; the image to be registered and the reference image are images of different modalities;
inputting the image to be registered and the reference image into a preset feature extraction model respectively to obtain the features of the image to be registered and the features of the reference image; the features of the image to be registered and the features of the reference image are features in the same feature space;
inputting the characteristics of the image to be registered and the characteristics of the reference image into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image;
and carrying out image registration on the image to be registered according to the deformation field.
2. The method according to claim 1, wherein the preset feature extraction model comprises a first feature extraction model and a second feature extraction model; the step of respectively inputting the image to be registered and the reference image into a preset feature extraction model to obtain the features of the image to be registered and the features of the reference image comprises the following steps:
inputting the image to be registered into the first feature extraction model to obtain the features of the image to be registered;
and inputting the reference image into the second feature extraction model to obtain the features of the reference image.
3. The method of claim 2, wherein the training process of the preset feature extraction model comprises:
acquiring a first sample image and a second sample image; the first sample image and the second sample image are two images of different modalities;
inputting the first sample image into a preset first initial neural network to obtain a first sample characteristic and a first predicted image; the modality of the first prediction image is the same as that of the second sample image;
inputting the first prediction image and the second sample image into a preset first initial discrimination network to obtain a first discrimination result of the first prediction image; training the first initial discrimination network according to the first discrimination result and the authenticity attribute of the second sample image to obtain a first discrimination network; the first discrimination result is used for indicating the authenticity attribute of the first prediction image;
inputting the second sample image into a preset second initial neural network to obtain a second sample characteristic and a second predicted image; the modality of the second prediction image is the same as that of the first sample image;
inputting the second predicted image and the first sample image into a preset second initial discrimination network to obtain a second discrimination result of the second predicted image; training the second initial discrimination network according to the second discrimination result and the authenticity attribute of the first sample image to obtain a second discrimination network; the second discrimination result is used for indicating the authenticity attribute of the second predicted image;
and training the first initial neural network and the second initial neural network respectively according to the first sample image, the second sample image, the first prediction image, the second prediction image, the first sample feature, the second sample feature, the first judgment result and the second judgment result to obtain the first feature extraction model and the second feature extraction model.
4. The method according to claim 3, wherein the training the first initial neural network and the second initial neural network respectively according to the first sample image, the second sample image, the first prediction image, the second prediction image, the first sample feature, the second sample feature, the first discrimination result, and the second discrimination result to obtain the first feature extraction model and the second feature extraction model comprises:
obtaining a value of a first loss function according to the first judgment result;
obtaining a value of a second loss function according to the second judgment result;
inputting the second predicted image into the first initial neural network to obtain a third sample characteristic and a third predicted image; the modality of the third prediction image is the same as that of the second sample image;
inputting the first prediction image into the second initial neural network to obtain a fourth sample characteristic and a fourth prediction image; the modality of the fourth prediction image is the same as that of the first sample image;
acquiring a value of a third loss function according to the second sample image and the third predicted image, and the first sample image and the fourth predicted image;
obtaining a value of a fourth loss function according to the first sample characteristic and the fourth sample characteristic, and the second sample characteristic and the third sample characteristic;
training the first initial neural network according to the value of the first loss function, the value of the third loss function and the value of the fourth loss function to obtain the first feature extraction model;
and training the second initial neural network according to the value of the second loss function, the value of the third loss function and the value of the fourth loss function to obtain the second feature extraction model.
5. The method according to claim 4, wherein said obtaining a value of a third loss function according to the second sample image and the third predicted image, and the first sample image and the fourth predicted image comprises:
obtaining a value of a fifth loss function according to the loss between the second sample image and the third prediction image;
obtaining a value of a sixth loss function according to the loss between the first sample image and the fourth prediction image;
determining a sum of the value of the fifth loss function and the value of the sixth loss function as the value of the third loss function.
6. The method of claim 4, wherein said obtaining a value of a fourth loss function according to the first sample characteristic and the fourth sample characteristic, and the second sample characteristic and the third sample characteristic comprises:
obtaining a value of a seventh loss function according to the loss between the first sample characteristic and the fourth sample characteristic;
obtaining a value of an eighth loss function according to the loss between the second sample characteristic and the third sample characteristic;
determining a sum of the value of the seventh loss function and the value of the eighth loss function as the value of the fourth loss function.
7. The method according to claim 1, wherein the training process of the preset deformation field prediction model comprises:
acquiring the features of the first sample image and the features of the second sample image; the features of the first sample image and the features of the second sample image are features in the same feature space;
inputting the features of the first sample image and the features of the second sample image into a preset initial deformation field prediction model to obtain a sample deformation field for registering the first sample image to the second sample image;
and training the preset initial deformation field prediction model according to the sample deformation field and the features of the second sample image to obtain the preset deformation field prediction model.
8. An image registration apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring an image to be registered and a reference image; the image to be registered and the reference image are images of different modalities;
the feature extraction module is used for respectively inputting the image to be registered and the reference image into a preset feature extraction model to obtain the features of the image to be registered and the features of the reference image; the features of the image to be registered and the features of the reference image are features in the same feature space;
the second acquisition module is used for inputting the features of the image to be registered and the features of the reference image into a preset deformation field prediction model to obtain a deformation field for registering the image to be registered to the reference image;
and the registration module is used for carrying out image registration on the image to be registered according to the deformation field.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201911424759.1A 2019-12-31 2019-12-31 Image registration method, image registration device, computer equipment and readable storage medium Active CN111210465B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911424759.1A CN111210465B (en) 2019-12-31 2019-12-31 Image registration method, image registration device, computer equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911424759.1A CN111210465B (en) 2019-12-31 2019-12-31 Image registration method, image registration device, computer equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111210465A (en) 2020-05-29
CN111210465B CN111210465B (en) 2024-03-22

Family

ID=70788304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911424759.1A Active CN111210465B (en) 2019-12-31 2019-12-31 Image registration method, image registration device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111210465B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080267483A1 (en) * 2007-04-30 2008-10-30 Siemens Medical Solutions Usa, Inc. Registration of Medical Images Using Learned-Based Matching Functions
CN109598745A (en) * 2018-12-25 2019-04-09 上海联影智能医疗科技有限公司 Method for registering images, device and computer equipment
CN109767460A (en) * 2018-12-27 2019-05-17 上海商汤智能科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN109767461A (en) * 2018-12-28 2019-05-17 上海联影智能医疗科技有限公司 Medical image registration method, device, computer equipment and storage medium
CN110021037A (en) * 2019-04-17 2019-07-16 南昌航空大学 A kind of image non-rigid registration method and system based on generation confrontation network
CN110599526A (en) * 2019-08-06 2019-12-20 上海联影智能医疗科技有限公司 Image registration method, computer device, and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KEYVAN KASIRI ET AL.: "Self-similarity measure for multi-modal image registration", pages 4498-4502 *
LIU WEI: "Research on Key Technologies of Medical Image Registration" (医学图像配准的关键技术研究), no. 01, pages 138-102 *
ZHANG JIE ET AL.: "Multi-modal image registration of the 'Virtual Chinese Human Male No. 1'" ("虚拟中国人男性一号"多模态图像配准), no. 05, pages 329-332 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102385A (en) * 2020-08-20 2020-12-18 复旦大学 Multi-modal liver magnetic resonance image registration system based on deep learning
CN112419376A (en) * 2020-11-20 2021-02-26 上海联影智能医疗科技有限公司 Image registration method, electronic device and storage medium
CN112419376B (en) * 2020-11-20 2024-02-27 上海联影智能医疗科技有限公司 Image registration method, electronic device and storage medium
CN113538533A (en) * 2021-06-22 2021-10-22 南方医科大学 Spine registration method, spine registration device, spine registration equipment and computer storage medium
CN113870327A (en) * 2021-09-18 2021-12-31 大连理工大学 Medical image registration method based on multi-level deformation field prediction
EP4231234A1 (en) * 2022-02-16 2023-08-23 Siemens Medical Solutions USA, Inc. Deep learning for registering anatomical to functional images

Also Published As

Publication number Publication date
CN111210465B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN111210465B (en) Image registration method, image registration device, computer equipment and readable storage medium
US10810735B2 (en) Method and apparatus for analyzing medical image
CN110148192B (en) Medical image imaging method, device, computer equipment and storage medium
US10210613B2 (en) Multiple landmark detection in medical images based on hierarchical feature learning and end-to-end training
CN109598745B (en) Image registration method and device and computer equipment
CN109754447B (en) Image generation method, device, equipment and storage medium
CN111429421B (en) Model generation method, medical image segmentation method, device, equipment and medium
CN110599526B (en) Image registration method, computer device, and storage medium
CN109697740B (en) Image reconstruction method and device and computer equipment
CN111161269B (en) Image segmentation method, computer device, and readable storage medium
CN112801875B (en) Super-resolution reconstruction method and device, computer equipment and storage medium
WO2022057309A1 (en) Lung feature recognition method and apparatus, computer device, and storage medium
CN112102235B (en) Human body part recognition method, computer device, and storage medium
CN111583184A (en) Image analysis method, network, computer device, and storage medium
WO2022032824A1 (en) Image segmentation method and apparatus, device, and storage medium
CN112530550A (en) Image report generation method and device, computer equipment and storage medium
CN110490841B (en) Computer-aided image analysis method, computer device and storage medium
CN111027469B (en) Human body part recognition method, computer device, and readable storage medium
EP3044756A1 (en) Method and apparatus for generating image alignment data
CN111582449B (en) Training method, device, equipment and storage medium of target domain detection network
CN113192031A (en) Blood vessel analysis method, blood vessel analysis device, computer equipment and storage medium
CN115331071A (en) Tuberculous meningoencephalitis prediction method and system based on multi-scale feature map
Gass et al. Detection and correction of inconsistency-based errors in non-rigid registration
CN112669450A (en) Human body model construction method and personalized human body model construction method
CN112801908A (en) Image denoising method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant