CN113034453B - Breast image registration method based on deep learning - Google Patents

Breast image registration method based on deep learning

Info

Publication number
CN113034453B
CN113034453B (application CN202110279530.4A)
Authority
CN
China
Prior art keywords
image
registration
deformation field
network
moving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110279530.4A
Other languages
Chinese (zh)
Other versions
CN113034453A (en)
Inventor
欧阳效芸
谢耀钦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN202110279530.4A
Publication of CN113034453A
Priority to PCT/CN2021/137602
Application granted
Publication of CN113034453B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast

Abstract

The invention discloses a breast image registration method based on deep learning. The method comprises the following steps: preprocessing a first-body-position breast image and a second-body-position breast image, and constructing a training data set with the first-body-position breast image as the fixed image and the second-body-position breast image as the moving image; inputting the training data set into a registration network and outputting a deformation field, where the deformation field represents the direction and distance by which each voxel of the moving image moves, and applying the obtained deformation field to the moving image to obtain a transformed moving image; and training the registration network by calculating loss function values between the deformation field, the fixed image and the transformed moving image until a set optimization convergence condition is met, yielding a trained registration network for subsequent breast image registration. The invention achieves more accurate registration at a high registration speed.

Description

Breast image registration method based on deep learning
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a breast image registration method based on deep learning.
Background
Supine and prone breast images are breast images of the same patient acquired by professional imaging equipment in different body positions. Image registration is the process of applying one transformation, or a series of transformations, between two images so that corresponding points in the two images coincide in spatial location. Because breast tissue is soft and the patient's body position changes, the breast undergoes large shape changes between body positions, which makes it difficult for conventional registration methods to bring corresponding points of supine and prone breast images into spatial agreement. Registration of supine and prone breast images has potential applications in breast cancer diagnosis, breast cancer surgery, and radiotherapy planning.
To support these applications, existing supine-prone breast image registration methods fall into two main categories:
1) Registration methods based on a biomechanical model. Assume one of the images is the fixed image and the other is the moving image. The specific workflow is: (a) model the internal forces between breast tissues and the gravity acting on the breast in different body positions; (b) predict the transformation between the moving and fixed images with a model solver; (c) compute the similarity between the transformed moving image and the fixed image; (d) if the similarity satisfies the set condition, registration is complete; otherwise, optimize the model parameters and repeat (a) to (d) until the condition is met.
2) Hybrid methods combining a biomechanical model with grayscale-based non-rigid registration. These methods first coarsely register the breast images with a biomechanical model, then refine the result with grayscale-based non-rigid registration, reducing the error of the purely biomechanical approach and achieving more accurate registration.
Both categories require biomechanical modeling. The modeling process must account for the various internal and external forces acting on breast tissue, the biomechanical model parameters may differ between individuals, and registration takes too long to meet clinical requirements.
Existing deep-learning-based large-deformation registration methods mainly target brain, lung and liver images, where the deformations involved are not as large as those between supine and prone breast images. At present there is no deep learning network dedicated to breast image registration.
Among existing deep-learning-based registration methods, large-deformation problems such as supine-prone breast registration are usually solved in multiple steps, each step aiming to produce a deformation field that brings the images closer together, that is, to keep the transformation between the two images smooth. These methods sometimes also require rigid or affine pre-registration with traditional methods.
In summary, existing supine-prone breast image registration methods suffer from complex modeling, low registration accuracy, slow registration speed and inter-individual model differences, and no breast image registration method based on deep learning exists.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a deep-learning-based breast image registration method, a new technical scheme for supine-prone breast image registration based on deep learning.
The invention provides a breast image registration method based on deep learning. The method comprises the following steps:
step S1, preprocessing a first-body-position breast image and a second-body-position breast image, and constructing a training data set with the first-body-position breast image as the fixed image and the second-body-position breast image as the moving image;
step S2, inputting the training data set into a registration network and outputting a deformation field, wherein the deformation field represents the direction and distance by which each voxel of the moving image moves, and applying the obtained deformation field to the moving image to obtain a transformed moving image;
step S3, training the registration network, and calculating loss function values between the deformation field, the fixed image and the transformed moving image until a set optimization convergence condition is met, to obtain the trained registration network for subsequent breast image registration.
Compared with the prior art, and unlike existing supine-prone breast image registration methods, the invention needs no complex modeling process: only a registration network has to be built, and both registration speed and registration accuracy are high. Unlike existing breast image registration methods, registration performance does not differ noticeably between individuals, and the same network can register a supine breast image to a prone breast image or a prone breast image to a supine breast image simply by swapping the order of the images fed to the network input layer.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a method of deep learning based breast image registration according to an embodiment of the present invention;
fig. 2 is a flow chart of deep learning based breast image registration in supine and prone positions according to one embodiment of the present invention;
FIG. 3 is a schematic view of a supine and prone breast image according to one embodiment of the present invention;
fig. 4 is a schematic diagram of the general architecture of a supine and prone breast image registration network according to one embodiment of the present invention;
fig. 5 is a schematic diagram of an affine registration network of breast images in supine and prone positions according to one embodiment of the present invention;
FIG. 6 is a schematic diagram of an elastic registration network for supine and prone breast images according to one embodiment of the present invention;
fig. 7 is a diagram illustrating registration results of a registration network according to an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In brief, the invention first feeds the preprocessed image data into the registration network, which generates a deformation field. The deformation field is then applied to the image to be registered to obtain the transformed image. A loss function is evaluated on the deformation field and between the fixed image and the transformed image to be registered. The loss value is passed to a deep learning optimizer, which updates the network parameters until the set conditions are met.
Specifically, as shown in fig. 1 and fig. 2, the provided breast image registration method based on deep learning includes the following steps.
And S100, preprocessing the acquired breast images of different body positions to construct a data set.
In the following, registration between breast images in two different body positions, supine and prone, is taken as an example.
In one embodiment, the process of pre-processing the image comprises:
and step S101, acquiring image data and segmenting a mammary gland partial image.
Since the acquired image data includes unneeded structures such as bone, heart and liver, the breast region is segmented out of the image, for example with the segmentation software 3D Slicer.
Step S102, rotating the prone breast image to reduce the learning burden of the affine registration.
Step S103, cropping the segmented breast image to remove redundant background.
Step S104, adjusting the voxel spacing and size of the images to reduce the registration difficulty caused by voxel-spacing differences between body positions.
Step S105, normalizing the voxel values to [0,1].
Step S106, performing data augmentation to enlarge the data set.
To improve the generalization capability of the network, the preprocessing in the invention also includes data augmentation. Specifically, the data produced by steps S101-S104 are deformed by cubic B-spline transformation, and any supine image and any prone image of the same individual are combined along the channel dimension. The combination order may place either the supine image or the prone image on the first channel; in any case, the image on the first channel is the fixed image I_F and the image on the second channel is the moving image I_M, which gives the final data set. For example, in the obtained data set, 2300 combined images form the training set, 100 the validation set and 100 the test set.
In this step, the constructed data set contains correspondences between fixed image and moving image spatial location points in different body positions.
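As a minimal illustration of steps S103-S106, the sketch below builds one two-channel training sample with NumPy and SciPy. The function names, the target spacing and the crop rule are hypothetical conveniences, and segmentation (step S101) is assumed to have been done externally, for example in 3D Slicer.

```python
import numpy as np
from scipy.ndimage import zoom

def resample(img, spacing, target_spacing=(1.0, 1.0, 1.0)):
    """Step S104 (assumed spacing): bring a volume to a common voxel spacing."""
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(img, factors, order=1)

def crop_to(img, shape):
    """Step S103 (simplified): center-crop a volume to a target shape."""
    starts = [(s - t) // 2 for s, t in zip(img.shape, shape)]
    slices = tuple(slice(st, st + t) for st, t in zip(starts, shape))
    return img[slices]

def normalize(img):
    """Step S105: scale voxel values into [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

def make_sample(supine, prone, supine_spacing, prone_spacing):
    """Combine one supine and one prone volume of the same individual along
    the channel dimension; channel 0 is the fixed image I_F, channel 1 the
    moving image I_M (the order may also be swapped, as described above)."""
    fixed = resample(supine, supine_spacing)
    moving = resample(prone, prone_spacing)
    shape = tuple(min(f, m) for f, m in zip(fixed.shape, moving.shape))
    fixed, moving = normalize(crop_to(fixed, shape)), normalize(crop_to(moving, shape))
    return np.stack([fixed, moving], axis=0)  # shape (2, D, H, W)
```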
Step S200, inputting the training data set into the registration network and outputting a deformation field, where the deformation field represents the direction and distance by which each voxel of the moving image moves, and applying the obtained deformation field to the moving image to obtain a transformed moving image.
Specifically, the step S200 includes:
step S201, an affine registration network is used to implement affine registration preprocessing.
The deformation between supine and prone breast images is large, as shown in fig. 3, where the left image is the supine breast image and the right image is the prone breast image. A single network does not register such images well, so multiple networks are used for breast image registration. To reduce the data preprocessing steps as much as possible and achieve end-to-end registration, the affine pre-registration step is itself implemented with an affine registration network. All images to be registered therefore share the same affine registration network, avoiding the manual parameter tuning required by traditional affine registration methods.
Furthermore, for memory reasons, in one embodiment the registration network for supine and prone breast images is designed to contain one affine registration network and three elastic registration networks. The overall architecture of the network is shown in fig. 4.
Step S202, inputting the preprocessed fixed image I_F and moving image I_M into the affine registration network, performing affine registration, and outputting the deformation field φ_1. The deformation field φ_1 represents the direction and distance by which each voxel of the moving image I_M moves.
Step S203, the spatial transformation network (STN) takes the deformation field φ_1 and the moving image I_M as input and outputs the moving image I_M' obtained by warping the moving image with the deformation field.
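An STN-style warp of the kind used in step S203 can be written in a few lines of PyTorch. This is a generic sketch of trilinear warping with a dense displacement field, not the patent's exact implementation; the voxel-displacement convention is an assumption.

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp a moving volume with a dense deformation field.
    moving: (N, C, D, H, W); flow: (N, 3, D, H, W) voxel displacements (z, y, x)."""
    N, _, D, H, W = moving.shape
    # Identity sampling grid in voxel coordinates.
    zz, yy, xx = torch.meshgrid(
        torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((zz, yy, xx)).float().to(moving.device)  # (3, D, H, W)
    coords = grid.unsqueeze(0) + flow  # displaced sampling positions
    # Normalize each axis to [-1, 1] as grid_sample expects.
    for i, size in enumerate((D, H, W)):
        coords[:, i] = 2.0 * coords[:, i] / (size - 1) - 1.0
    # Reorder channels to (x, y, z), the order grid_sample expects.
    coords = coords.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]
    return F.grid_sample(moving, coords, mode="bilinear", align_corners=True)
```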
Step S204, inputting the fixed image I_F and the moving image I_M' transformed by the affine registration network into the first elastic registration network for local registration, and outputting the deformation field φ_2. The deformation field φ_2 represents the direction and distance by which each voxel of the transformed moving image I_M' moves.
Step S205, combining the obtained deformation fields φ_1 and φ_2 to obtain the combined deformation field φ_{1,2} (the composition of φ_1 followed by φ_2).
Step S206, inputting φ_{1,2} and I_M into the spatial transformation network to obtain the transformed moving image I_M''.
Step S207, inputting the fixed image I_F and the transformed moving image I_M'' into the second elastic registration network for local registration, and outputting the deformation field φ_3. The deformation field φ_3 represents the direction and distance by which each voxel of the transformed moving image I_M'' moves.
Step S208, combining the obtained deformation fields φ_1, φ_2 and φ_3 to obtain the combined deformation field φ_{1,2,3}.
Step S209, inputting φ_{1,2,3} and I_M into the spatial transformation network to obtain the transformed moving image I_M'''.
Step S210, inputting the fixed image I_F and the transformed moving image I_M''' into the third elastic registration network for local registration, and outputting the deformation field φ_4. The deformation field φ_4 represents the direction and distance by which each voxel of the transformed moving image I_M''' moves.
Step S211, combining the obtained deformation fields φ_1, φ_2, φ_3 and φ_4 to obtain the combined deformation field φ_{1,2,3,4}.
Step S212, inputting φ_{1,2,3,4} and I_M into the spatial transformation network to obtain the transformed moving image I_M''''.
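One common way to realize the "combining" in steps S205, S208 and S211 is to warp the earlier displacement field by the later one and add the later displacement. The sketch below follows that convention and reuses the warp function from the STN sketch above; it is an assumed implementation, since the patent does not spell out the composition formula.

```python
def compose(flow_a, flow_b):
    """Compose two displacement fields so that applying the result to I_M is
    (approximately) equivalent to applying flow_a first, then flow_b.
    flow_a, flow_b: (N, 3, D, H, W)."""
    # (phi_b o phi_a)(x) = flow_a(x) + flow_b(x + flow_a(x))
    return flow_a + warp(flow_b, flow_a)

# Usage, matching steps S205-S212:
# flow_12 = compose(flow_1, flow_2)      # phi_{1,2}
# moved_2 = warp(moving, flow_12)        # I_M''
# flow_123 = compose(flow_12, flow_3)    # phi_{1,2,3}
```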
Further, the step S201 includes:
step S201.1, designing an affine registration network, which mainly comprises an input module, a down-sampling module, an affine transformation parameter output module and a whole image deformation field module.
An example of an affine registration network is shown in fig. 5, where the numbers represent channel counts and rectangles of different colors represent different operations.
The input module reads the preprocessed data set into the input layer of the network as the network input.
The downsampling module reduces the size of the input-layer image through a series of operations. For example, the operation order is: one convolution with a 3×3×3 kernel and stride 1 followed by a LeakyReLU activation; one convolution with a 3×3×3 kernel and stride 2 followed by a LeakyReLU activation; then 4 residual blocks alternating with 4 stride-2 3×3×3 convolutions, each followed by a LeakyReLU activation. Within a residual block, two LeakyReLU activations alternate with two 3×3×3 stride-1 convolutions, and finally the input of the residual block is added to the output of the second convolution to form the block's output.
The affine transformation parameter output module further processes the output of the downsampling module; for example, it comprises a convolution with a 3×3×3 kernel and stride 1, a LeakyReLU activation, and convolutions whose outputs are 9 numbers and 3 numbers respectively. The 9 numbers represent the shearing, scaling and rotation parameters of the affine transformation; the 3 numbers are the translation parameters. From the resulting 12 affine transformation parameters, a full-image deformation field is computed, which is the role of the full-image deformation field module. The deformation field has the same size as the moving image; since each point of three-dimensional space can move in three directions, the channel dimension of the deformation field is 3.
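The step from 12 affine parameters to a full-image deformation field can be sketched with PyTorch's affine_grid: the 9 linear parameters plus 3 translations map naturally onto a 3×4 matrix. Expressing the result as a displacement field in normalized coordinates is an assumed detail, not something the patent specifies.

```python
import torch
import torch.nn.functional as F

def affine_params_to_flow(params, shape):
    """params: (N, 12) network output; shape: (D, H, W).
    Returns a dense displacement field of shape (N, 3, D, H, W)."""
    N = params.shape[0]
    D, H, W = shape
    theta = params.view(N, 3, 4)  # 3x3 linear part plus 3x1 translation
    # Sampling grid of target positions in normalized [-1, 1] coordinates.
    grid = F.affine_grid(theta, (N, 1, D, H, W), align_corners=True)
    # Identity grid, from the identity affine matrix.
    eye = torch.eye(3, 4, device=params.device).unsqueeze(0).repeat(N, 1, 1)
    identity = F.affine_grid(eye, (N, 1, D, H, W), align_corners=True)
    # Displacement = target grid minus identity grid, channels first.
    return (grid - identity).permute(0, 4, 1, 2, 3)  # (N, 3, D, H, W)
```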
Step S201.2: and designing an elastic registration network structure.
In one embodiment, the three elastic registration networks share the same network structure, as shown in fig. 6. The basic structure of an elastic registration network comprises a downsampling path and an upsampling path.
For example, the downsampling path comprises: one convolution with a 3×3×3 kernel and stride 1 plus a LeakyReLU activation; one convolution with a 3×3×3 kernel and stride 2 plus a LeakyReLU activation; then 4 residual blocks alternating with 4 stride-2 3×3×3 convolutions, each followed by a LeakyReLU activation; and finally one convolution with a 3×3×3 kernel and stride 1 plus a LeakyReLU activation. The upsampling path comprises 4 complex upsampling operations and one simple upsampling operation. A complex upsampling operation consists of a transposed convolution, whose output is concatenated along the channel dimension with the same-level downsampling output, followed by a 1×1×1 stride-1 convolution with a LeakyReLU activation and a 3×3×3 stride-1 convolution with a LeakyReLU activation. A simple upsampling operation consists of a transposed convolution, whose output is concatenated along the channel dimension with the same-level downsampling output, followed by a 3×3×3 stride-1 convolution with a LeakyReLU activation. The output of the elastic registration network is the deformation field.
Compared with existing deep-learning-based large-deformation registration methods, this embodiment adds residual blocks and inserts 1×1×1 convolutions into the upsampling path, which improves feature reuse and reduces the dimensionality of the feature maps at low computational cost.
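The sketch below captures the main ingredients of fig. 6: strided 3×3×3 convolutions with LeakyReLU, residual blocks, skip connections concatenated at each level, and a 1×1×1 convolution after concatenation. It uses fewer levels and assumed channel counts, since the figure's exact numbers are not reproduced here.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: two LeakyReLU + 3x3x3 conv pairs with a skip addition."""
    def __init__(self, ch):
        super().__init__()
        self.act = nn.LeakyReLU(0.2)
        self.conv1 = nn.Conv3d(ch, ch, 3, stride=1, padding=1)
        self.conv2 = nn.Conv3d(ch, ch, 3, stride=1, padding=1)

    def forward(self, x):
        return x + self.conv2(self.act(self.conv1(self.act(x))))

class ElasticNet(nn.Module):
    """Two-level toy version of the elastic registration network (fig. 6)."""
    def __init__(self, ch=16):
        super().__init__()
        act = nn.LeakyReLU(0.2)
        self.enc0 = nn.Sequential(nn.Conv3d(2, ch, 3, 1, 1), act)
        self.down1 = nn.Sequential(nn.Conv3d(ch, 2 * ch, 3, 2, 1), act, ResBlock(2 * ch))
        self.down2 = nn.Sequential(nn.Conv3d(2 * ch, 4 * ch, 3, 2, 1), act, ResBlock(4 * ch))
        self.up1 = nn.ConvTranspose3d(4 * ch, 2 * ch, 2, stride=2)
        # Complex upsampling: 1x1x1 conv after concatenation, then 3x3x3 conv.
        self.fuse1 = nn.Sequential(nn.Conv3d(4 * ch, 2 * ch, 1, 1, 0), act,
                                   nn.Conv3d(2 * ch, 2 * ch, 3, 1, 1), act)
        self.up0 = nn.ConvTranspose3d(2 * ch, ch, 2, stride=2)
        # Simple upsampling: 3x3x3 conv after concatenation.
        self.fuse0 = nn.Sequential(nn.Conv3d(2 * ch, ch, 3, 1, 1), act)
        self.flow = nn.Conv3d(ch, 3, 3, 1, 1)  # 3-channel deformation field

    def forward(self, fixed, moving):
        x0 = self.enc0(torch.cat([fixed, moving], dim=1))
        x1 = self.down1(x0)
        x2 = self.down2(x1)
        y1 = self.fuse1(torch.cat([self.up1(x2), x1], dim=1))
        y0 = self.fuse0(torch.cat([self.up0(y1), x0], dim=1))
        return self.flow(y0)
```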
Step S300, training the registration network, and calculating loss function values between the deformation field, the fixed image and the transformed moving image.
In this step S300, the loss values are calculated with a loss function. For example, the total loss function of the registration network is the sum of the individual network losses described below:

$$L_{total} = L_{sim}(I_F, I_M') + L_{smooth}(\phi_2) + L_{smooth}(\phi_3) + L_{smooth}(\phi_4) + L_{sim}(I_F, I_M'''') \qquad (1)$$

where $L_{sim}$ denotes the normalized cross-correlation loss function, whose specific form is

$$L_{sim}(I_F, I_M) = -\frac{\Big[\sum_{p\in\Omega}\big(I_F(p)-\bar I_F\big)\big(I_M(p)-\bar I_M\big)\Big]^2}{\sum_{p\in\Omega}\big(I_F(p)-\bar I_F\big)^2 \,\sum_{p\in\Omega}\big(I_M(p)-\bar I_M\big)^2} \qquad (2)$$

where $\bar I$ denotes the mean gray value of an image, $p$ a point in the image, and $\Omega$ the image domain. $L_{smooth}$ denotes the regularization loss function of a deformation field:

$$L_{smooth}(\phi_\theta) = \sum_{p\in\Omega}\left( \left\|\frac{\partial \phi_\theta(p)}{\partial x}\right\|^2 + \left\|\frac{\partial \phi_\theta(p)}{\partial y}\right\|^2 + \left\|\frac{\partial \phi_\theta(p)}{\partial z}\right\|^2 \right) \qquad (3)$$

where $\theta$ denotes the deformation-field parameters, and $\partial\phi_\theta/\partial x$, $\partial\phi_\theta/\partial y$ and $\partial\phi_\theta/\partial z$ denote the derivatives of the deformation field along the x-, y- and z-axes.
The loss function of the affine registration network is the normalized cross-correlation loss between the fixed image I_F and the transformed image I_M'. The loss function of the first elastic registration network is the regularization loss of the deformation field φ_2; that of the second elastic registration network is the regularization loss of φ_3; and that of the third elastic registration network is the regularization loss of φ_4 together with the normalized cross-correlation loss between the fixed image I_F and the transformed image I_M''''. The three elastic-network losses are designed so that the deformation fields generated by the networks are smoother and more physically plausible; although the designed losses emphasize deformation-field smoothness, the resulting registration accuracy is still high.
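Formulas (2) and (3) translate directly into PyTorch. This is a sketch of global NCC and a first-difference smoothness term, assuming finite-difference gradients; relative weights between the terms are free hyperparameters not specified by the patent.

```python
import torch

def ncc_loss(fixed, moved, eps=1e-8):
    """Formula (2): negative squared normalized cross-correlation."""
    f = fixed - fixed.mean()
    m = moved - moved.mean()
    num = (f * m).sum() ** 2
    den = (f * f).sum() * (m * m).sum() + eps
    return -num / den

def smooth_loss(flow):
    """Formula (3): squared forward differences of the deformation field
    along the z-, y- and x-axes. flow: (N, 3, D, H, W)."""
    dz = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]
    dy = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()
```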
Step S400, inputting the loss function value into a deep learning optimizer and updating the network parameters with the optimizer.
And S500, optimizing the registration network for multiple times until a set optimization convergence condition is reached to obtain a trained registration network.
For example, steps S200 to S400 are executed in a loop, optimizing the registration network repeatedly until the number of iterations reaches a set value or the loss function value falls below a set threshold, yielding the trained registration network.
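A minimal training loop over steps S200-S400 might look as follows. It reuses the warp, compose, ncc_loss and smooth_loss sketches above; affine_net is assumed to wrap the affine parameter head of fig. 5 together with affine_params_to_flow so that it returns a dense field, and Adam with learning rate 1e-4 is an assumed choice, not a value stated in the patent.

```python
import torch

# affine_net, elastic1, elastic2, elastic3: networks as sketched above.
# loader yields (fixed, moving) pairs of shape (N, 1, D, H, W).
nets = [affine_net, elastic1, elastic2, elastic3]
opt = torch.optim.Adam([p for net in nets for p in net.parameters()], lr=1e-4)

for fixed, moving in loader:
    flow1 = affine_net(fixed, moving)        # S202: affine field phi_1
    moved1 = warp(moving, flow1)             # S203: I_M'
    flow2 = elastic1(fixed, moved1)          # S204: phi_2
    flow12 = compose(flow1, flow2)           # S205: phi_{1,2}
    moved2 = warp(moving, flow12)            # S206: I_M''
    flow3 = elastic2(fixed, moved2)          # S207: phi_3
    flow123 = compose(flow12, flow3)         # S208: phi_{1,2,3}
    moved3 = warp(moving, flow123)           # S209: I_M'''
    flow4 = elastic3(fixed, moved3)          # S210: phi_4
    flow_all = compose(flow123, flow4)       # S211: phi_{1,2,3,4}
    moved4 = warp(moving, flow_all)          # S212: I_M''''
    # Formula (1): NCC on the affine and final outputs, smoothness on phi_2..phi_4.
    loss = (ncc_loss(fixed, moved1) + ncc_loss(fixed, moved4)
            + smooth_loss(flow2) + smooth_loss(flow3) + smooth_loss(flow4))
    opt.zero_grad()
    loss.backward()
    opt.step()
```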
Preferably, after training the registration network, the validation data set or the test data set may be input into the trained registration network for registration network performance evaluation. Specifically, the evaluation process comprises the steps of:
step S601, visualizing the moving image, the fixed image and the moving image after the transformation of the deformation field, and evaluating the registration performance of the registration network from the aspect of the image.
Step S602, calculating a normalized cross-correlation value and a normalized cross-information value between the fixed image and the moving image after the transformation of the deformation field, and evaluating the registration performance of the registration network from the aspect of image similarity.
And step S603, calculating the Jacobian determinant value of the deformation field, and evaluating whether the deformation field generated by the registration network accords with reality.
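Step S603 can be sketched as follows: the Jacobian of the mapping x → x + φ(x) is computed by finite differences, and the deformation is considered physically plausible where the determinant stays positive. Treating the displacements as voxel units is an assumption.

```python
import numpy as np

def jacobian_determinant(flow):
    """flow: (3, D, H, W) displacement field in voxel units.
    Returns the Jacobian determinant of x -> x + flow(x) at every voxel."""
    grads = [np.gradient(flow[i], axis=(0, 1, 2)) for i in range(3)]
    J = np.empty(flow.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            # Jacobian of identity plus displacement: I + d(flow_i)/d(axis_j).
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Fraction of voxels with folding (non-positive determinant):
# folding = (jacobian_determinant(flow) <= 0).mean()
```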
Fig. 7 shows a visualization of registration results obtained by testing the registration network on the test set: the first column is the fixed image, the second column the moving image, the third column the moving image after transformation by the deformation field generated by the registration network of the invention, and the fourth column the deformation field. The first and third rows register a prone breast image to a supine breast image; the second and fourth rows register a supine breast image to a prone breast image. The experiment used supine and prone breast image data sets from 15 patients in total. When network performance was tested with cross-validation, registration performance varied little between individuals. Cross-validation here means training the registration network on the data sets of 13 of the 15 patients and testing on the data sets of the remaining 2. The test experiments show that the method achieves high registration performance and high registration speed.
It should be noted that those skilled in the art can appropriately change or modify the above-described embodiments without departing from the spirit and scope of the present invention. For example, different elastic registration networks are designed for different stages in the registration process, thereby achieving a better registration effect and reducing the amount of calculation. As another example, the invention may be applied to registration between breast images of other different body positions or other types of medical images. As another example, other forms of loss functions (e.g., weights that can be set to multiple losses) may be designed, or network architectures with different numbers, different numbers of layers, different convolution kernel sizes, etc. may be set.
In conclusion, the deep-learning-based supine-prone breast image registration method solves the problems of complex modeling, low registration accuracy, slow registration speed and inter-individual model differences in existing supine-prone breast image registration methods. The registration network is end-to-end and requires no registration-related preprocessing. Adding residual blocks to the registration network improves feature reuse and can raise registration accuracy compared with a plain convolutional network. Adding 1×1×1 convolutions to the upsampling path of the elastic registration reduces feature-map dimensionality at low computational cost. Training all networks jointly lets them learn breast image registration together. In addition, the method generalizes: it is not specific to one individual and is applicable to image registration between other body positions. In practical application, feeding the images to be registered into the trained registration network achieves image registration with high accuracy and high registration speed.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer-readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, Python, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (8)

1. A breast image registration method based on deep learning, comprising the following steps:
step S1, preprocessing a first-body-position breast image and a second-body-position breast image, and constructing a training data set with the first-body-position breast image as the fixed image and the second-body-position breast image as the moving image;
step S2, inputting the training data set into a registration network and outputting a deformation field, wherein the deformation field represents the direction and distance by which each voxel of the moving image moves, and applying the obtained deformation field to the moving image to obtain a transformed moving image;
step S3, training the registration network, and calculating loss function values between the deformation field, the fixed image and the transformed moving image until a set optimization convergence condition is met, to obtain the trained registration network for subsequent breast image registration;
wherein the registration network comprises an affine registration network and a plurality of elastic registration networks, the affine registration network taking the fixed image and the moving image as input, performing affine registration and outputting a deformation field, and the plurality of elastic registration networks, of identical or different structures, taking the fixed image and the moving image transformed by the affine registration network as input to perform local registration;
wherein the plurality of elastic registration networks comprise a first elastic registration network, a second elastic registration network and a third elastic registration network, and step S2 comprises:
inputting the fixed image I_F and the moving image I_M into the affine registration network, performing affine registration, and outputting the deformation field φ_1, the deformation field φ_1 representing the direction and distance by which each voxel of the moving image I_M moves;
taking the deformation field φ_1 and the moving image I_M as input to a spatial transformation network, and outputting the moving image I_M' obtained by warping the moving image with the deformation field;
inputting the fixed image I_F and the moving image I_M' transformed by the affine registration network into the first elastic registration network for local registration, and outputting the deformation field φ_2, the deformation field φ_2 representing the direction and distance by which each voxel of the transformed moving image I_M' moves;
combining the obtained deformation fields φ_1 and φ_2 to obtain the combined deformation field φ_{1,2};
inputting the combined deformation field φ_{1,2} and the moving image I_M into the spatial transformation network to obtain the transformed moving image I_M'';
inputting the fixed image I_F and the transformed moving image I_M'' into the second elastic registration network for local registration, and outputting the deformation field φ_3, the deformation field φ_3 representing the direction and distance by which each voxel of the transformed moving image I_M'' moves;
combining the obtained deformation fields φ_1, φ_2 and φ_3 to obtain the combined deformation field φ_{1,2,3};
inputting the combined deformation field φ_{1,2,3} and the moving image I_M into the spatial transformation network to obtain the transformed moving image I_M''';
inputting the fixed image I_F and the transformed moving image I_M''' into the third elastic registration network for local registration, and outputting the deformation field φ_4, the deformation field φ_4 representing the direction and distance by which each voxel of the transformed moving image I_M''' moves;
combining the obtained deformation fields φ_1, φ_2, φ_3 and φ_4 to obtain the combined deformation field φ_{1,2,3,4};
inputting the combined deformation field φ_{1,2,3,4} and the moving image I_M into the spatial transformation network to obtain the transformed moving image I_M''''.
2. The method of claim 1, wherein the first-body-position breast image is a supine breast image and the second-body-position breast image is a prone breast image, or the first-body-position breast image is a prone breast image and the second-body-position breast image is a supine breast image.
3. The method of claim 1, wherein the total loss function for training the registration network is expressed as:

$$L_{total} = L_{sim}(I_F, I_M') + L_{smooth}(\phi_2) + L_{smooth}(\phi_3) + L_{smooth}(\phi_4) + L_{sim}(I_F, I_M'''')$$

wherein $L_{sim}$ denotes the normalized cross-correlation loss function, expressed as:

$$L_{sim}(I_F, I_M) = -\frac{\Big[\sum_{p\in\Omega}\big(I_F(p)-\bar I_F\big)\big(I_M(p)-\bar I_M\big)\Big]^2}{\sum_{p\in\Omega}\big(I_F(p)-\bar I_F\big)^2 \,\sum_{p\in\Omega}\big(I_M(p)-\bar I_M\big)^2}$$

$\bar I$ denoting the mean gray value of an image, $p$ a point in the image, and $\Omega$ the image domain; and $L_{smooth}$ denotes the regularization loss function of a deformation field, expressed as:

$$L_{smooth}(\phi_\theta) = \sum_{p\in\Omega}\left( \left\|\frac{\partial \phi_\theta(p)}{\partial x}\right\|^2 + \left\|\frac{\partial \phi_\theta(p)}{\partial y}\right\|^2 + \left\|\frac{\partial \phi_\theta(p)}{\partial z}\right\|^2 \right)$$

wherein $\theta$ denotes the deformation-field parameters, and $\partial\phi_\theta/\partial x$, $\partial\phi_\theta/\partial y$ and $\partial\phi_\theta/\partial z$ denote the derivatives of the deformation field along the x-, y- and z-axes.
4. The method of claim 1, wherein the affine registration network comprises an input module for reading the data set into an input layer of the registration network, a downsampling module, an affine transformation parameter output module and a full-image deformation field module; the downsampling module reduces the size of the input-layer image through a series of operations including convolution, activation and residual operations; the affine transformation parameter output module processes the output of the downsampling module to output affine transformation parameters; and the full-image deformation field module computes a full-image deformation field from the affine transformation parameters.
5. The method of claim 1, wherein the training data set is constructed according to the following steps:
obtaining a first-body-position breast image and a second-body-position breast image by segmentation of the acquired image data;
cropping the segmented breast images to remove the background;
adjusting the voxel spacing and size of the images;
performing cubic B-spline transformation on the data whose voxel spacing and size have been adjusted, and combining any first-body-position breast image and any second-body-position breast image of the same individual along the channel dimension after transformation, wherein the image on the first channel is the fixed image and the image on the second channel is the moving image; and
normalizing the voxel values to obtain the final training data set.
6. A breast image registration method, comprising: inputting breast images to be registered into a trained registration network obtained according to the method of any one of claims 1 to 5, and obtaining a registered image.
7. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
8. A computer device comprising a memory and a processor, on which memory a computer program is stored which is executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 6 when executing the program.
CN202110279530.4A 2021-03-16 2021-03-16 Breast image registration method based on deep learning Active CN113034453B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110279530.4A CN113034453B (en) 2021-03-16 2021-03-16 Breast image registration method based on deep learning
PCT/CN2021/137602 WO2022193750A1 (en) 2021-03-16 2021-12-13 Breast image registration method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110279530.4A CN113034453B (en) 2021-03-16 2021-03-16 Breast image registration method based on deep learning

Publications (2)

Publication Number Publication Date
CN113034453A (en) 2021-06-25
CN113034453B (en) 2023-01-10

Family

ID=76470820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110279530.4A Active CN113034453B (en) 2021-03-16 2021-03-16 Breast image registration method based on deep learning

Country Status (2)

Country Link
CN (1) CN113034453B (en)
WO (1) WO2022193750A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034453B (en) * 2021-03-16 2023-01-10 Shenzhen Institute of Advanced Technology Breast image registration method based on deep learning
CN113643332B (en) * 2021-07-13 2023-12-19 深圳大学 Image registration method, electronic device and readable storage medium
CN115457020B (en) * 2022-09-29 2023-12-26 电子科技大学 2D medical image registration method fusing residual image information
CN116433730B (en) * 2023-06-15 2023-08-29 南昌航空大学 Image registration method combining deformable convolution and modal conversion
CN116958217B (en) * 2023-08-02 2024-03-29 德智鸿(上海)机器人有限责任公司 MRI and CT multi-mode 3D automatic registration method and device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728706A (en) * 2019-09-30 2020-01-24 西安电子科技大学 SAR image fine registration method based on deep learning

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107072595B (en) * 2013-12-31 2021-11-26 威斯康星州医药大学股份有限公司 Adaptive re-planning based on multi-modality imaging
CN108549906A (en) * 2018-04-10 2018-09-18 北京全域医疗技术有限公司 Radiotherapy hooks target method for registering images and device
US10607108B2 (en) * 2018-04-30 2020-03-31 International Business Machines Corporation Techniques for example-based affine registration
CN109872332B (en) * 2019-01-31 2022-11-11 广州瑞多思医疗科技有限公司 Three-dimensional medical image registration method based on U-NET neural network
CN110599528B (en) * 2019-09-03 2022-05-27 济南大学 Unsupervised three-dimensional medical image registration method and system based on neural network
CN110827335B (en) * 2019-11-01 2020-10-16 北京推想科技有限公司 Mammary gland image registration method and device
CN112150425A (en) * 2020-09-16 2020-12-29 北京工业大学 Unsupervised intravascular ultrasound image registration method based on neural network
CN113034453B (en) * 2021-03-16 2023-01-10 Shenzhen Institute of Advanced Technology Breast image registration method based on deep learning

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728706A (en) * 2019-09-30 2020-01-24 西安电子科技大学 SAR image fine registration method based on deep learning

Also Published As

Publication number Publication date
WO2022193750A1 (en) 2022-09-22
CN113034453A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN113034453B (en) Breast image registration method based on deep learning
EP3511942B1 (en) Cross-domain image analysis using deep image-to-image networks and adversarial networks
JP7102531B2 (en) Methods, Computer Programs, Computer-Readable Storage Mediums, and Devices for the Segmentation of Anatomical Structures in Computed Tomography Angiography
CN112907439B (en) Deep learning-based supine position and prone position breast image registration method
JP4545140B2 (en) Image data processing method, medical observation system, medical examination apparatus, and computer program
JP2022191354A (en) System and method for anatomical structure segmentation in image analysis
Chan et al. Volumetric parametrization from a level set boundary representation with PHT-splines
US10275909B2 (en) Systems and methods for an integrated system for visualizing, simulating, modifying and 3D printing 3D objects
JP7349158B2 (en) Machine learning devices, estimation devices, programs and trained models
CN112634265B (en) Method and system for constructing and segmenting fully-automatic pancreas segmentation model based on DNN (deep neural network)
Chen et al. GPU-based polygonization and optimization for implicit surfaces
Zhong et al. Deep action learning enables robust 3D segmentation of body organs in various CT and MRI images
Santa Cruz et al. CorticalFlow++: boosting cortical surface reconstruction accuracy, regularity, and interoperability
Shi et al. Direct cortical mapping via solving partial differential equations on implicit surfaces
CN108805876B (en) Method and system for deformable registration of magnetic resonance and ultrasound images using biomechanical models
Garcia Guevara et al. Elastic registration based on compliance analysis and biomechanical graph matching
Jin et al. High-resolution cranial implant prediction via patch-wise training
CN111402221B (en) Image processing method and device and electronic equipment
Garrido et al. Image segmentation with cage active contours
US20200234494A1 (en) Structure estimating apparatus, structure estimating method, and computer program product
CN112991406A (en) Method for constructing brain atlas based on differential geometry technology
Twining et al. Constructing an atlas for the diffeomorphism group of a compact manifold with boundary, with application to the analysis of image registrations
JP7433913B2 (en) Route determination method, medical image processing device, model learning method, and model learning device
WO2007070008A1 (en) Warping and transformation of images
Unal Nonparametric joint shape learning for customized shape modeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant