CN113610752A - Mammary gland image registration method, computer device and storage medium


Info

Publication number
CN113610752A
CN113610752A (application CN202110663128.6A)
Authority
CN
China
Prior art keywords
image
sample
registered
registration
gland
Prior art date
Legal status
Pending
Application number
CN202110663128.6A
Other languages
Chinese (zh)
Inventor
霍璐
姜娈
Current Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202110663128.6A
Publication of CN113610752A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T 3/147 Transformations for image registration, e.g. adjusting or mapping for alignment of images using affine transformations
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/10096 Dynamic contrast-enhanced magnetic resonance imaging [DCE-MRI]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30068 Mammography; Breast
    • G06T 2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The application relates to a breast image registration method, a computer device and a storage medium. The method comprises the following steps: acquiring a first image and a second image; inputting the first image and the second image into a preset registration network, and obtaining a deformation field corresponding to the second image through the registration network; and obtaining a registered breast image according to the second image and the deformation field. The registration network is trained by inputting a first sample image and a second sample image into a preset initial registration network to obtain a registered sample breast image, obtaining a registered sample gland segmentation image according to the registered sample breast image, obtaining a first loss according to the first sample image and the registered sample breast image, obtaining a second loss according to the segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss. By adopting the method, the time consumption of breast DCE-MRI image registration can be reduced.

Description

Mammary gland image registration method, computer device and storage medium
Technical Field
The present application relates to the field of image registration technologies, and in particular, to a breast image registration method, a computer device, and a storage medium.
Background
In the process of breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) scanning, the breast is imaged before, during and in the later stages of contrast agent injection, and multi-slice, multi-phase scan images are obtained. Generally, because the contrast agent diffuses over time, breast DCE-MRI images of different phases in the same slice show gray-level differences; the differences are most obvious in the lesion area and make the diagnosis of tissue lesions inaccurate. In addition, a breast DCE-MRI scan lasts 2 to 3 minutes, and the respiratory motion of the patient causes deformation and displacement of the breast tissue, so that the internal tissue regions of the images of different phases in the same slice are not aligned, which makes the analysis and processing of breast DCE-MRI images difficult. Therefore, before analyzing breast DCE-MRI, the breast DCE-MRI images of different phases need to be registered to eliminate the deformation and displacement of the breast tissue and the gray-level differences between breast DCE-MRI images of different phases in the same slice.
In the conventional technology, registration of breast DCE-MRI images is mainly performed by iteratively optimizing the breast DCE-MRI images, for example using a Free-Form Deformation (FFD) model, the Horn-Schunck (HS) optical flow field model, or a Markov random field with a similarity evaluation model, so as to achieve registration of the breast DCE-MRI images.
However, the conventional breast DCE-MRI image registration methods are time-consuming.
Disclosure of Invention
In view of the above, there is a need to provide a breast image registration method, apparatus, computer device and storage medium capable of reducing the time consumption of breast DCE-MRI image registration.
A breast image registration method, the method comprising:
acquiring a first image and a second image; wherein the first image and the second image are both breast dynamic scanning images; the breast dynamic scanning image comprises a breast and a gland;
inputting the first image and the second image into a preset registration network, and obtaining a deformation field corresponding to the second image through the registration network; wherein the registration network is obtained by the following training method: inputting a first sample image and a second sample image into a preset initial registration network to obtain a registered sample mammary image, obtaining a registered sample gland segmentation image according to the registered sample mammary image, obtaining a first loss according to the first sample image and the registered sample mammary image, obtaining a second loss according to a segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss;
and obtaining a registered mammary gland image according to the second image and the deformation field.
In one embodiment, the first image and the second image are breast dynamic enhanced magnetic resonance images of the examinee in different scanning phases of the same body position; before the first image and the second image are input into a preset registration network and a deformation field corresponding to the second image is obtained through the registration network, the method further includes:
pre-registering the second image and the first image through three-dimensional affine transformation to obtain a first registered image;
the inputting the first image and the second image into a preset registration network, and obtaining a deformation field corresponding to the second image through the registration network includes:
and inputting the first image and the first registration image into the registration network, and obtaining the deformation field through the registration network.
In one embodiment, the method further comprises:
acquiring a first gland mask image and a second gland mask image; the first gland mask image is a gland mask image corresponding to the first image, and the second gland mask image is a gland mask image corresponding to the second image;
pre-registering the first gland mask image and the second gland mask image through three-dimensional affine transformation to obtain a second registered image;
and obtaining a registered glandular image according to the second registered image and the deformation field.
In one embodiment, the obtaining a registered breast image according to the second image and the deformation field includes:
and inputting the second image and the deformation field into a space converter model, and obtaining the registered mammary gland image through the space converter model.
In one embodiment, the spatial transformer model includes a grid generator and a sampler, the inputting the second image and the deformation field into a spatial transformer model, and obtaining the registered breast image through the spatial transformer model includes:
inputting the deformation field into the grid generator to obtain a first mapping relation between each voxel of the second image and each voxel of the breast image after registration;
and inputting the first mapping relation and the second image into the sampler, and obtaining the registered mammary gland image through the sampler.
In one embodiment, the obtaining a registered glandular image according to the second registered image and the deformation field includes:
and inputting the second registration image and the deformation field into a space converter model, and obtaining the registered glandular image through the space converter model.
In one embodiment, the spatial converter model includes a grid generator and a sampler, the inputting the second registered image and the deformation field into the spatial converter model, and obtaining the registered glandular image through the spatial converter model includes:
inputting the deformation field into the grid generator to obtain a second mapping relation between each voxel of the second registration image and each voxel of the registered gland image;
and inputting the second mapping relation and the second registration image into the sampler, and obtaining the registered gland image through the sampler.
In one embodiment, the loss of the initial registration network further includes a smoothing loss, and the training process of the registration network includes:
pre-registering the first sample image and the second sample image through three-dimensional affine transformation to obtain a first sample registration image;
pre-registering the gland mask image corresponding to the first sample image and the gland mask image corresponding to the second sample image through three-dimensional affine transformation to obtain a second sample registration image;
inputting the first sample image and the first sample registration image into the initial registration network, and obtaining a sample deformation field and the smoothing loss corresponding to the first sample registration image through the initial registration network;
inputting the first sample registration image and the sample deformation field into the space transformer model, and obtaining a registered sample mammary gland image through the space transformer model;
inputting the second sample registration image and the sample deformation field into the space transformer model, and obtaining the registered sample gland segmentation image through the space transformer model;
obtaining the first loss according to the registered sample breast image and the first sample image;
obtaining the second loss according to the registered sample gland segmentation image and the segmentation image corresponding to the first sample image;
and training the initial registration network according to the smoothing loss, the first loss and the second loss to obtain the registration network.
A breast image registration apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a first image and a second image; wherein the first image and the second image are both breast dynamic scanning images; the breast dynamic scanning image comprises a breast and a gland;
the second acquisition module is used for inputting the first image and the second image into a preset registration network and obtaining a deformation field corresponding to the second image through the registration network; wherein the registration network is obtained by the following training method: inputting a first sample image and a second sample image into a preset initial registration network to obtain a registered sample mammary image, obtaining a registered sample gland segmentation image according to the registered sample mammary image, obtaining a first loss according to the first sample image and the registered sample mammary image, obtaining a second loss according to a segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss;
and the first registration module is used for obtaining a registered mammary gland image according to the second image and the deformation field.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a first image and a second image; wherein the first image and the second image are both breast dynamic scanning images; the breast dynamic scanning image comprises a breast and a gland;
inputting the first image and the second image into a preset registration network, and obtaining a deformation field corresponding to the second image through the registration network; wherein the registration network is obtained by the following training method: inputting a first sample image and a second sample image into a preset initial registration network to obtain a registered sample mammary image, obtaining a registered sample gland segmentation image according to the registered sample mammary image, obtaining a first loss according to the first sample image and the registered sample mammary image, obtaining a second loss according to a segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss;
and obtaining a registered mammary gland image according to the second image and the deformation field.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a first image and a second image; wherein the first image and the second image are both breast dynamic scanning images; the breast dynamic scanning image comprises a breast and a gland;
inputting the first image and the second image into a preset registration network, and obtaining a deformation field corresponding to the second image through the registration network; wherein the registration network is obtained by the following training method: inputting a first sample image and a second sample image into a preset initial registration network to obtain a registered sample mammary image, obtaining a registered sample gland segmentation image according to the registered sample mammary image, obtaining a first loss according to the first sample image and the registered sample mammary image, obtaining a second loss according to a segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss;
and obtaining a registered mammary gland image according to the second image and the deformation field.
According to the breast image registration method, the breast image registration apparatus, the computer device and the storage medium, a registered sample breast image can be obtained by inputting the first sample image and the second sample image into the preset initial registration network, a registered sample gland segmentation image can then be obtained according to the registered sample breast image, a first loss is obtained according to the first sample image and the registered sample breast image, and a second loss is obtained according to the segmentation image corresponding to the first sample image and the registered sample gland segmentation image, so that the initial registration network can be trained according to the first loss and the second loss to obtain the registration network. The acquired first image and second image are then input into the trained registration network, and the deformation field corresponding to the second image can be obtained quickly through the registration network. Compared with the conventional approach of obtaining a registered breast image by iteratively optimizing a breast dynamic enhanced magnetic resonance image, this improves the efficiency of obtaining the deformation field corresponding to the second image and, in turn, the efficiency of obtaining the registered breast image according to the second image and the deformation field, thereby shortening the time needed to obtain the registered breast image.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of a breast image registration method;
FIG. 2 is a flowchart illustrating a breast image registration method according to an embodiment;
FIG. 2a is a schematic diagram of a registration network in one embodiment;
FIG. 3 is a flowchart illustrating a breast image registration method according to an embodiment;
FIG. 4 is a schematic diagram of an embodiment of a space transformer;
FIG. 5 is a flowchart illustrating a breast image registration method according to an embodiment;
FIG. 6 is a schematic diagram illustrating a registration result obtained by the breast image registration method according to the present application in one embodiment;
FIG. 7 is a schematic diagram illustrating a registration result obtained by the breast image registration method according to the present application in one embodiment;
FIG. 8 is a graph comparing the registration results with and without a glandular mask assisted optimization loss function in one embodiment;
fig. 9 is a block diagram of a breast image registration apparatus according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The breast image registration method provided by the embodiment of the application can be applied to computer equipment shown in fig. 1. The computer device comprises a processor and a memory connected by a system bus, wherein a computer program is stored in the memory, and the steps of the method embodiments described below can be executed when the processor executes the computer program. Optionally, the computer device may further comprise a network interface, a display screen and an input device. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a nonvolatile storage medium storing an operating system and a computer program, and an internal memory. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. Optionally, the computer device may be a server, a personal computer, a personal digital assistant, other terminal devices such as a tablet computer, a mobile phone, and the like, or a cloud or a remote server, and the specific form of the computer device is not limited in the embodiment of the present application.
In one embodiment, as shown in fig. 2, a breast image registration method is provided, which is exemplified by the application of the method to the computer device in fig. 1, and includes the following steps:
s201, acquiring a first image and a second image; wherein, the first image and the second image are both breast dynamic scanning images; the dynamic scanning image of the breast includes the breast and the gland.
The breast dynamic scanning image is obtained by dynamically scanning the breast of the examinee. Optionally, the breast dynamic scanning image may be a breast dynamic contrast-enhanced magnetic resonance image. Generally, during dynamic contrast-enhanced magnetic resonance scanning, the magnetic resonance signal is affected by the diffusion of the contrast agent, so the brightness of the dynamic enhanced magnetic resonance image changes; the breast is imaged before, during and after contrast agent injection, and multi-slice, multi-phase scan images can be obtained. Specifically, the computer device acquires a first image and a second image, wherein the first image and the second image are both breast dynamic scanning images, and the breast dynamic scanning image includes the breast and the gland. Alternatively, the computer device may acquire the first image and the second image from a magnetic resonance device, or may acquire the first image and the second image from a PACS (Picture Archiving and Communication System) server.
S202, inputting the first image and the second image into a preset registration network, and obtaining a deformation field corresponding to the second image through the registration network; the registration network is obtained by the following training method: inputting the first sample image and the second sample image into a preset initial registration network to obtain a registered sample mammary gland image, obtaining a registered sample gland segmentation image according to the registered sample mammary gland image, obtaining a first loss according to the first sample image and the registered sample mammary gland image, obtaining a second loss according to the segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss.
Specifically, the computer device inputs the acquired first image and second image into the preset registration network, and obtains the deformation field corresponding to the second image through the registration network. The registration network is obtained by the following training method: inputting a first sample image and a second sample image into a preset initial registration network to obtain a registered sample breast image, obtaining a registered sample gland segmentation image according to the registered sample breast image, obtaining a first loss according to the first sample image and the registered sample breast image, obtaining a second loss according to the segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss. Optionally, the registered sample breast image may be a mask image of the breast corresponding to the registered sample breast image, or may be a breast segmentation image in the registered sample breast image. Optionally, the network structure of the registration network may be as shown in fig. 2a. Generally, the input of the registration network is a two-channel three-dimensional image formed by stitching the first image and the second image, and the size of the input image is 2 × 256 × 96. The encoding path includes a plurality of blocks, each composed of two 3 × 3 × 3 convolution layers and a 2 × 2 × 2 max-pooling layer; each convolution layer is followed by a normalization layer and a leaky rectified linear unit (slope 0.2), and the convolution stride is 2. The size of the feature map at the deepest layer is 1/16 of the size of the input image in each direction. The decoding path also includes a plurality of blocks, each similar in composition to the block of the same level in the encoding path, and upsampling is implemented by 2 × 2 × 2 deconvolution. The skip connections between the encoding path and the decoding path stitch the upsampled feature maps with the corresponding downsampled feature maps, so that the features learned in the downsampling stage are propagated directly to the layers that generate the deformation field, and the convolution layers capture hierarchical features of the input image pair to estimate the corresponding deformation field. In the last layer of the decoding path, a 1 × 1 × 1 convolution maps the feature map to the output layer of the network and generates a displacement field of the same size as the input image in the x, y and z directions; the displacement field represents the displacement of each voxel in each direction (x, y and z), and the size of the output displacement field is 3 × 256 × 96.
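The sketch below gives a minimal PyTorch illustration of an encoder-decoder registration network of this general kind: the fixed and moving images are concatenated into a two-channel volume, convolutional blocks with LeakyReLU (slope 0.2) and skip connections are applied, and a final 1 × 1 × 1 convolution outputs a 3-channel displacement field. The channel widths, network depth, InstanceNorm normalization and pooling-based downsampling are illustrative assumptions, not the patent's exact configuration.

# Minimal U-Net-style registration network sketch (illustrative, not the
# patent's exact architecture): two input channels in, 3-channel flow out.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions, each followed by normalization and LeakyReLU(0.2).
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class RegistrationNet(nn.Module):
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = conv_block(2, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 4, base * 8)
        self.up3 = nn.ConvTranspose3d(base * 8, base * 4, kernel_size=2, stride=2)
        self.dec3 = conv_block(base * 8, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        # 1x1x1 convolution maps the last feature map to a 3-channel
        # displacement field (one channel per spatial direction x, y, z).
        self.flow = nn.Conv3d(base, 3, kernel_size=1)

    def forward(self, fixed, moving):
        x = torch.cat([fixed, moving], dim=1)           # (N, 2, D, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))   # skip connection
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.flow(d1)                             # (N, 3, D, H, W)

# Example with a toy-sized volume (spatial dims must be divisible by 8 here).
if __name__ == "__main__":
    net = RegistrationNet()
    fixed = torch.randn(1, 1, 32, 64, 64)
    moving = torch.randn(1, 1, 32, 64, 64)
    print(net(fixed, moving).shape)  # torch.Size([1, 3, 32, 64, 64])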
And S203, obtaining a registered mammary gland image according to the second image and the deformation field.
Specifically, the computer device obtains a registered breast image according to the second image and the obtained deformation field. Optionally, the computer device may apply the deformation field to the second image to obtain a registered breast image, or input the deformation field and the second image into a preset spatial transformation network to obtain a registered breast image.
In the above breast image registration method, a registered sample breast image can be obtained by inputting the first sample image and the second sample image into the preset initial registration network, a registered sample gland segmentation image can then be obtained according to the registered sample breast image, a first loss is obtained according to the first sample image and the registered sample breast image, and a second loss is obtained according to the segmentation image corresponding to the first sample image and the registered sample gland segmentation image, so that the initial registration network can be trained according to the first loss and the second loss to obtain the registration network. The acquired first image and second image are then input into the trained registration network, and the deformation field corresponding to the second image can be obtained quickly through the registration network. Compared with the conventional approach of obtaining a registered breast image by iteratively optimizing a breast dynamic enhanced magnetic resonance image, this improves the efficiency of obtaining the deformation field corresponding to the second image and, in turn, the efficiency of obtaining the registered breast image according to the second image and the deformation field, thereby shortening the time needed to obtain the registered breast image.
In the scene of acquiring the first image and the second image, the first image and the second image are breast dynamic enhanced magnetic resonance images of the examinee acquired at the same body position in different scanning phases. In an embodiment, before S202, the method further includes: pre-registering the second image and the first image through three-dimensional affine transformation to obtain a first registered image.
Specifically, the computer device pre-registers the second image and the first image through three-dimensional affine transformation to obtain the first registered image, and S202 then includes: inputting the first image and the obtained first registered image into the registration network, and obtaining the deformation field through the registration network. Pre-registering the second image and the first image through three-dimensional affine transformation can eliminate the displacement change caused by the respiratory motion of the examinee, thereby reducing its influence on the registration of the second image and the first image.
In this embodiment, the computer device pre-registers the acquired second image and first image through three-dimensional affine transformation; the obtained first registered image eliminates the displacement change caused by the respiratory motion of the examinee between the second image and the first image, so that the influence of the displacement change on the registration of the second image and the first image can be reduced.
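As a hedged illustration of this affine pre-registration step, the sketch below uses SimpleITK to estimate a three-dimensional affine transform and resample the second (moving) image into the space of the first (fixed) image. The patent does not name a toolkit; the metric, optimizer and their parameters here are illustrative assumptions.

# Affine pre-registration sketch with SimpleITK (illustrative settings).
import SimpleITK as sitk

def affine_pre_register(fixed_path, moving_path):
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                      numberOfIterations=200,
                                      convergenceMinimumValue=1e-6)
    reg.SetOptimizerScalesFromPhysicalShift()

    # Initialize a 3D affine transform roughly aligning the image centers.
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)

    affine = reg.Execute(fixed, moving)
    # Resample the second (moving) image into the space of the first image,
    # yielding the "first registered image" fed to the registration network.
    moved = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0,
                          moving.GetPixelID())
    return moved, affine

For the gland mask images described below, the same estimated transform would typically be applied to the masks as well, using linear or nearest-neighbour interpolation.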
In some scenarios, after the computer device obtains the registered breast image through the second image and the first image, the computer device needs to register the gland in the breast image to obtain the registered gland image. In one embodiment, as shown in fig. 3, the method further comprises:
s301, acquiring a first gland mask image and a second gland mask image; the first gland mask image is a gland mask image corresponding to the first image, and the second gland mask image is a gland mask image corresponding to the original second image.
Specifically, the computer device acquires a first gland mask image and a second gland mask image, wherein the first gland mask image is the gland mask image corresponding to the first image, and the second gland mask image is the gland mask image corresponding to the original second image. Optionally, the computer device may input the first image into a preset segmentation network and segment the first image through the segmentation network to obtain the first gland mask image, or may obtain the first gland mask image by annotating the gland in the first image and extracting the annotated gland mask. Likewise, the computer device may input the second image into the preset segmentation network and segment the second image through the segmentation network to obtain the second gland mask image, or may obtain the second gland mask image by annotating the gland in the second image and extracting the annotated gland mask.
S302, pre-registering the first gland mask image and the second gland mask image through three-dimensional affine transformation to obtain a second registration image.
Specifically, the computer device pre-registers the first gland mask image and the second gland mask image through three-dimensional affine transformation to obtain the second registration image. Pre-registering the first gland mask image and the second gland mask image through three-dimensional affine transformation can eliminate the displacement change caused by the respiratory motion of the examinee, thereby reducing its influence on the registration of the first gland mask image and the second gland mask image.
And S303, obtaining a registered gland image according to the second registered image and the deformation field.
Specifically, the computer device obtains a registered glandular image according to the obtained second registration image and the obtained deformation field. Optionally, the computer device may apply the deformation field to the second registration image to obtain a registered gland image, or may input the deformation field and the second registration image into a preset spatial transformation network to obtain a registered gland image.
In this embodiment, the computer device pre-registers the first gland mask image corresponding to the first image and the second gland mask image corresponding to the second image through three-dimensional affine transformation, which eliminates the displacement change caused by the respiratory motion of the examinee, reduces its influence on the registration of the first gland mask image and the second gland mask image, and improves the accuracy of the obtained second registration image, so that accurate registration can be performed according to the second registration image and the deformation field and the accuracy of the obtained registered gland image is improved.
In the above scene of obtaining the registered breast image according to the second image and the deformation field, in an embodiment, the above S203 includes: and inputting the second image and the deformation field into a space converter model, and obtaining a breast image after registration through the space converter model.
Specifically, the computer device inputs the second image and the obtained deformation field into a spatial transformer model, and obtains the registered breast image through the spatial transformer model. Optionally, the spatial transformer model includes a grid generator and a sampler. Optionally, as shown in fig. 4, the computer device may input the deformation field into the grid generator of the spatial transformer model to obtain a first mapping relation between each voxel of the second image and each voxel of the registered breast image, and then input the first mapping relation and the second image into the sampler of the spatial transformer model to obtain the registered breast image through the sampler. It should be noted that the grid generator obtains a mapping relation $T_\theta(G)$. Assuming that the coordinates of each voxel in the second image are $(x_i^s, y_i^s, z_i^s)$, that the coordinates of each voxel in the registered breast image are $(x_i^t, y_i^t, z_i^t)$, and that the spatial transformation function $T_\theta(G)$ takes the form of a three-dimensional affine transformation, the correspondence between $(x_i^s, y_i^s, z_i^s)$ and $(x_i^t, y_i^t, z_i^t)$ may be expressed as:

$$\begin{pmatrix} x_i^s \\ y_i^s \\ z_i^s \end{pmatrix} = T_\theta(G_i) = A_\theta \begin{pmatrix} x_i^t \\ y_i^t \\ z_i^t \\ 1 \end{pmatrix}$$

where $A_\theta$ denotes the affine transformation matrix. Optionally, when the computer device inputs the first mapping relation and the second image into the sampler, the sampler may obtain the registered breast image by sampling with an 8-neighborhood linear interpolation method, as shown in the following formula:

$$m \circ \phi(p) = \sum_{q \in Z(p')} m(q) \prod_{d \in \{x, y, z\}} \bigl(1 - \lvert p'_d - q_d \rvert\bigr)$$

where $m$ denotes the second (moving) image, for each voxel $p$, $p' = p + u(p)$, $Z(p')$ denotes the neighborhood voxels of $p'$, and $d$ denotes the spatial dimension; this form facilitates the calculation of the gradient of the loss function when the loss function is optimized during back propagation.
In this embodiment, the computer device inputs the second image and the deformation field corresponding to the second image into the space transformer model, and the registered breast image can be quickly obtained through the space transformer model, so that the efficiency of obtaining the registered breast image is improved.
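A minimal sketch of this grid-generator and sampler step is shown below: the displacement field is turned into a sampling grid and the second image is trilinearly interpolated at the displaced positions p' = p + u(p) with torch.nn.functional.grid_sample. The coordinate-normalization convention is an implementation detail assumed here, not taken from the patent.

# Spatial-transformer (warping) sketch: grid generator + trilinear sampler.
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """moving: (N, C, D, H, W); flow: (N, 3, D, H, W), displacements in voxels
    ordered (dx, dy, dz) along the (W, H, D) axes respectively."""
    n, _, d, h, w = moving.shape
    # Base identity grid of voxel coordinates.
    zs, ys, xs = torch.meshgrid(
        torch.arange(d, dtype=moving.dtype, device=moving.device),
        torch.arange(h, dtype=moving.dtype, device=moving.device),
        torch.arange(w, dtype=moving.dtype, device=moving.device),
        indexing="ij")
    # Displaced coordinates p' = p + u(p).
    x = xs.unsqueeze(0) + flow[:, 0]
    y = ys.unsqueeze(0) + flow[:, 1]
    z = zs.unsqueeze(0) + flow[:, 2]
    # Normalize to [-1, 1] as required by grid_sample (last dim order: x, y, z).
    x = 2.0 * x / max(w - 1, 1) - 1.0
    y = 2.0 * y / max(h - 1, 1) - 1.0
    z = 2.0 * z / max(d - 1, 1) - 1.0
    grid = torch.stack([x, y, z], dim=-1)               # (N, D, H, W, 3)
    # "bilinear" mode performs trilinear interpolation for 5-D inputs.
    return F.grid_sample(moving, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Usage (hypothetical names): registered = warp(second_image, deformation_field)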
In the above-mentioned scene where the registered gland image is obtained according to the second registered image and the deformation field, in an embodiment, the above-mentioned S303 includes: and inputting the second registration image and the deformation field into a space converter model, and obtaining a registered glandular image through the space converter model.
Specifically, the computer device inputs the second registered image and the deformation field into a space transformer model, and the registered glandular image is obtained through the space transformer model. Optionally, the spatial converter model comprises a grid generator and a sampler. Optionally, the computer device inputs the obtained deformation field into the grid generator to obtain a second mapping relationship between each voxel of the second registered image and each voxel of the registered gland image, and further inputs the obtained second mapping relationship and the second registered image into a sampler, and obtains the registered gland image through the sampler. It should be noted that the detailed process and principle of the grid generator obtaining the second mapping relationship and obtaining the registered glandular image by the sampler are the same as the process and principle of obtaining the first mapping relationship and obtaining the registered mammary image by the sampler, and this embodiment is not described herein again.
In this embodiment, the computer device inputs the second registration image and the deformation field corresponding to the second registration image into the space transformer model, and the registered gland image can be quickly obtained through the space transformer model, so that the efficiency of obtaining the registered gland image is improved.
In the above-mentioned scene where the first image and the second image are input into a preset registration network, and the deformation field corresponding to the second image is obtained through the registration network, the registration network is a pre-trained network, in an embodiment, as shown in fig. 5, the loss of the initial registration network further includes a smoothing loss, and the training process of the registration network includes:
s501, pre-registering the first sample image and the second sample image through three-dimensional affine transformation to obtain a first sample registration image.
Specifically, the computer device pre-registers the acquired first sample image and second sample image through three-dimensional affine transformation to obtain the first sample registration image. It should be noted that pre-registering the first sample image and the second sample image through three-dimensional affine transformation can eliminate the displacement change caused by the respiratory motion of the examinee, thereby reducing its influence on the registration of the first sample image and the second sample image.
And S502, pre-registering the gland mask image corresponding to the first sample image and the gland mask image corresponding to the second sample image through three-dimensional affine transformation to obtain a second sample registration image.
Specifically, the computer device pre-registers the gland mask image corresponding to the first sample image and the gland mask image corresponding to the second sample image through three-dimensional affine transformation to obtain the second sample registration image. It should be noted that this pre-registration can eliminate the displacement change caused by the respiratory motion of the examinee, thereby reducing its influence on the registration of the gland mask image corresponding to the first sample image and the gland mask image corresponding to the second sample image.
S503, inputting the first sample image and the first sample registration image into an initial registration network, and obtaining a sample deformation field and a smooth loss corresponding to the first sample registration image through the initial registration network.
Specifically, the computer device inputs the first sample image and the obtained first sample registration image into the initial registration network, and obtains, through the initial registration network, the sample deformation field corresponding to the first sample registration image and the smoothing loss. The smoothing loss is the smoothing loss $L_{smooth}(\phi)$ associated with the sample deformation field itself. It can be understood that the network structure of the initial registration network is the same as that of the registration network; for a detailed description of the network structure of the initial registration network, please refer to the description in the above embodiment, which is not repeated here.
S504, inputting the first sample registration image and the sample deformation field into a space converter model, and obtaining a registered sample breast image through the space converter model.
Specifically, the computer device inputs the obtained first sample registration image and the sample deformation field into the space transformer model, and obtains a registered sample breast image through the space transformer model. Optionally, the spatial converter model comprises a grid generator and a sampler. Optionally, the computer device inputs the obtained sample deformation field into the grid generator to obtain a mapping relationship between each voxel of the first sample registration image and each voxel of the registered sample breast image, and further inputs the obtained mapping relationship and the first sample registration image into a sampler, and obtains the registered sample breast image through the sampler. It should be noted that the detailed process and principle of the grid generator obtaining the mapping relationship and obtaining the registered sample breast image by the sampler are the same as the process and principle of obtaining the first mapping relationship and obtaining the registered breast image by the sampler, and this embodiment is not described herein again.
And S505, inputting the second sample registration image and the sample deformation field into a space converter model, and obtaining a registered sample gland mask image through the space converter model.
Specifically, the computer device inputs the second sample registration image and the sample deformation field into a space transformer model, and obtains a registered sample gland mask image through the space transformer model. Optionally, the spatial converter model comprises a grid generator and a sampler. Optionally, the computer device inputs the obtained sample deformation field into the grid generator to obtain a mapping relationship between each voxel of the second sample registration image and each voxel of the registered sample gland mask image, and further inputs the obtained mapping relationship and the second sample registration image into a sampler, and obtains the registered sample gland mask image through the sampler. It should be noted that the detailed process and principle of obtaining the mapping relationship by the grid generator and obtaining the registered sample gland mask image by the sampler are the same as the process and principle of obtaining the first mapping relationship and obtaining the registered breast image by the sampler, and this embodiment is not described herein again.
S506, obtaining a first loss according to the registered sample breast image and the first sample image.
Specifically, the computer device obtains the value of the first loss $L_{sim}$ according to the registered sample breast image $m \circ \phi$ and the first sample image $f$. It should be noted that, since the gray values of the breast dynamic enhanced magnetic resonance images before and after contrast agent injection change significantly, the gray value distributions of the first sample image $f$ and the second sample image $m$ are no longer similar; therefore, the local correlation between the first sample image $f$ and the registered sample breast image $m \circ \phi$ is used as the similarity measure, where the local correlation is calculated as:

$$CC(f, m \circ \phi) = \sum_{p} \frac{\Bigl(\sum_{p_i}\bigl(f(p_i) - \bar{f}(p)\bigr)\bigl(m \circ \phi(p_i) - \overline{m \circ \phi}(p)\bigr)\Bigr)^2}{\Bigl(\sum_{p_i}\bigl(f(p_i) - \bar{f}(p)\bigr)^2\Bigr)\Bigl(\sum_{p_i}\bigl(m \circ \phi(p_i) - \overline{m \circ \phi}(p)\bigr)^2\Bigr)}$$

where $p_i$ ranges over a local window around voxel $p$, and $\bar{f}$ and $\overline{m \circ \phi}$ respectively denote the mean images of the first sample image $f$ and the registered sample breast image $m \circ \phi$. The larger the CC value, the stronger the cross-correlation between the registered image and the first sample image and the better the registration effect; therefore, the first loss is:

$$L_{sim}(f, m \circ \phi) = -CC(f, m \circ \phi)$$

When the similarity loss reaches its minimum, the registered image is closest to the first sample image, but the deformation field may be discontinuous and inconsistent with the real deformation. Therefore, a diffusion constraint (Diffusion Regularizer) is applied to the spatial gradient of the displacement field $u$ to smooth the deformation field $\phi$, as shown in the following formula:

$$L_{smooth}(\phi) = \sum_{p \in \Omega} \lVert \nabla u(p) \rVert^2$$

where $\Omega$ is the gland voxel set. Note that the first loss $L_{sim}$ is not limited to being obtained from the local correlation between the first sample image $f$ and the registered sample breast image $m \circ \phi$ as the similarity measure.
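The local cross-correlation above can be computed efficiently with windowed sums, as in the sketch below; the window size and the numerical epsilon are assumptions, since the patent does not specify them.

# Local (windowed) cross-correlation loss sketch; window size 9 is assumed.
import torch
import torch.nn.functional as F

def local_cross_correlation(fixed, warped, win=9, eps=1e-5):
    """fixed, warped: (N, 1, D, H, W). Returns the mean local CC over the volume."""
    kernel = torch.ones(1, 1, win, win, win,
                        dtype=fixed.dtype, device=fixed.device)
    pad = win // 2
    n_vox = win ** 3

    def window_sum(x):
        # Sum of x over a win x win x win neighborhood around each voxel.
        return F.conv3d(x, kernel, padding=pad)

    f_sum, w_sum = window_sum(fixed), window_sum(warped)
    f2_sum, w2_sum = window_sum(fixed * fixed), window_sum(warped * warped)
    fw_sum = window_sum(fixed * warped)

    f_mean, w_mean = f_sum / n_vox, w_sum / n_vox
    # Windowed covariance and variances, expanded to avoid a second pass.
    cross = fw_sum - w_mean * f_sum - f_mean * w_sum + f_mean * w_mean * n_vox
    f_var = f2_sum - 2 * f_mean * f_sum + f_mean * f_mean * n_vox
    w_var = w2_sum - 2 * w_mean * w_sum + w_mean * w_mean * n_vox
    cc = (cross * cross) / (f_var * w_var + eps)
    return cc.mean()

def similarity_loss(fixed, warped):
    # First loss: the negative of the local cross-correlation.
    return -local_cross_correlation(fixed, warped)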
And S507, obtaining a second loss according to the registered sample gland segmentation image and the segmentation image corresponding to the first sample image.
Specifically, the computer device obtains the value of the second loss $L_{seg}$ according to the registered sample gland segmentation image $s_{m \circ \phi}$ and the segmentation image $s_f$ corresponding to the first sample image. Optionally, the second loss may be defined as shown in the following formula:

$$L_{seg}(s_f, s_{m \circ \phi}) = 1 - Dice(s_f, s_{m \circ \phi})$$

where Dice measures the overlap of the gland regions in the gland segmentation image $s_f$ corresponding to the first sample image and the registered sample gland segmentation image $s_{m \circ \phi}$, and is calculated as follows:

$$Dice(s_f, s_{m \circ \phi}) = \frac{2\,\lvert s_f \cap s_{m \circ \phi} \rvert}{\lvert s_f \rvert + \lvert s_{m \circ \phi} \rvert}$$

It should be noted that, before calculating $L_{seg}$, the binary gland mask image needs to be linearly interpolated so that $L_{seg}$ is differentiable during back propagation.
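The gland-mask term can be sketched as a soft Dice loss as follows; keeping the warped mask as soft (linearly interpolated) values keeps the loss differentiable during back propagation, and the smoothing constant eps is an illustrative choice.

# Soft Dice loss sketch for the gland masks.
import torch

def dice_coefficient(mask_fixed, mask_warped, eps=1e-5):
    """mask_fixed: binary gland mask of the first sample image, (N, 1, D, H, W);
    mask_warped: warped (soft-valued) gland mask of the registered sample."""
    dims = (1, 2, 3, 4)
    intersection = (mask_fixed * mask_warped).sum(dim=dims)
    denom = mask_fixed.sum(dim=dims) + mask_warped.sum(dim=dims)
    return ((2.0 * intersection + eps) / (denom + eps)).mean()

def gland_mask_loss(mask_fixed, mask_warped):
    # Second loss: one minus the Dice overlap of the gland regions.
    return 1.0 - dice_coefficient(mask_fixed, mask_warped)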
And S508, training the initial registration network according to the smooth loss, the first loss and the second loss to obtain the registration network.
Specifically, the computer device trains the initial registration network according to the obtained smoothing loss, the first loss and the second loss, so as to obtain the registration network. Optionally, the computer device may obtain the loss function of the initial registration network from the value of the smoothing loss $L_{smooth}(\phi)$, the value of the first loss $L_{sim}$ and the value of the second loss $L_{seg}$ as:

$$L(f, m, \phi) = L_{sim}(f, m \circ \phi) + \lambda\, L_{smooth}(\phi) + \gamma\, L_{seg}(s_f, s_{m \circ \phi})$$

where $\lambda$ and $\gamma$ are regularization parameters. It should be noted that the smoothing loss and the second loss are weighted by regularization parameters because optimizing the similarity loss of the gland mask image alone may cause discontinuity of the deformation field. Optionally, the computer device may determine the initial registration network whose loss function reaches a minimum value or a stable value as the registration network. It should be noted that, during training of the initial registration network, the batch size may be set to 1, each epoch is defined as an iterative optimization over 100 batches, and the maximum number of epochs is 700. Optionally, the adaptive moment estimation method (Adam) may be used to optimize the loss function, the parameters of the initial registration network are randomly initialized from a normal distribution, the initial learning rate is set to 1e-4, and the regularization coefficients $\lambda$ and $\gamma$ are set to 1.5 and 1.8 respectively through grid search.
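Putting the three terms together, the sketch below shows one way the training loop could look, reusing the warp, similarity_loss and gland_mask_loss functions from the earlier sketches; the values λ = 1.5, γ = 1.8, learning rate 1e-4 and batch size 1 follow the figures quoted above, while the data loader and variable names are assumptions.

# Training-loop sketch combining similarity, smoothness and gland-mask terms.
import torch

def smoothness_loss(flow):
    # Diffusion regularizer: squared forward differences of u along x, y, z.
    dz = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]
    dy = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()

def train(net, loader, epochs=700, lam=1.5, gamma=1.8, lr=1e-4, device="cuda"):
    net.to(device)
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    for epoch in range(epochs):
        # loader is assumed to yield affine pre-registered sample pairs and
        # their gland masks, one pair per batch (batch size 1).
        for fixed, moving, mask_fixed, mask_moving in loader:
            fixed, moving = fixed.to(device), moving.to(device)
            mask_fixed, mask_moving = mask_fixed.to(device), mask_moving.to(device)

            flow = net(fixed, moving)              # sample deformation field
            warped = warp(moving, flow)            # registered sample breast image
            warped_mask = warp(mask_moving, flow)  # registered sample gland mask

            loss = (similarity_loss(fixed, warped)
                    + lam * smoothness_loss(flow)
                    + gamma * gland_mask_loss(mask_fixed, warped_mask))

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()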
In this embodiment, the computer device pre-registers the first sample image and the second sample image through three-dimensional affine transformation to obtain the first sample registration image, and pre-registers the gland mask image corresponding to the first sample image and the gland mask image corresponding to the second sample image through three-dimensional affine transformation to obtain the second sample registration image, which reduces the displacement change caused by the respiratory motion of the examinee. The first sample image and the first sample registration image can then be input into the initial registration network, and the sample deformation field corresponding to the first sample registration image and the smoothing loss can be obtained accurately through the initial registration network. The first sample registration image and the sample deformation field are input into the spatial transformer model to accurately obtain the registered sample breast image, and the second sample registration image and the sample deformation field are input into the spatial transformer model to accurately obtain the registered sample gland segmentation image. Further, the first loss can be obtained according to the registered sample breast image and the first sample image, and the second loss can be obtained according to the registered sample gland segmentation image and the segmentation image corresponding to the first sample image. The initial registration network can then be accurately trained according to the smoothing loss, the first loss and the second loss to obtain the registration network, which improves the accuracy of the obtained registration network.
Illustratively, in one embodiment, testing the above registration model yields an average Dice similarity coefficient of 0.831 ± 0.019 on the test dataset. Fig. 6 and fig. 7 are schematic diagrams of the results obtained by image registration, taking the breast dynamic enhanced magnetic resonance images of a mass-type case and a microcalcification-lesion case as examples, respectively. After grid visualization of the deformation field, it can be observed that the distorted parts of the deformation field are concentrated in the gland regions of the breast, and the registered gland regions basically coincide. By comparing the difference images of the second image and the first image before and after registration, it can be seen that the tissue and lesion contours are blurred in the difference image before registration, whereas the registered image basically coincides with the first image and the lesion contour becomes clear. Fig. 7 shows a microcalcification lesion: the lesion in the right breast is hardly visible in the difference image before registration but is highlighted in the difference image after registration, which indirectly demonstrates the effectiveness of the registration and realizes the alignment of the physical and spatial positions of the voxels in the diagnostically meaningful ROI (the gland), thereby achieving the expected goal.
Further, registration models with and without the gland Mask aided Optimization loss function (AFMO) were trained separately and their registration performance was evaluated on the same test set. The average Dice of the registration results with and without AFMO is 0.831 ± 0.019 and 0.793 ± 0.036 respectively (p-value 0.016 < 0.05), which indicates that the gland similarity of the registration results using AFMO is higher. Fig. 8 compares the registration results of four representative DCE-MR image samples with and without AFMO, where each column represents a different DCE-MR image sample: the first column is a sample without a lesion and with low breast density, and the second to fourth columns are samples with lesions. For the case without a lesion and with low breast density (first column of fig. 8), the Dice values of the registration results with and without AFMO are 0.812 and 0.825 respectively, and the difference is small. For the cases with lesions, the overlap of the gland regions is higher in the registration results with AFMO than without AFMO, and the lesion regions are highlighted; in the registration results without AFMO, although the breast regions overlap well, the edge contour of the lesion is weakened (second column of fig. 8), or the entire lesion is blurred or even disappears (third and fourth columns of fig. 8), which makes the analysis of the lesion and the gland difficult. Therefore, using AFMO in the registration process preserves the information of the gland region and the BPE tissue in the image well and improves the registration effectiveness in the gland region. In addition, the performance of image registration by affine alignment, the SimpleElastix toolkit, the ANTs SyN toolkit, the model without AFMO and the registration model of the present application was compared; the registration model of the present application achieves the best average Dice value on the same registration test set and strikes a good balance between registration effectiveness and real-time performance.
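As an illustration of how such a test-set comparison could be computed, the sketch below reports mean ± standard deviation of per-case Dice for the two model variants and a paired significance test over the same cases; whether a paired t-test was actually used for the quoted p-value is an assumption, and the variable names are placeholders.

# Test-set Dice summary and paired significance test (illustrative).
import numpy as np
from scipy import stats

def summarize(dice_with_afmo, dice_without_afmo):
    a = np.asarray(dice_with_afmo, dtype=float)
    b = np.asarray(dice_without_afmo, dtype=float)
    print(f"with AFMO:    {a.mean():.3f} +/- {a.std():.3f}")
    print(f"without AFMO: {b.mean():.3f} +/- {b.std():.3f}")
    # Paired test, since both models are evaluated on the same cases.
    t_stat, p_value = stats.ttest_rel(a, b)
    print(f"p-value: {p_value:.3f}")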
It should be understood that, although the steps in the flowcharts of figs. 2-5 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution order of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 2-5 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which need not be performed sequentially; they may be performed in turn or alternately with other steps or with at least some sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a breast image registration apparatus including: a first acquisition module, a second acquisition module, and a first registration module, wherein:
The first acquisition module is used for acquiring a first image and a second image; wherein the first image and the second image are both breast dynamic scanning images, and the breast dynamic scanning image includes a breast and a gland.
The second acquisition module is used for inputting the first image and the second image into a preset registration network and obtaining a deformation field corresponding to the second image through the registration network. The registration network is obtained by the following training method: inputting the first sample image and the second sample image into a preset initial registration network to obtain a registered sample mammary gland image, obtaining a registered sample gland segmentation image according to the registered sample mammary gland image, obtaining a first loss according to the first sample image and the registered sample mammary gland image, obtaining a second loss according to the segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss (an illustrative sketch of one possible combination of the first loss and the second loss is provided after this apparatus description).
And the first registration module is used for obtaining a registered mammary gland image according to the second image and the deformation field.
The breast image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
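As context for the training method recited in the second acquisition module, the sketch below shows one possible way to combine a first loss (intensity similarity between the first sample image and the registered sample mammary gland image) with a second loss (overlap between the gland segmentation images). The mean-squared-error term, the soft-Dice term and the weighting factor are assumptions for illustration only and are not the specific loss functions fixed by this application (PyTorch):

    import torch

    def soft_dice_loss(pred_mask, target_mask, eps=1e-7):
        # Soft Dice loss between the warped gland mask and the reference gland mask.
        inter = (pred_mask * target_mask).sum()
        return 1.0 - (2.0 * inter + eps) / (pred_mask.sum() + target_mask.sum() + eps)

    def registration_loss(first_image, warped_second_image,
                          first_gland_mask, warped_gland_mask,
                          weight_gland=1.0):
        # First loss: intensity similarity between the fixed image and the warped moving image.
        first_loss = torch.mean((first_image - warped_second_image) ** 2)
        # Second loss: gland-mask overlap, i.e. an AFMO-style auxiliary term (assumed form).
        second_loss = soft_dice_loss(warped_gland_mask, first_gland_mask)
        return first_loss + weight_gland * second_loss

In practice the weighting factor would be tuned empirically; this application does not prescribe a particular value here.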
On the basis of the above embodiment, optionally, the first image and the second image are breast dynamic enhanced magnetic resonance images of the examinee acquired at different scanning time phases in the same body position, and the apparatus further includes: a second registration module, wherein:
And the second registration module is used for pre-registering the second image with the first image through three-dimensional affine transformation to obtain a first registration image (an illustrative sketch of such an affine pre-registration is provided after this embodiment).
And the second acquisition module is used for inputting the first image and the first registration image into the registration network and obtaining the deformation field through the registration network.
The breast image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
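The three-dimensional affine pre-registration mentioned above can be illustrated, under the assumption that an affine matrix has already been estimated by a conventional optimisation step, with the following minimal sketch; scipy.ndimage is used purely for demonstration and is not prescribed by this application:

    import numpy as np
    from scipy.ndimage import affine_transform

    def apply_affine_3d(moving_image, matrix, offset):
        """Resample a 3D moving image with a 3x3 affine matrix and an offset.
        The matrix/offset map output voxel coordinates back to input voxel coordinates."""
        return affine_transform(moving_image, matrix, offset=offset, order=1)

    # Hypothetical affine parameters (identity rotation plus a small translation).
    matrix = np.eye(3)
    offset = np.array([0.0, 2.5, -1.0])
    moving = np.random.rand(64, 256, 256).astype(np.float32)
    pre_registered = apply_affine_3d(moving, matrix, offset)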
On the basis of the foregoing embodiment, optionally, the apparatus further includes: a third acquisition module, a third registration module, and a fourth registration module, wherein:
The third acquisition module is used for acquiring a first gland mask image and a second gland mask image; the first gland mask image is a gland mask image corresponding to the first image, and the second gland mask image is a gland mask image corresponding to the second image.
And the third registration module is used for pre-registering the first gland mask image and the second gland mask image through three-dimensional affine transformation to obtain a second registration image.
And the fourth registration module is used for obtaining the registered glandular image according to the second registration image and the deformation field.
The breast image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
On the basis of the foregoing embodiment, optionally, the first registration module includes: a first registration unit, wherein:
And the first registration unit is used for inputting the second image and the deformation field into the spatial transformer model, and obtaining a registered mammary gland image through the spatial transformer model.
The breast image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
On the basis of the foregoing embodiment, optionally, the spatial transformer model includes a grid generator and a sampler, where the first registration unit is configured to input the deformation field into the grid generator to obtain a first mapping relationship between each voxel of the second image and each voxel of the breast image after registration, and to input the first mapping relationship and the second image into the sampler and obtain the registered mammary gland image through the sampler (an illustrative sketch of this grid-generator/sampler behaviour is provided after this embodiment).
The breast image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
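The grid-generator/sampler behaviour described for the spatial transformer model can be approximated with the hedged PyTorch sketch below; the voxel-displacement convention and the normalisation to the [-1, 1] range expected by torch.nn.functional.grid_sample are assumptions about one possible implementation, not the exact construction used in this application:

    import torch
    import torch.nn.functional as F

    def warp_volume(moving, deformation, mode="bilinear"):
        """Warp a volume with a dense deformation field.
        moving:      (N, C, D, H, W) tensor.
        deformation: (N, 3, D, H, W) displacements in voxels, assumed (z, y, x) order.
        """
        n, _, d, h, w = moving.shape
        # Grid generator: identity sampling grid plus the displacement field.
        zs, ys, xs = torch.meshgrid(
            torch.arange(d, dtype=moving.dtype),
            torch.arange(h, dtype=moving.dtype),
            torch.arange(w, dtype=moving.dtype),
            indexing="ij",
        )
        identity = torch.stack((zs, ys, xs), dim=0).unsqueeze(0).to(moving.device)
        coords = identity + deformation                      # absolute voxel coordinates
        # Normalise to [-1, 1] and reorder to (x, y, z) as grid_sample expects.
        sizes = torch.tensor([d - 1, h - 1, w - 1], dtype=moving.dtype, device=moving.device)
        normalised = 2.0 * coords / sizes.view(1, 3, 1, 1, 1) - 1.0
        grid = normalised.permute(0, 2, 3, 4, 1).flip(-1)    # (N, D, H, W, 3), x fastest
        # Sampler: trilinear (or nearest) interpolation at the mapped positions.
        return F.grid_sample(moving, grid, mode=mode, align_corners=True)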
On the basis of the foregoing embodiment, optionally, the fourth registration module includes: a second registration unit, wherein:
And the second registration unit is used for inputting the second registration image and the deformation field into the spatial transformer model, and obtaining a registered glandular image through the spatial transformer model.
The breast image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
On the basis of the foregoing embodiment, optionally, the spatial transformer model includes a grid generator and a sampler, and the second registration unit is configured to input the deformation field into the grid generator to obtain a second mapping relationship between each voxel of the second registration image and each voxel of the registered gland image, and to input the second mapping relationship and the second registration image into the sampler and obtain the registered gland image through the sampler (a short snippet illustrating how the gland mask can be warped with nearest-neighbour sampling is provided after this embodiment).
The breast image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
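When the same deformation field is applied to the pre-registered gland mask, nearest-neighbour sampling is one common way to keep the warped mask binary. The snippet below reuses the hypothetical warp_volume helper from the earlier sketch and is again only an assumption about one possible implementation:

    # moving_gland_mask: (N, 1, D, H, W) binary tensor; deformation: (N, 3, D, H, W).
    warped_gland_mask = warp_volume(moving_gland_mask.float(), deformation, mode="nearest")
    warped_gland_mask = (warped_gland_mask > 0.5).float()    # keep the warped mask strictly binary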
On the basis of the foregoing embodiment, optionally, the loss of the initial registration network further includes a smoothing loss, and the apparatus further includes: a fifth registration module, a sixth registration module, a fourth acquisition module, a fifth acquisition module, a sixth acquisition module, a seventh acquisition module, an eighth acquisition module, and a training module, wherein:
and the fifth registration module is used for pre-registering the first sample image and the second sample image through three-dimensional affine transformation to obtain a first sample registration image.
And the sixth registration module is used for pre-registering the gland mask image corresponding to the first sample image and the gland mask image corresponding to the second sample image through three-dimensional affine transformation to obtain a second sample registration image.
And the fourth acquisition module is used for inputting the first sample image and the first sample registration image into the initial registration network, and obtaining a sample deformation field and a smoothing loss corresponding to the first sample registration image through the initial registration network.
And the fifth acquisition module is used for inputting the first sample registration image and the sample deformation field into the spatial transformer model and obtaining a registered sample mammary gland image through the spatial transformer model.
And the sixth acquisition module is used for inputting the second sample registration image and the sample deformation field into the spatial transformer model and obtaining a registered sample gland segmentation image through the spatial transformer model.
And the seventh obtaining module is used for obtaining the first loss according to the registered sample breast image and the first sample image.
And the eighth obtaining module is used for obtaining a second loss according to the registered sample gland segmentation image and the segmentation image corresponding to the first sample image.
And the training module is used for training the initial registration network according to the smoothing loss, the first loss and the second loss to obtain the registration network (an illustrative sketch of a single training step combining these losses is provided after this apparatus description).
The breast image registration apparatus provided in this embodiment may implement the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
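Tying the training modules above together, the following hedged sketch outlines a single training step that combines the first loss, the second loss and the smoothing loss. The gradient-penalty form of the smoothing loss, the loss weights and the two-argument registration-network interface are illustrative assumptions, and the warp_volume and soft_dice_loss helpers are the hypothetical ones defined in the earlier sketches:

    import torch

    def smoothing_loss(deformation):
        # Penalise spatial gradients of the deformation field (one common, assumed choice).
        dz = torch.abs(deformation[:, :, 1:, :, :] - deformation[:, :, :-1, :, :])
        dy = torch.abs(deformation[:, :, :, 1:, :] - deformation[:, :, :, :-1, :])
        dx = torch.abs(deformation[:, :, :, :, 1:] - deformation[:, :, :, :, :-1])
        return dz.mean() + dy.mean() + dx.mean()

    def training_step(net, optimizer, first_image, first_sample_reg, second_sample_reg,
                      first_gland_mask, w_smooth=0.1, w_gland=1.0):
        # net maps the (fixed, moving) pair to a dense deformation field (N, 3, D, H, W).
        optimizer.zero_grad()
        deformation = net(first_image, first_sample_reg)
        warped_breast = warp_volume(first_sample_reg, deformation)                  # registered sample breast image
        warped_gland = warp_volume(second_sample_reg, deformation, mode="nearest")  # registered sample gland segmentation
        loss = (torch.mean((first_image - warped_breast) ** 2)                      # first loss
                + w_gland * soft_dice_loss(warped_gland, first_gland_mask)          # second loss
                + w_smooth * smoothing_loss(deformation))                           # smoothing loss
        loss.backward()
        optimizer.step()
        return loss.item()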
For the specific definition of the breast image registration apparatus, reference may be made to the above definition of the breast image registration method, which is not described herein again. The modules in the breast image registration apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor of the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke them and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a first image and a second image; wherein the first image and the second image are both breast dynamic scanning images; the breast dynamic scanning image comprises a breast and a gland;
inputting the first image and the second image into a preset registration network, and obtaining a deformation field corresponding to the second image through the registration network; the registration network is obtained by the following training method: inputting the first sample image and the second sample image into a preset initial registration network to obtain a registered sample mammary gland image, obtaining a registered sample gland segmentation image according to the registered sample mammary gland image, obtaining a first loss according to the first sample image and the registered sample mammary gland image, obtaining a second loss according to the segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss;
and obtaining a registered mammary gland image according to the second image and the deformation field.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a first image and a second image; wherein the first image and the second image are both breast dynamic scanning images; the breast dynamic scanning image comprises a breast and a gland;
inputting the first image and the second image into a preset registration network, and obtaining a deformation field corresponding to the second image through the registration network; the registration network is obtained by the following training method: inputting the first sample image and the second sample image into a preset initial registration network to obtain a registered sample mammary gland image, obtaining a registered sample gland segmentation image according to the registered sample mammary gland image, obtaining a first loss according to the first sample image and the registered sample mammary gland image, obtaining a second loss according to the segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss;
and obtaining a registered mammary gland image according to the second image and the deformation field.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory can include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A breast image registration method, the method comprising:
acquiring a first image and a second image; wherein the first image and the second image are both breast dynamic scanning images; the breast dynamic scanning image comprises a breast and a gland;
inputting the first image and the second image into a preset registration network, and obtaining a deformation field corresponding to the second image through the registration network; wherein the registration network is obtained by the following training method: inputting a first sample image and a second sample image into a preset initial registration network to obtain a registered sample mammary image, obtaining a registered sample gland segmentation image according to the registered sample mammary image, obtaining a first loss according to the first sample image and the registered sample mammary image, obtaining a second loss according to a segmentation image corresponding to the first sample image and the registered sample gland segmentation image, and training the initial registration network according to the first loss and the second loss;
and obtaining a registered mammary gland image according to the second image and the deformation field.
2. The method according to claim 1, wherein the first image and the second image are breast dynamic enhanced magnetic resonance images of the examinee acquired at different scanning time phases in the same body position; before the first image and the second image are input into a preset registration network and a deformation field corresponding to the second image is obtained through the registration network, the method further includes:
pre-registering the second image and the first image through three-dimensional affine transformation to obtain a first registered image;
the inputting the first image and the second image into a preset registration network, and obtaining a deformation field corresponding to the second image through the registration network includes:
and inputting the first image and the first registration image into the registration network, and obtaining the deformation field through the registration network.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring a first gland mask image and a second gland mask image; the first gland mask image is a gland mask image corresponding to the first image, and the second gland mask image is a gland mask image corresponding to the second image;
pre-registering the first gland mask image and the second gland mask image through three-dimensional affine transformation to obtain a second registered image;
and obtaining a registered glandular image according to the second registered image and the deformation field.
4. The method of claim 1, wherein deriving a registered breast image from the second image and the deformation field comprises:
and inputting the second image and the deformation field into a spatial transformer model, and obtaining the registered mammary gland image through the spatial transformer model.
5. The method of claim 4, wherein the spatial transformer model comprises a grid generator and a sampler, and wherein inputting the second image and the deformation field into a spatial transformer model from which the registered breast image is derived comprises:
inputting the deformation field into the grid generator to obtain a first mapping relation between each voxel of the second image and each voxel of the breast image after registration;
and inputting the first mapping relation and the second image into the sampler, and obtaining the registered mammary gland image through the sampler.
6. The method of claim 3, wherein obtaining a registered gland image from the second registered image and the deformation field comprises:
and inputting the second registration image and the deformation field into a spatial transformer model, and obtaining the registered glandular image through the spatial transformer model.
7. The method of claim 6, wherein the spatial transformer model comprises a grid generator and a sampler, and wherein inputting the second registered image and the deformation field into a spatial transformer model from which the registered gland image is derived comprises:
inputting the deformation field into the grid generator to obtain a second mapping relation between each voxel of the second registration image and each voxel of the registered gland image;
and inputting the second mapping relation and the second registration image into the sampler, and obtaining the registered gland image through the sampler.
8. The method according to any one of claims 1-7, wherein the loss of the initial registration network further comprises a smoothing loss, and the training process of the registration network comprises:
pre-registering the first sample image and the second sample image through three-dimensional affine transformation to obtain a first sample registration image;
pre-registering the gland mask image corresponding to the first sample image and the gland mask image corresponding to the second sample image through three-dimensional affine transformation to obtain a second sample registration image;
inputting the first sample image and the first sample registration image into the initial registration network, and obtaining a sample deformation field and the smoothing loss corresponding to the first sample registration image through the initial registration network;
inputting the first sample registration image and the sample deformation field into the spatial transformer model, and obtaining a registered sample mammary gland image through the spatial transformer model;
inputting the second sample registration image and the sample deformation field into the spatial transformer model, and obtaining the registered sample gland segmentation image through the spatial transformer model;
obtaining the first loss according to the registered sample breast image and the first sample image;
obtaining the second loss according to the registered sample gland segmentation image and the segmentation image corresponding to the first sample image;
and training the initial registration network according to the smoothing loss, the first loss and the second loss to obtain the registration network.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202110663128.6A 2021-06-15 2021-06-15 Mammary gland image registration method, computer device and storage medium Pending CN113610752A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110663128.6A CN113610752A (en) 2021-06-15 2021-06-15 Mammary gland image registration method, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110663128.6A CN113610752A (en) 2021-06-15 2021-06-15 Mammary gland image registration method, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN113610752A true CN113610752A (en) 2021-11-05

Family

ID=78336543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110663128.6A Pending CN113610752A (en) 2021-06-15 2021-06-15 Mammary gland image registration method, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN113610752A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511599A (en) * 2022-01-20 2022-05-17 推想医疗科技股份有限公司 Model training method and device, medical image registration method and device
CN115908515A (en) * 2022-11-11 2023-04-04 北京百度网讯科技有限公司 Image registration method, and training method and device of image registration model
CN115908515B (en) * 2022-11-11 2024-02-13 北京百度网讯科技有限公司 Image registration method, training method and device of image registration model
CN116229218A (en) * 2023-05-09 2023-06-06 之江实验室 Model training and image registration method and device
CN116229218B (en) * 2023-05-09 2023-08-04 之江实验室 Model training and image registration method and device

Similar Documents

Publication Publication Date Title
CN109978037B (en) Image processing method, model training method, device and storage medium
CN109523584B (en) Image processing method and device, multi-modality imaging system, storage medium and equipment
CN113610752A (en) Mammary gland image registration method, computer device and storage medium
CN107886508B (en) Differential subtraction method and medical image processing method and system
Martın-Fernández et al. An approach for contour detection of human kidneys from ultrasound images using Markov random fields and active contours
US9361686B2 (en) Method and apparatus for the assessment of medical images
Hashimoto et al. Automated segmentation of 2D low-dose CT images of the psoas-major muscle using deep convolutional neural networks
JP2023540910A (en) Connected Machine Learning Model with Collaborative Training for Lesion Detection
CN110827335B (en) Mammary gland image registration method and device
CN109767448B (en) Segmentation model training method and device
CN113487536A (en) Image segmentation method, computer device and storage medium
CN113888566B (en) Target contour curve determination method and device, electronic equipment and storage medium
CN112150571A (en) Image motion artifact eliminating method, device, equipment and storage medium
CN114943690A (en) Medical image processing method, device, computer equipment and readable storage medium
CN111243052A (en) Image reconstruction method and device, computer equipment and storage medium
US8805122B1 (en) System, method, and computer-readable medium for interpolating spatially transformed volumetric medical image data
CN117911432A (en) Image segmentation method, device and storage medium
CN110852993B (en) Imaging method and device under action of contrast agent
CN113129418A (en) Target surface reconstruction method, device, equipment and medium based on three-dimensional image
CN113129297B (en) Diameter automatic measurement method and system based on multi-phase tumor image
CN116128895A (en) Medical image segmentation method, apparatus and computer readable storage medium
CN110310314A (en) Method for registering images, device, computer equipment and storage medium
Karani et al. An image interpolation approach for acquisition time reduction in navigator-based 4D MRI
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN113222987B (en) Magnetic resonance imaging vascular wall enhancement intensity mapping method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination