CN111275749A - Image registration and neural network training method and device - Google Patents


Info

Publication number
CN111275749A
Authority
CN
China
Prior art keywords
image
registration
neural network
registered
training sample
Prior art date
Legal status
Granted
Application number
CN202010071043.4A
Other languages
Chinese (zh)
Other versions
CN111275749B (en)
Inventor
王军搏
韩冬
Current Assignee
Neusoft Medical Systems Co Ltd
Original Assignee
Shenyang Advanced Medical Equipment Technology Incubation Center Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenyang Advanced Medical Equipment Technology Incubation Center Co Ltd
Priority to CN202010071043.4A
Publication of CN111275749A
Application granted
Publication of CN111275749B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image registration method and device, a neural network training method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring a plurality of groups of training sample pairs, wherein each group of training sample pairs comprises two medical images acquired under different scanning conditions; for each group of training sample pairs, selecting one of the two medical images as a registration reference image and the other as an image to be registered, and inputting them into a first neural network; performing spatial transformation on the image to be registered through the registration information output by the first neural network to obtain a first prediction registration image; inputting the first prediction registration image and the image to be registered into a second neural network, wherein the second neural network is used for estimating registration information for registering the first prediction registration image to the image to be registered; and adjusting network parameters of the first neural network and the second neural network according to the registration information, the first neural network and the second neural network sharing the network parameters. Registering images with a neural network obtained by this method improves the accuracy of image registration.

Description

Image registration and neural network training method and device
Technical Field
The invention relates to the technical field of medical imaging, in particular to an image registration and neural network training method and device, electronic equipment and a storage medium.
Background
Image registration refers to the process of matching medical images of a measured object that were acquired under different scanning conditions, such as different acquisition times, acquisition positions, acquisition devices, or device parameters, according to their corresponding spatial positions or structural relationships.
In conventional image registration technology, features such as feature points, edges, and contours are usually extracted from the two images to be registered, matched feature pairs are found by computing the similarity of these features, the spatial positions of the feature pairs are mapped onto the image to be registered, and the image to be registered is spatially transformed to obtain the registered image. Because the conventional approach requires manually selecting certain features as the basis of registration, the final accuracy and speed depend directly on the quality of the feature selection and feature matching algorithms, and such methods are often not applicable across different types of registration tasks, which limits them.
Disclosure of Invention
The invention provides an image registration method and device, a neural network training method and device, electronic equipment and a storage medium, and aims to improve the accuracy of image registration.
Specifically, the invention is realized by the following technical scheme:
in a first aspect, a neural network training method for image registration is provided, the neural network training method including:
acquiring a training sample set, wherein the training sample set comprises a plurality of groups of training sample pairs, each group of training sample pairs comprises two medical images, and the two medical images are acquired under different scanning conditions;
for each group of training sample pairs, selecting one of the two medical images as a registration reference image and the other as an image to be registered, and inputting them into a first neural network, wherein the first neural network is used for estimating registration information for registering the image to be registered to the registration reference image;
performing spatial transformation on the image to be registered through the registration information output by the first neural network to obtain a first prediction registration image;
inputting the first prediction registration image and the image to be registered into a second neural network, wherein the second neural network is used for estimating registration information of the first prediction registration image to the image to be registered;
adjusting network parameters of the first neural network and the second neural network according to the registration information, the network parameters being shared by the first neural network and the second neural network;
and carrying out image registration on the medical image through the trained neural network.
Optionally, adjusting network parameters of the first neural network and the second neural network according to the registration information comprises:
performing spatial transformation on the first prediction registration image through registration information output by the second neural network to obtain a second prediction registration image;
determining a loss error of the second prediction registration image and the image to be registered and a loss error of the first prediction registration image and the registration reference image;
and adjusting the network parameters according to the loss error.
Optionally, the neural network training method further includes:
and exchanging, for the two medical images in a training sample pair, the roles of registration reference image and image to be registered, and inputting the resulting pair into the first neural network as a new training sample pair.
Optionally, the first neural network and the second neural network each comprise a first type of convolutional layer unit and a second type of convolutional unit;
the estimating, by the first neural network, of registration information for registering the image to be registered to the registration reference image comprises:
performing convolution operation on the image to be registered and the registration reference image based on the first type of convolution layer unit to obtain a feature map of the image to be registered and the registration reference image;
and performing deconvolution operation on the feature map based on the second convolution unit to obtain registration information of the registration of the image to be registered to the registration reference image.
In a second aspect, another neural network training method for image registration is provided, the neural network training method including:
acquiring a training sample set, wherein the training sample set comprises a plurality of groups of training sample pairs, each group of training sample pairs comprises two medical images, and the two medical images are acquired under different scanning conditions;
for each group of training sample pairs, selecting one of the two medical images as a registration reference image and the other as an image to be registered, and inputting them into a neural network, wherein the neural network is used for estimating registration information for registering the image to be registered to the registration reference image;
performing spatial transformation on the image to be registered through the registration information output by the neural network to obtain a predicted registration image;
determining loss errors of the predicted registration image and the registration reference image, and adjusting network parameters of the neural network according to the loss errors;
and carrying out image registration on the medical image through the trained neural network.
In a third aspect, an image registration method is provided, which includes:
inputting an image to be registered and a registration reference image into a neural network, wherein the neural network is trained using any one of the above neural network training methods for image registration;
and performing spatial transformation on the image to be registered through the registration information output by the neural network to realize the registration of the image to be registered to the registration reference image.
In a fourth aspect, a neural network training device for image registration is provided, the neural network training device comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a training sample set, the training sample set comprises a plurality of groups of training sample pairs, each group of training sample pairs comprises two medical images, and the two medical images are acquired under different scanning conditions;
an input module, configured to, for each group of training sample pairs, select one of the two medical images as a registration reference image and the other as an image to be registered, and input them into a first neural network, where the first neural network is configured to estimate registration information for registering the image to be registered to the registration reference image;
the spatial transformation module is used for carrying out spatial transformation on the image to be registered through the registration information output by the first neural network to obtain a first prediction registration image;
the input module is further configured to input the first predicted registration image and the image to be registered into a second neural network, where the second neural network is configured to estimate registration information of the first predicted registration image registered to the image to be registered;
an adjustment module for adjusting network parameters of the first neural network and the second neural network according to the registration information, the first neural network and the second neural network sharing the network parameters;
and carrying out image registration on the medical image through the trained neural network.
Optionally, the spatial transformation module is further configured to perform spatial transformation on the first predicted registration image through the registration information output by the second neural network to obtain a second predicted registration image;
the adjustment module includes:
a determining unit, configured to determine a loss error between the second predicted registration image and the image to be registered, and a loss error between the first predicted registration image and the registration reference image;
and the adjusting unit is used for adjusting the network parameters according to the loss error.
Optionally, the neural network training device further includes:
an exchange module, configured to exchange, for the two medical images in a training sample pair, the roles of registration reference image and image to be registered, and to input the resulting pair into the first neural network as a new training sample pair.
Optionally, the first neural network and the second neural network each comprise a first type of convolutional layer unit and a second type of convolutional unit;
performing convolution operation on the image to be registered and the registration reference image based on the first type of convolution layer unit to obtain a feature map of the correlation between the image to be registered and the registration reference image;
and performing deconvolution operation on the feature map based on the second convolution unit to obtain registration information of the registration of the image to be registered to the registration reference image.
In a fifth aspect, another neural network training apparatus for image registration is provided, the neural network training apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a training sample set, the training sample set comprises a plurality of groups of training sample pairs, each group of training sample pairs comprises two medical images, and the two medical images are acquired under different scanning conditions;
an input module, configured to select one medical image from the two medical images as a registration reference image and the other medical image as an image to be registered, and input the selected medical image and the other medical image into a neural network, where the neural network is configured to estimate registration information for registering the image to be registered to the registration reference image;
the spatial transformation module is used for carrying out spatial transformation on the image to be registered through the registration information output by the neural network to obtain a predicted registration image;
the adjusting module is used for determining loss errors of the prediction registration image and the registration reference image and adjusting network parameters of the neural network according to the loss errors;
and carrying out image registration on the medical image through the trained neural network.
In a sixth aspect, there is provided an image registration apparatus comprising:
the input module is used for inputting the image to be registered and the registration reference image into a neural network, and the neural network is obtained by training any one of the neural network training devices for image registration;
and the spatial transformation module is used for carrying out spatial transformation on the image to be registered through the registration information output by the neural network so as to realize the registration of the image to be registered to the registration reference image.
In a seventh aspect, an electronic device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the neural network training method for image registration according to any one of the above items when executing the computer program.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
during network training, two neural networks are trained simultaneously: the first neural network realizes registration of the image to be registered to the registration reference image, and the second neural network registers the predicted registration image produced by the first neural network back to the image to be registered. On the one hand, registration of the image to be registered to the registration reference image is achieved while the feature information of the image to be registered is preserved, so that the trained neural network can compute more accurate registration information and the accuracy is greatly improved; on the other hand, the generated images are used to further optimize the neural network in addition to the existing training sample pairs, which further improves the accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a neural network training method for image registration in accordance with an exemplary embodiment of the present invention;
fig. 2a is a schematic structural diagram of a network architecture of a registration model according to an exemplary embodiment of the present invention;
FIG. 2b is an architectural diagram of a neural network shown in an exemplary embodiment of the invention;
fig. 2c is a schematic structural diagram of a network architecture of a trained registration model according to an exemplary embodiment of the present invention;
FIG. 3 is a flowchart illustrating step 106 of FIG. 1 in accordance with an exemplary embodiment of the present invention;
fig. 4 is a schematic structural diagram illustrating a network architecture of another registration model according to an exemplary embodiment of the present invention;
FIG. 5 is a flow chart illustrating another neural network training method for image registration in accordance with an exemplary embodiment of the present invention;
FIG. 6 is a block diagram of a neural network training apparatus for image registration in accordance with an exemplary embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
Image registration refers to the process of matching medical images acquired under different scanning conditions, such as different scanning periods, different scanning positions, different scanning devices (CT devices, MR devices, and the like), or different device parameters, according to their corresponding spatial positions or structural relationships. For example, in a magnetic resonance spine scan, owing to the limited scanning range of the device, a scan of the whole spine can only be completed in 3 to 4 separate acquisitions, yet during diagnosis the physician expects to observe the condition of the whole spine directly in a single image. Registration technology is then applied to match the segmented spine images according to their anatomical positions; that is, the overlapping parts of two medical images are stitched together after registration to obtain a medical image of the whole spine, which is convenient for diagnosis.
In the related art, image registration usually requires extracting features such as feature points, edges, and contours from the two images to be registered, finding matched feature pairs by computing the similarity of these features, mapping the spatial positions of the feature pairs onto the image to be registered, and spatially transforming the image to be registered to obtain the registered image. Because certain features must be selected manually as the basis of registration, the final accuracy and speed depend directly on the quality of the feature selection and feature matching algorithms, and the approach is not general across different types of registration tasks, which limits its application.
Based on the above situation, the embodiment of the invention provides a neural network training method for image registration, which does not need to artificially select some features as the basis of image registration, and can realize more accurate image registration.
Fig. 1 is a flowchart illustrating a neural network training method for image registration according to an exemplary embodiment of the present invention, including the following steps:
and 101, building a network architecture of the registration model.
Referring to fig. 2a, in this embodiment the constructed network architecture includes two neural networks, referred to for convenience as a first neural network and a second neural network. Each neural network has two inputs, an image to be registered and a registration reference image, and outputs registration information for registering the image to be registered to the registration reference image. The two neural networks share network parameters during network training.
Fig. 2b is a schematic diagram of the architecture of a neural network according to an exemplary embodiment of the present invention. The neural network includes a first type of convolution unit and a second type of convolution unit, each of which consists of a plurality of cascaded convolution units; the number of convolution units can be set according to actual requirements. Each convolution unit of the first type comprises a number of convolution layers and an activation function. Each convolution unit of the second type comprises a deconvolution layer, a plurality of convolution layers, and an activation function. The number shown for each convolution unit in fig. 2b indicates the size and number of images output by that unit; for example, "256 × 64" denotes 64 output images of size 256 × 256, and "32 × 64" denotes 64 output images with a size of 32 × 32.
The first type of convolution unit is used to perform layer-by-layer convolution operations on the image to be registered and the registration reference image, so as to extract features and find matched feature pairs (feature maps) between the two images. Specifically, referring to fig. 2b, suppose the first type of convolution unit contains 4 cascaded convolution units. When an image to be registered and a registration reference image of size 256 × 256 are input, the first convolution unit convolves the two input images with a number of convolution kernels and extracts feature pairs. The layer-by-layer convolution is a resolution-reducing process. Taking the example in fig. 2b in which each convolution unit contains 64 convolution kernels: the first convolution unit produces 64 feature maps of size 256 × 256, which are input into the second convolution unit; the second convolution unit produces 64 feature maps of size 128 × 128, which are input into the third convolution unit; the third convolution unit produces 64 feature maps of size 64 × 64, which are input into the fourth convolution unit; and the fourth convolution unit produces 64 feature maps of size 32 × 32, which are input into the second type of convolution unit.
The second type of convolution unit is used to perform layer-by-layer deconvolution operations on the feature maps, so as to merge the feature pairs and refine the estimate, yielding the registration information for registering the image to be registered to the registration reference image. Referring to fig. 2b, the second type of convolution unit contains 4 cascaded convolution units corresponding to those of the first type, and the registration information is obtained by performing layer-by-layer deconvolution on the input 64 feature maps of size 32 × 32. The layer-by-layer deconvolution is a resolution-raising process. The registration information output by the neural network may be, but is not limited to, a displacement field for registering the image to be registered to the registration reference image.
It should be noted that the size and the number of images output by each convolution unit can be set by setting the size and the number of convolution kernels according to actual requirements.
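For readers who want a concrete picture of such an encoder-decoder registration network, the following sketch shows one possible organization in PyTorch. It is an illustration only, not the patented design: the class name, channel counts, kernel sizes, activation functions, and the use of 2D layers are all assumptions chosen to mirror the 256 to 128 to 64 to 32 and back to 256 progression described above.

import torch
import torch.nn as nn

class RegistrationNet(nn.Module):
    """Hypothetical encoder-decoder mapping a (moving, reference) image pair to a
    2-channel displacement field; an illustration, not the patent's exact network."""
    def __init__(self, channels=64, levels=4):
        super().__init__()
        enc, dec = [], []
        in_ch = 2  # moving image and reference image stacked as two channels
        for i in range(levels):
            # "first type" convolution unit: convolutions + activation,
            # halving the resolution after the first level (256 -> 128 -> 64 -> 32)
            enc.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, 3, stride=1 if i == 0 else 2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True)))
            in_ch = channels
        for _ in range(levels - 1):
            # "second type" convolution unit: deconvolution + convolutions,
            # raising the resolution back (32 -> 64 -> 128 -> 256)
            dec.append(nn.Sequential(
                nn.ConvTranspose2d(channels, channels, 2, stride=2),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True)))
        self.encoder = nn.ModuleList(enc)
        self.decoder = nn.ModuleList(dec)
        # final layer emits the horizontal and vertical displacement fields
        self.head = nn.Conv2d(channels, 2, 3, padding=1)

    def forward(self, moving, reference):
        x = torch.cat([moving, reference], dim=1)
        for block in self.encoder:
            x = block(x)
        for block in self.decoder:
            x = block(x)
        return self.head(x)  # displacement field of shape (N, 2, 256, 256)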
Step 102, acquiring a training sample set.
The training sample set includes a plurality of sets of training sample pairs, each set of training sample pairs includes two medical images, the two medical images are obtained under different scanning conditions, and the different scanning conditions may be, for example, different scanning periods, different scanning positions, different scanning devices (CT devices, MR devices, etc.), different device parameters, and the like. The two medical images in each set of training samples are preferably images for the same anatomical location, or medical images with overlapping portions of the anatomical location.
Step 103, for each group of training sample pairs, selecting one of the two medical images as a registration reference image and the other as an image to be registered, and inputting both into a first neural network.
Taking as an example a training sample pair that includes a medical image A and a medical image B, and referring to fig. 2a, image A may be selected as the image to be registered and image B as the registration reference image, and the two are input into the first neural network.
Referring to fig. 2b, if a 256 × 256 image to be registered and a 256 × 256 registration reference image are input, layer-by-layer feature extraction and resolution reduction in the first type of convolution unit yield 64 feature maps reduced to size 32 × 32; the resolution-raising estimation performed by the second type of convolution unit then outputs 64 feature maps of size 256 × 256. These 64 feature maps are combined to generate two displacement-field images of size 256 × 256, one being the horizontal displacement field and the other the vertical displacement field.
Step 104, performing spatial transformation on the image to be registered through the registration information output by the first neural network to obtain a first prediction registration image.
Performing spatial transformation on the image to be registered through the registration information means moving each pixel of the image to be registered according to the horizontal and vertical displacement values in the displacement field; the result of this operation is the first predicted registration image (image A' in fig. 2a).
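Purely as an illustration of such a displacement-field warp (not taken from the patent; the backward-sampling formulation and bilinear interpolation are assumptions reflecting common practice), the spatial transformation could be sketched as follows:

import torch
import torch.nn.functional as F

def warp(moving, displacement):
    """Hypothetical warp: moving is (N, 1, H, W); displacement is (N, 2, H, W)
    holding horizontal and vertical shifts in pixel units."""
    _, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    # shifted sampling positions, normalized to [-1, 1] as grid_sample expects
    new_x = 2.0 * (xs + displacement[:, 0]) / (w - 1) - 1.0
    new_y = 2.0 * (ys + displacement[:, 1]) / (h - 1) - 1.0
    grid = torch.stack((new_x, new_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(moving, grid, align_corners=True)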
Step 105, inputting the first prediction registration image and the image to be registered into a second neural network.
The second neural network is used for estimating registration information for registering the first prediction registration image to the image to be registered. The process by which the second neural network estimates registration information is similar to that of the first neural network in step 104; the difference lies only in which images play the roles of image to be registered and registration reference image. Here, the image to be registered (now serving as the registration reference image of the second neural network) and the predicted registration image obtained from the first neural network (now serving as the image to be registered of the second neural network) are input into the second neural network. Referring to fig. 2a, the image to be registered A is taken as the registration reference image of the second neural network, and the first predicted registration image A' is taken as the image to be registered of the second neural network; both are input into the second neural network for image registration. For the specific implementation, refer to step 104, which is not repeated here.
Step 106, adjusting network parameters of the first neural network and the second neural network according to the registration information.
Wherein the registration information output by the second neural network is a displacement field, similar to the registration information output by the first neural network.
Steps 103 to 106 are repeated until the training stop condition is met, at which point network training ends. The training stop condition may be, but is not limited to, the number of iterations reaching a preset number, or the loss error falling below a preset error threshold. The trained neural network can then automatically estimate the registration information between two images from their differences, thereby realizing image registration. It can be understood that the registration information can be obtained by inputting the 2 medical images into a single neural network; the two neural networks are needed only during model training.
In this embodiment, two neural networks are trained simultaneously during network training: the first neural network realizes registration of the image to be registered to the registration reference image, and the second neural network registers the predicted registration image produced by the first neural network back to the image to be registered. On the one hand, registration of the image to be registered to the registration reference image is achieved while the feature information of the image to be registered is preserved, so that the trained neural network can compute more accurate registration information; on the other hand, the generated images are used to further optimize the neural network in addition to the existing training sample pairs, which improves the accuracy.
In another embodiment, a spatial transformation module may be cascaded at the output of the trained neural network. Referring to fig. 2c, the spatial transformation module performs spatial transformation on the image to be registered through the registration information output by the neural network, thereby obtaining the registered image. Taking registration of image A to image B as an example, image A is the image to be registered and image B is the registration reference image; image A and image B are input into the neural network shown in fig. 2c, and after the spatial transformation, the image A' output by the model is the image obtained by registering image A to image B.
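As a purely illustrative sketch of this inference stage (the class, function, file, and variable names reuse the hypothetical ones from the earlier snippets and are not taken from the patent):

import torch

# image_a: image to be registered, image_b: registration reference image,
# both assumed to be (1, 1, 256, 256) tensors; the checkpoint name is made up.
model = RegistrationNet()
model.load_state_dict(torch.load("registration_net.pt"))
model.eval()

with torch.no_grad():
    displacement = model(image_a, image_b)      # registration information (displacement field)
    registered_a = warp(image_a, displacement)  # image A registered to image B, as in fig. 2c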
In step 106, the network parameters are adjusted based on the registration information. In one embodiment, registration information may be used as the gold standard for training: the registration information of each group of training sample pairs is calculated by other methods, the loss error of the neural network is determined from the registration information output by the neural network and the calculated registration information, and the network parameters of the neural network are adjusted according to this loss error.
In another embodiment, the registration reference image may be used as the gold standard for training when adjusting the network parameters in step 106. Fig. 3 is a flowchart of step 106 in fig. 1 according to an exemplary embodiment of the present invention; referring to fig. 3, step 106 specifically includes the following steps:
and 106-1, carrying out spatial transformation on the first prediction registration image through the registration information output by the second neural network to obtain a second prediction registration image.
Take as an example a training sample pair comprising an image A and an image B, where image A is the image to be registered and image B is the registration reference image. Referring to fig. 2a, spatial transformation is performed on the image to be registered A through the registration information output by the first neural network to obtain the first prediction registration image A'; spatial transformation is then performed on the first prediction registration image A' through the registration information output by the second neural network to obtain the second prediction registration image A''.
Step 106-2, determining a first loss error between the first prediction registration image and the registration reference image, and a second loss error between the second prediction registration image and the image to be registered.
The first loss error may be expressed, but is not limited to, as follows:
[The formula appears as an image (BDA0002377335330000121) in the original publication and is not reproduced here.]
wherein L_S(S(A), A) represents the first loss error; S(A) represents the image A' obtained by registering the image to be registered A; and B represents the registration reference image. By minimizing L_S(S(A), A), the first neural network learns the inter-pixel displacement relationship from the input images, so that the registered image A' gradually approaches the registration reference image B.
The second loss error may be expressed, but is not limited to, as follows:
[The formula appears as an image (BDA0002377335330000122) in the original publication and is not reproduced here.]
wherein L_S(S(A'), A') represents the second loss error; S(A') represents the registered image A'' obtained by registering the image A'. By minimizing L_S(S(A'), A'), the second neural network learns the inter-pixel displacement relationship from the input images, so that the registered image A'' gradually approaches the image A.
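The concrete form of L_S is given only in the formula images above and is not reproduced here. Purely as a hedged illustration, and not as the formula actually used in the patent, a pixel-wise mean-squared-error choice over the N pixels of the images would read:

L_S(S(A), A) = (1/N) * Σ_i (S(A)_i - B_i)^2;
L_S(S(A'), A') = (1/N) * Σ_i (S(A')_i - A_i)^2;

where the sums run over all pixels i. Under such a choice, minimizing the first term drives the registered image A' toward the registration reference image B, and minimizing the second term drives A'' back toward the original image to be registered A, which matches the behaviour described above.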
Step 106-3, adjusting the network parameters according to the first loss error and the second loss error.
Since the first neural network and the second neural network share network parameters, the two loss errors can be weighted and summed in step 106-3, and the network parameters of the two neural networks are adjusted according to the weighted result.
The weighting result may be expressed, but is not limited to, as follows:
L_S = γ * L_S(S(A), A) + (1 - γ) * L_S(S(A′), A′);
the weighting coefficient γ can be set according to actual requirements, for example, to 0.5.
In this embodiment, the registration reference image serves as the gold standard for network training and the network is trained in an unsupervised manner: the registration information of the two medical images in a training sample pair does not need to be calculated, training samples are easy to obtain, and network training is faster and more accurate. Since training samples with annotation information are not needed, the method is suitable for registering various types of images and has good generality.
In order to increase the number of training samples, in another embodiment a training sample pair in which the registration reference image and the image to be registered have been exchanged may also be used as a new training sample pair and input into the registration model for model training. Again taking a training sample pair comprising a medical image A and a medical image B as an example: first, medical image A is selected as the image to be registered and medical image B as the registration reference image, and they are input into the neural network for network training; this process is called forward registration. Then, medical image B is selected as the image to be registered and medical image A as the registration reference image, and they are input into the neural network for network training; this process is called reverse registration. Forward registration and reverse registration are implemented in the same way; the difference is only that the roles of the image to be registered and the registration reference image are exchanged. The training sample set is thereby used more fully, and the accuracy of the registration calculation is improved.
In another embodiment, a network architecture with 4 neural networks may be built in step 101, and during network training the 4 neural networks share network parameters. Referring to fig. 4, the upper two neural networks are used to achieve forward registration, and the lower two neural networks achieve reverse registration. During network training, the two images in each training sample pair are input into the upper and lower neural networks simultaneously; note that the roles of image to be registered and registration reference image assigned to the two images differ between the upper and lower networks. Again taking a training sample pair comprising a medical image A and a medical image B as an example: medical image A is selected as the image to be registered and medical image B as the registration reference image, and they are input into the upper first neural network; medical image B is selected as the image to be registered and medical image A as the registration reference image, and they are input into the lower first neural network; network training is then performed. Taking the result of the spatial transformation as the gold standard of network training, the loss function of the model is constructed as follows:
L_S = a * L_S(S(A), A) + b * L_S(S(A′), A′) + c * L_S(S(B), B) + d * L_S(S(B′), B′);
the weighting coefficients a, b, c and d can be set according to actual requirements.
In this embodiment, the loss function of the registration model includes the loss functions of the 4 neural networks, so that the registration information obtained by registering image A to image B and the registration information obtained by registering image B to image A tend to be consistent.
Fig. 5 shows another neural network training method for image registration according to an exemplary embodiment of the present invention. It is basically the same as the neural network training method shown in fig. 1, except that in this embodiment the established registration model uses the network architecture of only one neural network; for the network architecture, reference may be made to fig. 2c. After the network architecture is built, the neural network is trained with the following steps:
step 501, obtaining a training sample set.
The training sample set includes a plurality of sets of training sample pairs, each set of training sample pairs includes two medical images, the two medical images are obtained under different scanning conditions, and the different scanning conditions may be, for example, different scanning periods, different scanning positions, different scanning devices (CT devices, MR devices, etc.), different device parameters, and the like. The two medical images in each set of training samples are preferably images for the same anatomical location, or medical images with overlapping portions of the anatomical location.
Step 502, for each group of training sample pairs, selecting one of the two medical images as a registration reference image and the other as an image to be registered, and inputting both into a neural network.
The neural network is used for estimating registration information of the image to be registered to the registration reference image.
Step 503, performing spatial transformation on the image to be registered through the registration information output by the neural network to obtain a predicted registration image of the image to be registered.
The specific implementation process of steps 502 and 503 is similar to that of steps 103 and 104, and is not repeated here.
Step 504, determining the loss error between the predicted registration image and the registration reference image, and adjusting the network parameters of the neural network according to the loss error.
Wherein the loss error can be expressed, but not limited to, as follows:
[The formula appears as an image (BDA0002377335330000141) in the original publication and is not reproduced here.]
wherein L_S(S(A), A) represents the loss error; S(A) represents the image A' obtained by registering the image to be registered A; and B represents the registration reference image. By minimizing L_S(S(A), A), the neural network learns the inter-pixel displacement relationship from the input images, so that the registered image A' gradually approaches the registration reference image B.
In this embodiment, the registration reference image serves as the gold standard for network training and the network is trained in an unsupervised manner: the registration information of the two medical images in a training sample pair does not need to be calculated, training samples are easy to obtain, and network training is faster and more accurate. Since training samples with annotation information are not needed, the method is suitable for registering various types of images and has good generality.
In another embodiment, to increase the richness of the samples, for each group of training sample pairs the predicted registration image and the corresponding image to be registered may be combined into a new training sample, and the neural network may be trained using the new training sample.
In another embodiment, in order to increase the richness of the samples, for each training sample pair the roles of image to be registered and registration reference image of the 2 images can be exchanged to form a new training sample, and the neural network can be trained using the new training sample.
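A minimal sketch of these two sample-enrichment ideas, under the same assumptions as the earlier snippets (the pair list, tensor shapes, and the role assigned to the predicted image are placeholders, not details from the patent):

import torch

def augment_pairs(pairs, net=None):
    """Return the original (image to be registered, reference) pairs plus swapped
    pairs, and optionally pairs built from predicted registration images."""
    augmented = list(pairs)
    for image_a, image_b in pairs:
        augmented.append((image_b, image_a))  # exchanged roles: reverse registration pair
        if net is not None:
            with torch.no_grad():
                a_prime = warp(image_a, net(image_a, image_b))  # predicted registration image
            # pair the predicted image with the original image to be registered;
            # which image takes which role here is an assumption
            augmented.append((image_a, a_prime))
    return augmented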
The embodiment of the invention also provides an image registration method. When image registration is performed, one frame is selected from the multi-frame medical images obtained by a perfusion imaging scan as the registration reference image, and the other frames serve as images to be registered. The images to be registered and the registration reference image are input into a neural network trained with the neural network training method for image registration provided by any of the above embodiments, and the registration information output by the neural network is the registration information for registering the images to be registered to the registration reference image. Furthermore, the images to be registered are spatially transformed through the registration information output by the neural network to obtain the registered images, thereby realizing image registration.
Corresponding to the embodiments of the neural network training method and the image registration method for image registration, the invention also provides embodiments of a neural network training device and an image registration device for image registration.
Fig. 6 is a block diagram of a neural network training apparatus for image registration according to an exemplary embodiment of the present invention, where the apparatus includes: an acquisition module 61, an input module 62, a spatial transformation module 63 and an adjustment module 64.
The obtaining module 61 is configured to obtain a training sample set, where the training sample set includes a plurality of sets of training sample pairs, each set of training sample pairs includes two medical images, and the two medical images are obtained under different scanning conditions;
an input module 62 is configured to, for each group of training sample pairs, select one of the two medical images as a registration reference image and the other as an image to be registered, and input them into a first neural network, where the first neural network is configured to estimate registration information for registering the image to be registered to the registration reference image;
the spatial transformation module 63 is configured to perform spatial transformation on the image to be registered through the registration information output by the first neural network to obtain a first predicted registration image;
the input module is further configured to input the first predicted registration image and the image to be registered into a second neural network, where the second neural network is configured to estimate registration information of the first predicted registration image registered to the image to be registered;
the adjusting module 64 is configured to adjust network parameters of the first neural network and the second neural network according to the registration information, and the network parameters are shared by the first neural network and the second neural network.
Optionally, the spatial transformation module is further configured to perform spatial transformation on the first predicted registration image through the registration information output by the second neural network to obtain a second predicted registration image;
the adjustment module includes:
a determining unit, configured to determine a loss error between the second predicted registration image and the image to be registered, and a loss error between the first predicted registration image and the registration reference image;
and the adjusting unit is used for adjusting the network parameters according to the loss error.
Optionally, the neural network training device further includes:
an exchange module, configured to exchange, for the two medical images in a training sample pair, the roles of registration reference image and image to be registered, and to input the resulting pair into the first neural network as a new training sample pair.
Optionally, the first neural network and the second neural network each comprise a first type of convolutional layer unit and a second type of convolutional unit;
performing convolution operation on the image to be registered and the registration reference image based on the first type of convolution layer unit to obtain a feature map of the correlation between the image to be registered and the registration reference image;
and performing deconvolution operation on the feature map based on the second convolution unit to obtain registration information of the registration of the image to be registered to the registration reference image.
The embodiment of the present invention further provides another neural network training device for image registration, where the neural network training device includes:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a training sample set, the training sample set comprises a plurality of groups of training sample pairs, each group of training sample pairs comprises two medical images, and the two medical images are acquired under different scanning conditions;
an input module, configured to select one medical image from the two medical images as a registration reference image and the other medical image as an image to be registered, and input the selected medical image and the other medical image into a neural network, where the neural network is configured to estimate registration information for registering the image to be registered to the registration reference image;
the spatial transformation module is used for carrying out spatial transformation on the image to be registered through the registration information output by the neural network to obtain a predicted registration image;
and the adjusting module is used for determining the loss error of the prediction registration image and the registration reference image and adjusting the network parameters of the neural network according to the loss error.
An embodiment of the present invention further provides an image registration apparatus, where the image registration apparatus includes:
the input module is used for inputting the image to be registered and the registration reference image into a neural network, and the neural network is obtained by training the neural network training device for image registration provided by any one of the embodiments;
and the spatial transformation module is used for carrying out spatial transformation on the image to be registered through the registration information output by the neural network so as to realize the registration of the image to be registered to the registration reference image.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
Fig. 7 is a schematic diagram of an electronic device according to an exemplary embodiment of the present invention, and illustrates a block diagram of an exemplary electronic device 70 suitable for implementing embodiments of the present invention. The electronic device 70 shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in FIG. 7, the electronic device 70 may take the form of a general purpose computing device, which may be a server device, for example. The components of the electronic device 70 may include, but are not limited to: the at least one processor 71, the at least one memory 72, and a bus 73 connecting the various system components (including the memory 72 and the processor 71).
The bus 73 includes a data bus, an address bus, and a control bus.
The memory 72 may include volatile memory, such as Random Access Memory (RAM) 721 and/or cache memory 722, and may further include Read Only Memory (ROM) 723.
The memory 72 may also include a program/utility 725 having a set (at least one) of program modules 724, such program modules 724 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The processor 71 executes various functional applications and data processing, such as the methods provided by any of the above embodiments, by running a computer program stored in the memory 72.
The electronic device 70 may also communicate with one or more external devices 74 (e.g., a keyboard, a pointing device, etc.). Such communication may be through an input/output (I/O) interface 75. The electronic device 70 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 76. As shown, the network adapter 76 communicates with the other modules of the electronic device 70 via the bus 73. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 70, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, and data backup storage systems, etc.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the present invention, the features and functions of two or more of the units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided into, and embodied by, a plurality of units/modules.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method steps provided in any of the above embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (14)

1. A neural network training method for image registration, the neural network training method comprising:
acquiring a training sample set, wherein the training sample set comprises a plurality of groups of training sample pairs, and each group of training sample pairs comprises two medical images acquired under different scanning conditions;
for each group of training sample pairs, selecting one of the two medical images as a registration reference image and the other as an image to be registered, and inputting the registration reference image and the image to be registered into a first neural network, wherein the first neural network is used for estimating registration information for registering the image to be registered to the registration reference image;
performing spatial transformation on the image to be registered through the registration information output by the first neural network to obtain a first predicted registration image;
inputting the first predicted registration image and the image to be registered into a second neural network, wherein the second neural network is used for estimating registration information for registering the first predicted registration image to the image to be registered;
adjusting network parameters of the first neural network and the second neural network according to the registration information, the network parameters being shared by the first neural network and the second neural network;
and carrying out image registration on the medical image through the trained neural network.
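By way of non-limiting illustration only, the two forward passes of claim 1 could be sketched as follows, assuming a PyTorch implementation in which a single module net holds the parameters shared by the first and second neural networks, the registration information is a dense displacement field, and warp_image is the spatial-transformation helper sketched above; all names are illustrative assumptions, not part of the claims.

```python
import torch.nn as nn

def two_pass_forward(net: nn.Module, to_register, reference):
    """First and second neural networks of claim 1, realised as two calls
    to the same module because the network parameters are shared."""
    # First pass: registration information for registering the image to be
    # registered to the registration reference image.
    reg_info_1 = net(to_register, reference)
    first_pred = warp_image(to_register, reg_info_1)  # first predicted registration image

    # Second pass (same parameters): registration information for registering
    # the first predicted registration image back to the image to be registered.
    reg_info_2 = net(first_pred, to_register)
    return first_pred, reg_info_1, reg_info_2
```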
2. The neural network training method for image registration of claim 1, wherein adjusting network parameters of the first neural network and the second neural network according to the registration information comprises:
performing spatial transformation on the first predicted registration image through the registration information output by the second neural network to obtain a second predicted registration image;
determining a loss error between the second predicted registration image and the image to be registered, and a loss error between the first predicted registration image and the registration reference image;
and adjusting the network parameters according to the loss error.
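By way of non-limiting illustration only, a single parameter update following claims 1 and 2 could look like the sketch below, reusing the two_pass_forward and warp_image helpers from the earlier sketches; mean-squared error stands in for the unspecified loss error, and all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def training_step(net, optimizer, to_register, reference):
    """One update of the shared network parameters (claims 1 and 2)."""
    first_pred, _, reg_info_2 = two_pass_forward(net, to_register, reference)
    # Second predicted registration image (claim 2).
    second_pred = warp_image(first_pred, reg_info_2)

    # Loss error between the second predicted registration image and the image
    # to be registered, plus the loss error between the first predicted
    # registration image and the registration reference image.
    loss = F.mse_loss(second_pred, to_register) + F.mse_loss(first_pred, reference)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # adjusts the parameters shared by both networks
    return loss.item()
```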
3. The neural network training method for image registration as claimed in claim 1, further comprising:
exchanging the roles of the registration reference image and the image to be registered for the two medical images in a training sample pair, and inputting the exchanged images into the first neural network as a new training sample pair.
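By way of non-limiting illustration only, the exchange of claim 3 amounts to doubling the training sample pairs by swapping roles, as in the following sketch (names are illustrative assumptions):

```python
def expand_with_swapped_pairs(sample_pairs):
    """For every (reference, to_register) pair, also yield the pair with the
    registration reference image and the image to be registered exchanged."""
    expanded = []
    for reference, to_register in sample_pairs:
        expanded.append((reference, to_register))
        expanded.append((to_register, reference))  # roles exchanged: new sample pair
    return expanded
```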
4. The neural network training method for image registration according to claim 1, wherein the first neural network and the second neural network each comprise a first type of convolutional layer unit and a second type of convolutional layer unit;
the estimating, by the first neural network, registration information for registering the image to be registered to the registration reference image comprises:
performing a convolution operation on the image to be registered and the registration reference image based on the first type of convolutional layer unit to obtain a feature map of the image to be registered and the registration reference image;
and performing a deconvolution operation on the feature map based on the second type of convolutional layer unit to obtain the registration information for registering the image to be registered to the registration reference image.
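By way of non-limiting illustration only, claim 4 could be realised with a small encoder-decoder module in which strided convolutions act as the first type of convolutional layer unit and transposed convolutions (deconvolutions) act as the second type, as sketched below; the depth, channel counts and kernel sizes are illustrative assumptions, and the sketch assumes single-channel inputs whose height and width are divisible by 4.

```python
import torch
import torch.nn as nn

class RegistrationNet(nn.Module):
    """Sketch of claim 4: convolution units extract a feature map from the
    concatenated image pair; deconvolution units expand it back to a
    two-channel registration field (dx, dy)."""

    def __init__(self):
        super().__init__()
        # First type of convolutional layer unit: strided convolutions (encoder).
        self.encode = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Second type of convolutional layer unit: transposed convolutions (decoder).
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, to_register, reference):
        x = torch.cat((to_register, reference), dim=1)  # (N, 2, H, W)
        features = self.encode(x)                       # feature map of the image pair
        return self.decode(features)                    # (N, 2, H, W) registration field
```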
5. A neural network training method for image registration, the neural network training method comprising:
acquiring a training sample set, wherein the training sample set comprises a plurality of groups of training sample pairs, each group of training sample pairs comprises two medical images, and the two medical images are acquired under different scanning conditions;
for each group of training sample pairs, selecting one of the two medical images as a registration reference image and the other as an image to be registered, and inputting the registration reference image and the image to be registered into a neural network, wherein the neural network is used for estimating registration information for registering the image to be registered to the registration reference image;
performing spatial transformation on the image to be registered through the registration information output by the neural network to obtain a predicted registration image;
determining a loss error between the predicted registration image and the registration reference image, and adjusting network parameters of the neural network according to the loss error;
and carrying out image registration on the medical image through the trained neural network.
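By way of non-limiting illustration only, the simpler single-network scheme of claim 5 could be trained as sketched below, reusing the warp_image helper from the earlier sketch; mean-squared error again stands in for the unspecified loss error, and all names are illustrative assumptions.

```python
import torch.nn.functional as F

def training_step_single(net, optimizer, to_register, reference):
    """One update following claim 5: one network, one warp, and one loss error
    between the predicted registration image and the registration reference image."""
    reg_info = net(to_register, reference)
    predicted = warp_image(to_register, reg_info)

    loss = F.mse_loss(predicted, reference)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```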
6. An image registration method, characterized in that it comprises:
inputting an image to be registered and a registration reference image into a neural network, wherein the neural network is obtained by training with the neural network training method for image registration according to any one of claims 1 to 5;
and performing spatial transformation on the image to be registered through the registration information output by the neural network to realize the registration of the image to be registered to the registration reference image.
7. A neural network training device for image registration, the neural network training device comprising:
an acquisition module, configured to acquire a training sample set, wherein the training sample set comprises a plurality of groups of training sample pairs, each group of training sample pairs comprises two medical images, and the two medical images are acquired under different scanning conditions;
an input module, configured to select, for each group of training sample pairs, one of the two medical images as a registration reference image and the other as an image to be registered, and to input the registration reference image and the image to be registered into a first neural network, wherein the first neural network is configured to estimate registration information for registering the image to be registered to the registration reference image;
a spatial transformation module, configured to perform spatial transformation on the image to be registered through the registration information output by the first neural network to obtain a first predicted registration image;
the input module is further configured to input the first predicted registration image and the image to be registered into a second neural network, where the second neural network is configured to estimate registration information of the first predicted registration image registered to the image to be registered;
an adjustment module for adjusting network parameters of the first neural network and the second neural network according to the registration information, the first neural network and the second neural network sharing the network parameters;
wherein the trained neural network is used for carrying out image registration on the medical image.
8. The neural network training device for image registration as set forth in claim 7,
the spatial transformation module is further configured to perform spatial transformation on the first predicted registration image through the registration information output by the second neural network to obtain a second predicted registration image;
the adjustment module includes:
a determining unit, configured to determine a loss error between the second predicted registration image and the image to be registered, and a loss error between the first predicted registration image and the registration reference image;
and the adjusting unit is used for adjusting the network parameters according to the loss error.
9. The neural network training device for image registration as claimed in claim 7, further comprising:
an exchange module, configured to exchange the roles of the registration reference image and the image to be registered for the two medical images in a training sample pair, and to input the exchanged images into the first neural network as a new training sample pair.
10. The neural network training device for image registration according to claim 7, wherein the first neural network and the second neural network each comprise a first type of convolutional layer unit and a second type of convolutional layer unit;
the first type of convolutional layer unit is configured to perform a convolution operation on the image to be registered and the registration reference image to obtain a feature map of the correlation between the image to be registered and the registration reference image;
and the second type of convolutional layer unit is configured to perform a deconvolution operation on the feature map to obtain the registration information for registering the image to be registered to the registration reference image.
11. A neural network training device for image registration, the neural network training device comprising:
an acquisition module, configured to acquire a training sample set, wherein the training sample set comprises a plurality of groups of training sample pairs, and each group of training sample pairs comprises two medical images acquired under different scanning conditions;
an input module, configured to select, for each group of training sample pairs, one of the two medical images as a registration reference image and the other as an image to be registered, and to input the registration reference image and the image to be registered into a neural network, wherein the neural network is configured to estimate registration information for registering the image to be registered to the registration reference image;
a spatial transformation module, configured to perform spatial transformation on the image to be registered through the registration information output by the neural network to obtain a predicted registration image;
an adjustment module, configured to determine a loss error between the predicted registration image and the registration reference image and to adjust network parameters of the neural network according to the loss error;
wherein the trained neural network is used for carrying out image registration on the medical image.
12. An image registration apparatus, characterized in that the image registration apparatus comprises:
an input module, configured to input an image to be registered and a registration reference image into a neural network, where the neural network is trained by the neural network training apparatus for image registration according to any one of claims 7 to 11;
and the spatial transformation module is used for carrying out spatial transformation on the image to be registered through the registration information output by the neural network so as to realize the registration of the image to be registered to the registration reference image.
13. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the neural network training method for image registration of any one of claims 1 to 5 when executing the computer program.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the neural network training method for image registration of any one of claims 1 to 5.
CN202010071043.4A 2020-01-21 2020-01-21 Image registration and neural network training method and device thereof Active CN111275749B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010071043.4A CN111275749B (en) 2020-01-21 2020-01-21 Image registration and neural network training method and device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010071043.4A CN111275749B (en) 2020-01-21 2020-01-21 Image registration and neural network training method and device thereof

Publications (2)

Publication Number Publication Date
CN111275749A true CN111275749A (en) 2020-06-12
CN111275749B CN111275749B (en) 2023-05-02

Family

ID=71002322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010071043.4A Active CN111275749B (en) 2020-01-21 2020-01-21 Image registration and neural network training method and device thereof

Country Status (1)

Country Link
CN (1) CN111275749B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584283A (en) * 2018-11-29 2019-04-05 合肥中科离子医学技术装备有限公司 A kind of Medical Image Registration Algorithm based on convolutional neural networks
CN109767460A (en) * 2018-12-27 2019-05-17 上海商汤智能科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN109697741B (en) * 2018-12-28 2023-06-16 上海联影智能医疗科技有限公司 PET image reconstruction method, device, equipment and medium

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113807378A (en) * 2020-06-16 2021-12-17 纬创资通股份有限公司 Training data increment method, electronic device and computer readable recording medium
CN113807378B (en) * 2020-06-16 2024-05-31 纬创资通股份有限公司 Training data increment method, electronic device and computer readable recording medium
CN111862175A (en) * 2020-07-13 2020-10-30 清华大学深圳国际研究生院 Cross-modal medical image registration method and device based on cyclic canonical training
CN112950680A (en) * 2021-02-20 2021-06-11 哈尔滨学院 Satellite remote sensing image registration method
CN112950680B (en) * 2021-02-20 2022-07-05 哈尔滨学院 Satellite remote sensing image registration method
CN113052882A (en) * 2021-03-26 2021-06-29 上海商汤智能科技有限公司 Image registration method and related device, electronic equipment and storage medium
CN113052882B (en) * 2021-03-26 2023-11-24 上海商汤智能科技有限公司 Image registration method and related device, electronic equipment and storage medium
WO2022198915A1 (en) * 2021-03-26 2022-09-29 上海商汤智能科技有限公司 Image registration method and apparatus, electronic device, storage medium and program
CN113269815A (en) * 2021-05-14 2021-08-17 中山大学肿瘤防治中心 Deep learning-based medical image registration method and terminal
CN113269815B (en) * 2021-05-14 2022-10-25 中山大学肿瘤防治中心 Deep learning-based medical image registration method and terminal
CN113256670A (en) * 2021-05-24 2021-08-13 推想医疗科技股份有限公司 Image processing method and device, and network model training method and device
CN113538537A (en) * 2021-07-22 2021-10-22 北京世纪好未来教育科技有限公司 Image registration method, model training method, device, equipment, server and medium
CN113538537B (en) * 2021-07-22 2023-12-12 北京世纪好未来教育科技有限公司 Image registration and model training method, device, equipment, server and medium
CN113487656A (en) * 2021-07-26 2021-10-08 推想医疗科技股份有限公司 Image registration method and device, training method and device, control method and device
CN113705807A (en) * 2021-08-26 2021-11-26 上海睿刀医疗科技有限公司 Neural network training device and method, ablation needle arrangement planning device and method
CN114187337A (en) * 2021-12-07 2022-03-15 推想医疗科技股份有限公司 Image registration method, segmentation method, device, electronic equipment and storage medium
CN114332447A (en) * 2022-03-14 2022-04-12 浙江大华技术股份有限公司 License plate correction method, license plate correction device and computer readable storage medium
CN114332447B (en) * 2022-03-14 2022-08-09 浙江大华技术股份有限公司 License plate correction method, license plate correction device and computer readable storage medium

Also Published As

Publication number Publication date
CN111275749B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN111275749B (en) Image registration and neural network training method and device thereof
US7945117B2 (en) Methods and systems for registration of images
CN110599528A (en) Unsupervised three-dimensional medical image registration method and system based on neural network
JP2021535482A (en) Deep learning-based registration
CN106355195B (en) System and method for measuring image definition value
WO2021169126A1 (en) Lesion classification model training method and apparatus, computer device, and storage medium
CN110136177B (en) Image registration method, device and storage medium
CN110838140A (en) Ultrasound and nuclear magnetic image registration fusion method and device based on hybrid supervised learning
WO2011109710A1 (en) Hierarchical atlas-based segmentation
CN112308765A (en) Method and device for determining projection parameters
CN113011401B (en) Face image posture estimation and correction method, system, medium and electronic equipment
CN110570435A (en) method and device for carrying out damage segmentation on vehicle damage image
CN109087333B (en) Target scale estimation method and device based on correlation filtering tracking algorithm
CN111161182B (en) MR structure information constrained non-local mean guided PET image partial volume correction method
CN113570658A (en) Monocular video depth estimation method based on depth convolutional network
WO2023092959A1 (en) Image segmentation method, training method for model thereof, and related apparatus and electronic device
CN109961435B (en) Brain image acquisition method, device, equipment and storage medium
CN112581385B (en) Diffusion kurtosis imaging tensor estimation method, medium and device based on multiple prior constraints
CN109559296B (en) Medical image registration method and system based on full convolution neural network and mutual information
CN116843679B (en) PET image partial volume correction method based on depth image prior frame
CN112150485B (en) Image segmentation method, device, computer equipment and storage medium
CN111951316A (en) Image quantization method and storage medium
JP4824034B2 (en) Optimized image recording using composite recording vectors
CN111161330A (en) Non-rigid image registration method, device, system, electronic equipment and storage medium
CN114187337B (en) Image registration method, segmentation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240219

Address after: 110167 No. 177-1 Innovation Road, Hunnan District, Shenyang City, Liaoning Province

Patentee after: Shenyang Neusoft Medical Systems Co.,Ltd.

Country or region after: China

Address before: Room 336, 177-1, Chuangxin Road, Hunnan New District, Shenyang City, Liaoning Province

Patentee before: Shenyang advanced medical equipment Technology Incubation Center Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right