WO2020135374A1 - Image registration method, apparatus, computer device and readable storage medium - Google Patents

Image registration method, apparatus, computer device and readable storage medium

Info

Publication number
WO2020135374A1
WO2020135374A1 PCT/CN2019/127695 CN2019127695W WO2020135374A1 WO 2020135374 A1 WO2020135374 A1 WO 2020135374A1 CN 2019127695 W CN2019127695 W CN 2019127695W WO 2020135374 A1 WO2020135374 A1 WO 2020135374A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
registration
floating
preset
reference image
Prior art date
Application number
PCT/CN2019/127695
Other languages
English (en)
French (fr)
Inventor
曹晓欢
高菲菲
董昢
薛忠
詹翊强
周翔
Original Assignee
上海联影智能医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201811586820.8A external-priority patent/CN109598745B/zh
Priority claimed from CN201811637721.8A external-priority patent/CN109754396B/zh
Application filed by 上海联影智能医疗科技有限公司 filed Critical 上海联影智能医疗科技有限公司
Publication of WO2020135374A1 publication Critical patent/WO2020135374A1/zh

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments

Definitions

  • the present application relates to the field of image processing technology, and more specifically, to an image registration method, device, computer device, and readable storage medium.
  • Different medical images can reflect different human anatomical structure information.
  • CT: computed tomography
  • MRI: magnetic resonance imaging
  • PET: positron emission tomography
  • Ultrasound and functional magnetic resonance imaging (fMRI) images, etc.
  • An image registration method includes: obtaining a floating image and a reference image to be registered, the floating image and the reference image being images of two different modalities; and obtaining a registration result according to the floating image, the reference image, and a target registration method, where the target registration method is used to register images of different modalities.
  • In one embodiment, the obtaining the registration result according to the floating image, the reference image and the target registration method includes: performing image registration on the floating image and the reference image according to semantic information and a target image registration algorithm to obtain an initial registration result, the initial registration result including a transformation matrix between the floating image and the reference image; obtaining a transformed floating image according to the transformation matrix, the reference image and the floating image; and registering the transformed floating image according to the transformed floating image, the reference image and a target registration model to obtain the registration result.
  • the semantic information includes: at least one of a segmented region and an anatomical marker of the floating image, and at least one of a segmented region and an anatomical marker of the reference image; the preset image registration algorithms include a segmentation-based image registration algorithm and an anatomical-marker-based registration algorithm; the anatomical markers include anatomical marker points, anatomical marker lines and anatomical marker surfaces.
  • When the target image registration algorithm is the anatomical-marker-based registration algorithm, performing image registration on the floating image and the reference image according to the semantic information and the target image registration algorithm to obtain the initial registration result includes: acquiring a floating anatomical marker set to be registered of the marked floating image and a reference anatomical marker set to be registered of the marked reference image, and performing image registration on the floating image and the reference image according to the floating anatomical marker set to be registered, the reference anatomical marker set to be registered, and the anatomical-marker-based registration algorithm to obtain the initial registration result.
  • When the target image registration algorithm is the segmentation-based image registration algorithm, performing image registration on the floating image and the reference image according to the semantic information and the target image registration algorithm to obtain the initial registration result includes: acquiring a segmented floating image corresponding to the floating image and a segmented reference image corresponding to the reference image, and performing image registration on the floating image and the reference image according to the segmented floating image, the segmented reference image, and the segmentation-based image registration algorithm to obtain the initial registration result.
  • In one embodiment, the method further includes: integrating, according to a preset registration result integration method, the initial registration results obtained from different anatomical markers and/or the initial registration results obtained from different segmented regions.
  • In one embodiment, the obtaining the transformed floating image according to the transformation matrix, the reference image and the floating image includes: determining, according to the transformation matrix, the down-sampled reference image obtained after down-sampling the reference image, and the down-sampled floating image obtained after down-sampling the floating image, the similarity metric value between the down-sampled reference image and the transformed floating image corresponding to the down-sampled floating image.
  • the target registration model includes a forward registration network and a backward registration network; the training process of the target registration model includes:
  • a preset unsupervised method or a weakly supervised method is used to iteratively train the preset forward registration network and the preset backward registration network to obtain the target registration model.
  • In one embodiment, using the preset unsupervised method to iteratively train the preset forward registration network and the preset backward registration network to obtain the target registration model includes: iteratively training the preset forward registration network and the preset backward registration network using a preset first training mode and a preset second training mode to obtain the target registration model; the first training mode is a training mode in which the forward registration network is trained first and then the backward registration network, and the second training mode is a training mode in which the backward registration network is trained first and then the forward registration network.
  • In one embodiment, using the preset first training mode to train the preset forward registration network and the preset backward registration network includes: training the preset forward registration network and the preset backward registration network according to a first similarity.
  • the training of the preset forward registration network and the preset backward registration network according to the first similarity includes:
  • using the preset second training mode to train the preset forward registration network and the preset backward registration network includes:
  • determining the first floating image as a third reference image of the preset backward registration network, and determining the first reference image as a third floating image of the preset backward registration network; inputting the third floating image and the third reference image into the preset backward registration network to obtain a third registered floating image; the modality of the third reference image is modality two, the modality of the third floating image is modality one, and the modality of the third registered floating image is the same as the modality of the third floating image;
  • the training the preset backward registration network and the preset forward registration network according to the second similarity includes:
  • In one embodiment, using the preset first training mode and the second training mode to iteratively train the preset forward registration network and the preset backward registration network to obtain the target registration model further includes:
  • the target registration model is determined according to the value of the first loss function and the value of the second loss function.
  • the determining the target registration model according to the value of the first loss function and the value of the second loss function includes:
  • the forward registration network and the backward registration network corresponding to the value of the first loss function and the value of the second loss function reaching a stable value are determined as the target registration model.
  • An image registration device, the device including:
  • An obtaining module used to obtain a floating image and a reference image to be registered; the floating image and the reference image are images of two different modalities;
  • a registration module is used to obtain a registration result based on the floating image, the reference image, and a target registration method; the target registration method is used to register images of different modalities.
  • An embodiment of the present application provides a computer device including a memory and a processor. A computer program that can run on the processor is stored in the memory, and when the processor executes the computer program, the following steps are implemented: obtaining a floating image and a reference image to be registered, the floating image and the reference image being images of two different modalities; and obtaining a registration result according to the floating image, the reference image, and a target registration method, where the target registration method is used to register images of different modalities.
  • An embodiment of the present application provides a readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the following steps are implemented: obtaining a floating image and a reference image to be registered, the floating image and the reference image being images of two different modalities; and obtaining a registration result according to the floating image, the reference image, and a target registration method, where the target registration method is used to register images of different modalities.
  • An image registration method includes: acquiring a reference image and a floating image to be registered; extracting semantic information of the reference image and the floating image to obtain a marked reference image and a marked floating image including the semantic information; determining, according to the semantic information, target image registration models respectively corresponding to the marked reference image and the marked floating image from preset image registration models; and performing image registration on the reference image and the floating image according to the semantic information and the target image registration models.
  • the semantic information includes: at least one of the segmented area and anatomically marked points of the floating image, and at least one of the segmented area and anatomically marked points of the reference image;
  • the preset image registration models include image registration models based on segmentation and registration models based on anatomical markers.
  • When the target image registration model is the registration model based on anatomical markers, performing image registration on the reference image and the floating image according to the semantic information and the target image registration model includes: performing image registration on the reference image and the floating image according to the reference anatomical marker point set to be registered, the floating anatomical marker point set to be registered, and the registration model based on anatomical marker points.
  • In one embodiment, the performing image registration on the reference image and the floating image includes:
  • the method further includes:
  • determining, according to the transformation matrix, the down-sampled reference image obtained after down-sampling the reference image, and the down-sampled floating image obtained after down-sampling the floating image, the similarity metric value between the down-sampled reference image and the transformed floating image corresponding to the down-sampled floating image;
  • the target parameter is determined according to the similarity metric value, the initial parameter, and a preset gradient descent method.
  • An image registration device, the device including:
  • the first acquisition module is used to acquire the reference image and the floating image to be registered
  • the first extraction module is used to extract semantic information of the reference image and the floating image to obtain a marked reference image and a marked floating image including the semantic information;
  • a first determining module configured to determine target image registration models corresponding to the mark reference image and the mark floating image respectively from preset image registration models according to the semantic information
  • the registration module is configured to perform image registration on the reference image and the floating image according to the semantic information and the target image registration model.
  • A computer device, the computer device including a memory and a processor; the memory stores a computer program, and when the processor executes the computer program, the method steps of any one of the above image registration methods are implemented.
  • In the above image registration method, device, computer device and readable storage medium, the semantic information of the reference image and the floating image can be extracted first, so that different target image registration models are used to register the reference image and the floating image according to the different semantic information. This completes the registration of reference images and floating images that contain multiple kinds of semantic information, overcomes the limitation of the prior art that the reference image and the floating image can only be registered based on a single kind of semantic information, and greatly expands the applicable scope of image registration.
  • An image registration method includes: acquiring a floating image and a reference image to be registered, the floating image and the reference image being images of two different modalities; and obtaining a registration result according to the floating image, the reference image, and a pre-trained registration model, where the registration model is used to register images of different modalities.
  • the method further includes:
  • a preset unsupervised method or a weakly supervised method is used to iteratively train the preset forward registration network and the preset backward registration network to obtain the registration model.
  • In one embodiment, using the preset unsupervised method to iteratively train the preset forward registration network and the preset backward registration network to obtain the registration model includes: iteratively training the preset forward registration network and the preset backward registration network using a preset first training mode and a preset second training mode to obtain the registration model; the first training mode is a training mode in which the forward registration network is trained first and then the backward registration network, and the second training mode is a training mode in which the backward registration network is trained first and then the forward registration network.
  • In one embodiment, using the preset first training mode to train the preset forward registration network and the preset backward registration network includes: training the forward registration network and the backward registration network according to a first similarity.
  • the training the forward registration network and the backward registration network according to the first similarity includes:
  • the first similarity is determined as the first accuracy of the second registered floating image, and the training of the forward registration network and the backward registration network is guided according to the first accuracy.
  • using the preset second training mode to train the preset forward registration network and the preset backward registration network includes:
  • the third floating image and the third reference image are input into the backward registration network to obtain a third registered floating image;
  • the modality of the third reference image is modality two, and the modality of the third floating image is modality one; the modality of the third registered floating image is the same as the modality of the third floating image;
  • the training the backward registration network and the forward registration network according to the second similarity includes:
  • the second similarity is determined as the second accuracy of the fourth registered floating image, and the training of the backward registration network and the forward registration network is guided according to the second accuracy.
  • In one embodiment, using the preset first training mode and the second training mode to iteratively train the preset forward registration network and the preset backward registration network to obtain the registration model further includes:
  • the registration model is determined according to the value of the first loss function and the value of the second loss function.
  • the determining the registration model according to the value of the first loss function and the value of the second loss function includes:
  • the forward registration network and the backward registration network corresponding to the values of the first loss function and the second loss function reaching a stable value are determined as the registration model.
  • An image registration device, the device including:
  • a first acquiring module configured to acquire a floating image and a reference image to be registered; the floating image and the reference image are images of two different modalities;
  • the second acquisition module is used to acquire registration parameters and registered images based on the floating image, the first reference image, and the pre-trained registration model; the registration model is used to register images of different modalities.
  • A computer device, the computer device including a memory and a processor; the memory stores a computer program, and when the processor executes the computer program, the method steps of any one of the above image registration methods are implemented.
  • In the above image registration method, device, computer device and readable storage medium, a floating image and a reference image of two different modalities can be registered according to a pre-trained registration model for registering images of different modalities, which solves the problem that cross-modal images cannot be registered in existing image registration technology. In addition, using the pre-trained registration model to register two images of different modalities requires no additional parameter adjustment, which improves the efficiency and robustness of image registration; registering images according to the registration model also improves registration accuracy.
  • FIG. 1 is a schematic flowchart of an image registration method provided by an embodiment
  • FIG. 2 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 3 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 4 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 5 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 6 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 7 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 8 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 9 is a schematic flowchart of an image registration method provided by another embodiment.
  • FIG. 10 is a schematic structural diagram of an image registration device provided by an embodiment
  • FIG. 11 is a schematic diagram of an internal structure of a computer device provided by an embodiment
  • FIG. 12 is a schematic flowchart of an image registration method provided by an embodiment
  • FIG. 13 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 16 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 17 is a schematic structural diagram of an image registration device provided by an embodiment
  • FIG. 19 is a schematic structural diagram of an image registration device provided by another embodiment
  • FIG. 21 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 22 is a schematic diagram of a training process of a first training mode provided by an embodiment
  • FIG. 24 is a schematic diagram of a training process of a second training mode provided by an embodiment
  • FIG. 25 is a schematic flowchart of an image registration method provided by another embodiment
  • FIG. 26 is a schematic structural diagram of an image registration device provided by an embodiment
  • FIG. 27 is a schematic structural diagram of an image registration device provided by an embodiment
  • FIG. 28 is a schematic structural diagram of an image registration device provided by an embodiment
  • FIG. 29 is a schematic structural diagram of an image registration device provided by an embodiment
  • FIG. 31 is a schematic structural diagram of an image registration device provided by an embodiment
  • Different medical images can reflect different human anatomical structure information.
  • Medical clinics usually need to accurately and effectively register different medical images.
  • Image registration can align two images acquired at different times, with different imaging devices, or under different conditions.
  • the registration of different medical images is of great significance to the precise and intelligent development of clinical diagnosis and treatment.
  • image modalities that require image registration include, but are not limited to, computed tomography (CT) images, magnetic resonance imaging (MRI) images, positron emission tomography (PET) images, ultrasound images, functional magnetic resonance imaging (fMRI) images, etc.
  • As shown in FIG. 1, a schematic flowchart of an image registration method is provided, including the following steps:
  • the floating image refers to the image to be registered
  • the reference image refers to the image onto whose image space the floating image is to be registered.
  • Images of different modalities refer to images obtained using different imaging principles and equipment, for example, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound, and functional magnetic resonance imaging (fMRI); any two such images are images of different modalities.
  • The computer device can obtain the floating images and reference images of different modalities to be registered from a PACS (Picture Archiving and Communication Systems) server, or obtain them directly from different medical imaging devices.
  • The computer device can register two or more obtained images; for example, one of the images is used as the reference image and the other images are used as floating images, and each floating image is mapped onto the reference image to align the reference image and the floating image in terms of anatomical structure.
  • the reference image and the floating image may be images of the same individual, or images of different individuals, or images containing the same anatomical structure, or images containing part of the same anatomical structure.
  • the embodiment does not limit the sources of the reference image and the floating image.
  • the reference image and the floating image may be two-dimensional images or three-dimensional images, which is not specifically limited in this embodiment.
  • the computer device obtains the registration result according to the floating image, the reference image, and the target registration method, where the target registration method is used to register images of different modalities.
  • the target registration method may be a registration algorithm, a registration model, or a combination method of a registration algorithm and a registration model.
  • When the target registration method is a registration algorithm, the computer device obtains the transformation matrix of the floating image and the reference image through the registration algorithm, and registers the floating image according to the obtained transformation matrix to obtain the registration result.
  • When the target registration method is a registration model, the computer device inputs the floating image and the reference image into the registration model to obtain the deformation field of the floating image, and registers the floating image according to the obtained deformation field to obtain the registration result.
  • When the target registration method is a combination of a registration algorithm and a registration model, the computer device obtains the transformation matrix of the floating image and the reference image through the registration algorithm, transforms the floating image according to the obtained transformation matrix to obtain a transformed floating image, inputs the transformed floating image and the reference image into the registration model to obtain a deformation field, and registers the transformed floating image according to the obtained deformation field to obtain the registration result.
  • In this embodiment, the computer device can use the target registration method for registering images of different modalities to register floating images and reference images of different modalities and obtain registration results, which solves the problem that traditional image registration methods cannot accurately and effectively register cross-modal images.
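  • As an illustration of the three variants of the target registration method described above, the following Python sketch shows only the data flow; the helper names (estimate_transform, apply_transform, registration_model, apply_deformation) are hypothetical placeholders, not functions defined in this disclosure.

```python
# A minimal sketch, assuming the caller supplies the actual registration
# algorithm and registration model as callables.

def register(floating, reference, method, estimate_transform=None,
             registration_model=None, apply_transform=None, apply_deformation=None):
    if method == "algorithm":
        # Registration algorithm -> transformation matrix -> registered image.
        matrix = estimate_transform(floating, reference)
        return apply_transform(floating, matrix, reference)
    if method == "model":
        # Registration model -> deformation field -> registered image.
        field = registration_model(floating, reference)
        return apply_deformation(floating, field)
    if method == "combined":
        # Algorithm first (coarse alignment), then model (fine alignment).
        matrix = estimate_transform(floating, reference)
        transformed = apply_transform(floating, matrix, reference)
        field = registration_model(transformed, reference)
        return apply_deformation(transformed, field)
    raise ValueError(f"unknown method: {method}")
```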
  • As shown in FIG. 2, a schematic flowchart of another image registration method is provided.
  • the above S1011 includes:
  • the computer device performs semantic information extraction on the floating image and the reference image to obtain a marked floating image and a marked reference image including the extracted semantic information.
  • the above semantic information includes: at least one of a segmented region and an anatomical marker of the floating image, and at least one of a segmented region and an anatomical marker of the reference image; the anatomical markers include anatomical marker points, anatomical marker lines and anatomical marker surfaces.
  • the semantic information may be an anatomical mark in the reference image and the floating image, or a segmented area in the reference image and the floating image.
  • When the above anatomical marker is an anatomical marker point, it may be a geometric marker point, such as a gray-scale extremum or an intersection point of linear structures, or an anatomical marker point that is clearly visible in the anatomical shape and can be accurately located, such as key marker points or feature points of human tissues, organs, or lesions; the above segmented regions may be curves or curved surfaces corresponding to the reference image and the floating image, such as lungs, livers, or irregular regions.
  • the computer device may extract the semantic information of the floating image and the reference image according to the preset neural network model.
  • When extracting semantic information from the floating image and the reference image, if the computer device detects a region corresponding to a lung, the computer device may segment the region corresponding to the lung to extract the semantic information corresponding to the lung; if the computer device detects a bone, the computer device can mark the position corresponding to the bone with a marker point, thereby extracting the semantic information corresponding to the bone.
  • S1021 Determine, according to the semantic information, target image registration algorithms corresponding to the mark floating image and the mark reference image respectively from preset image registration algorithms.
  • The anatomical-marker-based registration algorithm is an image registration algorithm that can register the marked reference image and the marked floating image including the above anatomical markers, such as the singular value decomposition algorithm, the iterative closest point method, the standard orthogonalization matrix method, and other algorithms.
  • Depending on the extracted semantic information, the target image registration algorithm determined by the computer device is different; that is, the marked reference image and marked floating image including a segmented region, and the marked reference image and marked floating image including an anatomical marker, can correspond to different registration algorithms.
  • S1022 Perform image registration on the floating image and the reference image according to the semantic information and the target image registration algorithm to obtain an initial registration result; the initial registration result includes a transformation matrix between the floating image and the reference image.
  • the computer device performs image registration on the floating image and the reference image according to the extracted semantic information and the determined target image registration algorithm to obtain an initial registration result of the transformation matrix including the floating image and the reference image.
  • a reference image or a floating image may include both the segmented area and the anatomical mark.
  • The computer device may first use the anatomical-marker-based registration algorithm to register the anatomical markers in the reference image and the floating image, and then use the segmentation-based image registration algorithm to register the segmented regions in the reference image and the floating image; it may also first use the segmentation-based image registration algorithm to register the segmented regions, and then use the anatomical-marker-based registration algorithm to register the anatomical markers; or it may register the anatomical markers with the anatomical-marker-based registration algorithm and register the segmented regions with the segmentation-based image registration algorithm at the same time, which is not limited in this embodiment.
  • While ensuring that the CPU is used for the arithmetic processing related to image registration, the computer device can also introduce a graphics processing unit (GPU) that supports the Compute Unified Device Architecture (CUDA) parallel computing architecture to handle some of the operations, so as to further speed up running the target image registration algorithm for image registration of the floating image and the reference image.
  • the computer device obtains the transformed floating image according to the transformation matrix of the obtained floating image and the reference image, the reference image, and the floating image.
  • the computer device may transform the floating image according to the transformation matrix of the floating image and the reference image, and adjust the obtained image in combination with the reference image to obtain the transformed floating image.
  • Since the transformed floating image only changes the spatial structure of the floating image, the modality of the transformed floating image has not changed, and the transformed floating image and the reference image are still two images of different modalities.
  • S1024 Register the transformed floating image according to the transformed floating image, the reference image, and the target registration model to obtain a registration result.
  • the computer device inputs the transformed floating image and the reference image into the target registration model to obtain a deformation field, and registers the transformed floating image according to the obtained deformation field to obtain a registration result.
  • The target registration model is a pre-trained model for registering images of different modalities. It can be understood that the modality of the transformed floating image is different from that of the reference image, so that, through the target registration model for registering images of different modalities, the transformed floating image can be registered into an image with the same modality as the reference image, and a registered image with the same modality as the reference image is obtained.
  • In this embodiment, the computer device can first extract the semantic information of the reference image and the floating image, so that, according to the different semantic information, different target image registration algorithms are used to register the reference image and the floating image and obtain the transformation matrix of the floating image and the reference image; the transformed floating image is then obtained according to the obtained transformation matrix, the reference image and the floating image, and the transformed floating image is further registered according to the transformed floating image, the reference image and the target registration model. The target registration model can register the transformed floating image further and more accurately according to the transformed floating image and the reference image, thereby improving the accuracy of the obtained registration result.
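  • The registration model described above outputs a deformation field that is then used to warp (register) the transformed floating image. The following sketch, assuming the deformation field is a dense per-voxel displacement in voxel units, shows one common way to apply such a field; it is an illustrative implementation, not the specific one used in this disclosure.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_deformation(image, deformation_field, order=1):
    """Warp a 3D image with a dense deformation field.

    deformation_field has shape (3, D, H, W): one displacement vector per
    output voxel, as a registration network might predict. order=1 means
    trilinear interpolation; an all-zero field returns the image unchanged.
    """
    grid = np.indices(image.shape, dtype=float)   # identity sampling grid
    coords = grid + deformation_field             # displaced sampling coordinates
    return map_coordinates(image, coords, order=order)
```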
  • When the target image registration algorithm is the anatomical-marker-based registration algorithm, based on the above embodiment, the above S1022 includes:
  • S1030 Acquire a floating anatomical mark set to be registered for marking a floating image and a reference anatomical mark set to be registered for marking a reference image.
  • the floating anatomical mark set to be registered and the reference anatomical mark set to be registered are a collection of coordinate information of each anatomical mark.
  • the anatomical marker may be a marker that is pre-marked manually.
  • the floating anatomical mark set to be registered may be a floating anatomical mark point set to be registered, a floating anatomical mark line set to be registered, or a floating anatomical mark face set to be registered.
  • the reference anatomy mark set to be registered may be a reference anatomy mark point set to be registered, a reference anatomy mark line set to be registered, or a reference anatomy mark face set to be registered.
  • S1031 Perform image registration on the floating image and the reference image according to the floating anatomical mark set to be registered, the reference anatomical mark set to be registered, and the registration algorithm based on the anatomical mark, to obtain an initial registration result.
  • The computer device performs image registration on the floating image and the reference image according to the floating anatomical marker set to be registered, the reference anatomical marker set to be registered, and the anatomical-marker-based registration algorithm, to obtain an initial registration result including the transformation matrix between the floating image and the reference image. The anatomical-marker-based registration algorithm may be any one of the singular value decomposition algorithm, the iterative closest point algorithm, and the standard orthogonalization matrix algorithm.
  • The computer device may determine the intersection of the markers according to the matching results of the marker names in the floating anatomical marker set to be registered and the reference anatomical marker set to be registered; determine an initial floating anatomical marker set and an initial reference anatomical marker set from the floating anatomical marker set to be registered and the reference anatomical marker set to be registered, respectively, according to the marker intersection; and perform image registration on the floating image and the reference image according to the initial floating anatomical marker set, the initial reference anatomical marker set, and the anatomical-marker-based registration algorithm, to obtain an initial registration result including the transformation matrix between the floating image and the reference image.
  • Each anatomical marker has a unique name, and the anatomical markers with the same name in the floating anatomical marker set to be registered and the reference anatomical marker set to be registered constitute the intersection of the two sets.
  • the computer device may also use an anatomical mark with the same anatomical mark number as the mark intersection of the floating anatomical mark set to be registered and the reference anatomical mark set to be registered.
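  • As a minimal sketch of the two ideas above (matching markers by name and the singular-value-decomposition-based registration), the following code matches markers with the same name and estimates a rigid transform from the matched point sets with the Kabsch/Procrustes approach; the dictionary representation of the marker sets is an assumption made for illustration.

```python
import numpy as np

def rigid_from_markers(floating_markers, reference_markers):
    """floating_markers / reference_markers: dicts {marker_name: (x, y, z)}."""
    # Intersection of the two marker sets: markers with the same name.
    names = sorted(set(floating_markers) & set(reference_markers))
    P = np.array([floating_markers[n] for n in names], dtype=float)   # N x 3
    Q = np.array([reference_markers[n] for n in names], dtype=float)  # N x 3

    # Kabsch: rotation R and translation t minimizing ||R @ P + t - Q||.
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean

    # Assemble a 4x4 homogeneous transformation matrix.
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return names, M
```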
  • The computer device may use the anatomical-marker-based registration algorithm, based on the initial floating anatomical marker set and the initial reference anatomical marker set selected from the floating anatomical marker set to be registered and the reference anatomical marker set to be registered, to register the floating image and the reference image. The registration can be divided into three stages, each stage obtaining a corresponding registration result; the three-stage registration process is as follows:
  • S10311 Determine a first registration result according to an initial floating anatomical mark set, an initial reference anatomical mark set, and an anatomical mark-based registration algorithm; the first registration result includes the first registration result set and the first transformation matrix .
  • The computer device can obtain the first registration result set and the first transformation matrix after spatially transforming the floating anatomical marker set to be registered; the first registration result set and the first transformation matrix constitute the first registration result.
  • The first spatial distance is D1 = ||Pf1 - Pre1||, where Pf1 is the set of markers in the reference anatomical marker set to be registered corresponding to the first registration result set, and Pre1 is the first registration result set.
  • the above-mentioned preset ratio may be any value within (0,1) set as required.
  • The first floating anatomical marker set corresponding to the first spatial distances within the preset ratio may be selected directly, or the distances in the first spatial distance set may first be sorted in ascending order and the first floating anatomical marker set corresponding to the first spatial distances within the preset ratio then selected; selecting the markers closest to the reference anatomical marker set to be registered can improve the accuracy of registration.
  • The first floating anatomical marker set is the set, selected from the floating anatomical marker set to be registered, corresponding to the first spatial distances within the preset ratio.
  • the above target transformation matrix is a matrix used for image registration of the mark floating image and the mark reference image
  • the computer device may use the target transformation matrix to achieve registration of the mark floating image and the mark reference image.
  • the computer device may compare the number of markers in the first floating anatomical marker set with a preset number threshold, and determine whether to use the first transformation matrix as the target transformation matrix according to the comparison result.
  • The foregoing preset number threshold may be 5. When the number of markers in the first floating anatomical marker set is less than the preset number threshold, the first transformation matrix is used as the target transformation matrix; otherwise, the registration continues with the next stage.
  • S10314 Acquire a first reference anatomy mark set corresponding to the reference anatomy mark set to be registered in the first floating anatomy mark set.
  • the first reference anatomical mark set is a set of marks corresponding to the marks whose names or numbers are the same as those in the reference anatomical mark set to be registered.
  • S10315 Determine the second transformation matrix according to the first floating anatomical marker set, the first reference anatomical marker set, and the registration algorithm based on the anatomical markers.
  • The computer device may obtain the second transformation matrix according to the first floating anatomical marker set, the first reference anatomical marker set, and the preset anatomical-marker-based registration algorithm.
  • S10316 Determine a second registration result set according to the second transformation matrix and the floating anatomy mark set to be registered.
  • The computer device can use the second transformation matrix to spatially transform the floating anatomical marker set to be registered and, in combination with interpolation methods such as nearest neighbor interpolation, bilinear interpolation, or trilinear interpolation, obtain the second registration result set.
  • S10317 Determine, according to the second spatial distance set and the preset distance threshold, a second floating anatomy mark set corresponding to the second spatial distance that is less than the preset distance threshold; the reference anatomy to be registered is recorded in the second spatial distance set The second spatial distance between each corresponding mark in the mark set and the second registration result set.
  • The second spatial distance is D2 = ||Pf2 - Pre2||, where Pf2 is the set of markers in the reference anatomical marker set to be registered corresponding to the second registration result set, and Pre2 is the second registration result set.
  • the above-mentioned preset distance threshold may be set according to need, for example, the distance threshold may be determined according to the actual distance between the corresponding reference anatomical mark set to be registered and the corresponding marks in the second registration result set acceptable to the user.
  • the second floating anatomical mark set is a set corresponding to the second spatial distance within a preset distance threshold selected from the floating anatomical mark set to be registered.
  • The computer device may compare the number of markers in the second floating anatomical marker set with the preset number threshold, and determine whether to use the second transformation matrix as the target transformation matrix according to the comparison result. When the number of markers in the second floating anatomical marker set is less than the preset number threshold, the second transformation matrix is used as the target transformation matrix; otherwise, the registration continues with the next stage.
  • the second reference anatomical mark set is a set corresponding to the mark with the same name or number as the mark in the second floating anatomical mark set selected from the reference anatomical mark set to be registered.
  • S10320 Determine a third transformation matrix according to the second floating anatomy marker set, the second reference anatomy marker set, and the registration algorithm based on the anatomical markers, and use the third transformation matrix as the target transformation matrix.
  • The computer device may obtain a third transformation matrix based on the second floating anatomical marker set, the second reference anatomical marker set, and the preset anatomical-marker-based registration algorithm. After obtaining the third transformation matrix, the computer device may directly use the third transformation matrix as the target transformation matrix.
  • The computer device can map the marked floating image into the marked reference image space according to the product of the matrix formed by the coordinate positions of the pixels of the floating image and the target transformation matrix, in combination with interpolation methods such as nearest neighbor interpolation, bilinear interpolation, or trilinear interpolation, so as to align the marked reference image and the marked floating image in terms of anatomical structure, thereby completing the image registration of the marked reference image and the marked floating image.
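  • A minimal sketch of applying the target transformation matrix to map the floating image into the reference image space with trilinear interpolation, as described above; note that scipy's affine_transform expects the mapping from output coordinates to input coordinates, so the inverse of the target matrix is used.

```python
import numpy as np
from scipy.ndimage import affine_transform

def warp_with_matrix(floating, target_matrix, reference_shape, order=1):
    """Map `floating` into the reference image space with a 4x4 target matrix.

    `target_matrix` maps floating-image coordinates to reference-image
    coordinates; affine_transform needs the inverse (output -> input) mapping.
    order=1 gives trilinear interpolation, order=0 nearest neighbour.
    """
    inv = np.linalg.inv(target_matrix)
    return affine_transform(floating, inv[:3, :3], offset=inv[:3, 3],
                            output_shape=reference_shape, order=order)
```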
  • The computer device may adjust the above preset ratio and preset distance threshold in the following manner: add noise to each marker in the floating image and the reference image to be registered, register the floating image and the reference image using the above three-stage registration method to obtain a new target transformation matrix, then use the new target transformation matrix to perform image registration on the floating image and the reference image, and calculate, with a preset similarity measurement model and according to the obtained registration result, the similarity metric value between the registered floating image and the reference image. The similarity metric value is compared with a preset similarity metric threshold; if it is less than the preset similarity metric threshold, at least one of the above preset ratio and preset distance threshold is adjusted until the finally obtained similarity metric value is greater than the preset similarity metric threshold. Adjusting the preset ratio and the preset distance threshold to appropriate values in this way improves the registration accuracy of images registered using the algorithm with the adjusted preset ratio and preset distance threshold.
  • In this embodiment, the computer device can acquire the floating anatomical marker set to be registered of the marked floating image and the reference anatomical marker set to be registered of the marked reference image, and perform image registration on the marked floating image and the marked reference image in three stages according to the floating anatomical marker set to be registered, the reference anatomical marker set to be registered, and the anatomical-marker-based registration algorithm. Each stage uses only the markers satisfying certain conditions, such as the markers within the preset ratio or within the preset distance threshold, instead of all markers, which greatly reduces the amount of computation and improves registration speed. In addition, each stage uses a different marker set, which reduces the influence that possibly mis-detected anatomical markers have on registration accuracy, and the markers at each stage are screened according to the preset ratio or the preset distance threshold, which can improve registration accuracy. The staged registration method provided in this embodiment can therefore improve the accuracy of image registration.
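  • The staged filtering described above can be sketched as follows, reusing the rigid_from_markers helper from the earlier sketch; the preset ratio, distance threshold, and number threshold are illustrative values, not values fixed by this disclosure.

```python
import numpy as np

def staged_registration(floating_markers, reference_markers,
                        preset_ratio=0.8, distance_threshold=5.0, min_markers=5):
    # Stage 1: fit on the full intersection, keep the closest preset_ratio of markers.
    names, M1 = rigid_from_markers(floating_markers, reference_markers)
    P = np.array([floating_markers[n] for n in names])
    Q = np.array([reference_markers[n] for n in names])
    d1 = np.linalg.norm((M1[:3, :3] @ P.T).T + M1[:3, 3] - Q, axis=1)    # first spatial distances
    keep = np.argsort(d1)[: int(len(names) * preset_ratio)]
    if len(keep) < min_markers:
        return M1                                     # too few markers left: keep the first matrix

    # Stage 2: refit on the kept markers, keep those under the distance threshold.
    subset = {names[i]: floating_markers[names[i]] for i in keep}
    names2, M2 = rigid_from_markers(subset, reference_markers)
    P2 = np.array([subset[n] for n in names2])
    Q2 = np.array([reference_markers[n] for n in names2])
    d2 = np.linalg.norm((M2[:3, :3] @ P2.T).T + M2[:3, 3] - Q2, axis=1)  # second spatial distances
    keep2 = [n for n, d in zip(names2, d2) if d < distance_threshold]
    if len(keep2) < min_markers:
        return M2

    # Stage 3: final fit on the markers that passed both filters.
    final = {n: floating_markers[n] for n in keep2}
    _, M3 = rigid_from_markers(final, reference_markers)
    return M3
```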
  • When the target image registration algorithm is the segmentation-based image registration algorithm, the foregoing S1022 includes:
  • S1040 Acquire a divided floating image corresponding to the floating image and a divided reference image corresponding to the reference image.
  • the segmented floating image and the segmented reference image may be images corresponding to the semantic information extracted from the floating image and the reference image to be registered according to a preset trained neural network model.
  • the computer device may use the preset trained neural network model to divide the floating image and the reference image to be registered into arbitrary regions to obtain the divided floating image and the divided reference image.
  • S1041 Perform image registration on the floating image and the reference image according to the segmentation floating image, the segmentation reference image, and the segmentation-based image registration algorithm to obtain an initial registration result.
  • the image registration algorithm based on segmentation may be any one of algorithms such as a surface matching algorithm, a mutual information method, and a gray mean square error method.
  • The computer device may determine the target segmentation transformation matrix according to the acquired segmented floating image, the segmented reference image, and the segmentation-based image registration algorithm, so as to map the floating image to be registered into the spatial coordinates of the reference image based on the target segmentation transformation matrix, complete the registration of the floating image and the reference image, and obtain the initial registration result.
  • In this embodiment, the computer device can acquire the segmented floating image corresponding to the floating image and the segmented reference image corresponding to the reference image, and directly register the floating image and the reference image according to the segmented floating image, the segmented reference image, and the segmentation-based image registration algorithm. The implementation is relatively simple and improves the efficiency of image registration for the floating image and the reference image.
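  • The following sketch is one simple way to derive a rigid transform from two segmentation masks, aligning their centroids and principal axes; it is not one of the specific algorithms named above (surface matching, mutual information, gray mean-squared error) but illustrates how a segmentation alone can drive registration.

```python
import numpy as np

def mask_alignment_matrix(floating_mask, reference_mask):
    """Rigid initialization aligning two binary segmentation masks by their
    centroids and principal axes (eigenvectors of the point covariance)."""
    def centroid_and_axes(mask):
        pts = np.argwhere(mask).astype(float)      # voxel coordinates of the region
        c = pts.mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov((pts - c).T))
        if np.linalg.det(vecs) < 0:                # keep a right-handed frame
            vecs[:, 0] *= -1
        return c, vecs

    cf, Vf = centroid_and_axes(floating_mask)
    cr, Vr = centroid_and_axes(reference_mask)
    R = Vr @ Vf.T                                  # rotate floating axes onto reference axes
    t = cr - R @ cf
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M
```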
  • As shown in FIG. 5, a schematic flowchart of another image registration method is provided.
  • the foregoing method further includes:
  • The computer device obtains the initial registration result including the transformation matrix between the floating image and the reference image after performing image registration on the floating image and the reference image.
  • S1051 Integrate the initial registration results obtained by different anatomical marks and/or the initial registration results obtained by different segmentation regions according to a preset registration result integration method.
  • the preset registration result integration method may be any one of trilinear interpolation method, B-spline interpolation method and the like.
  • Image integration refers to organically combining, by means of an algorithm, two or more registered images from different imaging devices or acquired at different times.
  • the computer device integrates the initial registration results obtained by different anatomical markers and/or the initial registration results obtained by different segmentation regions according to a preset registration result integration method. That is, the computer device may integrate the floating image and the reference image in the initial registration result according to a preset registration result integration method to obtain a distorted image in which the floating image and the reference image are integrated in the reference image space.
  • In this embodiment, the computer device can obtain the initial registration result after performing image registration on the floating image and the reference image, and then integrate, according to the preset registration result integration method, the initial registration results obtained from different anatomical markers and/or the initial registration results obtained from different segmented regions. This integrates the floating image and the reference image into one image, organically combining the advantages of the respective images to obtain a new, more informative image, which better assists doctors in using the integrated image to judge the patient's condition.
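  • A minimal sketch of integrating co-registered images (same shape, same space) into a single fused image; a plain weighted average is used purely for illustration and is not a specific integration method from this disclosure.

```python
import numpy as np

def integrate_registered_images(images, weights=None):
    """Fuse a list of co-registered images into one image by weighted averaging."""
    stack = np.stack([np.asarray(img, dtype=float) for img in images])
    if weights is None:
        weights = np.full(len(images), 1.0 / len(images))
    # Reshape weights so they broadcast over the image dimensions.
    w = np.asarray(weights, dtype=float).reshape((-1,) + (1,) * (stack.ndim - 1))
    return (stack * w).sum(axis=0)
```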
  • the foregoing S1023 includes:
  • The computer device performs a down-sampling operation on the reference image to obtain a down-sampled reference image and a down-sampling operation on the floating image to obtain a down-sampled floating image, uses the transformation matrix to spatially transform the down-sampled floating image to obtain the corresponding transformed floating image, and then determines the similarity metric value between the down-sampled reference image and the transformed floating image corresponding to the down-sampled floating image, for example using a preset similarity metric calculation algorithm such as the mutual information method or the gray mean-squared error method.
  • The similarity metric value determined here refers to the degree of similarity between the down-sampled reference image and the transformed floating image corresponding to the down-sampled floating image.
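  • A minimal sketch of the down-sampling step and a histogram-based mutual information metric, one of the similarity metrics mentioned above (the gray mean-squared error would be another choice); the bin count and stride are illustrative.

```python
import numpy as np

def downsample(image, factor=2):
    """Naive down-sampling by striding, to reduce the cost of the metric."""
    return image[tuple(slice(None, None, factor) for _ in range(image.ndim))]

def mutual_information(image_a, image_b, bins=32):
    """Histogram-based mutual information between two images of the same shape."""
    joint, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                         # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```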
  • S1061 Perform at least one of a translation operation, a rotation operation, a shear (miscut) operation, and a scaling operation on the transformation matrix to extract the initial parameters corresponding to the transformation matrix.
  • The computer device performs at least one of a translation operation, a rotation operation, a shear operation, and a scaling operation on the transformation matrix of the floating image and the reference image to extract the initial parameters corresponding to the transformation matrix.
  • For a three-dimensional image, the corresponding transformation matrix may be a 4*4 matrix. The computer device may perform translation, rotation, shear, and scaling operations on the above transformation matrix, that is, decompose the transformation matrix into four 4*4 matrices: a translation matrix, a rotation matrix, a shear matrix, and a scaling matrix. Then, according to the translation distances, rotation angles, shear angles, and scaling factors of these four matrices in the three-dimensional coordinate system, the 12 initial parameters corresponding to the transformation matrix are obtained.
  • In some embodiments, the computer device can obtain the eight initial parameters corresponding to the transformation matrix.
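  • As a sketch of this parameterization, the following code composes a 4*4 affine matrix from 12 parameters (3 translations, 3 rotation angles, 3 shears, 3 scales); decomposing a given matrix is the reverse of this construction. The parameter ordering and the composition order (translation, rotation, shear, scale) are illustrative assumptions.

```python
import numpy as np

def affine_from_parameters(t, r, sh, s):
    """Compose a 4x4 affine matrix from 12 parameters: translations t,
    rotation angles r (radians), shear factors sh, and scales s."""
    T = np.eye(4); T[:3, 3] = t                              # translation matrix

    cx, cy, cz = np.cos(r); sx, sy, sz = np.sin(r)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = np.eye(4); R[:3, :3] = Rz @ Ry @ Rx                  # rotation matrix

    Sh = np.eye(4); Sh[0, 1], Sh[0, 2], Sh[1, 2] = sh        # shear matrix
    S = np.diag([s[0], s[1], s[2], 1.0])                     # scaling matrix

    return T @ R @ Sh @ S
```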
  • S1062 Determine the target transformation matrix according to the similarity metric value, the initial parameter, and the preset gradient descent method.
  • The computer device may adjust the above initial parameters according to a preset gradient descent method so that the above similarity metric value reaches its optimal value, use the adjusted parameters corresponding to the optimal similarity metric value as the target parameters, and determine the target transformation matrix corresponding to the target parameters according to the target parameters.
  • S1063 Transform the floating image according to the target transformation matrix to obtain the transformed floating image.
  • the computer device may use the target transformation matrix to transform the floating image and map it to the spatial coordinate system corresponding to the reference image to obtain the transformed floating image.
  • In this embodiment, the computer device may determine, according to the transformation matrix of the floating image and the reference image, the similarity metric value between the down-sampled reference image obtained after down-sampling the reference image and the transformed floating image corresponding to the down-sampled floating image obtained after down-sampling the floating image; perform at least one of a translation operation, a rotation operation, a shear operation, and a scaling operation on the transformation matrix to extract the initial parameters corresponding to the transformation matrix; and determine the target parameters according to the similarity metric value, the initial parameters, and the preset gradient descent method. Since the target parameters are the parameters corresponding to the optimal similarity metric value, the target transformation matrix determined from the target parameters is also better; in this way, the floating image can be accurately transformed using the target transformation matrix, which improves the accuracy of the obtained transformed floating image.
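  • A minimal sketch of adjusting the parameters with a simple gradient method, approximating the gradient by finite differences; the learning rate, step count, and the finite-difference scheme are illustrative stand-ins for the preset gradient descent method mentioned above. Here similarity_fn(params) would, for example, build the transformation matrix from the parameters, transform the down-sampled floating image, and return its similarity to the down-sampled reference image.

```python
import numpy as np

def optimize_parameters(params, similarity_fn, lr=0.01, steps=200, eps=1e-3):
    """Adjust parameters by gradient ascent on the similarity metric."""
    params = np.asarray(params, dtype=float).copy()
    for _ in range(steps):
        base = similarity_fn(params)
        grad = np.zeros_like(params)
        for i in range(len(params)):
            trial = params.copy()
            trial[i] += eps
            grad[i] = (similarity_fn(trial) - base) / eps   # finite-difference gradient
        params += lr * grad                                  # move toward higher similarity
    return params
```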
  • the target registration model includes a forward registration network and a backward registration network;
  • the training process of the target registration model includes:
  • the preset unsupervised method or weakly supervised method is used to iteratively train the preset forward registration network and the preset backward registration network to obtain the target registration model.
  • the unsupervised method refers to the use of unlabeled medical images as training sample images, and the distribution of images or the relationship between images and images is learned from the training sample images;
  • the weakly supervised method refers to using partially labeled medical images as training sample images and learning the distribution of images or the relationship between images from the training sample images.
  • The computer device may adopt the preset unsupervised method, using unlabeled medical images as training samples, to iteratively train the preset forward registration network and the preset backward registration network and learn the distribution of images or the relationship between images, so as to obtain a target registration model for registering images of different modalities. Alternatively, the computer device can adopt the preset weakly supervised method, using a part of labeled medical images and a part of unlabeled medical images as training samples, to iteratively train the preset forward registration network and the preset backward registration network, learn the distribution of images or the relationship between images, and use the unlabeled images to further improve the accuracy and generalization ability of the model, so as to obtain a target registration model for registering images of different modalities.
• Because the computer device adopts the preset unsupervised method or weakly supervised method, the iterative training of the preset forward registration network and the preset backward registration network can be completed effectively, which greatly improves the efficiency of obtaining the target registration model and thus the efficiency of registering the floating image.
• a preset unsupervised method is used to iteratively train the preset forward registration network and the preset backward registration network to obtain the target registration model, which includes: using the preset first training mode and second training mode to iteratively train the preset forward registration network and the preset backward registration network to obtain the target registration model; the first training mode is the mode of training the forward registration network first and then the backward registration network, and the second training mode is the mode of training the backward registration network first and then the forward registration network.
• the computer device adopts the preset first training mode of training the forward registration network first and then the backward registration network, and the preset second training mode of training the backward registration network first and then the forward registration network, to iteratively train the preset forward registration network and the preset backward registration network to obtain the target registration model.
  • the forward registration network and the backward registration network may be Convolutional Neural Networks (CNN) in deep learning.
• Because the computer device adopts the preset first training mode and second training mode to iteratively train the preset forward registration network and the preset backward registration network, the iterative training can improve the accuracy of the target registration model for registering images of different modalities, and thus improves the accuracy of registering the floating image according to the target registration model.
• As shown in FIG. 7, a schematic flowchart of another image registration method is provided. The foregoing use of the preset first training mode to train the preset forward registration network and the preset backward registration network includes:
• S1070 Input the first floating image and the first reference image into a preset forward registration network to obtain a first registered floating image; the modality of the first reference image is modality one, the modality of the first floating image is modality two, and the modality of the first registered floating image is the same as the modality of the first floating image.
  • the computer device inputs the first reference image in mode one and the first floating image in mode two into a preset forward registration network to obtain the first registered floating image in the same mode as the first floating image.
  • the first reference image and the first floating image may be obtained from the PACS server or directly from different medical imaging devices.
• For example, the CT image may be used as the first reference image, and the MRI image may be input into the forward registration network as the first floating image to obtain the first registered floating image, that is, the registered MRI image.
• If the target registration model is used to register the transformed floating image and the reference image, then, correspondingly, the first floating image mentioned here is also a transformed image. That is, the computer device performs semantic information extraction on the first floating image and the first reference image to obtain a marked first floating image and a marked first reference image including the extracted semantic information, determines, according to the extracted semantic information, the target image registration algorithm corresponding to the marked first floating image and the marked first reference image, registers the first floating image and the first reference image according to the extracted semantic information and the target image registration algorithm to obtain the transformation matrix between the first floating image and the first reference image, and then obtains the transformed image, that is, the first floating image referred to herein, according to the transformation matrix between the first floating image and the first reference image, the first reference image and the first floating image.
  • S1071 Determine the first registered floating image as the second reference image of the preset backward registration network.
  • the computer device determines the first registration floating image as the second reference image of the preset backward registration network, that is, the mode of the second reference image is mode 2.
  • the first registered floating image is the registered MRI image.
• the computer device first obtains an image of modality one as the second floating image, uses the first registered floating image as the second reference image, and then inputs the second reference image and the second floating image into the preset backward registration network to obtain a second registered floating image with the same modality as the second floating image.
  • the computer device may obtain the second floating image from the PACS server, or may directly obtain the second floating image from the medical imaging device in the same modality as the modality one.
  • the second floating image is also a transformed image
  • the process of obtaining the second floating image here may refer to the description of the foregoing embodiment, and details are not described herein again.
• S1073 Obtain a first similarity between the second registered floating image and the first reference image according to the second registered floating image and the first reference image, and train the preset forward registration network and the preset backward registration network according to the first similarity.
  • the computer device obtains a first similarity between the second registered floating image and the first reference image according to the second registered floating image and the first reference image, and registers the preset forward direction according to the first similarity Network and preset backward registration network for training.
  • the first similarity is a similarity measure between the second registered floating image and the first reference image.
• the first similarity may be the cross-correlation, mean square error, mutual information or correlation coefficient between the second registered floating image and the first reference image, or it may be a similarity automatically discriminated between the images by a discriminator network.
  • the discriminator network can be a simple convolutional neural network.
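• As an illustration of the simpler similarity measures named above, a minimal numpy sketch of the cross-correlation and the mean square error between two same-shape images follows; the function names are ours, and mutual information or a discriminator-network similarity would be computed differently.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Cross-correlation between two same-shape images, in [-1, 1]."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def mean_square_error(a, b):
    """Mean square error between two same-shape images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

# For example, the first similarity between the second registered floating image
# and the first reference image (both in modality one) could be taken as:
# first_similarity = normalized_cross_correlation(second_registered, first_reference)
```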
• the computer device may adjust the parameter values in the preset forward registration network and the preset backward registration network according to the value of the first similarity, so as to train the preset forward registration network and the preset backward registration network.
• the computer device inputs the first floating image and the first reference image into the preset forward registration network to obtain a first registered floating image with the same modality as the first floating image, uses the first registered floating image as the second reference image of the preset backward registration network, and inputs the second floating image of modality one and the second reference image into the preset backward registration network to obtain the second registered floating image. Since the second registered floating image has the same modality as the first reference image, by acquiring the first similarity between the second registered floating image and the first reference image and training the preset forward registration network and the preset backward registration network accordingly, the registration of images of different modalities is realized, and the registration problem of cross-modal images is solved.
• the training of the preset forward registration network and the preset backward registration network according to the first similarity in the above S1073 includes: determining the first similarity as the first accuracy of the second registered floating image, and guiding the training of the preset forward registration network and the preset backward registration network according to the first accuracy.
  • the computer device determines the first similarity acquired above as the first accuracy of the second registered floating image, and trains the forward registration network and the backward registration network according to the first accuracy.
  • the computer device determines the first similarity as the first accuracy of the second registered floating image, and guides the training of the forward registration network and the backward registration network according to the first accuracy.
• Because the first accuracy is determined according to the first similarity, the accuracy of the determined first accuracy is improved, and thus the accuracy of the forward registration network and the backward registration network obtained by training according to the first accuracy is improved.
• As shown in FIG. 8, a schematic flowchart of another image registration method is provided. The foregoing use of the preset second training mode to train the preset forward registration network and the preset backward registration network includes:
• the computer device determines the first floating image as the third reference image of the backward registration network, and determines the first reference image as the third floating image of the backward registration network; that is, the modality of the third reference image is modality two, and the modality of the third floating image is modality one.
• the computer device inputs the third floating image and the third reference image into the backward registration network to obtain a third registered floating image with the same modality as the third floating image, that is, the modality of the third registered floating image is modality one.
• For example, the CT image is determined as the third floating image, the MRI image is determined as the third reference image, and the CT image and the MRI image are input into the backward registration network to obtain the third registered floating image, that is, the registered CT image.
  • S1081 Determine the third registered floating image as the fourth reference image of the preset forward registration network.
  • the computer device determines the third registration floating image as the fourth reference image of the preset forward registration network, that is, the mode of the fourth reference image is mode 1.
  • the fourth reference image is a registered CT image.
• the computer device first obtains an image of modality two as the fourth floating image, uses the third registered floating image as the fourth reference image, and then inputs the fourth floating image and the fourth reference image into the preset forward registration network to obtain a fourth registered floating image with the same modality as the fourth floating image.
  • the computer device may obtain the fourth floating image from the PACS server, or may directly obtain the fourth floating image from the medical imaging device in the same mode as mode 2.
  • the fourth floating image is also a transformed image, and the process of obtaining the fourth floating image here may refer to the description of the foregoing embodiment, and details are not described herein again.
• S1083 Acquire a second similarity between the fourth registered floating image and the third reference image according to the fourth registered floating image and the third reference image, and train the preset backward registration network and the preset forward registration network according to the second similarity.
  • the computer device obtains a second similarity between the fourth registered floating image and the third reference image according to the fourth registered floating image and the third reference image, and registers the preset backward according to the second similarity Network and preset forward registration network for training.
  • the second similarity is a similarity measure between the fourth registered floating image and the third reference image.
• the second similarity may be the cross-correlation, mean square error, mutual information or correlation coefficient between the fourth registered floating image and the third reference image, or it may be a similarity automatically discriminated between the images by a discriminator network.
  • the discriminator network can be a simple convolutional neural network.
• the computer device may adjust the parameter values in the preset backward registration network and the preset forward registration network according to the value of the second similarity, so as to train the preset backward registration network and the preset forward registration network.
  • the computer device determines the first floating image as the third reference image of the backward registration network, the first reference image as the third floating image of the backward registration network, and the third floating image and The third reference image is input to the backward registration network to obtain a third registration floating image with the same mode as the third floating image, and then the third registration floating image is used as the fourth reference image of the forward registration network
  • the fourth floating image in the second mode and the fourth reference image are input into a preset forward registration network to obtain a fourth registered floating image.
• the training of the backward registration network and the forward registration network according to the second similarity in S1083 above includes: determining the second similarity as the second accuracy of the fourth registered floating image, and guiding the training of the preset backward registration network and the preset forward registration network according to the second accuracy.
  • the computer device determines the obtained second similarity as the second accuracy of the fourth registered floating image, and according to the second accuracy, the preset backward registration network and the preset forward registration network For training.
• the larger the value of the second similarity, the higher the second accuracy of the fourth registered floating image; the smaller the value of the second similarity, the lower the second accuracy of the fourth registered floating image.
• the computer device determines the second similarity as the second accuracy of the fourth registered floating image and guides the training of the preset backward registration network and the preset forward registration network according to the second accuracy. Because the second accuracy is determined based on the second similarity, the accuracy of the determined second accuracy is greatly improved, and thus the accuracy of the backward registration network and the forward registration network obtained by training according to the second accuracy is improved.
• As shown in FIG. 9, a schematic flowchart of another image registration method is provided. The foregoing use of the preset first training mode and the second training mode to iteratively train the preset forward registration network and the preset backward registration network to obtain the target registration model further includes:
  • S1090 Obtain the value of the first loss function of the first training mode according to the first similarity, and obtain the value of the second loss function of the second training mode according to the second similarity.
• the loss function is the objective function in the training process of the image registration model, and is defined by the dissimilarity between the images.
  • the computer device acquires the first loss function of the first training mode according to the first similarity, and acquires the second loss function of the second training mode according to the second similarity.
• For example, if the first similarity is the cross-correlation between the second registered floating image and the first reference image, the value of the first loss function is equal to 1 minus the cross-correlation; if the second similarity is the mean square error between the fourth registered floating image and the third reference image, the value of the second loss function is equal to 1 minus the mean square error.
  • S1091 Determine the target registration model according to the value of the first loss function and the value of the second loss function.
  • the computer device determines the forward registration network and the backward registration network corresponding to the first loss function and the second loss function according to the acquired values of the first loss function and the second loss function, and The forward registration network and backward registration network are determined as the target registration model.
  • the computer device may determine the corresponding forward registration network and backward registration network when the values of the first loss function and the second loss function reach stable values as the target registration model.
• the computer device acquires the value of the first loss function of the first training mode according to the first similarity and acquires the value of the second loss function according to the second similarity. Because the value of the first loss function and the value of the second loss function are obtained based on the similarity between images of the same modality, the obtained values of the first loss function and the second loss function are more accurate, thereby improving the accuracy of the registration model determined according to the value of the first loss function and the value of the second loss function.
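• The alternation between the two training modes and the stopping rule based on stable loss values can be illustrated with a hedged sketch. All of the names below are placeholders we introduce for illustration: `forward_net` and `backward_net` stand for the two registration networks, `update` for one parameter-update step of the training framework, `similarity` for a measure such as the cross-correlation sketched earlier, and the same modality-one and modality-two images are reused as the second and fourth floating images only to keep the example short.

```python
import numpy as np

def first_mode_loss(forward_net, backward_net, img_mod1, img_mod2, similarity):
    """First training mode (FIG. 7): forward network first, then backward network."""
    first_registered = forward_net(floating=img_mod2, reference=img_mod1)            # modality two
    second_registered = backward_net(floating=img_mod1, reference=first_registered)  # modality one
    return 1.0 - similarity(second_registered, img_mod1)      # e.g. 1 - cross-correlation

def second_mode_loss(forward_net, backward_net, img_mod1, img_mod2, similarity):
    """Second training mode (FIG. 8): backward network first, then forward network."""
    third_registered = backward_net(floating=img_mod1, reference=img_mod2)            # modality one
    fourth_registered = forward_net(floating=img_mod2, reference=third_registered)    # modality two
    return 1.0 - similarity(fourth_registered, img_mod2)

def train_until_stable(forward_net, backward_net, update, pairs, similarity,
                       window=20, tol=1e-4, min_steps=50):
    """Alternate the two modes and stop once both loss values have stabilised (FIG. 9)."""
    hist1, hist2 = [], []
    for step, (img_mod1, img_mod2) in enumerate(pairs):
        l1 = first_mode_loss(forward_net, backward_net, img_mod1, img_mod2, similarity)
        l2 = second_mode_loss(forward_net, backward_net, img_mod1, img_mod2, similarity)
        update(forward_net, backward_net, l1 + l2)             # one parameter update
        hist1.append(l1)
        hist2.append(l2)
        if (step >= min_steps
                and np.std(hist1[-window:]) < tol
                and np.std(hist2[-window:]) < tol):
            break                                              # both loss values have reached stable values
    return forward_net, backward_net                           # the target registration model
```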
  • the image registration device includes an acquisition module 110 and a registration module 111.
  • the obtaining module 110 is used to obtain a floating image and a reference image to be registered; the floating image and the reference image are images of two different modalities;
  • the registration module 111 is used to obtain registration results based on the floating image, the reference image and the target registration method; the target registration method is used to register images of different modalities.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • Each module in the above image registration device may be implemented in whole or in part by software, hardware, and a combination thereof.
• the above modules may be embedded in, or independent of, the processor of the computer device in hardware form, or may be stored in the memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided, and its internal structure diagram may be as shown in FIG. 11.
  • the computer equipment includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer programs.
  • the internal memory provides an environment for the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with external computer devices through a network connection.
  • the computer program is executed by the processor to implement an image processing method.
  • the display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen
• the input device of the computer device may be a touch layer covering the display screen, or may be a button, a trackball or a touchpad provided on the housing of the computer device, or may be an external keyboard, touchpad or mouse.
• a computer device is provided, including a memory and a processor, where a computer program is stored in the memory, and the processor implements the following steps when executing the computer program:
• obtaining a floating image and a reference image to be registered, where the floating image and the reference image are images of two different modalities; and
• obtaining a registration result according to the floating image, the reference image and a target registration method, where the target registration method is used to register images of different modalities.
• a readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the following steps are realized:
• obtaining a floating image and a reference image to be registered, where the floating image and the reference image are images of two different modalities; and
• obtaining a registration result according to the floating image, the reference image and a target registration method, where the target registration method is used to register images of different modalities.
• Image registration can realize the matching and superposition of two or more images acquired at different times, from different imaging devices or under different conditions, for example matching and superimposing computerized tomography (CT) images and positron emission computerized tomography (PET) images, so that the information of the CT image and the PET image participating in the registration is displayed on the same image, which provides good assistance for clinical medical diagnosis. It is a key technology in the field of image processing.
• When the region of interest (Region Of Interest, ROI) is an irregular region, the irregular region in the image to be registered is extracted and registration is performed based on the irregular region; when the ROI is a key point, the key points in the image to be registered are extracted and registration is performed based on the key points.
  • an image registration method, device, computer device, and storage medium are provided.
  • an embodiment of the present application provides a method for image registration.
  • the method includes:
• the reference image and the floating image may be images of the same modality or images of different modalities; for example, the reference image and the floating image may both be CT images, or one may be a CT image and the other a PET image.
• the computer device can register the obtained two or more images; for example, one of the images is used as the reference image, the other images are used as floating images, and the floating image is mapped to the reference image to realize the alignment of the reference image and the floating image under the anatomical structure.
  • the reference image and the floating image may be images of the same individual, or images of different individuals, or images containing the same anatomical structure, or images containing part of the same anatomical structure.
  • the embodiment does not limit the sources of the reference image and the floating image.
  • the reference image and the floating image may be two-dimensional images or three-dimensional images, which is not specifically limited in this embodiment.
• the computer device can extract the semantic information in the reference image and the floating image according to a preset trained neural network model. For example, if an area corresponding to the lungs is detected, the computer device can segment the area corresponding to the lungs to extract the semantic information corresponding to the lungs, that is, a segmented area; if a bone is detected, the position corresponding to the bone is marked with a marker point, so that the semantic information corresponding to the bone refers to an anatomical marker point.
• After the computer device uses the preset neural network model to extract the semantic information of the reference image and the floating image, a marked reference image and a marked floating image containing the extracted semantic information can be obtained.
• Then, a target image registration model corresponding to the marked reference image and the marked floating image is determined from the preset image registration models according to the semantic information.
• the above image registration model is a model for registering the reference image and the floating image obtained after extracting semantic information, such as the algorithm models corresponding to the surface matching algorithm, the mutual information method, the standard orthogonalization matrix method and the least square method.
• For different semantic information, the computer device can use different registration models to register the two images; that is, the marked reference image and marked floating image including the segmented area, and the marked reference image and marked floating image including the anatomical marker points, can correspond to different image registration models.
• the above semantic information includes: at least one of a segmented area and an anatomical marker point of the floating image, and at least one of a segmented area and an anatomical marker point of the reference image.
  • the above semantic information may be anatomical mark points in the reference image and the floating image, or may be segmentation areas in the reference image and the floating image.
• the above-mentioned anatomical marker points may be geometric marker points, such as gray-scale extreme points or intersection points of linear structures, or anatomical marker points that are clearly visible in the anatomical shape and can be accurately positioned, such as key marker points or feature points of human tissues, organs or lesions; the above-mentioned segmented regions may be curves or curved surfaces corresponding to the reference image and the floating image, such as lungs, livers or irregular regions.
  • the above-mentioned preset image registration model may include an image registration model based on segmentation and a registration model based on anatomical markers.
• the segmentation-based image registration model is an image registration model that can perform image registration on the marked reference image and the marked floating image including the above-mentioned segmented area, such as the algorithm models corresponding to the surface matching algorithm, the mutual information method and the grayscale mean square error method; the registration model based on anatomical marker points is a registration model that can perform image registration on the marked reference image and the marked floating image including the above anatomical marker points, such as the algorithm models corresponding to the singular value decomposition algorithm, the iterative closest point method and the standard orthogonalization matrix method.
  • image registration is performed on the reference image and the floating image according to the semantic information and the target image registration model.
  • the computer device may select a corresponding target image registration model to perform image registration on the reference image and the floating image.
• a reference image or a floating image may include both a segmented area and anatomical marker points.
• In this case, the computer device may first use the target image registration model corresponding to the anatomical marker points to register the anatomical marker points in the reference image and the floating image, and then use the target image registration model corresponding to the segmented area to register the segmented areas in the reference image and the floating image; it may also first use the target image registration model corresponding to the segmented area to register the segmented areas in the reference image and the floating image, and then use the target image registration model corresponding to the anatomical marker points to register the anatomical marker points in the reference image and the floating image; or it may simultaneously use the target image registration model corresponding to the anatomical marker points to register the anatomical marker points and use the target image registration model corresponding to the segmented area to register the segmented areas, which is not limited in this embodiment.
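• One possible ordering (anatomical marker points first, then segmented areas) can be sketched as below. Everything in the sketch is illustrative: the `semantics` dict layout and the two registration callables are stand-ins we introduce, not the embodiment's interfaces, and each callable is assumed to return a warped image together with a 4*4 transformation matrix.

```python
import numpy as np

def register_by_semantic_info(reference, floating, semantics,
                              landmark_registration, segmentation_registration):
    """Dispatch to the target image registration models according to the semantic information."""
    warped, matrix = floating, np.eye(4)
    if "landmarks" in semantics:          # anatomical marker points were extracted
        warped, matrix = landmark_registration(reference, warped, semantics["landmarks"])
    if "segmentation" in semantics:       # segmented areas were extracted
        warped, matrix = segmentation_registration(reference, warped, semantics["segmentation"])
    return warped, matrix
```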
• the computer device can also introduce a graphics processing unit (Graphics Processing Unit, GPU) that supports the Compute Unified Device Architecture (CUDA) parallel computing architecture, so that while the CPU is used for the arithmetic processing related to image registration, the GPU processes some operations in parallel to further accelerate the registration algorithm that registers the reference image and the floating image.
• the computer device can obtain the reference image and the floating image to be registered, extract the semantic information of the reference image and the floating image to obtain a marked reference image and a marked floating image including the semantic information, then determine, according to the semantic information, the target image registration model corresponding to the marked reference image and the marked floating image from the preset image registration models, and finally perform image registration on the marked reference image and the marked floating image according to the semantic information and the target image registration model.
• Because the computer device first extracts the semantic information of the reference image and the floating image, different target image registration models can be used to register the reference image and the floating image according to the different semantic information, completing the registration of a reference image and a floating image that include multiple kinds of semantic information. This overcomes the limitation in the prior art that the reference image and the floating image can only be registered based on a single kind of semantic information, and greatly improves the applicable range of image registration.
  • FIG. 13 is a schematic flowchart of an image registration method according to another embodiment.
• This embodiment relates to the process in which, when the target image registration model is the above-mentioned registration model based on anatomical marker points, the computer device registers the reference image and the floating image according to the registration model based on anatomical marker points and the semantic information.
  • the above S2013 may include:
  • S2020 Acquire a set of reference anatomical marker points to be registered for the marked reference image and a set of floating anatomical marker points to be registered for the marked floating image.
  • the reference anatomical mark point set to be registered and the floating anatomical mark point set to be registered are a collection of coordinate information of each anatomical mark point.
  • the anatomical marking points may be manually pre-marked marking points.
  • S2021 Perform image registration on the reference image and the floating image according to the reference anatomical mark point set to be registered, the floating anatomical mark point set to be registered, and the registration model based on the anatomical mark point.
  • the registration model based on the anatomical marker points may be any one of the algorithm models corresponding to the singular value decomposition algorithm, iterative closest point algorithm, standard orthogonalization matrix method and the like.
  • the computer device may perform image registration on the reference image and the floating image according to the acquired reference anatomical marker point set to be registered, the floating anatomical marker point set to be registered, and the preset registration model based on the anatomical marker point.
• the above S2021 may specifically include: determining the intersection of the marker points according to the matching result of the names of the marker points in the reference anatomical marker point set to be registered and the floating anatomical marker point set to be registered; based on the intersection of the marker points, determining an initial reference anatomical marker point set and an initial floating anatomical marker point set from the reference anatomical marker point set to be registered and the floating anatomical marker point set to be registered, respectively; and performing image registration on the reference image and the floating image according to the initial reference anatomical marker point set, the initial floating anatomical marker point set and the registration model based on anatomical marker points.
• Each anatomical marker point has a unique name, and the anatomical marker points with the same name in the reference anatomical marker point set to be registered and the floating anatomical marker point set to be registered constitute the intersection of the marker points of the two sets.
• Alternatively, the computer device may also use the anatomical marker points with the same number in the reference anatomical marker point set to be registered and the floating anatomical marker point set to be registered as the intersection of the marker points of the two sets.
• the computer device may use the point set corresponding to the aforementioned intersection of marker points in the reference anatomical marker point set to be registered as the initial reference anatomical marker point set, and use the point set corresponding to the intersection of marker points in the floating anatomical marker point set to be registered as the initial floating anatomical marker point set, so that the initial reference anatomical marker point set and the initial floating anatomical marker point set can be input into the preset registration model based on anatomical marker points to realize the alignment of the reference image and the floating image under the same anatomical structure.
• In this way, the computer device uses the initial reference anatomical marker point set and the initial floating anatomical marker point set selected from the reference anatomical marker point set to be registered and the floating anatomical marker point set to be registered, together with the registration model based on anatomical marker points, to perform image registration on the reference image and the floating image.
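• For the name-based intersection and the singular value decomposition algorithm mentioned above, a minimal numpy sketch could look as follows. The dict layout of the point sets and the function names are our assumptions, and the Kabsch-style rigid solution shown is only one way an SVD-based registration of marker points might be realized.

```python
import numpy as np

def intersect_by_name(ref_points, flt_points):
    """Keep only the anatomical marker points whose names appear in both sets.

    Each set is assumed to be a dict of the form {name: (x, y, z)}.
    """
    names = sorted(set(ref_points) & set(flt_points))
    ref = np.array([ref_points[n] for n in names], dtype=float)
    flt = np.array([flt_points[n] for n in names], dtype=float)
    return names, ref, flt

def rigid_transform_svd(flt, ref):
    """Rigid transform (rotation R, translation t) mapping flt onto ref via SVD."""
    c_flt, c_ref = flt.mean(axis=0), ref.mean(axis=0)
    H = (flt - c_flt).T @ (ref - c_ref)         # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_ref - R @ c_flt
    M = np.eye(4)                               # 4x4 transformation matrix
    M[:3, :3], M[:3, 3] = R, t
    return M
```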
  • the registration process of the reference image and the floating image using the registration model based on anatomical markers can be divided into three stages of registration process, each stage Corresponding registration results can be obtained.
  • the three-stage registration process is as follows:
  • S20211 Determine the first registration result according to the initial reference anatomical marker point set and the initial floating anatomical marker point set and the registration model based on the anatomical marker point; the first registration result includes the first registration result point set and The first transformation matrix.
• the first registration result point set and the first transformation matrix can be obtained after spatially transforming the floating anatomical marker point set to be registered.
  • the above-mentioned first registration result point set and first transformation matrix constitute a first registration result.
• S20212 Determine, according to a first spatial distance set and a preset ratio, a first floating anatomical marker point set corresponding to the first spatial distances within the preset ratio; the first spatial distance set records the first spatial distance between each marker point in the reference anatomical marker point set to be registered and the corresponding marker point in the first registration result point set.
• For example, the first spatial distance set may be denoted D1, where each element of D1 is the distance between a marker point in Pf1 and the corresponding marker point in Pre1; Pf1 is the point set, taken from the reference anatomical marker point set to be registered, composed of the marker points corresponding to the first registration result point set, and Pre1 is the first registration result point set.
  • the above-mentioned preset ratio may be any value within (0,1) set as required.
• the first floating anatomical marker point set corresponding to the first spatial distances within the preset ratio may be selected directly, or the distances in the first spatial distance set may first be sorted in ascending order and the first floating anatomical marker point set corresponding to the first spatial distances within the preset ratio then selected; selecting, from the floating anatomical marker point set to be registered, the first floating anatomical marker point set corresponding to the first spatial distances within the preset ratio can improve the registration accuracy.
• the first floating anatomical marker point set is the point set, selected from the floating anatomical marker point set to be registered, corresponding to the first spatial distances within the preset ratio.
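• The ratio-based screening in S20212 can be illustrated with the following sketch; the function name, the array layout of the point sets and the example ratio of 0.8 are all illustrative assumptions.

```python
import numpy as np

def select_within_ratio(ref_pts, result_pts, flt_pts, ratio=0.8):
    """Keep the floating marker points whose first spatial distance is within the preset ratio.

    ref_pts:    reference marker points to be registered (N x 3)
    result_pts: first registration result point set (N x 3), same order
    flt_pts:    floating marker points to be registered (N x 3), same order
    ratio:      preset ratio in (0, 1)
    """
    d1 = np.linalg.norm(ref_pts - result_pts, axis=1)      # first spatial distance set D1
    order = np.argsort(d1)                                  # distances in ascending order
    keep = order[: int(np.ceil(ratio * len(d1)))]           # points within the preset ratio
    return flt_pts[keep], ref_pts[keep]
```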
  • the above target transformation matrix is a matrix used for image registration between the mark reference image and the mark floating image
  • the computer device may use the target transformation matrix to achieve registration of the mark reference image and the mark floating image.
  • the computer device may compare the number of marker points in the first floating anatomical marker point set with a preset number threshold, and determine whether to use the first transformation matrix as the target transformation matrix according to the comparison result.
• the foregoing preset number threshold may be, for example, 5; when the number of marker points in the first floating anatomical marker point set is less than the preset number threshold, the first transformation matrix is used as the target transformation matrix, and otherwise the registration process continues.
  • S20214 Acquire a first reference anatomy mark point set corresponding to the first floating anatomy mark point set in the reference anatomy mark point set to be registered.
• the first reference anatomical marker point set is the point set composed of the marker points in the reference anatomical marker point set to be registered whose names or numbers correspond to the marker points in the first floating anatomical marker point set.
  • S20215 Determine a second transformation matrix according to the first reference anatomical landmark set, the first floating anatomical landmark set, and the registration model based on the anatomical landmark.
  • the computer device may input the first reference anatomical marker point set and the first floating anatomical marker point set into a preset registration model based on anatomical marker points, in the same way as the above method for determining the first transformation matrix, thereby Get the second transformation matrix.
  • S20216 Determine a second registration result point set according to the second transformation matrix and the set of floating anatomical marker points to be registered.
• the computer device can use the second transformation matrix to perform a spatial transformation on the floating anatomical marker point set to be registered, that is, according to the product of the obtained second transformation matrix and the floating anatomical marker point set to be registered, combined with an interpolation method such as nearest neighbor interpolation, bilinear interpolation or trilinear interpolation, to obtain the second registration result point set.
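• The matrix product underlying this spatial transformation of a marker point set can be sketched as follows; the function name is ours and the interpolation step used when resampling images is omitted here, since a point set only needs the coordinate mapping.

```python
import numpy as np

def transform_point_set(points, matrix):
    """Apply a 4x4 transformation matrix to an N x 3 marker point set (sketch of S20216)."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])   # homogeneous coordinates, N x 4
    mapped = (matrix @ homo.T).T                                 # transformed coordinates, N x 4
    return mapped[:, :3]
```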
• According to the second spatial distance set and a preset distance threshold, a second floating anatomical marker point set corresponding to the second spatial distances less than the preset distance threshold is determined; the second spatial distance set, which may be denoted D2, records the second spatial distance between each marker point in the reference anatomical marker point set to be registered and the corresponding marker point in the second registration result point set.
• the above-mentioned preset distance threshold may be set as required; for example, the distance threshold may be determined according to the actual distance, acceptable to the user, between the corresponding marker points in the reference anatomical marker point set to be registered and the second registration result point set.
  • the second floating anatomical mark point set is a point set corresponding to a second spatial distance within a preset distance threshold selected from the floating anatomical mark point set to be registered.
• the computer device may compare the number of marker points in the second floating anatomical marker point set with the preset number threshold and determine, according to the comparison result, whether to use the second transformation matrix as the target transformation matrix: when the number of marker points in the second floating anatomical marker point set is less than the preset number threshold, the second transformation matrix is used as the target transformation matrix, and otherwise the registration process continues.
  • the second reference anatomical mark point set is a point set corresponding to a mark point selected from the reference anatomical mark point set to be registered and having the same name or number as the mark point in the second floating anatomical mark point set.
  • S20220 Determine a third transformation matrix according to the second reference anatomical marker point set, the second floating anatomical marker point set, and the registration model based on the anatomical marker point, and use the third transformation matrix as the target transformation matrix.
• the computer device may input the second reference anatomical marker point set and the second floating anatomical marker point set into the preset registration model based on anatomical marker points, in the same way as the above methods for determining the first transformation matrix and the second transformation matrix, to obtain the third transformation matrix.
  • the computer device can directly use the third transformation matrix as the target transformation matrix.
  • S20221 Perform image registration on the reference image and the floating image according to the target transformation matrix.
• the computer device can map the marked floating image to the space of the marked reference image according to the product of the matrix formed by the coordinate positions of the pixels of the floating image and the target transformation matrix, combined with an interpolation method such as nearest neighbor interpolation, bilinear interpolation or trilinear interpolation, so as to achieve the alignment of the marked reference image and the marked floating image under the anatomical structure, thereby completing the image registration of the marked reference image and the marked floating image.
• the above-mentioned preset ratio and preset distance threshold can be adjusted as follows: noise is added to each marker point in the reference image and the floating image to be registered, the above three-stage registration method is used to register the reference image and the floating image to obtain a new target transformation matrix, the new target transformation matrix is then used to perform image registration on the above reference image and floating image, and, according to the obtained registration result, the preset similarity measurement model is used to calculate the similarity metric value between the registered reference image and floating image; the similarity metric value is compared with a preset similarity metric threshold, and if it is less than the preset similarity metric threshold, at least one of the above-mentioned preset ratio and preset distance threshold is adjusted until the finally obtained similarity metric value is greater than the preset similarity metric threshold. By adjusting the preset ratio and the preset distance threshold to appropriate values in this way, the registration accuracy of images registered using the algorithm model with the adjusted preset ratio and preset distance threshold is higher.
• the computer device acquires the reference anatomical marker point set to be registered of the marked reference image and the floating anatomical marker point set to be registered of the marked floating image, and performs image registration on the marked reference image and the marked floating image in three stages according to the reference anatomical marker point set to be registered, the floating anatomical marker point set to be registered and the registration model based on anatomical marker points.
• Each stage uses marker points satisfying certain conditions, such as marker points within the preset ratio or marker points within the preset distance threshold, for image registration, instead of using all the marker points, which greatly reduces the amount of calculation and increases the registration speed. In addition, the sets of marker points used in the stages are different, which reduces the influence of anatomical marker points that may be mis-detected on the registration accuracy, and because the marker points in each stage are determined by screening based on the preset ratio, the preset distance threshold and the like, the registration accuracy can be improved. Therefore, the method of performing registration in stages provided in this embodiment can improve the accuracy of image registration.
  • the computer device may use the image registration method provided in another embodiment shown in FIG. 14 to perform image registration on the mark reference image and the mark floating image.
  • This embodiment relates to an implementation process in which a computer device performs image registration on the above-mentioned marker reference image and marker floating image according to the extracted segmented region and the corresponding segmentation-based image registration model.
  • another optional implementation manner of the foregoing S2013 may include:
  • the segmented reference image and the segmented floating image may be images corresponding to semantic information extraction of the reference image and the floating image to be registered according to the preset trained neural network model.
  • the computer device may use the above-mentioned preset trained neural network model to divide the reference image and the floating image to be registered into arbitrary regions to obtain the divided reference image and the divided floating image.
  • S2031 Perform image registration on the reference image and the floating image according to the divided reference image, the divided floating image, and the image registration model based on the division.
  • the above-mentioned image registration model based on segmentation may be any one of algorithm models corresponding to registration methods such as a surface matching algorithm, a mutual information method, and a grayscale mean square error method.
• the computer device may determine a target segmentation transformation matrix according to the acquired segmented reference image, the segmented floating image and the above-mentioned segmentation-based image registration model, so as to map the floating image to be registered into the spatial coordinates of the reference image according to the target segmentation transformation matrix, thereby completing the registration of the reference image and the floating image.
• the computer device obtains the segmented reference image corresponding to the marked reference image and the segmented floating image corresponding to the marked floating image, and performs image registration on the reference image and the floating image according to the segmented reference image, the segmented floating image and the segmentation-based image registration model.
• Because the computer device can directly use the preset segmentation-based image registration model to perform image registration on the reference image and the floating image according to the segmented reference image and the segmented floating image obtained after semantic information extraction, the registration process is simple.
  • the above method may further include:
  • S2040 Obtain a registration result after performing image registration on the reference image and the floating image.
  • the registration result is the registered reference image and floating image obtained after performing image registration on the reference image and the floating image.
  • S2041 Perform image integration on the registration result according to the registration result and the preset image integration model.
  • the preset image integration model may be any one of trilinear interpolation and B-spline interpolation.
• Image integration refers to organically combining, using an algorithm, two or more registered images that come from different imaging devices or are acquired at different times.
  • the computer device may integrate the reference image and the floating image in the registration result using a preset image integration model to obtain a distorted image in which the floating image and the reference image are integrated under the reference image space.
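• As a rough illustration of combining the two registered images in the reference image space, the sketch below blends them with a plain weighted average; the function name, the alpha weight and the blending rule are only stand-ins, since the embodiment names trilinear interpolation or B-spline interpolation as the image integration model.

```python
import numpy as np

def fuse_images(reference, registered_floating, alpha=0.5):
    """Blend the reference image and the registered floating image in the reference space."""
    ref = reference.astype(np.float64)
    flt = registered_floating.astype(np.float64)
    return alpha * ref + (1.0 - alpha) * flt   # simple weighted combination for illustration
```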
• the computer device can obtain the registration result after the image registration of the reference image and the floating image, and then perform image integration on the registration result according to the registration result and the preset image integration model.
  • FIG. 16 is a schematic flowchart of an image registration method according to another embodiment.
• This embodiment relates to the implementation process in which the computer device, based on the target transformation matrix obtained in the above-mentioned embodiments and the images obtained after down-sampling the reference image and the floating image, uses the gradient descent method to adjust the similarity metric value and thereby determine the target parameters.
  • the above method may further include:
• S2051 Determine, according to the target transformation matrix, the down-sampled reference image obtained after down-sampling the reference image and the down-sampled floating image obtained after down-sampling the floating image, the similarity metric value between the down-sampled reference image and the transformed floating image corresponding to the down-sampled floating image.
• the computer device can down-sample the above reference image and floating image to obtain the down-sampled reference image and the down-sampled floating image, use the above target transformation matrix to spatially transform the down-sampled floating image to obtain the transformed floating image, and then determine the similarity metric value between the transformed floating image and the down-sampled reference image using a preset calculation model of the similarity metric value, such as the algorithm models corresponding to the mutual information method or the grayscale mean square error method.
  • S2052 Perform at least one of a translation operation, a rotation operation, a miscut operation, and a zoom operation on the target transformation matrix to extract initial parameters corresponding to the target transformation matrix.
• For example, the target transformation matrix may be a 4*4 matrix; by performing a translation operation, a rotation operation, a shear (miscut) operation and a zoom operation on the target transformation matrix, the computer device may extract 12 initial parameters corresponding to the target transformation matrix, such as translation amounts, rotation angles, shear angles and scaling factors. In other examples, the computer device may obtain 8 initial parameters corresponding to the target transformation matrix.
  • S2053 Determine the target parameter according to the similarity metric value, the initial parameter, and the preset gradient descent method.
  • the computer device may adjust the initial parameters according to a preset gradient descent method, so that the similarity metric value reaches the optimal value, and the adjusted parameter corresponding to the optimal similarity metric value is used as the target parameter.
  • the computer device may determine the final transformation matrix corresponding to the target parameter according to the target parameter, and use the final transformation matrix to register the reference image and the floating image.
  • the computer device may also perform multiple downsampling operations on the reference image and the floating image, such as performing three downsampling operations and obtaining corresponding downsampling reference images and downsampling floating images, respectively.
  • the down-sampling reference image may include a first down-sampling reference image corresponding to the first down-sampling, a second down-sampling reference image corresponding to the second down-sampling, and a third down-sampling reference image corresponding to the third down-sampling
  • the down-sampling floating image may include a first down-sampling floating image corresponding to the first down-sampling, a second down-sampling floating image corresponding to the second down-sampling, and a third down-sampling floating corresponding to the third down-sampling image.
• In this case, the target parameters can be determined as follows. In the first step, the computer device uses the target transformation matrix to spatially transform the third down-sampled floating image and map it into the spatial coordinate system corresponding to the third down-sampled reference image to obtain a transformed third floating image, and determines the first similarity metric value between the transformed third floating image and the third down-sampled reference image using the preset calculation model of the similarity metric value. In the second step, the computer device uses the preset gradient descent method to adjust the above initial parameters so that the first similarity metric value reaches its optimal value, determines a new target transformation matrix according to the parameters corresponding to the optimal first similarity metric value, and applies the new target transformation matrix to the second down-sampled floating image and the second down-sampled reference image to continue performing the above first and second steps, until the first and second steps have been completed on the original reference image and floating image. The parameters corresponding to the finally obtained optimal similarity metric value are used as the target parameters, so that the computer device can determine the final transformation matrix corresponding to the target parameters.
• the computer device may first use the image integration method corresponding to the embodiment shown in FIG. 15 to perform image integration on the registration result obtained after the image registration of the reference image and the floating image, and then use the registration result obtained by registering the reference image and the floating image with the final transformation matrix provided in this embodiment to optimize the integration result obtained in the embodiment shown in FIG. 15; alternatively, the image optimization method provided in this embodiment may first be used to optimize the registration result obtained after the image registration of the reference image and the floating image, and the image integration method corresponding to the embodiment shown in FIG. 15 may then be used to perform image integration on the registration result obtained by registering the reference image and the floating image with the final transformation matrix of this embodiment; the order is not limited in this embodiment.
• the computer device can obtain the target transformation matrix and determine, according to the target transformation matrix, the down-sampled reference image obtained after down-sampling the reference image and the down-sampled floating image obtained after down-sampling the floating image, the similarity metric value between the down-sampled reference image and the transformed floating image corresponding to the down-sampled floating image; it performs at least one of a translation operation, a rotation operation, a shear (miscut) operation and a zoom operation on the target transformation matrix to extract the initial parameters corresponding to the target transformation matrix; and it then determines the target parameters according to the similarity metric value, the initial parameters and the preset gradient descent method.
• Because the target parameters are the parameters corresponding to the optimal similarity metric value, the final transformation matrix determined according to the target parameters is also better, so that the accuracy of registering the floating image and the reference image using the final transformation matrix is higher, which further improves the accuracy of the image registration.
  • the computer device acquires the reference image and the floating image to be registered.
  • the computer device extracts semantic information from the reference image and the floating image to obtain a marked reference image and a marked floating image that include the semantic information; the semantic information includes at least one of a segmented area and anatomical marker points of the floating image, and at least one of a segmented area and anatomical marker points of the reference image.
  • the computer device determines, from the preset image registration models according to the semantic information, the target image registration model corresponding to the marked reference image and the marked floating image; the preset image registration models include a segmentation-based image registration model and a registration model based on anatomical marker points.
  • the computer device determines whether the target image registration model is a registration model based on anatomical markers. If so, continue to execute S2064, and if not, execute S20619.
  • the computer device acquires a set of reference anatomical marker points to be registered for the marked reference image and a set of floating anatomical marker points to be registered for the marked floating image.
  • the computer device determines the initial reference anatomical marker point set and the initial floating anatomical marker point set according to the matching result of the marker point names in the reference anatomical marker point set to be registered and the floating anatomical marker point set to be registered.
  • the computer device determines the first registration result according to the initial reference anatomical marker point set, the initial floating anatomical marker point set, and the registration model based on the anatomical marker point.
  • the computer device determines, according to the first spatial distance set and the preset ratio, the first floating anatomical marker point set corresponding to the first spatial distances that fall within the preset ratio; the first spatial distance set records the first spatial distance between each pair of corresponding marker points in the reference anatomical marker point set to be registered and the first registration result point set.
  • the computer device determines whether the number of marker points in the first floating anatomical marker point set is less than a preset number threshold, if yes, continue to execute S2069, if not, execute S20610.
  • the computer device uses the first transformation matrix as the target transformation matrix.
  • the computer device acquires a first reference anatomy mark point set corresponding to the first floating anatomy mark point set in the reference anatomy mark point set to be registered.
  • the computer device determines the second transformation matrix according to the first reference anatomical marker point set, the first floating anatomical marker point set, and the registration model based on the anatomical marker point.
  • S20612 The computer device determines the second registration result point set according to the second transformation matrix and the set of floating anatomical marker points to be registered.
  • the computer device determines, according to the second spatial distance set and the preset distance threshold, the second floating anatomical marker point set corresponding to the second spatial distances that are less than the preset distance threshold; the second spatial distance set records the second spatial distance between each pair of corresponding marker points in the reference anatomical marker point set to be registered and the second registration result point set.
  • the computer device determines whether the number of marker points in the second floating anatomical marker point set is less than a preset threshold number. If yes, continue to execute S20615, if not, execute S20616.
  • the computer device uses the second transformation matrix as the target transformation matrix.
  • S20616 The computer device acquires a second reference anatomy mark point set corresponding to the second floating anatomy mark point set in the reference anatomy mark point set to be registered.
  • the computer device determines a third transformation matrix according to the second reference anatomical marker point set, the second floating anatomical marker point set, and the registration model based on the anatomical marker point, and uses the third transformation matrix as the target transformation matrix.
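  • The first, second, and third transformation matrices and the two rounds of landmark filtering described above can be sketched as a least-squares fit followed by outlier rejection. The helpers below (`fit_affine_from_points`, `register_landmarks`) and the default ratio, distance-threshold, and point-count values are hypothetical; the patent does not prescribe a particular fitting method.

```python
import numpy as np

def fit_affine_from_points(ref_pts, flo_pts):
    """Least-squares affine transform mapping flo_pts onto ref_pts.
    Both inputs are (N, 3) arrays of matched anatomical landmarks."""
    n = flo_pts.shape[0]
    src = np.hstack([flo_pts, np.ones((n, 1))])            # homogeneous coordinates
    coeffs, *_ = np.linalg.lstsq(src, ref_pts, rcond=None)  # (4, 3) solution
    matrix = np.eye(4)
    matrix[:3, :] = coeffs.T                                 # 3x4 affine block
    return matrix

def apply_transform(matrix, pts):
    src = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return (matrix @ src.T).T[:, :3]

def register_landmarks(ref_pts, flo_pts, keep_ratio=0.8,
                       dist_thresh=5.0, min_points=4):
    """Fit, drop the worst-matching landmarks, and refit, mirroring the
    first/second/third transformation matrices described above."""
    # First transformation matrix and first registration result point set
    matrix = fit_affine_from_points(ref_pts, flo_pts)
    dists = np.linalg.norm(apply_transform(matrix, flo_pts) - ref_pts, axis=1)
    keep = np.argsort(dists)[: int(keep_ratio * len(dists))]   # preset ratio
    if len(keep) < min_points:
        return matrix                                           # first matrix is the target
    # Second transformation matrix from the retained landmark subset
    matrix = fit_affine_from_points(ref_pts[keep], flo_pts[keep])
    dists = np.linalg.norm(apply_transform(matrix, flo_pts) - ref_pts, axis=1)
    keep = np.where(dists < dist_thresh)[0]                     # preset distance threshold
    if len(keep) < min_points:
        return matrix                                           # second matrix is the target
    # Third transformation matrix, used as the target transformation matrix
    return fit_affine_from_points(ref_pts[keep], flo_pts[keep])
```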
  • the computer device performs image registration on the reference image and the floating image according to the target transformation matrix; after executing S20618, it continues to execute S20621.
  • the computer device obtains a segmented reference image corresponding to the marked reference image and a segmented floating image corresponding to the floating image.
  • the computer device performs image registration on the reference image and the floating image according to the segmented reference image, the segmented floating image, and the segmentation-based image registration model.
  • S20621 The computer device obtains a registration result after performing image registration on the reference image and the floating image.
  • S20622 The computer device integrates the registration result according to the registration result and the preset image integration model.
  • the computer device obtains a target transformation matrix.
  • the computer device determines, according to the target transformation matrix, the down-sampled reference image obtained by down-sampling the reference image, and the down-sampled floating image obtained by down-sampling the floating image, the similarity metric value between the down-sampled reference image and the transformed floating image corresponding to the down-sampled floating image.
  • the computer device performs at least one of a translation operation, a rotation operation, a shear operation, and a scaling operation on the target transformation matrix to extract the initial parameters corresponding to the target transformation matrix.
  • the computer device determines the target parameter according to the similarity metric value, the initial parameter, and the preset gradient descent method.
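  • One way to extract translation, rotation, shear, and zoom parameters from a transformation matrix, as described in the steps above, is the 2D decomposition sketched below. The convention used (a rotation followed by an upper-triangular scale/shear factor) is an assumption for illustration; the disclosure does not fix a particular parameterization.

```python
import numpy as np

def extract_initial_parameters(matrix):
    """Decompose a 3x3 homogeneous 2D affine matrix into translation,
    rotation, shear and scale parameters (one common convention)."""
    a, b, tx = matrix[0]
    c, d, ty = matrix[1]
    scale_x = np.hypot(a, c)                    # zoom along x
    rotation = np.arctan2(c, a)                 # rotation angle in radians
    shear = (a * b + c * d) / scale_x           # shear term
    scale_y = (a * d - b * c) / scale_x         # zoom along y
    return np.array([tx, ty, rotation, shear, scale_x, scale_y])

# Example: a pure 30-degree rotation plus a (5, -2) translation (illustrative values)
theta = np.deg2rad(30)
target_matrix = np.array([[np.cos(theta), -np.sin(theta),  5.0],
                          [np.sin(theta),  np.cos(theta), -2.0],
                          [0.0,            0.0,            1.0]])
initial_params = extract_initial_parameters(target_matrix)
```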
  • Although the steps in the flowcharts of FIGS. 12 to 16 are displayed sequentially in accordance with the arrows, the steps are not necessarily performed in the order indicated by the arrows. Unless clearly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in FIGS. 12 to 16 may include multiple sub-steps or multiple stages. These sub-steps or stages are not necessarily executed at the same time, but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps, or with at least a part of the sub-steps or stages of other steps.
  • the apparatus may include a first acquisition module 2702, a first extraction module 2704, a first determination module 2706, and a registration module 2708.
  • the first obtaining module 2702 is used to obtain the reference image and the floating image to be registered;
  • the first extraction module 2704 is configured to extract semantic information from the reference image and the floating image to obtain a marked reference image and a marked floating image including semantic information;
  • the first determining module 2706 is configured to determine target image registration models corresponding to the mark reference image and the mark floating image from the preset image registration model according to semantic information;
  • the registration module 2708 is used to perform image registration on the reference image and the floating image according to the semantic information and the target image registration model.
  • the semantic information includes: at least one of the segmented area and anatomical marker points of the floating image, and at least one of the segmented area and anatomical marker points of the reference image;
  • the preset image registration models include a segmentation-based image registration model and a registration model based on anatomical marker points.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the above registration module 2708 may include a first acquisition unit and a first registration unit.
  • the first acquiring unit is configured to acquire a set of reference anatomical marker points to be registered for marking the reference image and a set of floating anatomical marker points to be registered for marking the floating image;
  • the first registration unit is used to perform image registration on the reference image and the floating image according to the reference anatomical mark point set to be registered, the floating anatomical mark point set to be registered, and the registration model based on the anatomical mark point.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the first registration unit may include a first determination subunit, a second determination subunit, and a registration subunit.
  • the first determining subunit is used to determine the intersection of the marker points based on the matching result of the names of the marker points in the reference anatomical marker point set to be registered and the floating anatomical marker point set to be registered;
  • the second determination subunit is used to determine the initial reference anatomy marker point set and the initial floating anatomy marker point set from the reference anatomy marker point set to be registered and the floating anatomy marker point set to be registered according to the intersection of the marker points ;
  • the registration subunit is used to perform image registration on the reference image and the floating image according to the initial reference anatomical marker point set, the initial floating anatomical marker point set, and the registration model based on the anatomical marker point.
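  • A minimal sketch of the name-based landmark intersection used by the first and second determination subunits, assuming each landmark set is stored as a name-to-coordinate dictionary (the names and coordinates below are made up for illustration):

```python
# Hypothetical landmark sets keyed by anatomical marker point name
ref_landmarks = {"left_eye": (120.0, 88.0, 40.0), "nose_tip": (128.0, 140.0, 36.0)}
flo_landmarks = {"left_eye": (118.5, 90.2, 41.1), "chin": (130.0, 200.0, 30.0)}

# Marker point intersection based on matching names
common_names = sorted(ref_landmarks.keys() & flo_landmarks.keys())

# Initial reference and initial floating anatomical marker point sets
initial_reference_set = [ref_landmarks[name] for name in common_names]
initial_floating_set = [flo_landmarks[name] for name in common_names]
```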
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the registration module 2708 may further include a second acquisition unit and a second registration unit.
  • the second obtaining unit is configured to obtain a segmented reference image corresponding to the marked reference image and a segmented floating image corresponding to the floating image;
  • the second registration unit is used to perform image registration on the reference image and the floating image according to the segmented reference image, the segmented floating image, and the segmentation-based image registration model.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the second obtaining module 2710 is used to obtain the registration result after the image registration of the reference image and the floating image;
  • the integration module 2712 is configured to perform image integration on the registration result according to the registration result and the preset image integration model.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 19 is a schematic structural diagram of an image registration device provided by another embodiment. Based on the above embodiment, optionally, the above device may further include a third acquisition module 2714, a second determination module 2716, a second extraction module 2718, and a third determination module 2720.
  • the third obtaining module 2714 is used to obtain a target transformation matrix.
  • the second determination module 2716 is used to determine, according to the target transformation matrix, the down-sampled reference image obtained by down-sampling the reference image, and the down-sampled floating image obtained by down-sampling the floating image, the similarity metric value between the down-sampled reference image and the transformed floating image corresponding to the down-sampled floating image;
  • the second extraction module 2718 is configured to perform at least one of a translation operation, a rotation operation, a shear operation, and a scaling operation on the target transformation matrix to extract the initial parameters corresponding to the target transformation matrix;
  • the third determination module 2720 is configured to determine the target parameter according to the similarity metric value, the initial parameter, and the preset gradient descent method.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • a computer device including a memory and a processor, where a computer program is stored in the memory, and the processor, when executing the computer program, implements the following steps: acquiring the reference image and the floating image to be registered; extracting semantic information from the reference image and the floating image to obtain a marked reference image and a marked floating image that include the semantic information; determining, from the preset image registration models according to the semantic information, the target image registration model corresponding to the marked reference image and the marked floating image; and performing image registration on the reference image and the floating image according to the semantic information and the target image registration model.
  • a computer-readable storage medium on which a computer program is stored is also provided; when the computer program is executed by a processor, the same steps are implemented.
  • the computer-readable storage medium provided by the above embodiments has similar implementation principles and technical effects as the above method embodiments, and will not be repeated here.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous chain (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), etc.
  • the existing image registration technology uses an unsupervised learning model based on deep learning.
  • a spatial transformation network is introduced into the unsupervised learning model, and the floating image is spatially transformed by the deformation field output by the model to obtain the registered image.
  • the dissimilarity between the registered image and the reference image defines the loss function used to train the registration model, and the deformation field estimated by the trained model is used to register images of the same modality; such a method, however, does not solve the problem of registering nonlinear cross-modality images.
  • the dissimilarity between the reference image and the registered image is obtained from the similarity between the reference image and the registered image.
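  • For orientation, warping a floating image with a dense deformation field, as the spatial transformation layer in such an unsupervised model does, can be sketched with `torch.nn.functional.grid_sample`. The displacement convention (normalized [-1, 1] coordinates, channel order (dx, dy)) is an assumption of this sketch, not a detail of the disclosure.

```python
import torch
import torch.nn.functional as F

def warp_with_deformation_field(floating, deformation):
    """Warp a floating image with a dense deformation field.
    `floating` is (N, C, H, W); `deformation` is (N, 2, H, W) holding
    per-pixel (dx, dy) displacements in normalized [-1, 1] coordinates."""
    n, _, h, w = floating.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = identity + deformation.permute(0, 2, 3, 1)   # add displacements to the identity grid
    return F.grid_sample(floating, grid, align_corners=True)
```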
  • an embodiment of the present application provides a method for image registration.
  • the method includes:
  • the images of different modalities refer to the images obtained by using different imaging principles and equipment, for example, using computer tomography (Computed Tomography, CT), nuclear magnetic resonance (Magnetic Resonance Imaging, MRI), positron emission computed tomography (Positron Emission Tomography, PET), Ultrasound, Functional Magnetic Resonance Imaging, fMRI, etc.
  • Any two modal images are images of different modalities.
  • the above floating image refers to the image to be registered, and the reference image refers to the image space to which the floating image is to be registered.
  • the computer device can obtain the floating image and the reference image of different modalities to be registered from a PACS (Picture Archiving and Communication Systems) server, or obtain them directly from different medical imaging devices.
  • the computer device inputs the floating image and the reference image into a pre-trained registration model for registering images of different modalities to obtain the registration result.
  • the registration result may be a floating image after registration, or a registration parameter between the floating image and the reference image, and then the computer device transforms the floating image according to the registration parameter to obtain the floating image after registration .
  • the computer device uses the CT image as a floating image and the MRI image as a reference image.
  • the computer device inputs the CT image and MRI image into a pre-trained registration model to obtain the registration result.
  • the computer device can directly obtain the CT image after registration, or can obtain the registration parameters between the CT image and the MRI image, and then transform the CT image according to the registration parameters to obtain the registered CT image.
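  • If the registration result is a set of registration parameters rather than an already-resampled image, the floating CT image can be mapped into the MRI reference space by resampling, for example with SimpleITK. The file names and translation values below are purely illustrative, and the transform would in practice come from the registration model rather than being set by hand.

```python
import SimpleITK as sitk

# Hypothetical inputs: a floating CT image and an MRI reference image
ct_floating = sitk.ReadImage("ct_floating.nii.gz")
mri_reference = sitk.ReadImage("mri_reference.nii.gz")

# Registration parameters expressed as a 3D affine transform (illustrative values)
transform = sitk.AffineTransform(3)
transform.SetTranslation((5.0, -2.0, 1.5))

# Resample the floating CT image into the reference (MRI) image space
registered_ct = sitk.Resample(ct_floating, mri_reference, transform,
                              sitk.sitkLinear, 0.0, ct_floating.GetPixelID())
sitk.WriteImage(registered_ct, "ct_registered.nii.gz")
```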
  • the computer device can register a floating image and a reference image of two different modalities according to a pre-trained registration model for registering images of different modalities, which solves the problem that existing image registration techniques cannot register cross-modality images.
  • in addition, because the pre-trained registration model is used to register the two images of different modalities, there is no need to train a model each time images are registered, which improves the registration efficiency of image registration, and registering images according to the registration model also improves the registration accuracy.
  • the method further includes: iteratively training the preset forward registration network and the preset backward registration network using a preset unsupervised method or a preset weakly supervised method to obtain the registration model.
  • the unsupervised method refers to using unlabeled medical images as training sample images and learning the distribution of the images, or the relationships between images, from the training sample images;
  • the weakly supervised method refers to using a part of labeled medical images as training sample images and learning the distribution of the images, or the relationships between images, from the training sample images.
  • the computer device may adopt a preset unsupervised method, using unlabeled medical images as training samples, to iteratively train the preset forward registration network and the preset backward registration network, learning the distribution of the images or the relationships between images, to obtain a registration model for registering images of different modalities; alternatively, the computer device may adopt a preset weakly supervised method, using a part of labeled medical images and a part of unlabeled medical images as training samples, to iteratively train the preset forward registration network and the preset backward registration network, learning the distribution of the images or the relationships between images, with the unlabeled images further improving the accuracy and generalization ability of the model, to obtain a registration model for registering images of different modalities.
  • In this embodiment, the computer device adopts a preset unsupervised method or a preset weakly supervised method to iteratively train the preset forward registration network and the preset backward registration network; the training process is efficient and model training can be completed effectively, which greatly improves the efficiency of obtaining the registration model and thus improves the efficiency of registering the floating image.
  • a preset unsupervised method is used to iteratively train the preset forward registration network and the preset backward registration network to obtain the registration model, which includes: using a preset first training mode and a preset second training mode to iteratively train the preset forward registration network and the preset backward registration network to obtain the registration model; the first training mode trains through the forward registration network first and then the backward registration network, and the second training mode trains through the backward registration network first and then the forward registration network.
  • the computer device adopts the preset first training mode, in which the forward registration network is trained first and then the backward registration network, and the preset second training mode, in which the backward registration network is trained first and then the forward registration network, to iteratively train the preset forward registration network and the preset backward registration network and obtain the registration model.
  • the forward registration network and the backward registration network are Convolutional Neural Networks (CNN) in deep learning.
  • In this embodiment, the computer device adopts the preset first training mode and the preset second training mode to iteratively train the preset forward registration network and the preset backward registration network; the iterative training can improve the accuracy of the registration model for registering images of different modalities, which further improves the registration accuracy when images are registered according to the registration model. A sketch of a single step of the first training mode is given below.
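  • The sketch below illustrates one training step of the first training mode with small stand-in networks that map a (floating, reference) pair to a registered floating image, and a normalized cross-correlation similarity. The architecture (`TinyRegNet`), the optimizer, and the loss form `1 - NCC` are illustrative choices consistent with the description, not the disclosed implementation.

```python
import torch
import torch.nn as nn

class TinyRegNet(nn.Module):
    """Stand-in for a registration CNN: takes a floating and a reference image
    (each N x 1 x H x W) and returns a 'registered' floating image."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, floating, reference):
        return self.conv(torch.cat([floating, reference], dim=1))

def ncc(a, b, eps=1e-6):
    """Normalized cross-correlation between two image batches."""
    a = a - a.mean(dim=(1, 2, 3), keepdim=True)
    b = b - b.mean(dim=(1, 2, 3), keepdim=True)
    num = (a * b).sum(dim=(1, 2, 3))
    den = torch.sqrt((a ** 2).sum(dim=(1, 2, 3)) * (b ** 2).sum(dim=(1, 2, 3)) + eps)
    return (num / den).mean()

forward_net, backward_net = TinyRegNet(), TinyRegNet()
optimizer = torch.optim.Adam(list(forward_net.parameters()) +
                             list(backward_net.parameters()), lr=1e-4)

def train_step_mode_one(ref_mode1, flo_mode2, flo_mode1):
    """First training mode: forward network first, then backward network;
    the loss compares two images of the same modality (modality one)."""
    optimizer.zero_grad()
    reg1 = forward_net(flo_mode2, ref_mode1)   # first registered floating image (modality two)
    reg2 = backward_net(flo_mode1, reg1)       # reg1 serves as the second reference image
    loss = 1.0 - ncc(reg2, ref_mode1)          # first similarity -> value of the first loss function
    loss.backward()
    optimizer.step()
    return loss.item()
```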
  • FIG. 21 is a schematic flowchart of an image registration method according to another embodiment.
  • FIG. 22 is a schematic diagram of the training process of the first training mode provided by an embodiment. This embodiment relates to the specific implementation process in which the computer device uses the preset first training mode to train the preset forward registration network and the preset backward registration network. As shown in FIG. 21, on the basis of the foregoing embodiment, as an optional implementation manner, using the preset first training mode to train the preset forward registration network and the preset backward registration network includes:
  • the computer device inputs the first reference image of modality one and the first floating image of modality two into the forward registration network to obtain a first registered floating image with the same modality as the first floating image.
  • the first reference image and the first floating image may be obtained from the PACS server or directly from different medical imaging devices.
  • for example, the CT image is used as the first reference image and the MRI image is used as the first floating image; they are input into the forward registration network to obtain the first registered floating image, that is, the registered MRI image.
  • S3021 Determine the first registered floating image as the second reference image of the backward registration network.
  • the computer device determines the first registration floating image as the second reference image of the backward registration network, that is, the mode of the second reference image is mode 2.
  • the first registered floating image is the registered MRI image.
  • the computer device first obtains an image whose modality is modality one as the second floating image, uses the first registered floating image as the second reference image, and then inputs the second reference image and the second floating image into the backward registration network to obtain a second registered floating image with the same modality as the second floating image.
  • the computer device may obtain the second floating image from the PACS server, or may directly obtain the second floating image from the medical imaging device in the same modality as the modality one.
  • S3023 Acquire the first similarity between the second registered floating image and the first reference image according to the second registered floating image and the first reference image, and train the forward registration network and the backward registration network according to the first similarity.
  • the computer device obtains the first similarity between the second registered floating image and the first reference image according to the second registered floating image and the first reference image, and trains the forward registration network and the backward registration network accordingly.
  • the first similarity is a similarity measure between the second registered floating image and the first reference image.
  • the first similarity may be the cross-correlation, mean square error, mutual information, or correlation coefficient between the second registered floating image and the first reference image, or it may be a similarity between the images automatically discriminated by a discriminator network.
  • the discriminator network can be a simple convolutional neural network.
  • the computer device may adjust the parameter values in the forward registration network and the backward registration network according to the value of the first similarity, and train the forward registration network and the backward registration network.
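  • The similarity options listed above (cross-correlation, mean square error, mutual information, correlation coefficient) can each be computed directly; a small illustrative set of NumPy helpers follows. A learned discriminator network is also possible, as noted above, but is not sketched here.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (illustrative)."""
    hist_2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

def mean_square_error(a, b):
    return float(np.mean((a - b) ** 2))

def correlation_coefficient(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```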
  • the computer device inputs the first floating image and the first reference image into the forward registration network to obtain the first registered floating image with the same modality as the first floating image, takes the first registered floating image as the second reference image of the backward registration network, and inputs the second floating image of modality one and the second reference image into the backward registration network to obtain the second registered floating image; because the second registered floating image has the same modality as the first reference image, the first similarity between them can be obtained and used to train the forward registration network and the backward registration network.
  • training the forward registration network and the backward registration network according to the first similarity includes: determining the first similarity as the first accuracy of the second registered floating image, and guiding the training of the forward registration network and the backward registration network according to the first accuracy.
  • the computer device determines the first similarity acquired above as the first accuracy of the second registered floating image, and trains the forward registration network and the backward registration network according to the first accuracy.
  • the computer device determines the first similarity as the first accuracy of the second registered floating image, and guides the training of the forward registration network and the backward registration network according to the first accuracy.
  • the accuracy is determined according to the first similarity, which improves the accuracy of the determined first accuracy, and thus improves the accuracy of the forward registration network and the backward registration network obtained by training according to the first accuracy.
  • FIG. 23 is a schematic flowchart of an image registration method according to another embodiment.
  • FIG. 24 is a schematic diagram of the training process of the second training mode provided by an embodiment.
  • This embodiment relates to a specific implementation process in which a computer device uses a preset second training mode to train a preset forward registration network and a preset backward registration network.
  • On the basis of the foregoing embodiment, as an optional implementation manner, using the preset second training mode to train the preset forward registration network and the preset backward registration network includes:
  • S3030 Determine the first floating image as the third reference image of the backward registration network, determine the first reference image as the third floating image of the backward registration network, and input the third floating image and the third reference image into the backward registration network to obtain a third registered floating image; the modality of the third reference image is modality two, the modality of the third floating image is modality one, and the modality of the third registered floating image is the same as the modality of the third floating image.
  • the computer device determines the first floating image as the third reference image of the backward registration network and the first reference image as the third floating image of the backward registration network, that is, the modality of the third reference image is modality two and the modality of the third floating image is modality one; the computer device then inputs the third floating image and the third reference image into the backward registration network to obtain a third registered floating image with the same modality as the third floating image, that is, the modality of the third registered floating image is modality one.
  • for example, the CT image is determined as the third floating image and the MRI image is determined as the third reference image; the CT image and the MRI image are input into the backward registration network to obtain the third registered floating image, that is, the registered CT image.
  • S3031 Determine the third registered floating image as the fourth reference image of the forward registration network.
  • the computer device determines the third registered floating image as the fourth reference image of the forward registration network, that is, the modality of the fourth reference image is modality one.
  • the fourth reference image is a registered CT image.
  • the computer device first obtains an image whose modality is modality two as the fourth floating image, uses the third registered floating image as the fourth reference image, and then inputs the fourth floating image and the fourth reference image into the forward registration network to obtain a fourth registered floating image with the same modality as the fourth floating image.
  • the computer device may obtain the fourth floating image from the PACS server, or may directly obtain the fourth floating image from the medical imaging device in the same mode as mode 2.
  • the computer device obtains the second similarity between the fourth registered floating image and the third reference image according to the fourth registered floating image and the third reference image, and trains the backward registration network and the forward registration network according to the second similarity.
  • the second similarity is a similarity measure between the fourth registered floating image and the third reference image.
  • the second similarity may be the cross-correlation, mean square error, mutual information, or correlation coefficient between the fourth registered floating image and the third reference image, or it may be a similarity between the images automatically discriminated by a discriminator network.
  • the discriminator network can be a simple convolutional neural network.
  • the computer device may adjust the parameter values in the backward registration network and the forward registration network according to the value of the second similarity, and train the backward registration network and the forward registration network.
  • the computer device determines the first floating image as the third reference image of the backward registration network, the first reference image as the third floating image of the backward registration network, and the third floating image and The third reference image is input to the backward registration network to obtain a third registration floating image with the same mode as the third floating image, and then the third registration floating image is used as the fourth reference image of the forward registration network
  • the fourth floating image and the fourth reference image in mode 2 are input into the forward registration network to obtain the fourth registration floating image.
  • training the backward registration network and the forward registration network according to the second similarity includes: determining the second similarity as the second accuracy of the fourth registered floating image, and guiding the training of the backward registration network and the forward registration network according to the second accuracy.
  • the computer device determines the obtained second similarity as the second accuracy of the fourth registered floating image, and trains the backward registration network and the forward registration network according to the second accuracy.
  • the larger the value of the second similarity the higher the second accuracy of the fourth registered floating image, and the smaller the value of the second similarity, the lower the second accuracy of the fourth registered floating image.
  • the computer device determines the second similarity as the second accuracy of the fourth registered floating image, and guides the training of the backward registration network and the forward registration network according to the second accuracy.
  • the accuracy is determined according to the second similarity, which greatly improves the accuracy of the determined second accuracy, and further improves the accuracy of the backward registration network and the forward registration network obtained by training according to the second accuracy.
  • FIG. 25 is a schematic flowchart of an image registration method according to another embodiment.
  • This embodiment relates to the specific implementation process in which the computer device adopts the preset first training mode and the preset second training mode to iteratively train the preset forward registration network and the preset backward registration network to obtain the registration model.
  • On the basis of the foregoing embodiments, optionally, adopting the preset first training mode and the second training mode to iteratively train the preset forward registration network and the preset backward registration network to obtain the registration model further includes:
  • S3030 Obtain the value of the first loss function of the first training mode according to the first similarity, and obtain the value of the second loss function of the second training mode according to the second similarity.
  • the loss function is the objective function in the training process of the image registration model
  • the loss function in the training process of the image registration model is defined by the dissimilarity between the images.
  • the computer device acquires the first loss function of the first training mode according to the first similarity, and acquires the second loss function of the second training mode according to the second similarity.
  • for example, if the first similarity is the cross-correlation between the second registered floating image and the first reference image, the value of the first loss function is equal to 1 minus the cross-correlation; if the second similarity is the mean square error between the fourth registered floating image and the third reference image, the value of the second loss function is equal to 1 minus the mean square error.
  • S3031 Determine the registration model according to the value of the first loss function and the value of the second loss function.
  • the computer device may determine the forward registration network and the backward registration network corresponding to the first loss function and the second loss function according to the values of the first loss function and the second loss function obtained above, which will correspond to The forward registration network and the backward registration network are determined as the registration model.
  • the computer device may determine the corresponding forward registration network and backward registration network when the values of the first loss function and the second loss function reach stable values as the registration model.
  • In this embodiment, the computer device acquires the value of the first loss function of the first training mode according to the first similarity and the value of the second loss function of the second training mode according to the second similarity; because the values of the first loss function and the second loss function are obtained from similarities between images of the same modality, the obtained values are more accurate, which greatly improves the accuracy of the registration model determined from the values of the first loss function and the second loss function.
  • steps in the flowcharts of FIGS. 20-25 are displayed in order according to the arrows, the steps are not necessarily executed in the order indicated by the arrows. Unless clearly stated in this article, the execution of these steps is not strictly limited in order, and these steps may be executed in other orders. Moreover, at least some of the steps in FIGS. 20-25 may include multiple sub-steps or multiple stages. These sub-steps or stages are not necessarily executed at the same time, but may be executed at different times. These sub-steps or stages The execution order of is not necessarily sequential, but may be executed in turn or alternately with at least a part of other steps or sub-steps or stages of other steps.
  • FIG. 26 is a schematic structural diagram of an image registration device provided by an embodiment. As shown in FIG. 26, the apparatus may include: a first acquisition module 310 and a second acquisition module 311.
  • the first obtaining module 310 is used to obtain a floating image and a reference image to be registered; the floating image and the reference image are images of two different modalities;
  • the second obtaining module 311 is used to obtain the registration result according to the floating image, the reference image and the pre-trained registration model; the registration model is used to register images of different modalities.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 27 is a schematic structural diagram of an image registration device provided by an embodiment. Based on the above embodiment, optionally, as shown in FIG. 27, the device further includes: a training module 312.
  • the training module 312 is configured to iteratively train the preset forward registration network and the preset backward registration network using a preset unsupervised method or a weak supervised method to obtain a registration model.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the training module 312 is specifically configured to adopt the preset first training mode and the second training mode to iteratively train the preset forward registration network and the preset backward registration network to obtain registration model;
  • the first training mode is a training mode in which the forward registration network is trained first and then the backward registration network;
  • the second training mode is a training mode in which the backward registration network is trained first and then the forward registration network.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 28 is a schematic structural diagram of an image registration device provided by an embodiment.
  • the training module 312 includes a first training unit 3121 for inputting the first floating image and the first reference image into the forward registration network to obtain the first Register the floating image; the mode of the first reference image is mode one, and the mode of the first floating image is mode two; the mode of the first registration floating image is the same as the mode of the first floating image; A registered floating image is determined as the second reference image of the backward registration network; input the second reference image and the second floating image into the backward registration network to obtain the second registered floating image; the modality of the second floating image Mode 1; the mode of the second registered floating image is the same as the mode of the second floating image; based on the second registered floating image and the first reference image, the second registered floating image and the first reference image are acquired Based on the first similarity, the forward registration network and the backward registration network are trained according to the first similarity.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the above-mentioned first training unit 3121 trains the forward registration network and the backward registration network according to the first similarity, including: the first training unit 3121 determines the first similarity as the first accuracy of the second registered floating image, and guides the training of the forward registration network and the backward registration network according to the first accuracy.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 29 is a schematic structural diagram of an image registration device provided by an embodiment.
  • the training module 312 further includes a second training unit 3122, which is used to determine the first floating image as the third reference image of the backward registration network and the first reference image as the third floating image of the backward registration network, and to input the third floating image and the third reference image into the backward registration network to obtain a third registered floating image; the modality of the third reference image is modality two, the modality of the third floating image is modality one, and the modality of the third registered floating image is the same as the modality of the third floating image; the third registered floating image is determined as the fourth reference image of the forward registration network; the fourth reference image and the fourth floating image are input into the forward registration network to obtain a fourth registered floating image; the modality of the fourth floating image is modality two, and the modality of the fourth registered floating image is the same as the modality of the fourth floating image; according to the fourth registered floating image and the third reference image, the second similarity between the fourth registered floating image and the third reference image is acquired, and the backward registration network and the forward registration network are trained according to the second similarity.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the above-mentioned second training unit 3122 trains the backward registration network and the forward registration network according to the second similarity, including: the second training unit 3122 converts the second similarity Determine the second accuracy of the fourth registered floating image, and guide the training of the backward registration network and the forward registration network according to the second accuracy.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 30 is a schematic structural diagram of an image registration device provided by an embodiment. Based on the above embodiment, optionally, as shown in FIG. 30, the device further includes: a third obtaining module 313 and a determining module 314.
  • the third obtaining module 313 is configured to obtain the value of the first loss function of the first training mode according to the first similarity, and obtain the value of the second loss function of the second training mode according to the second similarity;
  • the determining module 314 is configured to determine the registration model according to the value of the first loss function and the value of the second loss function.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 31 is a schematic structural diagram of an image registration device provided by an embodiment. Based on the above embodiment, optionally, as shown in FIG. 31, the above determination module 314 may include a determination unit 3141.
  • the determining unit 3141 is configured to determine the corresponding forward registration network and backward registration network when the values of the first loss function and the second loss function reach stable values as the registration model.
  • the image registration device provided in this embodiment can execute the above method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
  • Each module in the above image registration device may be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above modules may be embedded in, or independent of, a processor of the computer device in hardware form, or may be stored in the memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device including a memory and a processor, and a computer program is stored in the memory, and the processor implements the computer program to implement the following steps when executing the computer program:
  • the floating image and the reference image to be registered are images of two different modalities;
  • registration results based on floating images, reference images, and pre-trained registration models; registration models are used to register images of different modalities.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented:
  • the floating image and the reference image to be registered are images of two different modalities;
  • registration results based on floating images, reference images, and pre-trained registration models; registration models are used to register images of different modalities.
  • the computer-readable storage medium provided by the above embodiments has similar implementation principles and technical effects as the above method embodiments, and will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

An image registration method, apparatus, computer device, and readable storage medium. The method can register a floating image and a reference image of different modalities using a target registration method for registering images of different modalities, and obtain a registration result, which solves the problem that traditional image registration methods cannot accurately and effectively register cross-modality images.

Description

图像配准方法、装置、计算机设备及可读存储介质
相关申请的交叉引用
本申请的相关申请分别要求于2018年12月25日申请的,申请号为201811586820.8,名称为“图像配准方法、装置、计算机设备和可读存储介质”;于2018年12月29日申请的,申请号为201811637721.8,名称为“图像的配准方法、装置、计算机设备和存储介质”,的中国专利申请的优先权,在此将其全文引入作为参考。
技术领域
本申请涉及图像处理技术领域,更具体的说,涉及一种图像配准方法、装置、计算机设备及可读存储介质。
背景技术
不同的医学图像能够反映出不同的人体解剖结构信息,医学临床上通常需要对不同的医学图像进行准确有效的配准,将不同的医学图像信息进行有效的融合,使得在临床疾病诊断或治疗上能够充分考虑不同的医学图像中互补的解剖结构信息。不同的医学图像配准对临床诊疗的精准化和智能化发展具有重要意义。根据不同的临床应用,需要实现图像配准的图像模态包含但不局限于计算机断层扫描(Computed Tomography,CT)图像,磁共振(Magnetic Resonance Imaging,MRI)图像,正电子发射计算机断层扫描(Positron Emission Tomography,PET)图像,超声(Ultrasound)图像,功能磁共振(functional Magnetic Resonance Imaging,fMRI)图像等。
发明内容
一种图像配准方法,所述方法包括:
获取待配准的浮动图像和参考图像;所述浮动图像和所述参考图像为两个不同模态的图像;
根据所述浮动图像、所述参考图像和目标配准方法,获取配准结果;所述目标配准方法用于对不同模态的图像进行配准。
在其中一个实施例中,所述根据所述浮动图像、所述参考图像和目标配准方法,获取配准结果,包括:
对所述浮动图像和所述参考图像进行语义信息的提取,得到包括所述语义信息的标记浮动图像和标记参考图像;
根据所述语义信息,从预设的图像配准算法中确定所述标记浮动图像和所述标记参考图像分别对应的目标图像配准算法;
根据所述语义信息和所述目标图像配准算法,对所述浮动图像和所述参考图像进行图像配准,得到初始配准结果;所述初始配准结果包括所述浮动图像和所述参考图像的变换矩阵;
根据所述变换矩阵、所述参考图像和所述浮动图像,得到变换后的浮动图像;
根据所述变换后的浮动图像、所述参考图像和目标配准模型,对所述变换后的浮动图像进行配准,得到所述配准结果。
在其中一个实施例中,所述语义信息包括:所述浮动图像的分割区域和解剖学标记中的至少一个,以及所述参考图像的分割区域和解剖学标记中的至少一个;所述预设的图像配准算法包括基于分割的图像配准算法和基于解剖学标记的配准算法;所述解剖学标记包括解剖学标记点、解剖学标记线和解剖学标记面。
在其中一个实施例中,当所述目标图像配准算法为所述基于解剖学标记的配准算法时,所述根据所述语义信息和所述目标图像配准算法,对所述浮动图像和所述参考图像进行图像配准,得到初始配准结果,包括:
获取所述标记浮动图像的待配准浮动解剖学标记集和所述标记参考图像的待配准参考解剖学标记集;
根据所述待配准浮动解剖学标记集、所述待配准参考解剖学标记集和所述基于解剖学标记的配准算法,对所述浮动图像和所述参考图像进行图像配准,得到所述初始配准结果。
在其中一个实施例中,所述根据所述待配准浮动解剖学标记集、所述待配准参考解剖学标记集和所述基于解剖学标记的配准算法,对所述浮动图像和所述参考图像进行图像配准,得到所述初始配准结果,包括:
根据所述待配准浮动解剖学标记集和所述待配准参考解剖学标记集中各个标记的名称的匹配结果,确定标记交集;
根据所述标记交集,从所述待配准浮动解剖学标记集和所述待配准参考解剖学标记集中分别确定初始浮动解剖学标记集和初始参考解剖学标记集;
根据所述初始浮动解剖学标记集、所述初始参考解剖学标记集和所述基于解剖学标记的配准算法,对所述浮动图像和所述参考图像进行图像配准,得到所述初始配准结果。
在其中一个实施例中,当所述目标图像配准算法为所述基于分割的图像配准算法时,所述根据所述语义信息和所述目标图像配准算法,对所述浮动图像和所述参考图像进行图像配准,得到所述初始配准结果,包括:
获取所述浮动图像对应的分割浮动图像和所述参考图像对应的分割参考图像;
根据所述分割浮动图像、所述分割参考图像和所述基于分割的图像配准算法,对所述浮动图像和所述参考图像进行图像配准,得到所述初始配准结果。
在其中一个实施例中,所述方法还包括:
获取对所述浮动图像和所述参考图像进行图像配准后的所述初始配准结果;
根据预设的配准结果整合方法,对不同解剖学标记得到的初始配准结果和/或不同分割区域得到的初始配准结果进行整合。
在其中一个实施例中,所述根据所述变换矩阵、所述参考图像和所述浮动图像,得到变换后的浮动图像,包括:
根据所述变换矩阵、对所述参考图像进行下采样操作后得到的下采样参考图像和对所述浮动图像进行下采样操作后得到的下采样浮动图像,确定所述下采样参考图像和所述下采样浮动图像对应的变换后的浮动图像之间的相似性度量值;
对所述变换矩阵进行平移操作、旋转操作、错切操作和缩放操作中的至少一个操作,提取所述变换矩阵对应的初始参数;
根据所述相似性度量值、所述初始参数和预设的梯度下降法,确定目标变换矩阵;
根据所述目标变换矩阵对所述浮动图像进行变换,得到所述变换后的浮动图像。
在其中一个实施例中,所述目标配准模型包括前向配准网络和后向配准网络;所述目 标配准模型的训练过程包括:
采用预设的无监督方法或弱监督的方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到所述目标配准模型。
在其中一个实施例中,所述采用预设的无监督方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到所述目标配准模型,包括:
采用预设的第一训练模式和第二训练模式,对所述预设的前向配准网络和所述预设的后向配准网络进行迭代训练,得到所述目标配准模型;
其中,所述第一训练模式为先前向配准网络再后向配准网络的训练方式,所述第二训练模式为先后向配准网络再前向配准网络的训练方式。
在其中一个实施例中,所述采用预设的第一训练模式,对预设的前向配准网络和预设的后向配准网络进行训练,包括:
将第一浮动图像和第一参考图像输入所述预设的前向配准网络,得到第一配准浮动图像;所述第一参考图像的模态为模态一,所述第一浮动图像的模态为模态二;所述第一配准浮动图像的模态与所述第一浮动图像的模态相同;
将所述第一配准浮动图像确定为所述预设的后向配准网络的第二参考图像;
将所述第二参考图像和第二浮动图像输入所述预设的后向配准网络,得到第二配准浮动图像;所述第二浮动图像的模态为模态一;所述第二配准浮动图像的模态与所述第二浮动图像的模态相同;
根据所述第二配准浮动图像和所述第一参考图像,获取所述第二配准浮动图像与所述第一参考图像间的第一相似度,根据所述第一相似度对所述预设的前向配准网络、所述预设的后向配准网络进行训练。
在其中一个实施例中,所述根据所述第一相似度对所述预设的前向配准网络、所述预设的后向配准网络进行训练,包括:
将所述第一相似度确定为所述第二配准浮动图像的第一准确度,根据所述第一准确度指导所述预设的前向配准网络和所述预设的后向配准网络的训练。
在其中一个实施例中,所述采用预设的第二训练模式,对预设的前向配准网络和预设的后向配准网络进行训练,包括:
将所述第一浮动图像确定为所述预设的后向配准网络的第三参考图像、将所述第一参考图像确定为所述预设的后向配准网络的第三浮动图像,将所述第三浮动图像和所述第三参考图像输入所述预设的后向配准网络,得到第三配准浮动图像;所述第三参考图像的模态为模态二,所述第三浮动图像的模态为模态一;所述第三配准浮动图像的模态与所述第三浮动图像的模态相同;
将所述第三配准浮动图像确定为所述预设的前向配准网络的第四参考图像;
将所述第四参考图像和第四浮动图像输入所述预设的前向配准网络,得到第四配准浮动图像;所述第四浮动图像的模态为模态二;所述第四配准浮动图像的模态与所述第四浮动图像的模态相同;
根据所述第四配准浮动图像和所述第三参考图像,获取所述第四配准浮动图像与所述第三参考图像间的第二相似度,根据所述第二相似度对所述预设的后向配准网络、所述预设的前向配准网络进行训练。
在其中一个实施例中,所述根据所述第二相似度对所述预设的后向配准网络、所述预 设的前向配准网络进行训练,包括:
将所述第二相似度确定为所述第四配准浮动图像的第二准确度,根据所述第二准确度指导所述预设的后向配准网络和所述预设的前向配准网络的训练。
在其中一个实施例中,所述采用预设的第一训练模式和第二训练模式,对所述预设的前向配准网络和所述预设的后向配准网络进行迭代训练,得到所述目标配准模型,还包括:
根据所述第一相似度获取所述第一训练模式的第一损失函数的值,根据所述第二相似度获取所述第二训练模式的第二损失函数的值;
根据所述第一损失函数的值和所述第二损失函数的值,确定所述目标配准模型。
在其中一个实施例中,所述根据所述第一损失函数的值和所述第二损失函数的值,确定所述目标配准模型,包括:
将所述第一损失函数的值和所述第二损失函数的值达到稳定值时对应的前向配准网络和后向配准网络,确定为所述目标配准模型。
一种图像配准装置,所述装置包括:
获取模块,用于获取待配准的浮动图像和参考图像;所述浮动图像和所述参考图像为两个不同模态的图像;
配准模块,用于根据所述浮动图像、所述参考图像和目标配准方法,获取配准结果;所述目标配准方法用于对不同模态的图像进行配准。
本申请实施例提供一种计算机设备,包括存储器、处理器,所述存储器上存储有可在处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现以下步骤:
获取待配准的浮动图像和参考图像;所述浮动图像和所述参考图像为两个不同模态的图像;
根据所述浮动图像、所述参考图像和目标配准方法,获取配准结果;所述目标配准方法用于对不同模态的图像进行配准。
本申请实施例提供一种可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现以下步骤:
获取待配准的浮动图像和参考图像;所述浮动图像和所述参考图像为两个不同模态的图像;
根据所述浮动图像、所述参考图像和目标配准方法,获取配准结果;所述目标配准方法用于对不同模态的图像进行配准。
一种图像配准方法,所述方法包括:
获取待配准的参考图像和浮动图像;
对所述参考图像和所述浮动图像进行语义信息的提取,得到包括所述语义信息的标记参考图像和标记浮动图像;
根据所述语义信息,从预设的图像配准模型中确定所述标记参考图像和所述标记浮动图像分别对应的目标图像配准模型;
根据所述语义信息和所述目标图像配准模型,对所述参考图像和所述浮动图像进行图像配准。
在其中一个实施例中,所述语义信息包括:所述浮动图像的分割区域和解剖学标记点中的至少一个,以及所述参考图像的分割区域和解剖学标记点中的至少一个;所述预设的 图像配准模型包括基于分割的图像配准模型和基于解剖学标记点的配准模型。
在其中一个实施例中,当所述目标图像配准模型为所述基于解剖学标记点的配准模型时,所述根据所述语义信息和所述目标图像配准模型,对所述参考图像和所述浮动图像进行图像配准,包括:
获取所述标记参考图像的待配准参考解剖学标记点集和所述标记浮动图像的待配准浮动解剖学标记点集;
根据所述待配准参考解剖学标记点集、所述待配准浮动解剖学标记点集和所述基于解剖学标记点的配准模型,对所述参考图像和所述浮动图像进行图像配准。
在其中一个实施例中,所述根据所述待配准参考解剖学标记点集、所述待配准浮动解剖学标记点集和所述基于解剖学标记点的配准模型,对所述参考图像和所述浮动图像进行图像配准,包括:
根据所述待配准参考解剖学标记点集和所述待配准浮动解剖学标记点集中各个标记点的名称的匹配结果,确定标记点交集;
根据所述标记点交集,从所述待配准参考解剖学标记点集和所述待配准浮动解剖学标记点集中分别确定初始参考解剖学标记点集和初始浮动解剖学标记点集;
根据所述初始参考解剖学标记点集、所述初始浮动解剖学标记点集和所述基于解剖学标记点的配准模型,对所述参考图像和所述浮动图像进行图像配准。
在其中一个实施例中,当所述目标图像配准模型为所述基于分割的图像配准模型时,则所述根据所述语义信息和所述目标图像配准模型,对所述参考图像和所述浮动图像进行图像配准,包括:
获取所述标记参考图像对应的分割参考图像和所述浮动图像对应的分割浮动图像;
根据所述分割参考图像、所述分割浮动图像和所述基于分割的图像配准模型,对所述参考图像和所述浮动图像进行图像配准。
在其中一个实施例中,所述方法还包括:
获取对所述参考图像和所述浮动图像进行图像配准后的配准结果;
根据所述配准结果和预设的图像整合模型,对所述配准结果进行图像整合。
在其中一个实施例中,在所述对所述参考图像和所述浮动图像进行图像配准后,所述方法还包括:
获取所述目标变换矩阵;
根据所述目标变换矩阵、对所述参考图像进行下采样操作后得到的下采样参考图像和对所述浮动图像进行下采样操作后得到的下采样浮动图像,确定所述下采样参考图像和所述下采样浮动图像对应的变换后的浮动图像之间的相似性度量值;
对所述目标变换矩阵进行平移操作、旋转操作、错切操作和缩放操作中的至少一个操作,提取所述目标变换矩阵对应的初始参数;
根据所述相似性度量值、所述初始参数和预设的梯度下降法,确定目标参数。
一种图像配准装置,所述装置包括:
第一获取模块,用于获取待配准的参考图像和浮动图像;
第一提取模块,用于对所述参考图像和所述浮动图像进行语义信息的提取,得到包括所述语义信息的标记参考图像和标记浮动图像;
第一确定模块,用于根据所述语义信息,从预设的图像配准模型中确定所述标记参考 图像和所述标记浮动图像分别对应的目标图像配准模型;
配准模块,用于根据所述语义信息和所述目标图像配准模型,对所述参考图像和所述浮动图像进行图像配准。
一种计算机设备,所述计算机设备包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现上述图像配准方法中任一项所述方法的步骤。
一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现上述图像配准方法中任一项所述方法的步骤。
上述图像配准方法、装置、计算机设备和可读存储介质中,可以先提取参考图像和浮动图像的语义信息,从而根据不同的语义信息,采用不同的目标图像配准模型对参考图像和浮动图像进行配准,以完成包括多种语义信息的参考图像和浮动图像的配准,解决了现有技术中只能基于单一的语义信息对参考图像和浮动图像进行配准的局限性,大大提高了图像配准的适用范围。
一种图像配准方法,所述方法包括:
获取待配准的浮动图像和参考图像;所述浮动图像和所述参考图像为两个不同模态的图像;
根据所述浮动图像、所述参考图像和预先训练的配准模型,获取配准结果;所述配准模型用于对不同模态的图像进行配准。
在其中一个实施例中,所述方法还包括:
采用预设的无监督方法或弱监督方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到所述配准模型。
在其中一个实施例中,所述采用预设的无监督方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到所述配准模型,包括:
采用预设的第一训练模式和第二训练模式,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到所述配准模型;
其中,所述第一训练模式为先前向配准网络再后向配准网络的训练方式,所述第二训练模式为先后向配准网络再前向配准网络的训练方式。
在其中一个实施例中,所述采用预设的第一训练模式,对预设的前向配准网络和预设的后向配准网络进行训练,包括:
将第一浮动图像和第一参考图像输入所述前向配准网络,得到第一配准浮动图像;所述第一参考图像的模态为模态一,所述第一浮动图像的模态为模态二;所述第一配准浮动图像的模态与所述第一浮动图像的模态相同;
将所述第一配准浮动图像确定为所述后向配准网络的第二参考图像;
将所述第二参考图像和第二浮动图像输入所述后向配准网络,得到第二配准浮动图像;所述第二浮动图像的模态为模态一;所述第二配准浮动图像的模态与所述第二浮动图像的模态相同;
根据所述第二配准浮动图像和所述第一参考图像,获取所述第二配准浮动图像与所述第一参考图像间的第一相似度,根据所述第一相似度对所述前向配准网络、所述后向配准网络进行训练。
在其中一个实施例中，所述根据所述第一相似度对所述前向配准网络、所述后向配准网络进行训练，包括：
将所述第一相似度确定为所述第二配准浮动图像的第一准确度,根据所述第一准确度指导所述前向配准网络和所述后向配准网络的训练。
在其中一个实施例中,所述采用预设的第二训练模式,对预设的前向配准网络和预设的后向配准网络进行训练,包括:
将所述第一浮动图像确定为所述后向配准网络的第三参考图像、将所述第一参考图像确定为所述后向配准网络的第三浮动图像,将所述第三浮动图像和所述第三参考图像输入所述后向配准网络,得到第三配准浮动图像;所述第三参考图像的模态为模态二,所述第三浮动图像的模态为模态一;所述第三配准浮动图像的模态与所述第三浮动图像的模态相同;
将所述第三配准浮动图像确定为所述前向配准网络的第四参考图像;
将所述第四参考图像和第四浮动图像输入所述前向配准网络,得到第四配准浮动图像;所述第四浮动图像的模态为模态二;所述第四配准浮动图像的模态与所述第四浮动图像的模态相同;
根据所述第四配准浮动图像和所述第三参考图像,获取所述第四配准浮动图像与所述第三参考图像间的第二相似度,根据所述第二相似度对所述后向配准网络、所述前向配准网络进行训练。
在其中一个实施例中,所述根据所述第二相似度对所述后向配准网络、所述前向配准网络进行训练,包括:
将所述第二相似度确定为所述第四配准浮动图像的第二准确度,根据所述第二准确度指导所述后向配准网络和所述前向配准网络的训练。
在其中一个实施例中,所述采用预设的第一训练模式和第二训练模式,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到所述配准模型,还包括:
根据所述第一相似度获取所述第一训练模式的第一损失函数的值,根据所述第二相似度获取所述第二训练模式的第二损失函数的值;
根据所述第一损失函数的值和所述第二损失函数的值,确定所述配准模型。
在其中一个实施例中,所述根据所述第一损失函数的值和所述第二损失函数的值,确定所述配准模型,包括:
将所述第一损失函数的值和所述第二损失函数的值达到稳定值时对应的前向配准网络和后向配准网络,确定为所述配准模型。
一种图像配准装置,所述装置包括:
第一获取模块,用于获取待配准的浮动图像和参考图像;所述浮动图像和所述参考图像为两个不同模态的图像;
第二获取模块，用于根据所述浮动图像、所述参考图像和预先训练的配准模型，获取配准结果；所述配准模型用于对不同模态的图像进行配准。
一种计算机设备,所述计算机设备包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现上述图像配准方法中任一项所述方法的步骤。
一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现上述图像配准方法中任一项所述方法的步骤。
上述图像配准方法、装置、计算机设备和可读存储介质中,可以根据预先训练的用于 对不同模态的图像进行配准的配准模型,对两个不同模态的浮动图像和参考图像进行配准,解决了现有图像配准技术中无法对跨模态图像进行配准的问题;另外,利用预先训练的配准模型对两个不同模态的图像进行配准,不需要额外的参数调节,提高了图像配准的配准效率与鲁棒性,同时根据配准模型对图像配准也提高了配准准确度。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据公开的附图获得其他的附图。
图1为一个实施例提供的图像配准方法的流程示意图;
图2为另一个实施例提供的图像配准方法的流程示意图;
图3为另一个实施例提供的图像配准方法的流程示意图;
图4为另一个实施例提供的图像配准方法的流程示意图;
图5为另一个实施例提供的图像配准方法的流程示意图;
图6为另一个实施例提供的图像配准方法的流程示意图;
图7为另一个实施例提供的图像配准方法的流程示意图;
图8为另一个实施例提供的图像配准方法的流程示意图;
图9为另一个实施例提供的图像配准方法的流程示意图;
图10为一个实施例提供的图像配准装置的结构示意图;
图11为一个实施例提供的计算机设备的内部结构示意图;
图12为一个实施例提供的图像配准方法的流程示意图;
图13为另一个实施例提供的图像配准方法流程示意图;
图14为另一个实施例提供的图像配准方法流程示意图;
图15为另一个实施例提供的图像配准方法流程示意图;
图16为另一个实施例提供的图像配准方法流程示意图;
图17为一个实施例提供的图像配准装置结构示意图;
图18为又一个实施例提供的图像配准装置结构示意图;
图19为又一个实施例提供的图像配准装置结构示意图;
图20为一个实施例提供的图像配准方法流程示意图;
图21为另一个实施例提供的图像配准方法的流程示意图;
图22为一个实施例提供的第一训练模式的训练过程示意图;
图23为另一个实施例提供的图像配准方法的流程示意图;
图24为一个实施例提供的第二训练模式的训练过程示意图;
图25为另一个实施例提供的图像配准方法的流程示意图;
图26为一个实施例提供的图像配准装置结构示意图;
图27为一个实施例提供的图像配准装置结构示意图;
图28为一个实施例提供的图像配准装置结构示意图;
图29为一个实施例提供的图像配准装置结构示意图;
图30为一个实施例提供的图像配准装置结构示意图;
图31为一个实施例提供的图像配准装置结构示意图;
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
不同的医学图像能够反映出不同的人体解剖结构信息,医学临床上通常需要对不同的医学图像进行准确有效的配准,图像配准可以实现将不同时间、不同成像设备或不同条件下获取的两幅或多幅图像进行匹配和叠加,对不同医学图像的信息进行有效融合,使得在临床疾病诊断或治疗上能够充分考虑不同医学图像中互补的解剖结构信息。不同的医学图像配准对临床诊疗的精准化和智能化发展具有重要意义。根据不同的临床应用,需要实现图像配准的图像模态包含但不局限于计算机断层扫描(Computed Tomography,CT)图像,磁共振(Magnetic Resonance Imaging,MRI)图像,正电子发射计算机断层扫描(Positron Emission Tomography,PET)图像,超声(Ultrasound)图像,功能磁共振(functional Magnetic Resonance Imaging,fMRI)图像等。但传统的图像配准方法无法解决非线性跨模态图像的配准问题。为了解决传统的图像配准方法无法解决非线性跨模态图像的配准问题,本申请其中一个实施例中提出了一种图像配准方法、装置、计算机设备及可读存储介质。
在一个实施例中,如图1所示,提供了一种图像配准方法的流程示意图,包括以下步骤:
S1010,获取待配准的浮动图像和参考图像;浮动图像和参考图像为两个不同模态的图像。
其中,浮动图像是指待配准的图像,参考图像是指浮动图像要配准过去的图像空间。不同模态的图像是指利用不同成像原理、设备得到的图像,例如,利用计算机断层扫描(Computed Tomography,CT),核磁共振(Magnetic Resonance Imaging,MRI),正电子发射计算机断层扫描(Positron Emission Tomography,PET),超声(Ultrasound),功能磁共振(functional Magnetic Resonance Imaging,fMRI)等得到的任意两个模态的图像均是不同模态的图像。可选的,计算机设备可以从PACS(Picture Archiving and Communication Systems,影像归档和通信系统)服务器中获取不同模态的待配准的浮动图像和参考图像,也可以直接从不同的医学影像设备中获取不同模态的待配准的浮动图像和参考图像。可选的,计算机设备可以对获得的两幅或多幅图像进行配准,比如将其中一幅图像作为参考图像,其它图像作为浮动图像,将浮动图像映射到参考图像,以实现参考图像与浮动图像在解剖学结构下的对齐。可选的,参考图像和浮动图像可以是同一个体的图像,也可以是不同个体的图像,可以是包含的解剖学结构均相同的图像,也可以是包含部分相同的解剖学结构的图像,本实施例对参考图像和浮动图像的来源并不做限定。可选的,参考图像和浮动图像可以是二维图像,也可以是三维图像,本实施例对此并不做具体限定。
S1011,根据浮动图像、参考图像和目标配准方法,获取配准结果;目标配准方法用于对不同模态的图像进行配准。
具体的,计算机设备根据浮动图像、参考图像和目标配准方法,获取配准结果,其中,目标配准方法用于对不同模态的图像进行配准。可选的,目标配准方法可以是配准算法, 也可以是配准模型,也可以是配准算法和配准模型相结合的方法。示例性地,目标配准方法为配准算法时,计算机设备通过配准算法,得到浮动图像和参考图像的变换矩阵,根据得到的变换矩阵对浮动图像进行配准,得到配准结果;目标配准方法为配准模型时,计算机设备将浮动图像和参考图像输入配准模型中,得到浮动图像的变形场,根据得到的变形场对浮动图像进行配准,得到配准结果;目标配准方法为配准算法和配准模型相结合时,计算机设备通过配准算法,得到浮动图像和参考图像的变换矩阵,根据得到的变换矩阵对浮动图像进行变换,得到变换后的浮动图像,将变换后的浮动图像和参考图像输入配准模型,得到变形场,根据得到的变形场对变换后的浮动图像进行配准,得到配准结果。
在本实施例中,计算机设备可以利用对不同模态的图像进行配准的目标配准方法,对不同模态的浮动图像和参考图像进行配准,获取配准结果,解决了传统的图像配准方法无法准确有效地对跨模态图像进行配准的问题。
在一个实施例中,如图2所示,提供了另一种图像配准方法的流程示意图,上述S1011包括:
S1020,对浮动图像和参考图像进行语义信息的提取,得到包括语义信息的标记浮动图像和标记参考图像。
具体的,计算机设备对浮动图像和参考图像进行语义信息的提取,得到包括提取的语义信息的标记浮动图像和标记参考图像。可选的,上述语义信息包括:浮动图像的分割区域和解剖学标记中的至少一个,以及参考图像的分割区域和解剖学标记中的至少一个;其中,解剖学标记包括解剖学标记点、解剖学标记线和解剖学标记面。可选的,上述语义信息可以为参考图像和浮动图像中的解剖学标记,也可以为参考图像和浮动图像中的分割区域。进一步的,上述解剖学标记为解剖学标记点时,解剖学标记可以是几何标记点,如灰度极值或线性结构交点,也可以是在解剖形态上清晰可见并可精确定位的解剖标记点,如人体组织、器官或病灶的关键标记点或特征点;上述分割区域可以是参考图像和浮动图像对应的曲线或曲面等,如肺部、肝部或不规则区域。可选的,计算机设备可以根据预设的神经网络模型对浮动图像和参考图像进行语义信息的提取。示例性地,对浮动图像和参考图像进行语义信息的提取时,如果计算机设备检测到肺部对应的区域,则计算机设备可以把肺部对应的区域分割出来,从而提取出肺部对应的语义信息;如果计算机设备检测到骨骼,则计算机设备可以用标记点将骨骼对应的位置标记出来,从而提取出骨骼对应的语义信息。
S1021,根据语义信息,从预设的图像配准算法中确定标记浮动图像和标记参考图像分别对应的目标图像配准算法。
具体的,计算机设备根据提取出的语义信息,从预设的图像配准算法中确定标记浮动图像和标记参考图像分别对应的目标图像配准算法。其中,图像配准算法为用于对提取语义信息后得到的标记参考图像和标记浮动图像进行配准的算法,例如,表面匹配算法、互信息法、标准正交化矩阵法和最小二乘法。可选的,预设的图像配准算法包括基于分割的图像配准算法和基于解剖学标记的配准算法。进一步的,基于分割的图像配准算法为能够对包括上述分割区域的标记参考图像和标记浮动图像进行配准的图像配准算法,如表面匹配算法、互信息法、灰度均方差法等算法;基于解剖学标记的配准算法为能够对包括上述解剖学标记的标记参考图像和标记浮动图像进行配准的图像配准算法,如奇异值分解算法、迭代最近点法、标准正交化矩阵法等算法。可选的,对于包含不同语义信息的标记参 考图像和标记浮动图像,计算机设备确定出的对应的目标图像配准算法不同,即包括分割区域的标记参考图像和标记浮动图像与包括解剖学标记的标记参考图像和标记浮动图像,可以对应不同的配准算法。
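作为一种仅供参考的示意，下述Python片段给出了“按语义信息类型选择目标图像配准算法”的一种可能组织方式；其中semantic_info的字段名与algorithms字典的键均为本示例自行假设的约定，并非本申请限定的接口。

# 假设性示例：根据提取到的语义信息类型，选择对应的目标图像配准算法
def select_target_algorithms(semantic_info, algorithms):
    # semantic_info: 形如 {"landmarks": ..., "segmentation": ...} 的字典，值为None表示未提取到该类语义信息
    # algorithms: {"landmark": 基于解剖学标记的配准算法, "segmentation": 基于分割的图像配准算法}，由调用方给出具体实现
    selected = []
    if semantic_info.get("landmarks") is not None:
        selected.append(algorithms["landmark"])        # 含解剖学标记：选用基于解剖学标记的配准算法
    if semantic_info.get("segmentation") is not None:
        selected.append(algorithms["segmentation"])    # 含分割区域：选用基于分割的图像配准算法
    if not selected:
        raise ValueError("未提取到可用于配准的语义信息")
    return selected

若一幅图像同时包含分割区域与解剖学标记，该函数会同时返回两类算法，调用方可按前述任意先后顺序分别执行配准。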
S1022,根据语义信息和目标图像配准算法,对浮动图像和参考图像进行图像配准,得到初始配准结果;初始配准结果包括浮动图像和参考图像间的变换矩阵。
具体的,计算机设备根据提取出的语义信息和确定的目标图像配准算法,对浮动图像和参考图像进行图像配准,得到包括浮动图像和参考图像的变换矩阵的初始配准结果。可选的,一幅参考图像或一幅浮动图像中可以同时包括分割区域和解剖学标记,此时,计算机设备可以先利用基于解剖学标记的配准算法对参考图像和浮动图像中的解剖学标记进行配准,再利用基于分割的图像配准算法对参考图像和浮动图像中的分割区域进行配准;也可以先利用基于分割的图像配准算法对参考图像和浮动图像中的分割区域进行配准,再利用基于解剖学标记的配准算法对参考图像和浮动图像中的解剖学标记进行配准,也可以同时利用基于解剖学标记的配准算法对参考图像和浮动图像中的解剖学标记进行配准,并利用基于分割的图像配准算法对参考图像和浮动图像中的分割区域进行配准,本实施例对此并不做限定。可选的,计算机设备可以在确保继续使用其中的CPU进行图像配准的相关运算处理的情况下,还可以引入支持并行计算架构(Compute Unified Device Architecture,CUDA)的图形处理器(Graphics Processing Unit,GPU)处理部分运算,以进一步加快运行对浮动图像和参考图像进行图像配准的目标图像配准算法的速度。
S1023,根据变换矩阵、参考图像和浮动图像,得到变换后的浮动图像。
具体的,计算机设备根据得到的浮动图像和参考图像的变换矩阵、参考图像和浮动图像,得到变换后的浮动图像。可选的,计算机设备可以根据浮动图像和参考图像的变换矩阵对浮动图像进行变换,并对得到的图像结合参考图像进行调整,得到变换后的浮动图像。需要说明的是,变换后的浮动图像仅对浮动图像的空间结构进行了变换,变换后的浮动图像的模态并没有改变,变换后的浮动图像与参考图像仍为两个不同模态的图像。
S1024,根据变换后的浮动图像、参考图像和目标配准模型,对变换后的浮动图像进行配准,得到配准结果。
具体的,计算机设备将变换后的浮动图像和参考图像输入目标配准模型中,得到变形场,根据得到的变形场对变换后的浮动图像进行配准,得到配准结果。其中,目标配准模型为预先训练好的用于对不同模态的图像进行配准的模型。可以理解的是,变换后的浮动图像的模态与参考图像的模态是不同的,这样通过用于对不同模态的图像进行配准的目标配准模型,可以将变换后的浮动图像配准为与参考图像模态相同的图像,得到与参考图像模态相同的配准图像。
在本实施例中,计算机设备可以先提取参考图像和浮动图像的语义信息,从而根据不同的语义信息,采用不同的目标图像配准算法对参考图像和浮动图像进行配准,得到包括浮动图像和参考图像的变换矩阵,根据得到的变换矩阵、参考图像和浮动图像,得到变换后的浮动图像,再根据变换后的浮动图像、参考图像和目标配准模型,对变换后的浮动图像进行进一步地配准,目标配准模型能够根据变换后的浮动图像和参考图像,对变换后的浮动图像进行进一步更加准确地配准,进而提高了得到的配准结果的准确度。
在一个实施例中，如图3所示，提供了另一种图像配准方法的流程示意图，当目标图像配准算法为基于解剖学标记的配准算法时，在上述实施例的基础上，作为一种可选的实施方式，上述S1022包括：
S1030,获取标记浮动图像的待配准浮动解剖学标记集和标记参考图像的待配准参考解剖学标记集。
具体的,待配准浮动解剖学标记集和待配准参考解剖学标记集为各个解剖学标记的坐标信息的集合。可选的,解剖学标记可以为人工进行预标记的标记。可选的,待配准浮动解剖学标记集可以为待配准浮动解剖学标记点集,也可以为待配准浮动解剖学标记线集,也可以为待配准浮动解剖学标记面集。可选的,待配准参考解剖学标记集可以为待配准参考解剖学标记点集,也可以为待配准参考解剖学标记线集,也可以为待配准参考解剖学标记面集。
S1031,根据待配准浮动解剖学标记集、待配准参考解剖学标记集和基于解剖学标记的配准算法,对浮动图像和参考图像进行图像配准,得到初始配准结果。
具体的,计算机设备根据待配准浮动解剖学标记集、待配准参考解剖学标记集和基于解剖学标记的配准算法,对浮动图像和参考图像进行图像配准,得到包括浮动图像和参考图像间的变换矩阵的初始配准结果。可选的,基于解剖学标记的配准算法可以为奇异值分解算法、迭代最近点算法、标准正交化矩阵算法中的任意一种算法。可选的,计算机设备可以根据上述待配准浮动解剖学标记集和待配准参考解剖学标记集中各个标记的名称的匹配结果,确定标记交集;根据上述标记交集,从上述待配准浮动解剖学标记集和上述待配准参考解剖学标记集中分别确定初始浮动解剖学标记集和初始参考解剖学标记集;根据上述初始浮动解剖学标记集、上述初始参考解剖学标记集和上述基于解剖学标记的配准算法,对上述浮动图像和上述参考图像进行图像配准,得到包括浮动图像和参考图像间的变换矩阵的初始配准结果。其中,每个解剖学标记有唯一的名称,对于待配准浮动解剖学标记集和待配准参考解剖学标记集中的解剖学标记名称相同的解剖学标记构成二者的标记交集。可选的,计算机设备也可以将待配准浮动解剖学标记集和待配准参考解剖学标记集中的解剖学标记编号相同的解剖学标记作为二者的标记交集。确定标记交集后,计算机设备可以将待配准浮动解剖学标记集中与上述标记交集对应的集合作为初始浮动解剖学标记集,以及将待配准参考解剖学标记集中与上述标记交集对应的集合作为初始参考解剖学标记集,从而可以将初始参考解剖学标记集和初始浮动解剖学标记集输入预设的基于解剖学标记的配准算法,实现浮动图像和参考图像在相同解剖学结构下的对齐。
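为便于理解基于解剖学标记的配准算法中“由对应标记求解变换矩阵”这一步，下面给出基于奇异值分解（SVD，即常见的Kabsch/Procrustes方法）求取刚性变换（旋转加平移）的一个极简Python草图；该草图假设两组标记已按名称一一对应，且只求刚性变换，并非本申请限定的具体实现。

import numpy as np

def rigid_registration_svd(moving_pts, fixed_pts):
    # moving_pts、fixed_pts: (N, 3) 数组，标记已按名称一一对应
    # 返回 4x4 齐次变换矩阵，使 moving 对齐到 fixed
    m_center = moving_pts.mean(axis=0)
    f_center = fixed_pts.mean(axis=0)
    # 去质心后构造互协方差矩阵并做SVD分解
    H = (moving_pts - m_center).T @ (fixed_pts - f_center)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # 修正可能出现的反射
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = f_center - R @ m_center
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

将待配准浮动解剖学标记（齐次坐标）左乘该矩阵即可得到配准结果集，再逐标记计算其与待配准参考解剖学标记间的空间距离，即可进入后续按预设的比率或距离阈值筛选的阶段。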
在本实施例中,计算机设备可以根据从待配准浮动解剖学标记集和待配准参考解剖学标记集中选取的初始浮动解剖学标记集和初始参考解剖学标记集,并利用基于解剖学标记的配准算法对上述浮动图像和上述参考图像进行图像配准。可选的,利用基于解剖学标记的配准算法对浮动图像和参考图像进行图像配准的过程可以分为三个阶段,每个阶段可以得到对应的配准结果,三个阶段的配准过程如下:
第一阶段的配准过程可以参见S10311至S10313:
S10311,根据初始浮动解剖学标记集、初始参考解剖学标记集和基于解剖学标记的配准算法,确定第一配准结果;第一配准结果包括第一配准结果集和第一变换矩阵。
具体的,计算机设备根据初始浮动解剖学标记集、初始参考解剖学标记集和预设的基于解剖学标记的配准算法,可以得到待配准浮动解剖学标记集进行空间变换后的第一配准结果集和第一变换矩阵。上述第一配准结果集和第一变换矩阵构成第一配准结果。
S10312，根据第一空间距离集合和预设的比率，确定预设的比率内的第一空间距离对应的第一浮动解剖学标记集；其中，第一空间距离集合中记录有待配准参考解剖学标记集与第一配准结果集中各个对应标记的第一空间距离。
具体的,在得到第一配准结果集后,计算机设备可以根据公式D1=||Pf1–Pre1||2,计算出待配准参考解剖学标记集与第一配准结果集中各个对应标记的第一空间距离D1,其中,Pf1为待配准参考解剖学标记集中与第一配准结果集中对应的标记构成的集合,Pre1为第一配准结果集。可选的,上述预设的比率可以为根据需要设定的(0,1]内的任意值。可选的,可以直接选取预设的比率内的第一空间距离对应的第一浮动解剖学标记集,也可以对第一空间距离中的各个距离进行升序排序,再选取预设的比率内的第一空间距离对应的第一浮动解剖学标记集,由于待配准参考解剖学标记集与第一配准结果集中各个对应标记的第一空间距离越小,配准结果精度越高,因此,对第一空间距离中的各个距离进行升序排序后再选取预设的比率内的第一空间距离对应的第一浮动解剖学标记集,可以提高配准的准确度。上述第一浮动解剖学标记集为从待配准浮动解剖学标记集中选取的预设的比率内的第一空间距离对应的集合。
S10313,当第一浮动解剖学标记集中的标记的数目小于预设的数目阈值时,则将第一变换矩阵作为目标变换矩阵。
具体的,上述目标变换矩阵为标记浮动图像和标记参考图像进行图像配准所用的矩阵,计算机设备可以利用目标变换矩阵实现标记浮动图像和标记参考图像的配准。可选的,计算机设备可以将第一浮动解剖学标记集中的标记的数目与预设的数目阈值进行比较,根据比较结果确定是否将上述第一变换矩阵作为目标变换矩阵。可选的,上述预设的数目阈值可以为5。当上述第一浮动解剖学标记集中的标记的数目小于预设的数目阈值时,则将第一变换矩阵作为目标变换矩阵,并继续执行S10311。
当上述第一浮动解剖学标记集中的标记的数目大于或等于预设的数目阈值时,需要进行第二阶段的配准过程。
第二阶段的配准过程可以参见S10314至S10318:
S10314,获取第一浮动解剖学标记集中与待配准参考解剖学标记集对应的第一参考解剖学标记集。
具体的,第一参考解剖学标记集为第一浮动解剖学标记集中的标记的名称或编号与待配准参考解剖学标记集中的标记的名称或编号相同的标记对应的标记构成的集合。
S10315,根据第一浮动解剖学标记集、第一参考解剖学标记集和基于解剖学标记的配准算法,确定第二变换矩阵。
具体的,和上述确定第一变换矩阵的方法相同,计算机设备可以根据第一浮动解剖学标记集、第一参考解剖学标记集和预设的基于解剖学标记的配准算法,得到第二变换矩阵。
S10316,根据第二变换矩阵和待配准浮动解剖学标记集,确定第二配准结果集。
具体的,计算机设备可以根据得到的第二变换矩阵与待配准浮动解剖学标记集的乘积,利用第二变换矩阵对待配准浮动解剖学标记集进行空间变换,并结合插值法如近邻插值、双线性插值或三线性插值等方法,得到第二配准结果集。
S10317,根据第二空间距离集合和预设的距离阈值,确定小于预设的距离阈值的第二空间距离对应的第二浮动解剖学标记集;第二空间距离集合中记录有待配准参考解剖学标记集与第二配准结果集中各个对应标记的第二空间距离。
具体的，在得到第二配准结果集后，计算机设备可以根据公式D2=||Pf2–Pre2||2，计算出待配准参考解剖学标记集与第二配准结果集中各个对应标记的第二空间距离D2，其中，Pf2为待配准参考解剖学标记集与第二配准结果集中各个标记对应的集合，Pre2为第二配准结果集。可选的，上述预设的距离阈值可以根据需要设定，比如距离阈值可以根据用户可接受的待配准参考解剖学标记集与第二配准结果集中各个对应标记的实际距离进行确定。上述第二浮动解剖学标记集为从待配准浮动解剖学标记集中选取的预设的距离阈值内的第二空间距离对应的集合。
S10318,当第二浮动解剖学标记集中的标记的数目小于预设的阈值数目时,则将第二变换矩阵作为目标变换矩阵。
具体的,计算机设备可以将第二浮动解剖学标记集中的标记的数目与预设的数目阈值进行比较,根据比较结果确定是否将上述第二变换矩阵作为目标变换矩阵。当上述第二浮动解剖学标记集中的标记的数目小于预设的数目阈值时,则将第二变换矩阵作为目标变换矩阵,并继续执行S10311。
当上述第二浮动解剖学标记集中的标记点的数目大于或等于预设的数目阈值时,需要进行第三阶段的配准过程。
第三阶段的配准过程可以参见S10319至S10321:
S10319,获取待配准参考解剖学标记集中与第二浮动解剖学标记集对应的第二参考解剖学标记集。
具体的,第二参考解剖学标记集为从待配准参考解剖学标记集中选取的与上述第二浮动解剖学标记集中标记的名称或编号相同的标记对应的集合。
S10320,根据第二浮动解剖学标记集、第二参考解剖学标记集和基于解剖学标记的配准算法,确定第三变换矩阵,并将第三变换矩阵作为目标变换矩阵。
具体的,和上述确定第一变换矩阵和第二变换矩阵的方法相同,计算机设备可以根据第二浮动解剖学标记集、第二参考解剖学标记集和预设的基于解剖学标记的配准算法,得到第三变换矩阵,在得到第三变换矩阵后,计算机设备可以直接将该第三变换矩阵作为目标变换矩阵。
S10321,根据目标变换矩阵,对参考图像和浮动图像进行图像配准。
具体的,计算机设备可以根据浮动图像的每个像素点的坐标位置构成的矩阵和目标变换矩阵的乘积,并结合插值法如近邻插值、双线性插值或三线性插值等方法,将标记浮动图像映射到标记参考图像空间下,以实现标记参考图像和标记浮动图像在解剖学结构下的对齐,从而完成对标记参考图像和标记浮动图像的图像配准。
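下面给出利用目标变换矩阵把标记浮动图像重采样到标记参考图像空间的一个示意性Python片段，借助SciPy的affine_transform实现，order=1即（双/三）线性插值，order=0即近邻插值；其中“目标变换矩阵将浮动图像坐标映射到参考图像坐标”这一约定，以及忽略体素间距等空间信息，均为本示例的简化假设。

import numpy as np
from scipy.ndimage import affine_transform

def warp_moving_image(moving, target_matrix, reference_shape):
    # moving: 三维数组；target_matrix: 4x4矩阵，假设其把浮动图像坐标映射到参考图像坐标
    # affine_transform 需要的是“输出坐标 -> 输入坐标”的映射，因此取逆
    inv = np.linalg.inv(target_matrix)
    warped = affine_transform(
        moving,
        matrix=inv[:3, :3],          # 线性部分
        offset=inv[:3, 3],           # 平移部分
        output_shape=reference_shape,
        order=1)                     # 线性插值
    return warped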
可选的,计算机设备可以根据如下方式调整上述预设的比率和预设的距离阈值:对待配准的浮动图像和参考图像中的各个标记加噪声,利用上述三个阶段的配准方式对待配准的浮动图像和参考图像进行配准,得到新的目标变换矩阵,再利用新的目标变换矩阵,对上述浮动图像和参考图像进行图像配准,并根据得到的配准结果利用预设的相似性度量模型,计算配准后浮动图像和参考图像之间的相似性度量值,将该相似性度量值与预设的相似性度量阈值进行比较,如果小于预设的相似性度量阈值,则调整上述预设的比率和预设的距离阈值中的至少一个,直到最终得到的相似性度量值大于预设的相似性度量阈值为止,从而将预设的比率和预设的距离阈值调整为合适的值,进而可以使得利用调整的预设的比率和预设的阈值的算法进行配准的图像的配准精度更高。需要说明的是,上述添加的噪声的均值、方差和个数均可以随机设置。
在本实施例中,计算机设备可以获取标记浮动图像的待配准浮动解剖学标记集和标记参考图像的待配准参考解剖学标记集,并根据待配准浮动解剖学标记集、待配准参考解剖学标记集和基于解剖学标记的配准算法,分三个阶段对标记浮动图像和标记参考图像进行图像配准,每个阶段利用一定的条件比如预设的比率内的标记或预设的距离阈值内的标记,进行图像配准,而不是用全部标记进行图像配准,大大减小了计算量,提高了配准速度;另外,每个阶段的标记集均不同,从而可以降低部分解剖学标记可能被误检而影响配准精确度的影响,并且每个阶段的标记均是根据预设的比率或预设的距离阈值等进行筛选确定出的能够提高配准精度的标记,因此,本实施例提供的分阶段进行配准的方式可以提高图像配准的精度。
在一个实施例中,如图4所示,提供了另一种图像配准方法的流程示意图,当目标图像配准算法为基于分割的图像配准算法时,在上述实施例的基础上,作为一种可选的实施方式,上述S1022包括:
S1040,获取浮动图像对应的分割浮动图像和参考图像对应的分割参考图像。
具体的,分割浮动图像和分割参考图像可以为根据预设的已训练好的神经网络模型对待配准的浮动图像和参考图像进行语义信息提取后对应的图像。可选的,计算机设备可以利用预设的已训练好的神经网络模型对待配准的浮动图像和参考图像进行任意区域的分割,以得到分割浮动图像和分割参考图像。
S1041,根据分割浮动图像、分割参考图像和基于分割的图像配准算法,对浮动图像和参考图像进行图像配准,得到初始配准结果。
具体的,基于分割的图像配准算法可以为表面匹配算法、互信息法和灰度均方差法等算法中的任意一个。计算机设备可以根据获取的分割浮动图像、分割参考图像和基于分割的图像配准算法,确定出目标分割变换矩阵,从而根据该目标分割变换矩阵,将上述待配准的浮动图像映射到参考图像的空间坐标下,完成浮动图像和参考图像的配准,得到初始配准结果。
在本实施例中,计算机设备可以获取浮动图像对应的分割浮动图像和参考图像对应的分割参考图像,并根据分割浮动图像、分割参考图像和基于分割的图像配准算法,对浮动图像和参考图像进行图像配准,直接利用基于分割的图像配准算法对浮动图像和参考图像进行图像配准,实现方式较简单,提高了对浮动图像和参考图像进行图像配准的效率。
在一个实施例中,如图5所示,提供了另一种图像配准方法的流程示意图,在上述实施例的基础上,作为一种可选的实施方式,上述方法还包括:
S1050,获取对浮动图像和参考图像进行图像配准后的初始配准结果。
具体的,计算机设备获取上述对浮动图像和参考图像进行图像配准后的包括浮动图像和参考图像的变换矩阵的初始配准结果。
S1051,根据预设的配准结果整合方法,对不同解剖学标记得到的初始配准结果和/或不同分割区域得到的初始配准结果进行整合。
其中,预设的配准结果整合方法可以为三线性插值法和B样条插值法等方法中的任意一个方法。图像整合可以为将两幅或两幅以上来自不同成像设备或不同时刻获取的配准图像,采用某种算法,把各个图像有机地结合起来。具体的,计算机设备根据预设的配准结果整合方法,对不同解剖学标记得到的初始配准结果和/或不同分割区域得到的初始配准结果进行整合。也就是,计算机设备可以根据预设的配准结果整合方法,将初始配准结果中 的浮动图像和参考图像进行整合,以得到参考图像空间下浮动图像与参考图像整合在一起的扭曲图像。
在本实施例中,计算机设备可以获取对浮动图像和参考图像进行图像配准后的初始配准结果,从而根据预设的配准结果整合方法,对不同解剖学标记得到的初始配准结果和/或不同分割区域得到的初始配准结果进行整合,以实现将浮动图像和参考图像整合到一幅图像中,从而将各个图像的优点互补性地有机地结合起来,以获得信息量更丰富的新图像,从而较好地辅助医生利用整合后的图像判断病人的情况。
在一个实施例中,如图6所示,提供了另一种图像配准方法的流程示意图,在上述实施例的基础上,作为一种可选的实施方式,上述S1023包括:
S1060,根据变换矩阵、对参考图像进行下采样操作后得到的下采样参考图像和对浮动图像进行下采样操作后得到的下采样浮动图像,确定下采样参考图像和下采样浮动图像对应的变换后的浮动图像之间的相似性度量值。
具体的,计算机设备对参考图像进行下采样操作得到下采样参考图像,对浮动图像进行下采样操作得到下采样浮动图像,利用变换矩阵对下采样浮动图像进行空间变换,得到下采样浮动图像对应的变换后的浮动图像,然后根据变换矩阵、下采样参考图像和下采样浮动图像对应的变换后的浮动图像,确定下采样参考图像和下采样浮动图像对应的变换后的浮动图像之间的相似性度量值。可选的,计算机设备可以对参考图像和浮动图像进行一次下采样操作,得到下采样参考图像和下采样浮动图像,并利用变换矩阵对下采样浮动图像进行空间变换,得到下采样浮动图像对应的变换后的浮动图像,进而利用预设的相似性度量值的计算算法如互信息法、灰度均方差法等算法,确定下采样浮动图像对应的变换后的浮动图像与下采样参考图像之间的相似性度量值。需要说明的是,这里确定的下采样浮动图像对应的变换后的浮动图像与下采样参考图像之间的相似性度量值,指的是下采样浮动图像对应的变换后的浮动图像与下采样参考图像之间空间结构的相似性度量值。
S1061,对变换矩阵进行平移操作、旋转操作、错切操作和缩放操作中的至少一个操作,提取变换矩阵对应的初始参数。
具体的,计算机设备对浮动图像和参考图像的变换矩阵进行平移操作、旋转操作、错切操作和缩放操作中的至少一个操作,提取变换矩阵对应的初始参数。示例性地,若参考图像和浮动图像为三维图像,则其对应的变换矩阵可以为4*4的矩阵,计算机设备可以对上述变换矩阵进行平移操作、旋转操作、错切操作和缩放操作,将变换矩阵分解为平移矩阵、旋转矩阵、错切矩阵和缩放矩阵等四个4*4的矩阵,进而分别根据该四个4*4的矩阵在三维坐标系下的平移距离、旋转角度、错切角度和缩放比例等,得到12个变换矩阵对应的初始参数。类似的,若参考图像和浮动图像为二维图像,则计算机设备可以得到8个变换矩阵对应的初始参数。
S1062,根据相似性度量值、初始参数和预设的梯度下降法,确定目标变换矩阵。
具体的,计算机设备可以根据预设的梯度下降法调整上述初始参数,以使得上述相似性度量值达到最优,并将最优的相似性度量值对应的调整后的参数作为目标参数,根据目标参数确定该目标参数对应的目标变换矩阵。
S1063,根据目标变换矩阵对浮动图像进行变换,得到变换后的浮动图像。
具体的,计算机设备可以利用目标变换矩阵对浮动图像进行变换,使其映射到参考图像对应的空间坐标系下,得到变换后的浮动图像。
在本实施例中,计算机设备可以根据浮动图像和参考图像的变换矩阵、对参考图像进行下采样操作后得到的下采样参考图像和对浮动图像进行下采样操作后得到的下采样浮动图像,确定下采样参考图像和下采样浮动图像对应的变换后的浮动图像之间的相似性度量值,对变换矩阵进行平移操作、旋转操作、错切操作和缩放操作中的至少一个操作,提取变换矩阵对应的初始参数,进而根据相似性度量值、初始参数和预设的梯度下降法确定目标参数,由于目标参数是最优的相似性度量值对应的参数,因此,根据该目标参数确定出的目标变换矩阵也是较优的,这样利用该目标变换矩阵,能够准确地对浮动图像进行变换,提高了得到的变换后的浮动图像的准确度。
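下面是对S1060至S1062思路的一个简化Python草图：以平移、旋转等参数为自变量，在下采样图像上用数值梯度近似的梯度下降（对非相似度做下降，等价于使相似性度量上升）进行优化；为控制篇幅，这里仅示意“平移加绕z轴旋转”的参数化，相似性度量以负的灰度均方差代替，步长、迭代次数等均为假设值，并非本申请限定的实现。

import numpy as np
from scipy.ndimage import affine_transform

def params_to_matrix(p):
    # p: [tx, ty, tz, rz]，仅用平移加绕z轴旋转作示意
    tx, ty, tz, rz = p
    c, s = np.cos(rz), np.sin(rz)
    M = np.eye(4)
    M[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    M[:3, 3] = [tx, ty, tz]
    return M

def similarity(p, moving, fixed):
    # 负的灰度均方差作为相似性度量，值越大越相似
    M = np.linalg.inv(params_to_matrix(p))
    warped = affine_transform(moving, M[:3, :3], M[:3, 3],
                              output_shape=fixed.shape, order=1)
    return -np.mean((warped - fixed) ** 2)

def optimize_params(p0, moving, fixed, lr=1e-2, iters=100, eps=1e-3):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(p)
        for i in range(p.size):                    # 数值梯度
            dp = np.zeros_like(p)
            dp[i] = eps
            grad[i] = (similarity(p + dp, moving, fixed)
                       - similarity(p - dp, moving, fixed)) / (2 * eps)
        p = p + lr * grad                          # 沿梯度方向提升相似度
    return p, params_to_matrix(p)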
在上述实施例的基础上,作为一种可选的实施方式,目标配准模型包含前向配准网络和后向配准网络;目标配准模型的训练过程包括:
采用预设的无监督方法或弱监督的方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到目标配准模型。
其中,无监督方法是指利用无标注的医学图像作为训练样本图像,根据训练样本图像学习图像的分布或图像与图像间的关系;弱监督方法是指利用一部分已标注的医学图像作为训练样本图像,根据训练样本图像学习图像的分布或图像与图像间的关系。
具体的,计算机设备可以采用预设的无监督方法,利用无标注的医学图像作为训练样本,对预设的前向配准网络和预设的后向配准网络进行迭代训练,学习图像的分布或图像与图像间的关系,得到用于对不同模态的图像进行配准的目标配准模型;或者,计算机设备可以采用预设的弱监督方法,利用一部分已标注的医学图像和一部分没有标注的医学图像作为训练样本,对预设的前向配准网络和预设的后向配准网络进行迭代训练,学习图像的分布或图像与图像间的关系,用无标注的图像对模型的准确度与泛化能力进行进一步提升,得到用于对不同模态的图像进行配准的目标配准模型。
在本实施例中,计算机设备采用预设的无监督方法或弱监督的方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练的训练过程十分有效,当医学图像没有标注的时候,也可以有效地完成模型的训练,大大提高了得到目标配准模型的效率,进而提高了对浮动图像进行配准的配准效率。
在上述实施例的基础上,作为一种可选的实施方式,采用预设的无监督方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到目标配准模型,包括:采用预设的第一训练模式和第二训练模式,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到目标配准模型;其中,第一训练模式为先前向配准网络再后向配准网络的训练方式,第二训练模式为先后向配准网络再前向配准网络的训练方式。
具体的,计算机设备采用预设的先训练前向配准网络再训练后向配准网络的第一训练模式和预设的先训练后向配准网络再训练前向配准网络的第二训练模式,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到目标配准模型。可选的,前向配准网络、后向配准网络可以为深度学习中的卷积神经网络(Convolutional Neural Networks,CNN)。
在本实施例中,计算机设备采用预设的第一训练模式和第二训练模式,对预设的前向配准网络和后向配准网络进行迭代训练,通过迭代训练能够提高得到的用于对不同模态图像进行配准的目标配准模型的准确度,进而提高了根据目标配准模型对浮动图像进行配准的配准的准确度。
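下面给出前向/后向配准网络的一个极简PyTorch示意：用一个小型卷积神经网络由拼接在一起的浮动图像与参考图像回归出位移场，再借助grid_sample完成空间变换；网络层数、通道数以及“位移场已归一化到[-1,1]网格尺度”等均为本示例的假设，仅用于说明原理。

import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet3D(nn.Module):
    # 输入拼接后的浮动图像与参考图像（2通道），输出3通道的位移场
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 3, 3, padding=1))

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))   # (B,3,D,H,W)

def warp(moving, flow):
    # 用位移场对浮动图像做空间变换（空间变换网络的思想），假设flow已归一化到[-1,1]
    B, _, D, H, W = moving.shape
    zs = torch.linspace(-1, 1, D, device=moving.device)
    ys = torch.linspace(-1, 1, H, device=moving.device)
    xs = torch.linspace(-1, 1, W, device=moving.device)
    gz, gy, gx = torch.meshgrid(zs, ys, xs, indexing="ij")
    base = torch.stack([gx, gy, gz], dim=-1)                  # (D,H,W,3)，末维按x,y,z排列
    base = base.unsqueeze(0).expand(B, -1, -1, -1, -1)
    offset = flow.permute(0, 2, 3, 4, 1)                      # 假设flow通道顺序为x,y,z
    return F.grid_sample(moving, base + offset, align_corners=True)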
在一个实施例中，如图7所示，提供了另一种图像配准方法的流程示意图，在上述实施例的基础上，作为一种可选的实施方式，上述采用预设的第一训练模式，对预设的前向配准网络和预设的后向配准网络进行训练，包括：
S1070,将第一浮动图像和第一参考图像输入预设的前向配准网络,得到第一配准浮动图像;第一参考图像的模态为模态一,第一浮动图像的模态为模态二;第一配准浮动图像的模态与第一浮动图像的模态相同。
具体的,计算机设备将模态一的第一参考图像和模态二的第一浮动图像输入预设的前向配准网络,得到与第一浮动图像模态相同的第一配准浮动图像。可选的,第一参考图像和第一浮动图像可以从PACS服务器中获取,也可以直接从不同的医学影像设备中获取。示例性地,将MRI图像与CT图像进行配准时,可以将CT图像作为第一参考图像,MRI图像作为第一浮动图像输入前向配准网络,得到第一配准浮动图像,也就是配准后的MRI图像。可以理解的是,目标配准模型是用于对变换后的浮动图像和参考图像进行配准的,那么,相应地,这里所说的第一浮动图像也是经过变换后的图像,也就是,计算机设备会对第一浮动图像和第一参考图像进行语义信息的提取,得到包括提取的语义信息的标记第一浮动图像和标记第一参考图像,然后根据提取的语义信息,确定标记第一浮动图像和标记第一参考图像分别对应的目标配准算法,再根据提取的语义信息和目标图像配准算法,对第一浮动图像和第一参考图像进行配准,得到第一浮动图像和第一参考图像间的变换矩阵,根据第一浮动图像和第一参考图像间的变换矩阵、第一参考图像和第一浮动图像,得到变换后的图像,也就是这里所说的第一浮动图像。
S1071,将第一配准浮动图像确定为预设的后向配准网络的第二参考图像。
具体的,计算机设备将上述第一配准浮动图像确定为预设的后向配准网络的第二参考图像,也就是,第二参考图像的模态为模态二。对应到上述示例中,第一配准浮动图像为配准后的MRI图像。
S1072,将第二参考图像和第二浮动图像输入预设的后向配准网络,得到第二配准浮动图像;第二浮动图像的模态为模态一;第二配准浮动图像的模态与第二浮动图像的模态相同。
具体的，计算机设备先获取一幅模态为模态一的图像作为第二浮动图像，将第一配准浮动图像作为第二参考图像，再将第二参考图像和第二浮动图像输入后向配准网络，得到与第二浮动图像模态相同的第二配准浮动图像。可选的，计算机设备可以从PACS服务器中获取第二浮动图像，也可以直接从与模态一为相同模态的医学影像设备中获取第二浮动图像。继续以上述例子为例，也就是将上述配准后的MRI图像作为第二参考图像，再获取一幅CT图像作为第二浮动图像，将MRI图像和CT图像输入后向配准网络，得到配准后的CT图像。可以理解的是，与S1070相对应，这里的第二浮动图像也是经过变换后的图像，得到这里的第二浮动图像的过程可参照上述实施例的描述，在此不再赘述。
S1073,根据第二配准浮动图像和第一参考图像,获取第二配准浮动图像与第一参考图像间的第一相似度,根据第一相似度对预设的前向配准网络、预设的后向配准网络进行训练。
具体的,计算机设备根据第二配准浮动图像和第一参考图像,获取第二配准浮动图像和第一参考图像间的第一相似度,根据第一相似度对预设的前向配准网络和预设的后向配准网络进行训练。其中,第一相似度为第二配准浮动图像和第一参考图像间的相似度测度。可选的,第一相似度可以是第二配准浮动图像与第一参考图像间的互相关、均方差、互信 息或相关性系数等,也可是一个判别器网络,用于自动判别图像间的相似度。其中,判别器网络可以是一个简单的卷积神经网络。可选的,计算机设备可以根据第一相似度的值调整预设的前向配准网络和预设的后向配准网络中的参数值,对预设的前向配准网络和预设的后向配准网络进行训练。
在本实施例中,计算机设备将第一浮动图像和第一参考图像输入预设的前向配准网络,得到与第一浮动图像模态相同的第一配准浮动图像,再将第一配准浮动图像作为预设的后向配准网络的第二参考图像,将模态为模态一的第二浮动图像和第二参考图像输入预设的后向配准网络,得到第二配准浮动图像,由于第二配准浮动图像与第一参考图像的模态相同,通过获取第二配准浮动图像与第一参考图像间的第一相似度,根据第一相似度训练预设的前向配准网络和预设的后向配准网络实现了不同模态图像的配准,解决了跨模态图像的配准问题。
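结合上面的网络示意，下面给出第一训练模式单步训练的一个示意性写法：先前向配准得到第一配准浮动图像并作为后向配准的第二参考图像，再以第二配准浮动图像与第一参考图像间的相似度（此处以负均方差近似）构造损失并反向传播；其中forward_net、backward_net、warp_fn、优化器以及损失形式均为假设，仅作参考。

import torch

def train_step_mode1(forward_net, backward_net, warp_fn,
                     moving_m2, fixed_m1, moving_m1, optimizer):
    # moving_m2: 模态二的第一浮动图像; fixed_m1: 模态一的第一参考图像; moving_m1: 模态一的第二浮动图像
    # optimizer 需同时包含前向与后向配准网络的参数
    optimizer.zero_grad()
    flow_fwd = forward_net(moving_m2, fixed_m1)      # 前向配准
    reg1 = warp_fn(moving_m2, flow_fwd)              # 第一配准浮动图像（模态二），作为第二参考图像
    flow_bwd = backward_net(moving_m1, reg1)         # 后向配准
    reg2 = warp_fn(moving_m1, flow_bwd)              # 第二配准浮动图像（模态一）
    similarity = -torch.mean((reg2 - fixed_m1) ** 2) # 与第一参考图像同为模态一，可直接计算相似度
    loss = 1.0 - similarity                          # 以非相似度作为损失（示意）
    loss.backward()
    optimizer.step()
    return loss.item()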
在上述实施例的基础上,作为一种可选的实施方式,上述S1073中根据第一相似度对预设的前向配准网络、预设的后向配准网络进行训练,包括:将第一相似度确定为第二配准浮动图像的第一准确度,根据第一准确度指导预设的前向配准网络和预设的后向配准网络的训练。
具体的，计算机设备将上述获取的第一相似度确定为第二配准浮动图像的第一准确度，根据第一准确度对前向配准网络和后向配准网络进行训练。可选的，第一相似度的值越大配准准确度越高，第一相似度的值越小配准准确度越低。
在本实施例中,计算机设备将第一相似度确定为第二配准浮动图像的第一准确度,根据第一准确度指导前向配准网络和后向配准网络的训练,由于第一准确度是根据第一相似度确定的,提高了确定的第一准确度的准确性,进而提高了根据第一准确度训练得到的前向配准网络和后向配准网络的准确性。
在一个实施例中,如图8所示,提供了另一种图像配准方法的流程示意图,在上述实施例的基础上,作为一种可选的实施方式,上述采用预设的第二训练模式,对预设的前向配准网络和预设的后向配准网络进行训练,包括:
S1080,将第一浮动图像确定为预设的后向配准网络的第三参考图像、将第一参考图像确定为预设的后向配准网络的第三浮动图像,将第三浮动图像和第三参考图像输入预设的后向配准网络,得到第三配准浮动图像;第三参考图像的模态为模态二,第三浮动图像的模态为模态一;第三配准浮动图像的模态与第三浮动图像的模态相同。
具体的,计算机设备将上述第一浮动图像确定为后向配准网络的第三参考图像、将上述第一参考图像确定为后向配准网络的第三浮动图像,也就是第三参考图像的模态为模态二、第三浮动图像的模态为模态一,之后计算机设备将第三浮动图像和第三参考图像输入后向配准网络,得到与第三浮动图像模态相同的第三配准浮动图像,即第三配准浮动图像的模态为模态一。对应到上述例子中,也就是将CT图像确定为第三浮动图像,将MRI图像确定为第三参考图像,将CT图像和MRI图像输入后向配准网络,得到第三配准浮动图像,也就是配准后的CT图像。
S1081,将第三配准浮动图像确定为预设的前向配准网络的第四参考图像。
具体的,计算机设备将上述第三配准浮动图像确定为预设的前向配准网络的第四参考图像,也就是,第四参考图像的模态为模态一。对应到上述示例中,第四参考图像为配准后的CT图像。
S1082,将第四参考图像和第四浮动图像输入预设的前向配准网络,得到第四配准浮动图像;第四浮动图像的模态为模态二;第四配准浮动图像的模态与第四浮动图像的模态相同。
具体的,计算机设备先获取一幅模态为模态二的图像作为第四浮动图像,将第三配准浮动图像作为第四参考图像,再将第四浮动图像和第四参考图像输入预设的前向配准网络,得到与第四浮动图像模态相同的第四配准浮动图像。可选的,计算机设备可以从PACS服务器中获取第四浮动图像,也可以直接从与模态二为相同模态的医学影像设备中获取第四浮动图像。继续以上述例子为例,也就是将上述配准后的CT图像作为第四参考图像,再获取一幅MRI图像作为第四浮动图像,将MRI图像和CT图像输入前向配准网络,得到配准后的MRI图像。可以理解的是,与S1070相对应,这里的第四浮动图像也是经过变换后的图像,得到这里的第四浮动图像的过程可参照上述实施例的描述,在此不再赘述。
S1083,根据第四配准浮动图像和第三参考图像,获取第四配准浮动图像与第三参考图像间的第二相似度,根据第二相似度对预设的后向配准网络、预设的前向配准网络进行训练。
具体的,计算机设备根据第四配准浮动图像和第三参考图像,获取第四配准浮动图像和第三参考图像间的第二相似度,根据第二相似度对预设的后向配准网络和预设的前向配准网络进行训练。其中,第二相似度为第四配准浮动图像和第三参考图像间的相似度测度。可选的,第二相似度可以是第四配准浮动图像与第三参考图像间的互相关、均方差、互信息或相关性系数,也可是一个判别器网络,用于自动判别图像间的相似度。其中,判别器网络可以是一个简单的卷积神经网络。可选的,计算机设备可以根据第二相似度的值调整预设的后向配准网络和预设的前向配准网络中的参数值,对预设的后向配准网络和预设的前向配准网络进行训练。
在本实施例中,计算机设备将第一浮动图像确定为后向配准网络的第三参考图像、将第一参考图像确定为后向配准网络的第三浮动图像,将第三浮动图像和第三参考图像输入后向配准网络,得到与第三浮动图像模态相同的第三配准浮动图像,再将第三配准浮动图像作为前向配准网络的第四参考图像,将模态为模态二的第四浮动图像和第四参考图像输入预设的前向配准网络,得到第四配准浮动图像,由于第四配准浮动图像与第三参考图像的模态相同,通过获取第四配准浮动图像与第三参考图像间的第二相似度,根据第二相似度训练预设的后向配准网络和预设的前向配准网络实现了不同模态图像的配准,解决了跨模态图像的配准问题。
在上述实施例的基础上,作为一种可选的实施方式,上述S1083中的根据第二相似度对后向配准网络、前向配准网络进行训练,包括:将第二相似度确定为第四配准浮动图像的第二准确度,根据第二准确度指导预设的后向配准网络和预设的前向配准网络的训练。
具体的，计算机设备将上述获取的第二相似度确定为第四配准浮动图像的第二准确度，根据第二准确度对预设的后向配准网络和预设的前向配准网络进行训练。可选的，第二相似度的值越大第四配准浮动图像的第二准确度越高，第二相似度的值越小第四配准浮动图像的第二准确度越低。
在本实施例中，计算机设备将第二相似度确定为第四配准浮动图像的第二准确度，根据第二准确度指导预设的后向配准网络和预设的前向配准网络的训练，由于第二准确度是根据第二相似度确定的，大大提高了确定的第二准确度的准确性，进而提高了根据第二准确度训练得到的后向配准网络和前向配准网络的准确性。
在一个实施例中,如图9所示,提供了另一种图像配准方法的流程示意图,在上述实施例的基础上,作为一种可选的实施方式,上述采用预设的第一训练模式和第二训练模式,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到目标配准模型,还包括:
S1090,根据第一相似度获取第一训练模式的第一损失函数的值,根据第二相似度获取第二训练模式的第二损失函数的值。
其中,损失函数是图像配准模型训练过程中的目标函数,图像配准模型训练过程中的损失函数是通过图像间的非相似度定义的。具体的,计算机设备根据第一相似度获取第一训练模式的第一损失函数,根据第二相似度获取第二训练模式的第二损失函数。例如,第一相似度为第二配准浮动图像与第一参考图像间的互相关时,第一损失函数的值等于1-互相关的值;第二相似度为第四配准浮动图像与第三参考图像间的均方差时,第二损失函数的值等于1-均方差的值。
S1091,根据第一损失函数的值和第二损失函数的值,确定目标配准模型。
具体的,计算机设备根据获取的第一损失函数的值和第二损失函数的值,确定第一损失函数和第二损失函数对应的前向配准网络和后向配准网络,将对应的前向配准网络和后向配准网络确定为目标配准模型。可选的,计算机设备可以将第一损失函数的值和第二损失函数的值达到稳定值时对应的前向配准网络和后向配准网络,确定为目标配准模型。
在本实施例中,计算机设备根据第一相似度获取第一训练模式的第一损失函数的值,根据第二相似度获取第二损失函数的值,由于第一损失函数的值和第二损失函数的值是根据相同模态图像间的相似度获取的,获取的第一损失函数的值和第二损失函数的值比较准确,进而提高了根据第一损失函数的值和第二损失函数的值确定的配准模型的准确度。
在一个实施例中,如图10所示,提供了一种图像配准装置的结构示意图,图像配准装置包括:获取模块110和配准模块111。
具体的,获取模块110,用于获取待配准的浮动图像和参考图像;浮动图像和参考图像为两个不同模态的图像;
配准模块111,用于根据浮动图像、参考图像和目标配准方法,获取配准结果;目标配准方法用于对不同模态的图像进行配准。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
关于图像配准装置的具体限定可以参见上文中对于图像配准方法的限定,在此不再赘述。上述图像配准装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中，提供了一种计算机设备，其内部结构图可以如图11所示。该计算机设备包括通过系统总线连接的处理器、存储器、网络接口、显示屏和输入装置。其中，该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作系统和计算机程序。该内存储器为非易失性存储介质中的操作系统和计算机程序的运行提供环境。该计算机设备的网络接口用于与外部的计算机设备通过网络连接通信。该计算机程序被处理器执行时实现一种图像配准方法。该计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏，该计算机设备的输入装置可以是显示屏上覆盖的触摸层，也可以是计算机设备外壳上设置的按键、轨迹球或触控板，还可以是外接的键盘、触控板或鼠标等。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现以下步骤:
获取待配准的浮动图像和参考图像;浮动图像和参考图像为两个不同模态的图像;
根据浮动图像、参考图像和目标配准方法,获取配准结果;目标配准方法用于对不同模态的图像进行配准。
在一个实施例中,提供了一种可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现以下步骤:
获取待配准的浮动图像和参考图像;浮动图像和参考图像为两个不同模态的图像;
根据浮动图像、参考图像和目标配准方法,获取配准结果;目标配准方法用于对不同模态的图像进行配准。
以上实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。
图像配准可以实现将不同时间、不同成像设备或不同条件下获取的两幅或多幅图像进行匹配和叠加,比如可以对电子计算机断层扫描(Computed Tomography,CT)图像和正电子发射型计算机断层显像(Positron Emission Computed Tomography,PET)图像等图像进行匹配和叠加,以在同一图像上显示参与配准的CT图像的信息和PET图像的信息,为临床医学诊断提供较好的辅助作用,是图像处理领域中的一项关键技术。
传统技术中,如果感兴趣区域(Region Of Interest,ROI)为不规则区域,则提取待配准图像中的不规则区域,并基于该不规区域进行配准;如果ROI为关键点,则提取待配准图像中的关键点,并基于该关键点进行配准。
但是,传统技术中在进行图像配准时,只能基于单个语义信息比如不规则区域或关键点等对待配准图像进行配准,导致传统的配准方法的适用范围较低。
为了解决传统技术中在进行图像配准时,只能基于单个语义信息比如不规则区域或关键点等对待配准图像进行配准,导致传统的配准方法的适用范围较低的问题,本申请另一个实施例中提出了一种图像配准的方法、装置、计算机设备和存储介质。
如图12所示,本申请实施例提供了一种图像配准的方法,方法包括:
S2010,获取待配准的参考图像和浮动图像。
具体的,参考图像和浮动图像可以是同模态的图像,也可以是异模态的图像,比如,参考图像和浮动图像可以均为CT图像,也可以一个是CT图像,另一个是PET图像。可选的,计算机设备可以对获得的两幅或多幅图像进行配准,比如将其中一幅图像作为参考图像,其它图像作为浮动图像,将浮动图像映射到参考图像,以实现参考图像与浮动图像在解剖学结构下的对齐。可选的,参考图像和浮动图像可以是同一个体的图像,也可以是不同个体的图像,可以是包含的解剖学结构均相同的图像,也可以是包含部分相同的解剖学结构的图像,本实施例对参考图像和浮动图像的来源并不做限定。可选的,参考图像和浮动图像可以是二维图像,也可以是三维图像,本实施例对此并不做具体限定。
S2011,对参考图像和浮动图像进行语义信息的提取,得到包括语义信息的标记参考图像和标记浮动图像。
具体的，计算机设备获取到输入的参考图像和浮动图像后，可以根据预设的已训练好的神经网络模型对参考图像和浮动图像中的语义信息进行提取，比如，如果检测到肺部对应的区域，计算机设备就可以把肺部对应的区域分割出来，从而提取出肺部对应的语义信息；如果检测到骨骼，就用标记点将骨骼对应的位置标记出来，从而提取出骨骼对应的语义信息：解剖学标记点。计算机设备利用预设的神经网络模型对参考图像和浮动图像进行语义信息提取后，可以得到包含提取的语义信息的标记参考图像和标记浮动图像。
S2012,根据语义信息,从预设的图像配准模型中确定标记参考图像和标记浮动图像分别对应的目标图像配准模型。
具体的,上述图像配准模型为用于对提取语义信息后得到的标记参考图像和标记浮动图像进行配准的模型,比如表面匹配算法、互信息法、标准正交化矩阵法和最小二乘法等对应的算法模型。对于包含不同语义信息的标记参考图像和标记浮动图像,计算机设备可以利用不同的配准模型对二者进行配准,即包括分割区域的标记参考图像和标记浮动图像与包括解剖学标记点的标记参考图像和标记浮动图像,可以对应不同的图像配准模型。
可选的,上述语义信息包括:浮动图像的分割区域和解剖学标记点中的至少一个,以及参考图像的割区域和解剖学标记点中的至少一个。其中,上述语义信息可以为参考图像和浮动图像中的解剖学标记点,也可以为参考图像和浮动图像中的分割区域。进一步的,上述解剖学标记点可以是几何标记点,如灰度极值或线性结构交点,也可以是在解剖形态上清晰可见并可精确定位的解剖标记点,如人体组织、器官或病灶的关键标记点或特征点;上述分割区域可以是参考图像和浮动图像对应的曲线或曲面等,如肺部、肝部或不规则区域。
可选的,上述预设的图像配准模型可以包括基于分割的图像配准模型和基于解剖学标记点的配准模型。进一步的,基于分割的图像配准模型为能够对包括上述分割区域的标记参考图像和标记浮动图像进行图像配准的图像配准模型,如表面匹配算法、互信息法、灰度均方差法等方法对应的算法模型;基于解剖学标记点的配准模型为能够对包括上述解剖学标记点的标记参考图像和标记浮动图像进行图像配准的配准模型,如奇异值分解算法、迭代最近点法、标准正交化矩阵法等方法对应的算法模型。
S2013,根据语义信息和目标图像配准模型,对参考图像和浮动图像进行图像配准。
具体的,根据语义信息的不同,计算机设备可以选取对应的目标图像配准模型,对参考图像和浮动图像进行图像配准。可选的,一幅参考图像或一幅浮动图像中可以同时包括分割区域和解剖学点,此时,计算机设备可以先利用解剖学点对应的目标图像配准模型对参考图像和浮动图像中的解剖学点进行配准,再利用分割区域对应的目标图像配准模型对参考图像和浮动图像中的分割区域进行配准;也可以先利用分割区域对应的目标图像配准模型对参考图像和浮动图像中的分割区域进行配准,再利用解剖学点对应的目标图像配准模型对参考图像和浮动图像中的解剖学点进行配准,也可以同时利用解剖学点对应的目标图像配准模型对参考图像和浮动图像中的解剖学点进行配准,并利用分割区域对应的目标图像配准模型对参考图像和浮动图像中的分割区域进行配准,本实施例对此并不做限定。
可选的，计算机设备在确保继续使用其中的CPU进行图像配准的相关运算处理的情况下，还可以引入支持并行计算架构（Compute Unified Device Architecture，CUDA）的图形处理器（Graphics Processing Unit，GPU）处理部分运算，以进一步提升上述对参考图像和浮动图像进行配准的配准算法的运行速度。
本实施例提供的图像配准方法,计算机设备可以获取待配准的参考图像和浮动图像;并对参考图像和浮动图像进行语义信息的提取,得到包括语义信息的标记参考图像和标记浮动图像;进而根据语义信息,从预设的图像配准模型中确定标记参考图像和标记浮动图像分别对应的目标图像配准模型;最终根据语义信息和目标图像配准模型,对标记参考图像和标记浮动图像进行图像配准。本实施例中,计算机设备可以先提取参考图像和浮动图像的语义信息,从而根据不同的语义信息,采用不同的目标图像配准模型对参考图像和浮动图像进行配准,以完成包括多种语义信息的参考图像和浮动图像的配准,解决了现有技术中只能基于单一的语义信息对参考图像和浮动图像进行配准的局限性,大大提高了图像配准的适用范围。
图13为另一个实施例提供的图像配准方法流程示意图。本实施例涉及的是当目标图像配准模型为上述基于解剖学标记点的配准模型时,计算机设备根据基于解剖学标记点的配准模型和语义信息对参考图像和浮动图像进行配准的过程。在上述实施例的基础上,可选的,上述S2013可以包括:
S2020,获取标记参考图像的待配准参考解剖学标记点集和标记浮动图像的待配准浮动解剖学标记点集。
具体的,上述待配准参考解剖学标记点集和待配准浮动解剖学标记点集为各个解剖学标记点的坐标信息的集合。可选的,解剖学标记点可以为人工进行预标记的标记点。
S2021,根据待配准参考解剖学标记点集、待配准浮动解剖学标记点集和基于解剖学标记点的配准模型,对参考图像和浮动图像进行图像配准。
具体的,上述基于解剖学标记点的配准模型可以为奇异值分解算法、迭代最近点算法、标准正交化矩阵法等方法对应的算法模型中的任意一个。计算机设备可以根据获取的待配准参考解剖学标记点集、待配准浮动解剖学标记点集和预设的基于解剖学标记点的配准模型,对参考图像和浮动图像进行图像配准。
可选的,上述S2021具体可以包括:根据上述待配准参考解剖学标记点集和上述待配准浮动解剖学标记点集中各个标记点的名称的匹配结果,确定标记点交集;根据上述标记点交集,从上述待配准参考解剖学标记点集和上述待配准浮动解剖学标记点集中分别确定初始参考解剖学标记点集和初始浮动解剖学标记点集;根据上述初始参考解剖学标记点集、上述初始浮动解剖学标记点集和上述基于解剖学标记点的配准模型,对上述参考图像和上述浮动图像进行图像配准。
其中,每个解剖学标记点有唯一的名称,对于待配准参考解剖学标记点集和待配准浮动解剖学标记点集中的解剖学标记点名称相同的解剖学标记点构成二者的标记点交集。可选的,计算机设备也可以将待配准参考解剖学标记点集和待配准浮动解剖学标记点集中的解剖学标记点编号相同的解剖学标记点作为二者的标记点交集。确定标记点交集后,计算机设备可以将待配准参考解剖学标记点集中上述标记点交集对应的点集作为初始参考解剖学标记点集,以及选取待配准浮动解剖学标记点集中上述标记点交集对应的点集作为初始浮动解剖学标记点集,从而可以将初始参考解剖学标记点集和初始浮动解剖学标记点集输入预设的基于解剖学标记点的配准模型,实现参考图像和浮动图像在相同解剖学结构下的对齐。
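按标记点名称求交集并抽取初始点集的过程可用如下简短Python片段示意，其中标记点集以“名称到坐标”的字典表示，这只是众多可行数据组织方式中的一种假设。

import numpy as np

def build_initial_point_sets(ref_landmarks, mov_landmarks):
    # ref_landmarks / mov_landmarks: dict，键为标记点名称，值为三维坐标
    common = sorted(set(ref_landmarks) & set(mov_landmarks))       # 标记点交集
    init_ref = np.array([ref_landmarks[name] for name in common])  # 初始参考解剖学标记点集
    init_mov = np.array([mov_landmarks[name] for name in common])  # 初始浮动解剖学标记点集
    return common, init_ref, init_mov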
上述S2021的步骤中，计算机设备可以根据从待配准参考解剖学标记点集和待配准浮动解剖学标记点集中选取的初始参考解剖学标记点集和初始浮动解剖学标记点集，并利用基于解剖学标记点的配准模型对参考图像和浮动图像进行图像配准。可选的，利用基于解剖学标记点的配准模型对参考图像和浮动图像进行图像配准的过程可以分为三个阶段，每个阶段可以得到对应的配准结果，三个阶段的配准过程如下：
第一阶段的配准过程可以参见S20211至S20213:
S20211，根据初始参考解剖学标记点集、初始浮动解剖学标记点集和基于解剖学标记点的配准模型，确定第一配准结果；第一配准结果包括第一配准结果点集和第一变换矩阵。
具体的,计算机设备将初始参考解剖学标记点集和初始浮动解剖学标记点集输入预设的基于解剖学标记点的配准模型后,可以得到待配准浮动解剖学标记点集进行空间变换后的第一配准结果点集和第一变换矩阵。上述第一配准结果点集和第一变换矩阵构成第一配准结果。
S20212,根据第一空间距离集合和预设的比率,确定预设的比率内的第一空间距离对应的第一浮动解剖学标记点集;其中,第一空间距离集合中记录有待配准参考解剖学标记点集与第一配准结果点集中各个对应标记点的第一空间距离。
具体的,在得到第一配准结果点集后,计算机设备可以根据公式D1=||Pf1–Pre1||2,计算出待配准参考解剖学标记点集与第一配准结果点集中各个对应标记点的第一空间距离D1,其中,Pf1为待配准参考解剖学标记点集中与第一配准结果点集中对应的标记点构成的点集,Pre1为第一配准结果点集。可选的,上述预设的比率可以为根据需要设定的(0,1]内的任意值。可选的,可以直接选取预设的比率内的第一空间距离对应的第一浮动解剖学标记点集,也可以对第一空间距离中的各个距离进行升序排序,再选取预设的比率内的第一空间距离对应的第一浮动解剖学标记点集,由于待配准参考解剖学标记点集与第一配准结果点集中各个对应标记点的第一空间距离越小,配准结果精度越高,因此,对第一空间距离中的各个距离进行升序排序后再选取预设的比率内的第一空间距离对应的第一浮动解剖学标记点集,可以提高配准的准确度。上述第一浮动解剖学标记点集为从待配准浮动解剖学标记点集中选取的预设的比率内的第一空间距离对应的点集。
S20213,当第一浮动解剖学标记点集中的标记点的数目小于预设的数目阈值时,则将第一变换矩阵作为目标变换矩阵。
具体的,上述目标变换矩阵为标记参考图像和标记浮动图像进行图像配准所用的矩阵,计算机设备可以利用目标变换矩阵实现标记参考图像和标记浮动图像的配准。可选的,计算机设备可以将第一浮动解剖学标记点集中的标记点的数目与预设的数目阈值进行比较,根据比较结果确定是否将上述第一变换矩阵作为目标变换矩阵。可选的,上述预设的数目阈值可以为5。当上述第一浮动解剖学标记点集中的标记点的数目小于预设的数目阈值时,则将第一变换矩阵作为目标变换矩阵,并继续执行S20211。
当上述第一浮动解剖学标记点集中的标记点的数目不小于预设的数目阈值时,需要进行第二阶段的配准过程。
第二阶段的配准过程可以参见S20214至S20218：
S20214,获取待配准参考解剖学标记点集中与第一浮动解剖学标记点集对应的第一参考解剖学标记点集。
本步骤中,上述第一参考解剖学标记点集为待配准参考解剖学标记点集中的标记点的名称或编号与第一浮动解剖学标记点集中的标记的名称或编号相同的标记点对应的标记点构成的点集。
S20215,根据第一参考解剖学标记点集、第一浮动解剖学标记点集和基于解剖学标记点的配准模型,确定第二变换矩阵。
具体的,和上述确定第一变换矩阵的方法相同,计算机设备可以将第一参考解剖学标记点集和第一浮动解剖学标记点集输入预设的基于解剖学标记点的配准模型,从而得到第二变换矩阵。
S20216,根据第二变换矩阵和待配准浮动解剖学标记点集,确定第二配准结果点集。
本步骤中,计算机设备可以根据得到的第二变换矩阵与待配准浮动解剖学标记点集的乘积,利用第二变换矩阵对待配准浮动解剖学标记点集进行空间变换,并结合插值法如近邻插值、双线性插值或三线性插值等方法,得到第二配准结果点集。
S20217,根据第二空间距离集合和预设的距离阈值,确定小于预设的距离阈值的第二空间距离对应的第二浮动解剖学标记点集;第二空间距离集合中记录有待配准参考解剖学标记点集与第二配准结果点集中各个对应标记点的第二空间距离。
本步骤中，在得到第二配准结果点集后，计算机设备可以根据公式D2=||Pf2–Pre2||2，计算出待配准参考解剖学标记点集与第二配准结果点集中各个对应标记点的第二空间距离D2，其中，Pf2为待配准参考解剖学标记点集与第二配准结果点集中各个标记点对应的点集，Pre2为第二配准结果点集。可选的，上述预设的距离阈值可以根据需要设定，比如距离阈值可以根据用户可接受的待配准参考解剖学标记点集与第二配准结果点集中各个对应标记点的实际距离进行确定。上述第二浮动解剖学标记点集为从待配准浮动解剖学标记点集中选取的预设的距离阈值内的第二空间距离对应的点集。
S20218,当第二浮动解剖学标记点集中的标记点的数目小于预设的阈值数目时,则将第二变换矩阵作为目标变换矩阵。
本步骤中,计算机设备可以将第二浮动解剖学标记点集中的标记点的数目与预设的数目阈值进行比较,根据比较结果确定是否将上述第二变换矩阵作为目标变换矩阵。当上述第二浮动解剖学标记点集中的标记点的数目小于预设的数目阈值时,则将第二变换矩阵作为目标变换矩阵,并继续执行S20211。
当上述第二浮动解剖学标记点集中的标记点的数目不小于预设的数目阈值时,需要进行第三阶段的配准过程。
第三阶段的配准过程可以参见S20219至S20221：
S20219,获取待配准参考解剖学标记点集中与第二浮动解剖学标记点集对应的第二参考解剖学标记点集。
本步骤中,第二参考解剖学标记点集为从待配准参考解剖学标记点集中选取的与上述第二浮动解剖学标记点集中标记点的名称或编号相同的标记点对应的点集。
S20220,根据第二参考解剖学标记点集、第二浮动解剖学标记点集和基于解剖学标记点的配准模型,确定第三变换矩阵,并将第三变换矩阵作为目标变换矩阵。
本步骤中，和上述确定第一变换矩阵和第二变换矩阵的方法相同，计算机设备可以将第二参考解剖学标记点集和第二浮动解剖学标记点集输入预设的基于解剖学标记点的配准模型，从而得到第三变换矩阵，在得到第三变换矩阵后，计算机设备可以直接将该第三变换矩阵作为目标变换矩阵。
S20221,根据目标变换矩阵,对参考图像和浮动图像进行图像配准。
具体的,计算机设备可以根据浮动图像的每个像素点的坐标位置构成的矩阵和目标变换矩阵的乘积,并结合插值法如近邻插值、双线性插值或三线性插值等方法,将标记浮动图像映射到标记参考图像空间下,以实现标记参考图像和标记浮动图像在解剖学结构下的对齐,从而完成对标记参考图像和标记浮动图像的图像配准。
可选的,可以根据如下方式调整上述预设的比率和预设的距离阈值:对待配准的参考图像和浮动图像中的各个标记点加噪声,利用上述三个阶段的配准方式对待配准的参考图像和浮动图像进行配准,得到新的目标变换矩阵,再利用新的目标变换矩阵,对上述参考图像和浮动图像进行图像配准,并根据得到的配准结果,利用预设的相似性度量模型,计算配准后参考图像和浮动图像之间的相似性度量值,根据该相似性度量值与预设的相似性度量阈值进行比较,如果小于预设的相似性度量阈值,则调整上述预设的比率和预设的距离阈值中的至少一个,直到最终得到的相似性度量值大于预设的相似性度量阈值为止,从而将预设的比率和预设的距离阈值调整为合适的值,进而可以使得利用调整的预设的比率和预设的阈值的算法模型进行配准的图像的配准精度更高。需要说明的是,上述添加的噪声的均值、方差和个数均可以随机设置。
本实施例提供的图像配准方法,计算机设备可以获取标记参考图像的待配准参考解剖学标记点集和标记浮动图像的待配准浮动解剖学标记点集;并根据待配准参考解剖学标记点集、待配准浮动解剖学标记点集和基于解剖学标记点的配准模型,分三个阶段对标记参考图像和标记浮动图像进行图像配准,每个阶段利用通过一定的条件比如预设的比率内的标记点或预设的距离阈值内的标记点进行图像配准,而不是用全部标记点进行图像配准,大大减小了计算量,提高了配准速度;另外,每个阶段的标记点集均不同,从而可以降低部分解剖学标记点可能被误检而影响配准精确度的影响,并且每个阶段的标记点均是根据预设的比率或预设的距离阈值等进行筛选确定出的能够提高配准精度的标记点,因此,本实施例提供的分阶段进行配准的方式可以提高图像配准的精度。
当上述目标图像配准模型为基于分割的图像配准模型时,计算机设备可以利用图14所示的又一个实施例提供的图像配准方法对上述标记参考图像和标记浮动图像进行图像配准。本实施例涉及的是计算机设备根据提取的分割区域和对应的基于分割的图像配准模型,对上述标记参考图像和标记浮动图像进行图像配准的实现过程。在上述实施例的基础上,可选的,上述S2013的另一种可选的实现方式可以包括:
S2030,获取标记参考图像对应的分割参考图像和浮动图像对应的分割浮动图像。
具体的,上述分割参考图像和分割浮动图像可以为根据上述预设的已训练好的神经网络模型对上述待配准的参考图像和浮动图像进行语义信息提取后对应的图像。可选的,计算机设备可以利用上述预设的已训练好的神经网络模型对待配准的参考图像和浮动图像进行任意区域的分割,以得到分割参考图像和分割浮动图像。
S2031,根据分割参考图像、分割浮动图像和基于分割的图像配准模型,对参考图像和浮动图像进行图像配准。
具体的，上述基于分割的图像配准模型可以为表面匹配算法、互信息法和灰度均方差法等配准方法对应的算法模型中的任意一个。计算机设备可以根据获取的分割参考图像、分割浮动图像和上述基于分割的图像配准模型，确定出目标分割变换矩阵，从而根据该目标分割变换矩阵，将上述待配准的浮动图像映射到参考图像的空间坐标下，完成参考图像和浮动图像的配准。
本实施例提供的图像配准方法,计算机设备可以获取标记参考图像对应的分割参考图像和浮动图像对应的分割浮动图像;并根据分割参考图像、分割浮动图像和基于分割的图像配准模型,对参考图像和浮动图像进行图像配准。本实施例中,计算机设备可以根据进行语义信息提取后获得的分割参考图像和分割浮动图像,直接利用预设的基于分割的图像配准模型对参考图像和浮动图像进行图像配准,实现方式较简单。
图15为另一个实施例提供的图像配准方法。本实施例涉及的是计算机设备根据上述实施例对参考图像和浮动图像进行配准后得到的配准结果,利用预设的图像整合模型,对该配准结果进行图像整合的过程。在上述实施例的基础上,可选的,上述方法还可以包括:
S2040,获取对参考图像和浮动图像进行图像配准后的配准结果。
本步骤中,上述配准结果为对上述参考图像和浮动图像进行图像配准后得到的配准后的参考图像和浮动图像。
S2041,根据配准结果和预设的图像整合模型,对配准结果进行图像整合。
本步骤中,上述预设的图像整合模型可以为三线性插值和B样条插值等方法中的任意一个。图像整合可以为将两幅或两幅以上来自不同成像设备或不同时刻获取的配准图像,采用某种算法,把各个图像有机地结合起来。计算机设备可以利用预设的图像整合模型,将上述配准结果中的参考图像和浮动图像进行整合,以得到参考图像空间下浮动图像与参考图像整合在一起的扭曲图像。
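图像整合的最简单形式之一，是对两幅已处于同一空间的图像按权重做线性融合，下述Python片段即为这样一个示意；灰度归一化方式与权重alpha均为假设，实际也可以换成B样条插值重采样后再做伪彩叠加等整合方式。

import numpy as np

def fuse_images(reference, warped_moving, alpha=0.5):
    # reference 与 warped_moving 已处于同一空间、同一尺寸；alpha 为融合权重
    def normalize(img):
        img = img.astype(np.float32)
        return (img - img.min()) / (img.max() - img.min() + 1e-8)
    return alpha * normalize(reference) + (1.0 - alpha) * normalize(warped_moving)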
本实施例提供的图像配准方法,计算机设备可以获取对参考图像和浮动图像进行图像配准后的配准结果;从而根据配准结果和预设的图像整合模型,对配准结果进行图像整合,以实现将参考图像和浮动图像整合到一幅图像中,从而将各个图像的优点互补性地有机地结合起来,以获得信息量更丰富的新图像,从而较好地辅助医生利用整合后的图像判断病人的情况。
图16为另一个实施例提供的图像配准方法流程示意图。本实施例涉及的是计算机设备根据上述实施例获得的目标矩阵,以及对参考图像和浮动图像进行下采样后的图像,利用梯度下降法,调整相似性度量值,以确定目标参数的实现过程。在上述实施例的基础上,可选的,上述方法还可以包括:
S2050,获取目标变换矩阵。
S2051,根据目标变换矩阵、对参考图像进行下采样操作后得到的下采样参考图像和对浮动图像进行下采样操作后得到的下采样浮动图像,确定下采样参考图像和下采样浮动图像对应的变换后的浮动图像之间的相似性度量值。
具体的,计算机设备可以对上述参考图像和浮动图像进行下采样得到下采样后的下采样参考图像和下采样浮动图像,可选的,可以对上述参考图像和浮动图像进行一次下采样操作,得到下采样参考图像和下采样浮动图像,并利用上述目标变换矩阵对下采样浮动图像进行空间变换,得到变换后的浮动图像,进而利用预设的相似性度量值的计算模型如互信息法、灰度均方差法等方法对应的算法模型,确定该变换后的浮动图像与下采样参考图像之间的相似性度量值。
S2052,对目标变换矩阵进行平移操作、旋转操作、错切操作和缩放操作中的至少一个操作,提取目标变换矩阵对应的初始参数。
具体的,若参考图像和浮动图像为三维图像,则其对应的目标变换矩阵可以为4*4的矩阵,计算机设备可以对上述目标变换矩阵进行平移操作、旋转操作、错切操作和缩放操作,将目标变换矩阵分解为平移矩阵、旋转矩阵、错切矩阵和缩放矩阵等四个4*4的矩阵,进而分别根据该四个4*4的矩阵在三维坐标系下的平移距离、旋转角度、错切角度和缩放比例等,得到12个目标变换矩阵对应的初始参数。类似的,若参考图像和浮动图像为二维图像,则计算机设备可以得到8个目标变换矩阵对应的初始参数。
S2053,根据相似性度量值、初始参数和预设的梯度下降法,确定目标参数。
具体的,计算机设备可以根据预设的梯度下降法调整上述初始参数,以使得上述相似性度量值达到最优,并将最优的相似性度量值对应的调整后的参数作为目标参数。可选的,计算机设备可以根据目标参数确定该目标参数对应的最终变换矩阵,并利用该最终变换矩阵对参考图像和浮动图像进行配准。
可选的，计算机设备也可以对上述参考图像和浮动图像进行多次下采样操作，比如进行三次下采样并分别得到对应的下采样参考图像和下采样浮动图像。进一步的，下采样参考图像可以包括第一次下采样对应的第一下采样参考图像、第二次下采样对应的第二下采样参考图像和第三次下采样对应的第三下采样参考图像，类似的，下采样浮动图像可以包括第一次下采样对应的第一下采样浮动图像、第二次下采样对应的第二下采样浮动图像和第三次下采样对应的第三下采样浮动图像。此时，可以利用如下方法确定目标参数：第一步：计算机设备可以利用目标变换矩阵对第三下采样浮动图像进行空间变换，使其映射到第三下采样参考图像对应的空间坐标系下，得到变换后的第三浮动图像，并利用预设的相似性度量值的计算模型确定变换后的第三浮动图像与第三下采样参考图像之间的第一相似性度量值；第二步：计算机设备可以利用预设的梯度下降法调整上述初始参数以使得第一相似性度量值达到最优，并根据最优的第一相似性度量值对应的参数确定新的目标变换矩阵，并利用新的目标变换矩阵对第二下采样浮动图像和第二下采样参考图像继续执行上述第一步和第二步的操作，直至对最初的参考图像和浮动图像执行完上述第一步和第二步的操作，将最终得到的最优的相似性度量值对应的参数作为目标参数，以使得计算机设备可以根据目标参数确定该目标参数对应的最终变换矩阵，并利用该最终变换矩阵对参考图像和浮动图像进行配准。
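多级下采样、由粗到细的参数优化流程可以用如下Python框架草图示意：每一级以上一级优化得到的参数为初值继续优化，直至回到原始分辨率；其中optimize_fn由调用方给出（例如把前文的数值梯度下降示意稍作封装），下采样倍率、层数以及“参数无需随分辨率换算”等均为本示例的简化假设。

import numpy as np
from scipy.ndimage import zoom

def coarse_to_fine(moving, fixed, p0, optimize_fn, levels=(4, 2, 1)):
    # levels: 各级的下采样倍率，从粗到细；optimize_fn(p, moving, fixed) 返回该级优化后的参数
    p = np.asarray(p0, dtype=float)
    for factor in levels:
        if factor > 1:
            mov_l = zoom(moving, 1.0 / factor, order=1)
            fix_l = zoom(fixed, 1.0 / factor, order=1)
        else:
            mov_l, fix_l = moving, fixed
        # 注意：平移类参数在不同分辨率间一般需按倍率换算，此处为简化从略
        p = optimize_fn(p, mov_l, fix_l)
    return p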
可选的,计算机设备可以先利用图15所示的实施例对应的图像整合方法,对上述参考图像和上述浮动图像进行图像配准后的配准结果进行图像整合,再利用本实施例提供的利用最终变换矩阵对参考图像和浮动图像进行配准得到的配准结果对图15所示实施例得到的整合的结果进行优化,也可以利用本实施例提供的图像优化方法对上述参考图像和上述浮动图像进行图像配准后的配准结果进行图像优化,再利用图15所示的实施例对应的图像整合方法对本实施例利用最终变换矩阵对参考图像和浮动图像进行配准的配准结果进行图像整合,本实施例对此并不做限定。
本实施例提供的图像配准方法,计算机设备可以获取目标变换矩阵,并根据目标变换矩阵、对参考图像进行下采样操作后得到的下采样参考图像和对浮动图像进行下采样操作后得到的下采样浮动图像,确定下采样参考图像和下采样浮动图像对应的变换后的浮动图像之间的相似性度量值;对目标变换矩阵进行平移操作、旋转操作、错切操作和缩放操作中的至少一个操作,提取目标变换矩阵对应的初始参数;进而根据相似性度量值、初始参数和预设的梯度下降法确定目标参数,由于目标参数是最优的相似性度量值对应的参数, 因此,根据该目标参数确定出的最终变换矩阵也是较优的,这样利用该最终变换矩阵,对浮动图像和参考图像进行配准的精度也更高,进一步提高了图像配准的精度。
下述通过一个简单的例子,来介绍本申请实施例图像配准方法的过程。具体可以参见如下步骤:
S2060,计算机设备获取待配准的参考图像和浮动图像。
S2061，计算机设备对参考图像和浮动图像进行语义信息的提取，得到包括语义信息的标记参考图像和标记浮动图像；语义信息包括：浮动图像的分割区域和解剖学标记点中的至少一个，以及参考图像的分割区域和解剖学标记点中的至少一个。
S2062,计算机设备根据语义信息,从预设的图像配准模型中确定标记参考图像和标记浮动图像分别对应的目标图像配准模型;预设的图像配准模型包括基于分割的图像配准模型和基于解剖学标记点的配准模型。
S2063,计算机设备判断上述目标图像配准模型是否为基于解剖学标记点的配准模型,若是,继续执行S2064,若否,执行S20619。
S2064,计算机设备获取标记参考图像的待配准参考解剖学标记点集和标记浮动图像的待配准浮动解剖学标记点集。
S2065,计算机设备根据待配准参考解剖学标记点集和待配准浮动解剖学标记点集的匹配结果,确定待配准参考解剖学标记点集和待配准浮动解剖学标记点集中标记点的名称相同的标记点交集,并选取待配准参考解剖学标记点集中的标记点交集作为初始参考解剖学标记点集,以及选取待配准浮动解剖学标记点集中标记点交集作为初始浮动解剖学标记点集。
S2066,计算机设备根据初始参考解剖学标记点集、初始浮动解剖学标记点集和基于解剖学标记点的配准模型,确定第一配准结果。
S2067,计算机设备根据第一空间距离集合和预设的比率,确定预设的比率内的第一空间距离对应的第一浮动解剖学标记点集;其中,第一空间距离集合中记录有待配准参考解剖学标记点集与第一配准结果点集中各个对应标记点的第一空间距离。
S2068,计算机设备判断第一浮动解剖学标记点集中的标记点的数目是否小于预设的数目阈值,若是,则继续执行S2069,若否,则执行S20610。
S2069,计算机设备将第一变换矩阵作为目标变换矩阵。
S20610,计算机设备获取待配准参考解剖学标记点集中与第一浮动解剖学标记点集对应的第一参考解剖学标记点集。
S20611,计算机设备根据第一参考解剖学标记点集、第一浮动解剖学标记点集和基于解剖学标记点的配准模型,确定第二变换矩阵。
S20612,计算机设备根据第二变换矩阵和待配准浮动解剖学标记点集,确定第二配准结果点集。
S20613,计算机设备根据第二空间距离集合和预设的距离阈值,确定小于预设的距离阈值的第二空间距离对应的第二浮动解剖学标记点集;第二空间距离集合中记录有待配准参考解剖学标记点集与第二配准结果点集中各个对应标记点的第二空间距离。
S20614,计算机设备判断第二浮动解剖学标记点集中的标记点的数目是否小于预设的阈值数目,若是,则继续执行S20615,若否,则执行S20616。
S20615,计算机设备将第二变换矩阵作为目标变换矩阵。
S20616,计算机设备获取待配准参考解剖学标记点集中与第二浮动解剖学标记点集对应的第二参考解剖学标记点集。
S20617,计算机设备根据第二参考解剖学标记点集、第二浮动解剖学标记点集和基于解剖学标记点的配准模型,确定第三变换矩阵,并将第三变换矩阵作为目标变换矩阵。
S20618,计算机设备根据目标变换矩阵,对参考图像和浮动图像进行图像配准;执行完S20618后,继续执行S20621。
S20619,计算机设备获取标记参考图像对应的分割参考图像和浮动图像对应的分割浮动图像。
S20620,计算机设备根据分割参考图像、分割浮动图像和基于分割的图像配准模型,对参考图像和浮动图像进行图像配准。
S20621,计算机设备获取对参考图像和浮动图像进行图像配准后的配准结果。
S20622,计算机设备根据配准结果和预设的图像整合模型,对配准结果进行图像整合。
S20623,计算机设备获取目标变换矩阵。
S20624,计算机设备根据目标变换矩阵、对参考图像进行下采样操作后得到的下采样参考图像和对浮动图像进行下采样操作后得到的下采样浮动图像,确定下采样参考图像和下采样浮动图像对应的变换后的浮动图像之间的相似性度量值。
S20625,计算机设备对目标变换矩阵进行平移操作、旋转操作、错切操作和缩放操作中的至少一个操作,提取目标变换矩阵对应的初始参数。
S20626,计算机设备根据相似性度量值、初始参数和预设的梯度下降法,确定目标参数。
本实施例提供的图像配准方法的工作原理和技术效果如上述实施例,在此不再赘述。
应该理解的是，虽然图12至图16的流程图中的各个步骤按照箭头的指示依次显示，但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明，这些步骤的执行并没有严格的顺序限制，这些步骤可以以其它的顺序执行。而且，图12至图16中的至少一部分步骤可以包括多个子步骤或者多个阶段，这些子步骤或者阶段并不必然是在同一时刻执行完成，而是可以在不同的时刻执行，这些子步骤或者阶段的执行顺序也不必然是依次进行，而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
图17为一个实施例提供的图像配准装置结构示意图。如图17所示,该装置可以包括第一获取模块2702、第一提取模块2704、第一确定模块2706和配准模块2708。
具体的,第一获取模块2702,用于获取待配准的参考图像和浮动图像;
第一提取模块2704,用于对参考图像和浮动图像进行语义信息的提取,得到包括语义信息的标记参考图像和标记浮动图像;
第一确定模块2706,用于根据语义信息,从预设的图像配准模型中确定标记参考图像和标记浮动图像分别对应的目标图像配准模型;
配准模块2708,用于根据语义信息和目标图像配准模型,对参考图像和浮动图像进行图像配准。
可选的，语义信息包括：浮动图像的分割区域和解剖学标记点中的至少一个，以及参考图像的分割区域和解剖学标记点中的至少一个；预设的图像配准模型包括基于分割的图像配准模型和基于解剖学标记点的配准模型。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
在另一个实施例提供的图像配准装置中,在上述图17所示实施例的基础上,当目标图像配准模型为基于解剖学标记点的配准模型时,可选的,上述配准模块2708可以包括第一获取单元和第一配准单元。
具体的,第一获取单元,用于获取标记参考图像的待配准参考解剖学标记点集和标记浮动图像的待配准浮动解剖学标记点集;
第一配准单元,用于根据待配准参考解剖学标记点集、待配准浮动解剖学标记点集和基于解剖学标记点的配准模型,对参考图像和浮动图像进行图像配准。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
在又一个实施例提供的图像配准装置中,在上述实施例的基础上,可选的,上述第一配准单元可以包括第一确定子单元、第二确定子单元和配准子单元。
具体的，第一确定子单元，用于根据待配准参考解剖学标记点集和待配准浮动解剖学标记点集中各个标记点的名称的匹配结果，确定标记点交集；
第二确定子单元,用于根据标记点交集,从待配准参考解剖学标记点集和待配准浮动解剖学标记点集中分别确定初始参考解剖学标记点集和初始浮动解剖学标记点集;
配准子单元,用于根据初始参考解剖学标记点集、初始浮动解剖学标记点集和基于解剖学标记点的配准模型,对参考图像和浮动图像进行图像配准。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
在又一个实施例提供的图像配准装置结构中,在上述实施例的基础上,可选的,上述配准模块2708还可以包括第二获取单元和第二配准单元。
第二获取单元,用于获取标记参考图像对应的分割参考图像和浮动图像对应的分割浮动图像;
第二配准单元,用于根据分割参考图像、分割浮动图像和基于分割的图像配准模型,对参考图像和浮动图像进行图像配准。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
图18为又一个实施例提供的图像配准装置结构示意图。在上述实施例的基础上,可选的,上述装置还可以包括第二获取模块2710和整合模块2712。
第二获取模块2710,用于获取对参考图像和浮动图像进行图像配准后的配准结果;
整合模块2712,用于根据配准结果和预设的图像整合模型,对配准结果进行图像整合。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
图19为又一个实施例提供的图像配准装置结构示意图。在上述实施例的基础上,可选的,上述装置还可以包括第三获取模块2714、第二确定模块2716、第二提取模块2718和第三确定模块2720。
第三获取模块2714,用于获取目标变换矩阵。
第二确定模块2716,用于根据目标变换矩阵、对参考图像进行下采样操作后得到的下 采样参考图像和对浮动图像进行下采样操作后得到的下采样浮动图像,确定下采样参考图像和下采样浮动图像对应的变换后的浮动图像之间的相似性度量值;
第二提取模块2718,用于对目标变换矩阵进行平移操作、旋转操作、错切操作和缩放操作中的至少一个操作,提取目标变换矩阵对应的初始参数;
第三确定模块2720,用于根据相似性度量值、初始参数和预设的梯度下降法,确定目标参数。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现以下步骤:
获取待配准的参考图像和浮动图像;
对参考图像和浮动图像进行语义信息的提取,得到包括语义信息的标记参考图像和标记浮动图像;
根据语义信息,从预设的图像配准模型中确定标记参考图像和标记浮动图像分别对应的目标图像配准模型;
根据语义信息和目标图像配准模型,对参考图像和浮动图像进行图像配准。
上述实施例提供的计算机设备,其实现原理和技术效果与上述方法实施例类似,在此不再赘述。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现以下步骤:
获取待配准的参考图像和浮动图像;
对参考图像和浮动图像进行语义信息的提取,得到包括语义信息的标记参考图像和标记浮动图像;
根据语义信息,从预设的图像配准模型中确定标记参考图像和标记浮动图像分别对应的目标图像配准模型;
根据语义信息和目标图像配准模型,对参考图像和浮动图像进行图像配准。
上述实施例提供的计算机可读存储介质,其实现原理和技术效果与上述方法实施例类似,在此不再赘述。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
以上实施例的各技术特征可以进行任意的组合，为使描述简洁，未对上述实施例中的各个技术特征所有可能的组合都进行描述，然而，只要这些技术特征的组合不存在矛盾，都应当认为是本说明书记载的范围。
以上实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。
不同的医学图像能够反映出不同的人体解剖结构信息,医学临床上通常需要对不同的医学图像进行准确有效的配准,将不同的医学图像信息进行有效的融合,使得在临床疾病诊断或治疗上能够充分考虑不同的医学图像中互补的解剖结构信息。不同的医学图像配准对临床诊疗的精准化和智能化发展具有重要意义。根据不同的临床应用,需要实现图像配准的图像模态包含但不局限于计算机断层扫描(Computed Tomography,CT)图像,磁共振(Magnetic Resonance Imaging,MRI)图像,正电子发射计算机断层扫描(Positron Emission Tomography,PET)图像,超声(Ultrasound)图像,功能磁共振(functional Magnetic Resonance Imaging,fMRI)图像等。
现有的图像配准技术采用基于深度学习的无监督学习模型,在无监督学习模型中引入空间变换网络,将浮动图像通过模型输出的变形场进行空间变换得到配准后的图像,通过评估配准后的图像与参考图像间的非相似度定义损失函数,实现配准模型的训练,根据训练模型估计出变形场,实现相同模态图像的配准,其中,参考图像与配准后图像的非相似度是根据参考图像与配准后图像的相似度得到的。
但是，现有的图像配准技术无法解决非线性跨模态图像的配准问题。基于此，为了解决现有的图像配准技术无法进行非线性跨模态图像配准的问题，本申请另一个实施例中提出了一种图像配准方法、装置、计算机设备和可读存储介质。
如图20所示,本申请实施例提供了一种图像配准的方法,方法包括:
S3010,获取待配准的浮动图像和参考图像;浮动图像和参考图像为两个不同模态的图像。
其中,不同模态的图像是指利用不同成像原理、设备得到的图像,例如,利用计算机断层扫描(Computed Tomography,CT),核磁共振(Magnetic Resonance Imaging,MRI),正电子发射计算机断层扫描(Positron Emission Tomography,PET),超声(Ultrasound),功能磁共振(functional Magnetic Resonance Imaging,fMRI)等得到的任意两个模态的图像均是不同模态的图像,上述浮动图像指的是待配准的图像,参考图像指的是浮动图像要配准过去的图像空间。在本实施例中,可选的,计算机设备可以从PACS(Picture Archiving and Communication Systems,影像归档和通信系统)服务器中获取不同模态的待配准的浮动图像和参考图像,也可以直接从不同的医学影像设备中获取不同模态的待配准的浮动图像和参考图像。
S3011,根据浮动图像、参考图像和预先训练的配准模型,获取配准结果;配准模型用于对不同模态的图像进行配准。
具体的,在获取了上述待配准的浮动图像和参考图像的基础上,计算机设备将浮动图像、参考图像输入预先训练的用于对不同模态的图像进行配准的配准模型中,得到配准结果。可选的,配准结果可以是配准后的浮动图像,也可以是浮动图像和参考图像之间的配 准参数,之后计算机设备根据配准参数对浮动图像进行变换,得到配准后浮动图像。例如,对CT图像和MRI图像进行配准时,将CT图像作为浮动图像,将MRI图像作为参考图像,计算机设备将CT图像、MRI图像输入预先训练的配准模型中,得到配准结果,可选的,计算机设备可以直接获取配准后的CT图像,也可以获取CT图像和MRI图像间的配准参数,之后根据配准参数对CT图像进行变换,得到配准后的CT图像。
在本实施例中,计算机设备可以根据预先训练的用于对不同模态的图像进行配准的配准模型,对两个不同模态的浮动图像和参考图像进行配准,解决了现有图像配准技术中无法准确有效的对跨模态图像进行配准的问题;另外,利用预先训练的配准模型对两个不同模态的图像进行配准,不用每次对图像配准时都进行训练,提高了图像配准的配准效率,同时根据配准模型对图像配准也提高了配准图像的配准准确度。
在上述实施例的基础上,作为一种可选的实施方式,方法还包括:采用预设的无监督方法或弱监督方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到配准模型。
其中,无监督方法是指利用无标注的医学图像作为训练样本图像,根据训练样本图像学习图像的分布或图像与图像间的关系;弱监督方法是指利用一部分已标注的医学图像作为训练样本图像,根据训练样本图像学习图像的分布或图像与图像间的关系。具体的,计算机设备可以采用预设的无监督方法,利用无标注的医学图像作为训练样本,对预设的前向配准网络和预设的后向配准网络进行迭代训练,学习图像的分布或图像与图像间的关系,得到用于对不同模态的图像进行配准的配准模型;或者,计算机设备可以采用预设的弱监督方法,利用一部分已标注的医学图像和一部分没有标注的医学图像作为训练样本,对预设的前向配准网络和预设的后向配准网络进行迭代训练,学习图像的分布或图像与图像间的关系,用无标注的图像对模型的准确度与泛化能力进行进一步提升,得到用于对不同模态的图像进行配准的配准模型。
在本实施例中,计算机设备采用预设的无监督方法或弱监督方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练的训练过程十分有效,当医学图像没有标注的时候,也可以有效地完成模型的训练,大大提高了得到配准模型的效率,进而提高了对浮动图像进行配准的配准效率。
在上述实施例的基础上,作为一种可选的实施方式,采用预设的无监督方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到配准模型,包括:采用预设的第一训练模式和第二训练模式,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到配准模型;其中,第一训练模式为先前向配准网络再后向配准网络的训练方式,第二训练模式为先后向配准网络再前向配准网络的训练方式。
具体的,计算机设备采用预设的先训练前向配准网络再训练后向配准网络的第一训练模式和预设的先训练后向配准网络再训练前向配准网络的第二训练模式,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到配准模型。其中,前向配准网络、后向配准网络为深度学习中的卷积神经网络(Convolutional Neural Networks,CNN)。
在本实施例中,计算机设备采用预设的第一训练模式和第二训练模式,对预设的前向配准网络和后向配准网络进行迭代训练,通过迭代训练能够提高得到的用于对不同模态图像进行配准的配准模型的准确度,进而提高了根据配准模型对待配准图像进行配准的配准准确度。
图21为另一个实施例提供的图像配准方法的流程示意图。图22为一个实施例提供的第一训练模式的训练过程示意图。本实施例涉及的是计算机设备采用预设的第一训练模式,对预设的前向配准网络和预设的后向配准网络进行训练的具体实现过程。如图21所示,在上述实施例的基础上,作为一种可选的实施方式,采用预设的第一训练模式,对预设的前向配准网络和预设的后向配准网络进行训练,包括:
S3020,将第一浮动图像和第一参考图像输入前向配准网络,得到第一配准浮动图像;第一参考图像的模态为模态一,第一浮动图像的模态为模态二;第一配准浮动图像的模态与第一浮动图像的模态相同。
具体的,如图22所示,计算机设备将模态一的第一参考图像和模态二的第一浮动图像输入前向配准网络,得到与第一浮动图像模态相同的第一配准浮动图像。可选的,第一参考图像和第一浮动图像可以从PACS服务器中获取,也可以直接从不同的医学影像设备中获取。例如,将MRI图像与CT图像进行配准时,将CT图像作为第一参考图像,MRI图像作为第一浮动图像输入前向配准网络,得到第一配准浮动图像,也就是配准后的MRI图像。
S3021,将第一配准浮动图像确定为后向配准网络的第二参考图像。
具体的,如图22所示,计算机设备将上述第一配准浮动图像确定为后向配准网络的第二参考图像,也就是,第二参考图像的模态为模态二。对应到上述示例中,第一配准浮动图像为配准后的MRI图像。
S3022,将第二参考图像和第二浮动图像输入后向配准网络,得到第二配准浮动图像;第二浮动图像的模态为模态一;第二配准浮动图像的模态与第二浮动图像的模态相同。
具体的，如图22所示，计算机设备先获取一幅模态为模态一的图像作为第二浮动图像，将第一配准浮动图像作为第二参考图像，再将第二参考图像和第二浮动图像输入后向配准网络，得到与第二浮动图像模态相同的第二配准浮动图像。可选的，计算机设备可以从PACS服务器中获取第二浮动图像，也可以直接从与模态一为相同模态的医学影像设备中获取第二浮动图像。继续以上述例子为例，也就是将上述配准后的MRI图像作为第二参考图像，再获取一幅CT图像作为第二浮动图像，将MRI图像和CT图像输入后向配准网络，得到配准后的CT图像。
S3023,根据第二配准浮动图像和第一参考图像,获取第二配准浮动图像与第一参考图像间的第一相似度,根据第一相似度对前向配准网络、后向配准网络进行训练。
具体的,计算机设备根据第二配准浮动图像和第一参考图像,获取第二配准浮动图像和第一参考图像间的第一相似度,根据第一相似度对前向配准网络和后向配准网络进行训练。其中,第一相似度为第二配准浮动图像和第一参考图像间的相似度测度。可选的,第一相似度可以是第二配准浮动图像与第一参考图像间的互相关、均方差、互信息或相关性系数等,也可是一个判别器网络,用于自动判别图像间的相似度。其中,判别器网络可以是一个简单的卷积神经网络。可选的,计算机设备可以根据第一相似度的值调整前向配准网络和后向配准网络中的参数值,对前向配准网络和后向配准网络进行训练。
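作为相似度测度的计算示例，下面给出归一化互相关与基于联合直方图的互信息这两种常用度量的简单Python实现，仅作示意；直方图箱数等为假设值，实际训练中也可以改用判别器网络来自动判别图像间的相似度。

import numpy as np

def normalized_cross_correlation(a, b):
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def mutual_information(a, b, bins=32):
    hist_2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()                    # 联合分布
    px = pxy.sum(axis=1, keepdims=True)              # 边缘分布
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))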
在本实施例中,计算机设备将第一浮动图像和第一参考图像输入前向配准网络,得到与第一浮动图像模态相同的第一配准浮动图像,再将第一配准浮动图像作为后向配准网络的第二参考图像,将模态为模态一的第二浮动图像和第二参考图像输入后向配准网络,得到第二配准浮动图像,由于第二配准浮动图像与第一参考图像的模态相同,通过获取第二 配准浮动图像与第一参考图像间的第一相似度,根据第一相似度训练前向配准网络和后向配准网络实现了不同模态图像的配准,解决了跨模态图像的配准问题。
在上述实施例的基础上,作为一种可选的实施方式,根据第一相似度对前向配准网络、后向配准网络进行训练,包括:将第一相似度确定为第二配准浮动图像的第一准确度,根据第一准确度指导前向配准网络和后向配准网络的训练。
具体的，计算机设备将上述获取的第一相似度确定为第二配准浮动图像的第一准确度，根据第一准确度对前向配准网络和后向配准网络进行训练。可选的，第一相似度的值越大配准准确度越高，第一相似度的值越小配准准确度越低。
在本实施例中,计算机设备将第一相似度确定为第二配准浮动图像的第一准确度,根据第一准确度指导前向配准网络和后向配准网络的训练,由于第一准确度是根据第一相似度确定的,提高了确定的第一准确度的准确性,进而提高了根据第一准确度训练得到的前向配准网络和后向配准网络的准确性。
图23为另一个实施例提供的图像配准方法的流程示意图。图24为一个实施例提供的第二训练模式的训练过程示意图。本实施例涉及的是计算机设备采用预设的第二训练模式,对预设的前向配准网络和预设的后向配准网络进行训练的具体实现过程。如图23所示,在上述实施例的基础上,作为一种可选的实施方式,采用预设的第二训练模式,对预设的前向配准网络和预设的后向配准网络进行训练,包括:
S3030,将第一浮动图像确定为后向配准网络的第三参考图像、将第一参考图像确定为后向配准网络的第三浮动图像,将第三浮动图像和第三参考图像输入后向配准网络,得到第三配准浮动图像;第三参考图像的模态为模态二,第三浮动图像的模态为模态一;第三配准浮动图像的模态与第三浮动图像的模态相同。
具体的,如图24所示,计算机设备将上述第一浮动图像确定为后向配准网络的第三参考图像、将上述第一参考图像确定为后向配准网络的第三浮动图像,也就是第三参考图像的模态为模态二、第三浮动图像的模态为模态一,之后计算机设备将第三浮动图像和第三参考图像输入后向配准网络,得到与第三浮动图像模态相同的第三配准浮动图像,即第三配准浮动图像的模态为模态一。对应到上述例子中,也就是将CT图像确定为第三浮动图像,将MRI图像确定为第三参考图像,将CT图像和MRI图像输入后向配准网络,得到第三配准浮动图像,也就是配准后的CT图像。
S3031,将第三配准浮动图像确定为前向配准网络的第四参考图像。
具体的，如图24所示，计算机设备将上述第三配准浮动图像确定为前向配准网络的第四参考图像，也就是，第四参考图像的模态为模态一。对应到上述示例中，第四参考图像为配准后的CT图像。
S3032,将第四参考图像和第四浮动图像输入前向配准网络,得到第四配准浮动图像;第四浮动图像的模态为模态二;第四配准浮动图像的模态与第四浮动图像的模态相同。
具体的,计算机设备先获取一幅模态为模态二的图像作为第四浮动图像,将第三配准浮动图像作为第四参考图像,再将第四浮动图像和第四参考图像输入前向配准网络,得到与第四浮动图像模态相同的第四配准浮动图像。可选的,计算机设备可以从PACS服务器中获取第四浮动图像,也可以直接从与模态二为相同模态的医学影像设备中获取第四浮动图像。继续以上述例子为例,也就是将上述配准后的CT图像作为第四参考图像,再获取一幅MRI图像作为第四浮动图像,将MRI图像和CT图像输入前向配准网络,得到配准 后的MRI图像。
S3033,根据第四配准浮动图像和第三参考图像,获取第四配准浮动图像与第三参考图像间的第二相似度,根据第二相似度对后向配准网络、前向配准网络进行训练。
具体的,计算机设备根据第四配准浮动图像和第三参考图像,获取第四配准浮动图像和第三参考图像间的第二相似度,根据第二相似度对后向配准网络和前向配准网络进行训练。其中,第二相似度为第四配准浮动图像和第三参考图像间的相似度测度。可选的,第二相似度可以是第四配准浮动图像与第三参考图像间的互相关、均方差、互信息或相关性系数,也可是一个判别器网络,用于自动判别图像间的相似度。其中,判别器网络可以是一个简单的卷积神经网络。可选的,计算机设备可以根据第二相似度的值调整后向配准网络和前向配准网络中的参数值,对后向配准网络和前向配准网络进行训练。
在本实施例中,计算机设备将第一浮动图像确定为后向配准网络的第三参考图像、将第一参考图像确定为后向配准网络的第三浮动图像,将第三浮动图像和第三参考图像输入后向配准网络,得到与第三浮动图像模态相同的第三配准浮动图像,再将第三配准浮动图像作为前向配准网络的第四参考图像,将模态为模态二的第四浮动图像和第四参考图像输入前向配准网络,得到第四配准浮动图像,由于第四配准浮动图像与第三参考图像的模态相同,通过获取第四配准浮动图像与第三参考图像间的第二相似度,根据第二相似度训练后向配准网络和前向配准网络实现了不同模态图像的配准,解决了跨模态图像的配准问题。
在上述实施例的基础上,作为一种可选的实施方式,根据第二相似度对后向配准网络、前向配准网络进行训练,包括:将第二相似度确定为第四配准浮动图像的第二准确度,根据第二准确度指导后向配准网络和前向配准网络的训练。
具体的，计算机设备将上述获取的第二相似度确定为第四配准浮动图像的第二准确度，根据第二准确度对后向配准网络和前向配准网络进行训练。可选的，第二相似度的值越大第四配准浮动图像的第二准确度越高，第二相似度的值越小第四配准浮动图像的第二准确度越低。
在本实施例中,计算机设备将第二相似度确定为第四配准浮动图像的第二准确度,根据第二准确度指导后向配准网络和前向配准网络的训练,由于第二准确度是根据第二相似度确定的,大大提高了确定的第二准确度的准确性,进而提高了根据第二准确度训练得到的后向配准网络和前向配准网络的准确性。
图25为另一个实施例提供的图像配准方法的流程示意图。本实施例涉及的是计算机设备采用预设的第一训练模式和第二训练模式,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到配准模型的具体实现过程。如图25所示,在上述实施例的基础上,作为一种可选的实施方式,采用预设的第一训练模式和第二训练模式,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到配准模型,还包括:
S3040，根据第一相似度获取第一训练模式的第一损失函数的值，根据第二相似度获取第二训练模式的第二损失函数的值。
其中,损失函数是图像配准模型训练过程中的目标函数,图像配准模型训练过程中的损失函数是通过图像间的非相似度定义的。具体的,计算机设备根据第一相似度获取第一训练模式的第一损失函数,根据第二相似度获取第二训练模式的第二损失函数。例如,第一相似度为第二配准浮动图像与第一参考图像间的互相关时,第一损失函数的值等于1-互 相关的值;第二相似度为第四配准浮动图像与第三参考图像间的均方差时,第二损失函数的值等于1-均方差的值。
S3041，根据第一损失函数的值和第二损失函数的值，确定配准模型。
具体的,计算机设备可以根据上述获取的第一损失函数的值和第二损失函数的值,确定第一损失函数和第二损失函数对应的前向配准网络和后向配准网络,将对应的前向配准网络和后向配准网络确定为配准模型。可选的,计算机设备可以将第一损失函数的值和第二损失函数的值达到稳定值时对应的前向配准网络和后向配准网络,确定为配准模型。
在本实施例中,计算机设备根据第一相似度获取第一训练模式的第一损失函数的值,根据第二相似度获取第二损失函数的值,由于第一损失函数的值和第二损失函数的值是根据相同模态图像间的相似度获取的,获取的第一损失函数的值和第二损失函数的值比较准确,大大提高了根据第一损失函数的值和第二损失函数的值确定的配准模型的准确度。
应该理解的是,虽然图20-25的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图20-25中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
图26为一个实施例提供的图像配准装置结构示意图。如图26所示,该装置可以包括:第一获取模块310和第二获取模块311。
具体的,第一获取模块310,用于获取待配准的浮动图像和参考图像;浮动图像和参考图像为两个不同模态的图像;
第二获取模块311,用于根据浮动图像、参考图像和预先训练的配准模型,获取配准结果;配准模型用于对不同模态的图像进行配准。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
图27为一个实施例提供的图像配准装置结构示意图。在上述实施例的基础上,可选的,如图27所示,装置还包括:训练模块312。
具体的,训练模块312,用于采用预设的无监督方法或弱监督方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到配准模型。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
可选的,上述训练模块312具体用于采用预设的第一训练模式和第二训练模式,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到配准模型;
其中,第一训练模式为先前向配准网络再后向配准网络的训练方式,第二训练模式为先后向配准网络再前向配准网络的训练方式。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
图28为一个实施例提供的图像配准装置结构示意图。在上述实施例的基础上,可选的,如图28所示,训练模块312包括第一训练单元3121,用于将第一浮动图像和第一参 考图像输入前向配准网络,得到第一配准浮动图像;第一参考图像的模态为模态一,第一浮动图像的模态为模态二;第一配准浮动图像的模态与第一浮动图像的模态相同;将第一配准浮动图像确定为后向配准网络的第二参考图像;将第二参考图像和第二浮动图像输入后向配准网络,得到第二配准浮动图像;第二浮动图像的模态为模态一;第二配准浮动图像的模态与第二浮动图像的模态相同;根据第二配准浮动图像和第一参考图像,获取第二配准浮动图像与第一参考图像间的第一相似度,根据第一相似度对前向配准网络、后向配准网络进行训练。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
在上述实施例的基础上，可选的，上述第一训练单元3121根据第一相似度对前向配准网络、后向配准网络进行训练，包括：第一训练单元3121将第一相似度确定为第二配准浮动图像的第一准确度，根据第一准确度指导前向配准网络和后向配准网络的训练。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
图29为一个实施例提供的图像配准装置结构示意图。在上述实施例的基础上,可选的,如图29所示,训练模块312还包括第二训练单元3122,用于将第一浮动图像确定为后向配准网络的第三参考图像、将第一参考图像确定为后向配准网络的第三浮动图像,将第三浮动图像和第三参考图像输入后向配准网络,得到第三配准浮动图像;第三参考图像的模态为模态二,第三浮动图像的模态为模态一;第三配准浮动图像的模态与第三浮动图像的模态相同;将第三配准浮动图像确定为前向配准网络的第四参考图像;将第四参考图像和第四浮动图像输入前向配准网络,得到第四配准浮动图像;第四浮动图像的模态为模态二;第四配准浮动图像的模态与第四浮动图像的模态相同;根据第四配准浮动图像和第三参考图像,获取第四配准浮动图像与第三参考图像间的第二相似度,根据第二相似度对后向配准网络、前向配准网络进行训练。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
在上述实施例的基础上,可选的,上述第二训练单元3122根据第二相似度对后向配准网络、前向配准网络进行训练,包括:第二训练单元3122将第二相似度确定为第四配准浮动图像的第二准确度,根据第二准确度指导后向配准网络和前向配准网络的训练。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
图30为一个实施例提供的图像配准装置结构示意图。在上述实施例的基础上,可选的,如图30所示,装置还包括:第三获取模块313和确定模块314。
具体的,第三获取模块313,用于根据第一相似度获取第一训练模式的第一损失函数的值,根据第二相似度获取第二训练模式的第二损失函数的值;
确定模块314,用于根据第一损失函数的值和第二损失函数的值,确定配准模型。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
图31为一个实施例提供的图像配准装置结构示意图。在上述实施例的基础上,可选的,如图31所示,上述确定模块314可以包括确定单元3141。
具体的,确定单元3141,用于将第一损失函数的值和第二损失函数的值达到稳定值时对应的前向配准网络和后向配准网络,确定为配准模型。
本实施例提供的图像配准装置,可以执行上述方法实施例,其实现原理和技术效果类似,在此不再赘述。
关于图像配准装置的具体限定可以参见上文中对于图像配准方法的限定,在此不再赘述。上述图像配准装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现以下步骤:
获取待配准的浮动图像和参考图像;浮动图像和参考图像为两个不同模态的图像;
根据浮动图像、参考图像和预先训练的配准模型,获取配准结果;配准模型用于对不同模态的图像进行配准。
上述实施例提供的计算机设备,其实现原理和技术效果与上述方法实施例类似,在此不再赘述。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现以下步骤:
获取待配准的浮动图像和参考图像;浮动图像和参考图像为两个不同模态的图像;
根据浮动图像、参考图像和预先训练的配准模型,获取配准结果;配准模型用于对不同模态的图像进行配准。
上述实施例提供的计算机可读存储介质,其实现原理和技术效果与上述方法实施例类似,在此不再赘述。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上实施例仅表达了本发明的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本发明构思的前提下,还可以做出若干变形和改进,这些都属于本发明的保护范围。因此,本发明专利的保护范围应以所附权利要求为准。
最后,还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括要素的过程、方法、物品或者设备中还存在另外的相同要素。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本申请。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本申请的精神或范围的情况下,在其它实施例中实现。因此,本申请将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (23)

  1. 一种图像配准方法,其特征在于,所述方法包括:
    获取待配准的浮动图像和参考图像;所述浮动图像和所述参考图像为两个不同模态的图像;
    根据所述浮动图像、所述参考图像和目标配准方法,获取配准结果;所述目标配准方法用于对不同模态的图像进行配准。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述浮动图像、所述参考图像和目标配准方法,获取配准结果,包括:
    对所述浮动图像和所述参考图像进行语义信息的提取,得到包括所述语义信息的标记浮动图像和标记参考图像;
    根据所述语义信息,从预设的图像配准算法中确定所述标记浮动图像和所述标记参考图像分别对应的目标图像配准算法;
    根据所述语义信息和所述目标图像配准算法,对所述浮动图像和所述参考图像进行图像配准,得到初始配准结果;所述初始配准结果包括所述浮动图像和所述参考图像间的变换矩阵;
    根据所述变换矩阵、所述参考图像和所述浮动图像,得到变换后的浮动图像;
    根据所述变换后的浮动图像、所述参考图像和目标配准模型,对所述变换后的浮动图像进行配准,得到所述配准结果。
  3. 根据权利要求2所述的方法,其特征在于,所述语义信息包括:所述浮动图像的分割区域和解剖学标记中的至少一个,以及所述参考图像的分割区域和解剖学标记中的至少一个;所述预设的图像配准算法包括基于分割的图像配准算法和基于解剖学标记的配准算法;所述解剖学标记包括解剖学标记点、解剖学标记线和解剖学标记面。
  4. 根据权利要求3所述的方法,其特征在于,当所述目标图像配准算法为所述基于解剖学标记的配准算法时,所述根据所述语义信息和所述目标图像配准算法,对所述浮动图像和所述参考图像进行图像配准,得到初始配准结果,包括:
    获取所述标记浮动图像的待配准浮动解剖学标记集和所述标记参考图像的待配准参考解剖学标记集;
    根据所述待配准浮动解剖学标记集、所述待配准参考解剖学标记集和所述基于解剖学标记的配准算法,对所述浮动图像和所述参考图像进行图像配准,得到所述初始配准结果。
  5. 根据权利要求4所述的方法,其特征在于,所述根据所述待配准浮动解剖学标记集、所述待配准参考解剖学标记集和所述基于解剖学标记的配准算法,对所述浮动图像和所述参考图像进行图像配准,得到所述初始配准结果,包括:
    根据所述待配准浮动解剖学标记集和所述待配准参考解剖学标记集中各个标记的名称的匹配结果,确定标记交集;
    根据所述标记交集,从所述待配准浮动解剖学标记集和所述待配准参考解剖学标记集中分别确定初始浮动解剖学标记集和初始参考解剖学标记集;
    根据所述初始浮动解剖学标记集、所述初始参考解剖学标记集和所述基于解剖学标记的配准算法,对所述浮动图像和所述参考图像进行图像配准,得到所述初始配准结果。
  6. 根据权利要求3所述的方法，其特征在于，当所述目标图像配准算法为所述基于分割的图像配准算法时，所述根据所述语义信息和所述目标图像配准算法，对所述浮动图像和所述参考图像进行图像配准，得到所述初始配准结果，包括：
    获取所述浮动图像对应的分割浮动图像和所述参考图像对应的分割参考图像;
    根据所述分割浮动图像、所述分割参考图像和所述基于分割的图像配准算法,对所述浮动图像和所述参考图像进行图像配准,得到所述初始配准结果。
  7. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    获取对所述浮动图像和所述参考图像进行图像配准后的所述初始配准结果;
    根据预设的配准结果整合方法,对不同解剖学标记得到的初始配准结果和/或不同分割区域得到的初始配准结果进行整合。
  8. 根据权利要求7所述的方法,其特征在于,所述根据所述变换矩阵、所述参考图像和所述浮动图像,得到变换后的浮动图像,包括:
    根据所述变换矩阵、对所述参考图像进行下采样操作后得到的下采样参考图像和对所述浮动图像进行下采样操作后得到的下采样浮动图像,确定所述下采样参考图像和所述下采样浮动图像对应的变换后的浮动图像之间的相似性度量值;
    对所述变换矩阵进行平移操作、旋转操作、错切操作和缩放操作中的至少一个操作,提取所述变换矩阵对应的初始参数;
    根据所述相似性度量值、所述初始参数和预设的梯度下降法,确定目标变换矩阵;
    根据所述目标变换矩阵对所述浮动图像进行变换,得到所述变换后的浮动图像。
  9. 根据权利要求2所述的方法,其特征在于,所述目标配准模型包括前向配准网络和后向配准网络;所述目标配准模型的训练过程包括:
    采用预设的无监督方法或弱监督的方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到所述目标配准模型。
  10. 根据权利要求9所述的方法,其特征在于,所述采用预设的无监督方法,对预设的前向配准网络和预设的后向配准网络进行迭代训练,得到所述目标配准模型,包括:
    采用预设的第一训练模式和第二训练模式,对所述预设的前向配准网络和所述预设的后向配准网络进行迭代训练,得到所述目标配准模型;
    其中,所述第一训练模式为先前向配准网络再后向配准网络的训练方式,所述第二训练模式为先后向配准网络再前向配准网络的训练方式。
  11. 根据权利要求10所述的方法,其特征在于,所述采用预设的第一训练模式,对预设的前向配准网络和预设的后向配准网络进行训练,包括:
    将第一浮动图像和第一参考图像输入所述预设的前向配准网络,得到第一配准浮动图像;所述第一参考图像的模态为模态一,所述第一浮动图像的模态为模态二;所述第一配准浮动图像的模态与所述第一浮动图像的模态相同;
    将所述第一配准浮动图像确定为所述预设的后向配准网络的第二参考图像;
    将所述第二参考图像和第二浮动图像输入所述预设的后向配准网络,得到第二配准浮动图像;所述第二浮动图像的模态为模态一;所述第二配准浮动图像的模态与所述第二浮动图像的模态相同;
    根据所述第二配准浮动图像和所述第一参考图像,获取所述第二配准浮动图像与所述第一参考图像间的第一相似度,根据所述第一相似度对所述预设的前向配准网络、所述预设的后向配准网络进行训练。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述第一相似度对所述预设的前向配准网络、所述预设的后向配准网络进行训练,包括:
    将所述第一相似度确定为所述第二配准浮动图像的第一准确度,根据所述第一准确度指导所述预设的前向配准网络和所述预设的后向配准网络的训练。
  13. 根据权利要求10或11所述的方法,其特征在于,所述采用预设的第二训练模式,对预设的前向配准网络和预设的后向配准网络进行训练,包括:
    将所述第一浮动图像确定为所述预设的后向配准网络的第三参考图像、将所述第一参考图像确定为所述预设的后向配准网络的第三浮动图像,将所述第三浮动图像和所述第三参考图像输入所述预设的后向配准网络,得到第三配准浮动图像;所述第三参考图像的模态为模态二,所述第三浮动图像的模态为模态一;所述第三配准浮动图像的模态与所述第三浮动图像的模态相同;
    将所述第三配准浮动图像确定为所述预设的前向配准网络的第四参考图像;
    将所述第四参考图像和第四浮动图像输入所述预设的前向配准网络,得到第四配准浮动图像;所述第四浮动图像的模态为模态二;所述第四配准浮动图像的模态与所述第四浮动图像的模态相同;
    根据所述第四配准浮动图像和所述第三参考图像,获取所述第四配准浮动图像与所述第三参考图像间的第二相似度,根据所述第二相似度对所述预设的后向配准网络、所述预设的前向配准网络进行训练。
  14. 根据权利要求13所述的方法,其特征在于,所述根据所述第二相似度对所述预设的后向配准网络、所述预设的前向配准网络进行训练,包括:
    将所述第二相似度确定为所述第四配准浮动图像的第二准确度,根据所述第二准确度指导所述预设的后向配准网络和所述预设的前向配准网络的训练。
  15. 根据权利要求10-14任一项所述的方法,其特征在于,所述采用预设的第一训练模式和第二训练模式,对所述预设的前向配准网络和所述预设的后向配准网络进行迭代训练,得到所述目标配准模型,还包括:
    根据所述第一相似度获取所述第一训练模式的第一损失函数的值,根据所述第二相似度获取所述第二训练模式的第二损失函数的值;
    根据所述第一损失函数的值和所述第二损失函数的值,确定所述目标配准模型。
  16. 根据权利要求15所述的方法,其特征在于,所述根据所述第一损失函数的值和所述第二损失函数的值,确定所述目标配准模型,包括:
    将所述第一损失函数的值和所述第二损失函数的值达到稳定值时对应的前向配准网络和后向配准网络,确定为所述目标配准模型。
  17. 一种图像配准装置,其特征在于,所述装置包括:
    获取模块,用于获取待配准的浮动图像和参考图像;所述浮动图像和所述参考图像为两个不同模态的图像;
    配准模块,用于根据所述浮动图像、所述参考图像和目标配准方法,获取配准结果;所述目标配准方法用于对不同模态的图像进行配准。
  18. 一种计算机设备,包括存储器、处理器,所述存储器上存储有可在处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现权利要求1至16中任一项所述方法的步骤。
  19. 一种可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至16中任一项所述方法的步骤。
  20. 一种图像配准方法,其特征在于,所述方法包括:
    获取待配准的浮动图像和参考图像;所述浮动图像和所述参考图像为两个不同模态的图像;
    根据所述浮动图像、所述参考图像和预先训练的配准模型,获取配准结果;所述配准模型用于对不同模态的图像进行配准。
  21. 一种图像配准装置,其特征在于,所述装置包括:
    第一获取模块,用于获取待配准的浮动图像和参考图像;所述浮动图像和所述参考图像为两个不同模态的图像;
    第二获取模块,用于根据所述浮动图像、所述参考图像和预先训练的配准模型,获取配准结果;所述配准模型用于对不同模态的图像进行配准。
  22. 一种图像配准方法,其特征在于,所述方法包括:
    获取待配准的参考图像和浮动图像;
    对所述参考图像和所述浮动图像进行语义信息的提取,得到包括所述语义信息的标记参考图像和标记浮动图像;
    根据所述语义信息,从预设的图像配准模型中确定所述标记参考图像和所述标记浮动图像分别对应的目标图像配准模型;
    根据所述语义信息和所述目标图像配准模型,对所述参考图像和所述浮动图像进行图像配准。
  23. 一种图像配准装置,其特征在于,所述装置包括:
    第一获取模块,用于获取待配准的参考图像和浮动图像;
    第一提取模块,用于对所述参考图像和所述浮动图像进行语义信息的提取,得到包括所述语义信息的标记参考图像和标记浮动图像;
    第一确定模块,用于根据所述语义信息,从预设的图像配准模型中确定所述标记参考图像和所述标记浮动图像分别对应的目标图像配准模型;
    配准模块,用于根据所述语义信息和所述目标图像配准模型,对所述参考图像和所述浮动图像进行图像配准。
PCT/CN2019/127695 2018-12-25 2019-12-24 图像配准方法、装置、计算机设备及可读存储介质 WO2020135374A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201811586820.8 2018-12-25
CN201811586820.8A CN109598745B (zh) 2018-12-25 2018-12-25 图像配准方法、装置和计算机设备
CN201811637721.8A CN109754396B (zh) 2018-12-29 2018-12-29 图像的配准方法、装置、计算机设备和存储介质
CN201811637721.8 2018-12-29

Publications (1)

Publication Number Publication Date
WO2020135374A1 true WO2020135374A1 (zh) 2020-07-02

Family

ID=71127624

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/127695 WO2020135374A1 (zh) 2018-12-25 2019-12-24 图像配准方法、装置、计算机设备及可读存储介质

Country Status (1)

Country Link
WO (1) WO2020135374A1 (zh)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160093048A1 (en) * 2014-09-25 2016-03-31 Siemens Healthcare Gmbh Deep similarity learning for multimodal medical images
CN107667380A (zh) * 2015-06-05 2018-02-06 西门子公司 用于内窥镜和腹腔镜导航的同时场景解析和模型融合的方法和系统
CN106875401A (zh) * 2017-01-10 2017-06-20 中国科学院深圳先进技术研究院 多模态影像组学的分析方法、装置及终端
CN108257134A (zh) * 2017-12-21 2018-07-06 深圳大学 基于深度学习的鼻咽癌病灶自动分割方法和系统
CN109598745A (zh) * 2018-12-25 2019-04-09 上海联影智能医疗科技有限公司 图像配准方法、装置和计算机设备
CN109754396A (zh) * 2018-12-29 2019-05-14 上海联影智能医疗科技有限公司 图像的配准方法、装置、计算机设备和存储介质


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19904377

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.10.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19904377

Country of ref document: EP

Kind code of ref document: A1