CN111260546B - Image processing method and device and electronic equipment - Google Patents

Image processing method and device and electronic equipment

Info

Publication number
CN111260546B
Authority
CN
China
Prior art keywords
image
obtaining
region
similarity value
similarity
Prior art date
Legal status
Active
Application number
CN202010166003.8A
Other languages
Chinese (zh)
Other versions
CN111260546A (en)
Inventor
张耀
李让
钟诚
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202010166003.8A
Publication of CN111260546A
Application granted
Publication of CN111260546B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/14Transformations for image registration, e.g. adjusting or mapping for alignment of images


Abstract

The application discloses an image processing method, an image processing apparatus, and an electronic device. The method comprises: obtaining a first image and a second image; performing image transformation on the second image based on a transformation model to obtain a third image; obtaining an image similarity value of the third image and the first image based at least on the object regions contained in the third image and the first image respectively; and adjusting model parameters of the transformation model according to the image similarity value, so that the parameter-adjusted transformation model is used to perform image transformation on the second image again to obtain an updated third image, until the image similarity value of the updated third image and the first image satisfies a similarity condition, at which point the updated third image and the first image are in image registration.

Description

Image processing method and device and electronic equipment
Technical Field
The present application relates to the field of medical image technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
In the clinical diagnosis process, computed tomography (CT) images of different modalities (plain scan, arterial phase, venous phase, delayed phase, etc.) of the same lesion region need to be compared and judged. During image acquisition, due to movement of the patient's body, respiration, and the like, data of different modalities often exhibit some deformation, so registration of medical images is very necessary.
Current medical image registration solutions are typically performed manually by medical personnel, and therefore image registration may be inefficient.
Disclosure of Invention
In view of the above, the present application provides an image processing method, an image processing apparatus, and an electronic device, including:
an image processing method comprising:
obtaining a first image and a second image;
performing image transformation on the second image based on a transformation model to obtain a third image;
obtaining image similarity values of the third image and the first image at least based on object regions contained in the third image and the first image respectively;
and adjusting the model parameters of the transformation model according to the image similarity value, so that the transformation model after parameter adjustment is used for carrying out image transformation on the second image again to obtain an updated third image until the image similarity values of the updated third image and the first image meet a similarity condition, and the updated third image and the first image form image registration.
The above method, preferably, obtaining the image similarity value of the third image and the first image based on at least the object region included in each of the third image and the first image, includes:
obtaining a first object area in the first image and obtaining a second object area in a third image, wherein the first object area and the second object area belong to an image area of the same target object;
obtaining a region similarity value between a second object region in the third image and a first object region in the first image;
obtaining an image similarity value of the third image and the first image based on at least the region similarity value.
The above method, preferably, obtaining a region similarity value between a second object region in the third image and a first object region in the first image includes:
obtaining a region overlap value of the first object region and the second object region;
obtaining a region center distance value of the first object region and the second object region;
and obtaining a region similarity value between the first object region and the second object region according to the region overlap value and the region center distance value.
The above method, preferably, before obtaining the image similarity value of the third image and the first image based on at least the region similarity value, the method further includes:
obtaining mutual information values of the third image and the first image;
wherein obtaining an image similarity value of the third image and the first image based on at least the region similarity value comprises:
and obtaining the image similarity value of the third image and the first image according to the mutual information value and the region similarity value.
In the above method, preferably, obtaining the image similarity value of the third image and the first image according to the mutual information value and the region similarity value includes:
obtaining a first product of the mutual information value and a first coefficient and obtaining a second product of the region similarity value and a second coefficient;
obtaining a sum of the first product and the second product as an image similarity value of the third image and the first image.
The above method, preferably, obtaining a first object region in the first image and obtaining a second object region in a third image, includes:
identifying a first object region in the first image and a second object region in the third image, which belong to image regions of a same target object contained in both the first image and the third image, using a semantic segmentation model.
The above method, preferably, the transformation model at least comprises a transformation model based on global deformation, and/or the transformation model at least comprises a transformation model based on local deformation.
In the above method, preferably, the image similarity values of the updated third image and the first image satisfying a similarity condition includes:
the increment value of the image similarity value is 0.
An image processing apparatus comprising:
an image obtaining unit for obtaining a first image and a second image;
the image transformation unit is used for carrying out image transformation on the second image based on a transformation model to obtain a third image;
a similarity obtaining unit, configured to obtain image similarity values of the third image and the first image based on at least object regions included in the third image and the first image, respectively;
and the parameter adjusting unit is used for adjusting the model parameters of the transformation model according to the image similarity value, so that the transformation model after parameter adjustment is used for the image transformation unit to carry out image transformation on the second image again to obtain an updated third image until the image similarity values of the updated third image and the first image obtained by the similarity obtaining unit meet a similarity condition, and the updated third image and the first image form image registration.
An electronic device, comprising:
the memory is used for storing an application program and data generated by the running of the application program;
a processor for executing the application to implement: obtaining a first image and a second image; performing image transformation on the second image based on a transformation model to obtain a third image; obtaining image similarity values of the third image and the first image at least based on object regions contained in the third image and the first image respectively; and adjusting the model parameters of the transformation model according to the image similarity value, so that the transformation model after parameter adjustment is used for carrying out image transformation on the second image again to obtain an updated third image until the image similarity values of the updated third image and the first image meet a similarity condition, and the updated third image and the first image form image registration.
From the above technical solutions, it can be seen that in the image processing method, apparatus, and electronic device disclosed in the present application, after the second image to be registered is obtained and image transformation is performed on it based on the transformation model to obtain a third image, an image similarity value between the first image, which serves as the registration reference, and the third image is obtained based on the object region contained in the first image and the object region contained in the third image. The model parameters of the transformation model are then adjusted according to the image similarity value, so that the parameter-adjusted transformation model is used to perform image transformation on the second image again to obtain an updated third image, until the image similarity value of the updated third image and the first image satisfies the similarity condition. At that point, the updated third image and the first image are in image registration, such as registration, with respect to the lesion region, of a plurality of CT images acquired under different modalities of the same lesion region. Therefore, in the present application, image similarity is calculated according to the object regions contained in the images to be registered, such as the anatomical structures of the lesion regions in the CT images, and the registered image is optimized through iterative adjustment of the model parameters, so that manual image registration is not needed and the image registration efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of image registration in an embodiment of the present application;
FIG. 3 is a partial flow chart of a first embodiment of the present application;
FIG. 4 is a schematic diagram of an object region of an image in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image processing apparatus according to a second embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to a third embodiment of the present application;
fig. 7 is an exemplary diagram of an implementation process of the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As shown in fig. 1, which is a flowchart for implementing an image processing method provided in an embodiment of the present disclosure, the method in this embodiment is applied to an electronic device capable of performing image processing, such as a computer or a server, and is mainly used for performing image registration on acquired CT images in different modalities for a lesion region to obtain a registered image for the same lesion region.
Specifically, the method in this embodiment may include the following steps:
step 101: a first image and a second image are obtained.
The first image and the second image may be CT images in different modalities of the same lesion region, such as CT images of a liver region or a kidney region acquired in modalities such as plain scan, arterial phase, venous phase, and delayed phase.
It should be noted that the first image may be a fixed CT image serving as the registration standard, i.e., the reference image F (fixed image), such as a first-frame CT image of a lesion region acquired by an image acquisition device in the plain-scan modality. The second image, of which there may be one or more frames, is the image M (moving image) to be registered, such as other frame CT images of the lesion region acquired by the image acquisition device in any one or more of the arterial phase, venous phase, delayed phase, and other modalities.
Step 102: and carrying out image transformation on the second image based on the transformation model to obtain a third image.
In this embodiment, the image transformation may be implemented by a transformation model T (transformation model) as W = T(M; μ), where μ is the parameter of the transformation model and W is the third image, that is, the image obtained after registering the second image.
Specifically, the transformation model in this embodiment at least includes a transformation model based on global deformation, or the transformation model may include a transformation model based on local deformation, or the transformation model may include both a transformation model based on global deformation and a transformation model based on local deformation.
The transformation model based on global deformation may be an affine transformation model, and the transformation model based on local deformation may be a B-spline interpolation model. In this embodiment, the affine transformation model may be used to globally deform the second image, and then the B-spline interpolation model may be used to locally deform the globally deformed second image, so as to obtain the third image; alternatively, the second image may be subjected to global deformation only, using the affine transformation model, to obtain the third image; alternatively, the second image may be subjected to local deformation only, using the B-spline interpolation model, to obtain the third image.
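As an illustration of the global-deformation step, an affine transformation can be applied to the moving image; the matrix and offset below stand in for the model parameter μ, and the function and variable names are illustrative rather than taken from this embodiment:

```python
import numpy as np
from scipy.ndimage import affine_transform

def global_deform(moving, matrix, offset):
    """Apply an affine transformation (global deformation) to the moving image M."""
    # Linear interpolation; output[o] = input[matrix @ o + offset]
    return affine_transform(moving, matrix, offset=offset, order=1)

# Toy "second image": a square region standing in for an organ
second_image = np.zeros((64, 64))
second_image[24:40, 24:40] = 1.0

# Identity rotation plus a small translation: W = T(M; mu)
third_image = global_deform(second_image, np.eye(2), offset=(4, -4))
```

A B-spline local deformation would instead warp the image with a smooth displacement field; it is omitted here for brevity.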
Step 103: and obtaining the image similarity value of the third image and the first image at least based on the object region contained in the third image and the first image respectively.
In this embodiment, the image similarity between the third image and the first image may be obtained for one or more object regions obtained by performing anatomical recognition on the lesion region in the third image and the first image.
Step 104: the model parameters of the transformed model are adjusted according to the image similarity values, and step 102 is executed again, so that the parameter-adjusted transformed model is used to transform the second image again to obtain an updated third image, until the image similarity values of the updated third image and the first image satisfy the similarity condition, and when the image similarity values of the third image and the first image satisfy the similarity condition, the updated third image and the first image are in image registration, that is, the third image and the first image are registered for the same object region, such as the same lesion region, in the image, for example, the CT image region of the liver in the third image corresponds to the CT image region of the liver in the first image, as shown in fig. 2.
Specifically, the image similarity value of the updated third image and the first image satisfying the similarity condition may be: the increment value of the image similarity value is 0. For example, after the model parameters of the transformation model are adjusted according to the image similarity value in step 104, step 102 is executed again to perform image transformation on the original second image using the transformation model with the adjusted parameters, obtaining an updated third image, and then an updated image similarity value between the updated third image and the first image is obtained. A difference between the updated image similarity value and the previous image similarity value, that is, an increment value of the image similarity value, is then computed; this increment value indicates whether the updated third image obtained under the parameter-adjusted transformation model is closer to the first image. If the increment value is not 0, the third image after the iterative image transformation has moved further toward the first image, and the process returns to step 104 to continue the iteration. If the increment value is 0, the image similarity value has stopped changing, which indicates that the third image after the iterative image transformation is the transformation result closest to the first image; the iteration of the image transformation is then stopped, and the third image at this point is the image obtained by registering the second image according to the first image.
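The iterative loop of steps 102 to 104 can be sketched schematically as follows. The `transform`, `similarity`, and `update_params` callables are placeholders for the transformation model, the similarity measure, and the parameter-adjustment strategy; a real registration would typically use an optimizer and a convergence tolerance rather than exact equality of successive similarity values:

```python
def register(first_image, second_image, transform, similarity, update_params, mu):
    """Iterate steps 102-104: transform, score, adjust parameters,
    and stop when the increment of the image similarity value is 0."""
    prev = None
    while True:
        third_image = transform(second_image, mu)      # step 102: W = T(M; mu)
        value = similarity(first_image, third_image)   # step 103
        if prev is not None and value - prev == 0:     # increment value is 0
            return third_image, mu
        prev = value
        mu = update_params(mu, value)                  # step 104

# Toy 1-D demo: translate the "image" 0.0 toward the reference 3.0.
result, mu = register(
    3.0, 0.0,
    transform=lambda m, p: m + p,              # stand-in transformation model
    similarity=lambda f, w: -abs(f - w),       # stand-in similarity value
    update_params=lambda p, v: min(p + 1, 3),  # stand-in parameter update
    mu=0,
)
```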
As can be seen from the foregoing solutions, in the image processing method provided in the first embodiment of the present application, after the second image to be registered is obtained and image transformation is performed on it based on the transformation model to obtain the third image, an image similarity value between the first image, which serves as the registration reference, and the third image is obtained based on the object region contained in the first image and the object region contained in the third image. The model parameters of the transformation model are then adjusted according to the image similarity value, so that the parameter-adjusted transformation model is used to perform image transformation on the second image again to obtain an updated third image, until the image similarity value of the updated third image and the first image satisfies the similarity condition. At that point, the updated third image and the first image can form an image registration, such as registration, with respect to the lesion region, of a plurality of CT images acquired under different modalities of the same lesion region. Therefore, in this embodiment, image similarity is calculated according to the object region contained in the images to be registered, such as the anatomical structure of the lesion region in a CT image, and the registered image is optimized through iterative adjustment of the model parameters, so that manual image registration is not required and image registration efficiency is improved.
In one implementation, when obtaining the image similarity value of the third image and the first image based on at least the object region included in each of the third image and the first image in step 103, the following may be implemented, as shown in fig. 3:
step 301: a first object region in the first image is obtained and a second object region in the third image is obtained.
The first object region and the second object region belong to an image region of the same target object, for example, the first object region is an image region of a liver in the first image, and the second object region is an image region of a liver in the third image, as shown in fig. 4.
Specifically, in this embodiment, the first object region in the first image and the second object region in the third image may be identified by an image recognition algorithm or a deep learning model constructed based on the image recognition algorithm. For example, when obtaining the first object region in the first image and obtaining the second object region in the third image in step 301, the following steps may be implemented:
identifying a first object region in the first image and a second object region in the third image, which belong to image regions of a same target object contained in both the first image and the third image, using a semantic segmentation model.
For example, in the present embodiment, the semantic segmentation model seg(·) is used to identify the first image and the third image (CT images) respectively, and to obtain segmentation masks of organs such as the liver or kidney in the CT images, that is, the first object region and the second object region. There may be one or more first object regions, with a corresponding one or more second object regions. Specifically, the masks may be represented by 3D matrices [D_1, D_2, …, D_i, …, D_N] having the same size as the CT image, where N denotes the number of different organs; a voxel of value 1 in D_i indicates that the corresponding point in the original image belongs to organ i.
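The mask representation described above can be sketched as follows; the integer label volume here is a stand-in for the output of the semantic segmentation model seg(·), and the function name is illustrative:

```python
import numpy as np

def masks_from_labels(label_volume, num_organs):
    """Split an integer label volume (0 = background, i = organ i) into the
    binary masks [D_1, ..., D_N], each the same size as the CT volume, with
    voxel value 1 where the voxel belongs to organ i."""
    return [(label_volume == i).astype(np.uint8) for i in range(1, num_organs + 1)]

# Toy label volume: two "organs" in a 4x4x4 CT-like array
labels = np.zeros((4, 4, 4), dtype=np.int32)
labels[0:2, :, :] = 1   # organ 1 (e.g. liver)
labels[3, :, :] = 2     # organ 2 (e.g. kidney)
D = masks_from_labels(labels, num_organs=2)
```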
Step 302: a region similarity value between a second object region in the third image and the first object region in the first image is obtained.
In this embodiment, a Euclidean distance calculation algorithm may be used to calculate the region similarity value between the second object region and the first object region, which characterizes the similarity between the second object region in the registered third image and the first object region in the reference first image. The higher the region similarity value, the closer the second object region in the registered third image is to the first object region in the first image.
Specifically, in the embodiment, when obtaining the region similarity value between the second object region in the third image and the first object region in the first image, the following may be specifically implemented:
first, a region overlap value of a first object region and a second object region is obtained, then a region center distance value of the first object region and the second object region is obtained, and finally, a region similarity value between the first object region and the second object region is obtained according to the region overlap value and the region center distance value.
For example, in this embodiment, after obtaining [D_1, D_2, …, D_i, …, D_N], the center coordinates of each organ in the first object region and in the second object region are obtained, denoted c_i^F and c_i^W respectively. A region overlap value of the first object region and the second object region is then obtained, where D_F = seg(F), D_W = seg(W), F is the pixel data of the first object region in the first image, and W is the pixel data of the second object region in the third image. Next, a region center distance value of the first object region and the second object region is obtained, and finally the region similarity value E_anatomy(F, W) between the first object region and the second object region is obtained using formula (1).
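The two ingredients of the region similarity value can be illustrated as a Dice-style overlap between the masks D_F and D_W and the Euclidean distance between their centers. Formula (1) combining them is rendered as an image in the original document, so the exact combination is not reproduced; the two terms are shown separately, and the function names are illustrative:

```python
import numpy as np

def region_overlap(d_f, d_w):
    """Dice-style overlap of two binary organ masks (1.0 = identical)."""
    inter = np.logical_and(d_f, d_w).sum()
    return 2.0 * inter / (d_f.sum() + d_w.sum())

def center_distance(d_f, d_w):
    """Euclidean distance between the voxel-center coordinates of two masks."""
    c_f = np.argwhere(d_f).mean(axis=0)
    c_w = np.argwhere(d_w).mean(axis=0)
    return float(np.linalg.norm(c_f - c_w))

# Two cubic "organ" masks, the second shifted by one voxel along axis 0
mask_f = np.zeros((8, 8, 8), dtype=np.uint8); mask_f[2:6, 2:6, 2:6] = 1
mask_w = np.zeros((8, 8, 8), dtype=np.uint8); mask_w[3:7, 2:6, 2:6] = 1
```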
Step 303: and obtaining an image similarity value of the third image and the first image at least based on the region similarity value.
In one implementation, the region similarity value may be used as the image similarity value between the third image and the first image to represent the proximity of the third image to the first image.
In another implementation manner, in this embodiment, before step 303, mutual information values of the third image and the first image may also be obtained, for example, the mutual information values of the third image W and the first image F are obtained by using formula (2):
E_intensity(F, W) = I_MI(F, W)    formula (2)
Based on this, in step 303, the image similarity value of the third image and the first image may be obtained specifically according to the mutual information value and the region similarity value.
Specifically, in this embodiment, when the image similarity value between the third image and the first image is obtained according to the mutual information value and the region similarity value, the following method may be implemented:
firstly, obtaining a first product of the mutual information value and a first coefficient and obtaining a second product of the region similarity value and a second coefficient; then, a sum of the first product and the second product is obtained as an image similarity value of the third image and the first image.
For example, the first coefficient is represented by α and the second coefficient by β; accordingly, the similarity value E(M, W) of the third image and the first image is obtained using the following formula (3):
E(M, W) = α·E_intensity(F, W) + β·E_anatomy(F, W)    formula (3)
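Formula (3) can be sketched as a weighted sum, with E_intensity estimated as mutual information from a joint intensity histogram. The histogram-based estimator, the bin count, and the coefficient values below are illustrative assumptions, not details specified by this embodiment:

```python
import numpy as np

def mutual_information(f, w, bins=32):
    """Estimate I_MI(F, W) from the joint intensity histogram of two images."""
    joint, _, _ = np.histogram2d(f.ravel(), w.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)          # marginal of F
    p_y = p_xy.sum(axis=0, keepdims=True)          # marginal of W
    nz = p_xy > 0                                  # avoid log(0)
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

def image_similarity(f, w, e_anatomy, alpha=1.0, beta=1.0):
    """Formula (3): E = alpha * E_intensity + beta * E_anatomy."""
    return alpha * mutual_information(f, w) + beta * e_anatomy

rng = np.random.default_rng(0)
first_image = rng.random((16, 16))
```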
Referring to fig. 5, a schematic structural diagram of an image processing apparatus according to a second embodiment of the present disclosure is provided, where the apparatus in this embodiment is applied to an electronic device capable of performing image processing, such as a computer or a server, and is mainly used to perform image registration on acquired CT images in different modalities for a lesion region, so as to obtain a registered image for the same lesion region.
Specifically, the apparatus in this embodiment may include the following structure:
an image obtaining unit 501 for obtaining a first image and a second image;
an image transformation unit 502, configured to perform image transformation on the second image based on a transformation model to obtain a third image;
a similarity obtaining unit 503, configured to obtain image similarity values of the third image and the first image based on at least object regions included in the third image and the first image, respectively;
a parameter adjusting unit 504, configured to adjust a model parameter of the transformation model according to the image similarity value, so that the transformation model after parameter adjustment is used by the image transforming unit 502 to perform image transformation on the second image again to obtain an updated third image, until the image similarity values of the updated third image and the first image obtained by the similarity obtaining unit 503 satisfy a similarity condition, where the updated third image is registered with the first image.
As can be seen from the above solution, in the image processing apparatus provided in the second embodiment of the present application, after the second image to be registered is obtained and image transformation is performed on it based on the transformation model to obtain a third image, an image similarity value between the first image, which serves as the registration reference, and the third image is obtained based on the object region contained in the first image and the object region contained in the third image. The model parameters of the transformation model are then adjusted according to the image similarity value, so that the parameter-adjusted transformation model is used to perform image transformation on the second image again to obtain an updated third image, until the image similarity value of the updated third image and the first image satisfies the similarity condition. The updated third image and the first image can then form an image registration, such as registration, with respect to the lesion region, of a plurality of CT images acquired under different modalities of the same lesion region. Therefore, in this embodiment, image similarity is calculated according to the object region contained in the images to be registered, such as the anatomical structure of the lesion region in a CT image, and the registered image is optimized through iterative adjustment of the model parameters, so that manual image registration is not required and image registration efficiency is improved.
In one implementation, when obtaining the image similarity values of the third image and the first image based on at least the object regions included in the third image and the first image, the similarity obtaining unit 503 may implement:
obtaining a first object area in the first image and obtaining a second object area in a third image, wherein the first object area and the second object area belong to an image area of the same target object; obtaining a region similarity value between a second object region in the third image and a first object region in the first image; obtaining an image similarity value for the third image and the first image based on at least the region similarity value.
Preferably, the obtaining of the region similarity value between the second object region in the third image and the first object region in the first image by the similarity obtaining unit 503 includes:
obtaining a region overlap value of the first object region and the second object region; obtaining a region center distance value of the first object region and the second object region; and obtaining a region similarity value between the first object region and the second object region according to the region overlap value and the region center distance value.
In one implementation, the similarity obtaining unit 503 may first obtain mutual information values of the third image and the first image before obtaining the image similarity values of the third image and the first image based on at least the region similarity value;
correspondingly, the obtaining of the image similarity value of the third image and the first image by the similarity obtaining unit 503 based on at least the region similarity value specifically includes:
and obtaining the image similarity value of the third image and the first image according to the mutual information value and the region similarity value.
Preferably, the obtaining of the image similarity value of the third image and the first image by the similarity obtaining unit 503 according to the mutual information value and the region similarity value includes:
obtaining a first product of the mutual information value and a first coefficient and obtaining a second product of the region similarity value and a second coefficient; obtaining a sum of the first product and the second product as an image similarity value of the third image and the first image.
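This weighted combination can be sketched in a few lines; the default coefficient values here are placeholders, not values taken from the patent:

```python
def image_similarity_value(mutual_info, region_sim, alpha=0.5, beta=0.5):
    """Sum of the first product (mutual information * first coefficient)
    and the second product (region similarity * second coefficient)."""
    first_product = alpha * mutual_info
    second_product = beta * region_sim
    return first_product + second_product
```

The coefficients let one trade off intensity agreement against anatomical agreement when tuning the registration.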
In one implementation, the obtaining a first object region in the first image and obtaining a second object region in the third image by the similarity obtaining unit 503 includes:
identifying a first object region in the first image and a second object region in the third image, which belong to image regions of a same target object contained in both the first image and the third image, using a semantic segmentation model.
Preferably, the transformation model comprises at least a transformation model based on global deformation and/or the transformation model comprises at least a transformation model based on local deformation.
In one implementation, the image similarity values of the updated third image and the first image satisfy a similarity condition, including: the increment value of the image similarity value is 0.
It should be noted that, for the specific implementation of each unit in the present embodiment, reference may be made to the corresponding content in the foregoing, and details are not described here.
Referring to fig. 6, a schematic structural diagram of an electronic device according to a third embodiment of the present disclosure is provided, where the electronic device may be an electronic device capable of performing image processing, such as a computer or a server, and is mainly used to perform image registration on acquired CT images in different modalities for a lesion region, so as to obtain a registered image for the same lesion region.
Specifically, the electronic device in this embodiment may include the following structure:
a memory 601 for storing an application program and data generated by the application program running;
a processor 602 configured to execute the application to implement: obtaining a first image and a second image; performing image transformation on the second image based on a transformation model to obtain a third image; obtaining image similarity values of the third image and the first image at least based on object regions contained in the third image and the first image respectively; and adjusting model parameters of the transformation model according to the image similarity value, so that the transformation model after parameter adjustment is used for carrying out image transformation on the second image again to obtain an updated third image until the image similarity values of the updated third image and the first image meet a similarity condition, and the updated third image and the first image are in image registration.
It can be seen from the foregoing solution that, in the electronic device provided in the third embodiment of the present application, after the second image to be registered is obtained and transformed by the transformation model into a third image, an image similarity value between the first image, which serves as the registration reference, and the third image is obtained based on the object regions contained in each. The model parameters of the transformation model are then adjusted according to this image similarity value, and the parameter-adjusted transformation model transforms the second image again to obtain an updated third image, until the image similarity value of the updated third image and the first image satisfies the similarity condition; at that point the updated third image and the first image are registered, for example several CT images of the same lesion region acquired under different modalities being registered with respect to that lesion region. In this embodiment, the image similarity is therefore calculated from object regions contained in the images to be registered, such as the anatomical structures of a lesion region in a CT image, and the registered image is refined through iterative adjustment of the model parameters, so that manual image registration is not required and image registration efficiency is improved.
It should be noted that, in this embodiment, reference may be made to the corresponding content in the foregoing for specific implementation of the processor 602, and details are not described here.
Taking an image as a CT image as an example, the technical solution in the present application is exemplified as follows:
For a reference image (Fixed Image) F and an image to be registered (Moving Image) M, the aim of image registration is to find a registered image W whose corresponding region positions are as similar to those of F as possible. In this embodiment, T denotes the transformation model (Transformation Model) from the image to be registered to the registered image, i.e., W = T(M; μ), where μ is the model parameter of the transformation model.
The core of the technical scheme of the application lies in the following: first, the image to be registered is converted into W using the transformation model; then, image features of W and F are extracted, for example by a semantic segmentation model, the similarity of W and F is calculated, and this similarity is maximized through iteration.
In particular, medical CT images are usually 3D, so a 3D affine transformation model T can be used in the present embodiment, defined as follows:
T(x) = Ax + b = [[a1, a2, a3], [a4, a5, a6], [a7, a8, a9]] · [x1, x2, x3]^T + [b1, b2, b3]^T
where x is the coordinate of a voxel of the image, A is the linear transformation matrix (representing the rotation and scaling of the image), and b is the translation vector along the coordinate axes (representing the translation of the image).
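A minimal sketch of this affine transform applied to voxel coordinates follows; resampling a full volume (e.g. with scipy.ndimage.affine_transform) is omitted here:

```python
import numpy as np

def affine_transform_points(x, A, b):
    """Apply T(x) = A x + b to an (N, 3) array of voxel coordinates.
    A is the 3x3 linear part (rotation and scaling), b the translation."""
    x = np.asarray(x, dtype=float)
    return x @ np.asarray(A, dtype=float).T + np.asarray(b, dtype=float)
```

With A the identity, T reduces to a pure translation, which is a convenient sanity check.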
After transforming M with T to obtain W, the CT images F and W are processed with the semantic segmentation model seg(·) to obtain segmentation masks of organs such as the liver or kidneys. These masks can be written as [D_1, D_2, …, D_i, …, D_N], where each D_i is a 3D matrix of the same size as the CT image, N denotes the number of different organs, and a voxel equal to 1 in D_i indicates that the corresponding point in the original second image belongs to organ i. From the segmentation masks, the center-point coordinates of each organ in F and W, denoted c_i^F and c_i^W respectively, can then be obtained.

Accordingly, the anatomical-structure similarity E_anatomy(F, W) of the two images F and W is obtained from the overlap degree of the corresponding regions and the distance between their center points, using formula (1).
The more the regions with the same semantics in the two images overlap, and the closer their center points are, the more similar the two images. The organ regions and center-point coordinates are obtained automatically by the segmentation model, and the aim of the registration task is to solve for the parameter μ that maximizes E(μ).
Meanwhile, in the present embodiment, Mutual Information is also used to model the image intensity, e.g., E_intensity(F, W) = I_MI(F, W); the larger the mutual information value, the more similar the intensity distributions of the two images.
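A common joint-histogram estimator of mutual information could look like the following; the bin count is an assumption, as the patent does not specify how I_MI is computed:

```python
import numpy as np

def mutual_information(f, w, bins=32):
    """Mutual information of two intensity images, estimated from
    their joint intensity histogram."""
    joint, _, _ = np.histogram2d(f.ravel(), w.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)    # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)    # marginal p(y)
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An image compared with itself yields high mutual information, while a constant image yields zero, matching the statement that larger values indicate more similar intensity distributions.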
On this basis, in order to weight the two similarities, the coefficients α and β are applied to the mutual-information term and the anatomical-structure term, and the final similarity function is obtained using formula (3): E(F, W) = α·E_intensity(F, W) + β·E_anatomy(F, W).
in a specific implementation, the optimized model parameters are iteratively solved by using a gradient descent method. The flow is shown in fig. 7, in which the reference image F and the image M to be registered are input and output as the registered image W, as follows:
first, the model parameters are initialized: μ = {A, b}, t = 0;
then, a transformed image W = T(M; A, b) is obtained using a transformation based on global deformation and/or local deformation, where A and b are the model parameters of the affine transformation model based on global deformation;
then, the semantic segmentation model seg(·) is used to obtain the segmentation masks of F and W, and for each pair of masks with the same semantics the overlap degree O_i, the mask center points c_i^F and c_i^W, and the center-point distance d_i = ||c_i^F - c_i^W|| are calculated;
correspondingly, the anatomy-based similarity E_anatomy(F, W) is obtained;
at the same time, the image-intensity similarity E_intensity(F, W) = I_MI(F, W) of F and W is calculated;
based on the above results, the overall similarity E(F, W) = α·E_intensity(F, W) + β·E_anatomy(F, W) is calculated;
then, the gradient λ_t = ∇E(μ_t) of the similarity function, i.e., the increment of the image similarity value, is solved, and the model parameter of the transformation model is updated as μ_{t+1} = μ_t + λ_t; the flow returns to transform M again and obtain a new gradient of the similarity function, until the similarity no longer increases, i.e., the increment of the similarity value is 0, at which point the iteration ends and the final registered image W is obtained.
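The iteration above can be sketched as a generic gradient loop. The finite-difference gradient estimate and the step size below are assumptions, since the patent only states that a gradient method with update μ_{t+1} = μ_t + λ_t is used:

```python
import numpy as np

def register(similarity, mu0, lr=0.1, eps=1e-3, max_iter=200):
    """Iterate mu_{t+1} = mu_t + lambda_t until the similarity no longer
    increases. lambda_t is a step along the gradient of the similarity
    function, estimated here by central finite differences."""
    mu = np.asarray(mu0, dtype=float)
    prev = similarity(mu)
    for _ in range(max_iter):
        grad = np.zeros_like(mu)
        for i in range(mu.size):
            step = np.zeros_like(mu)
            step[i] = eps
            grad[i] = (similarity(mu + step) - similarity(mu - step)) / (2 * eps)
        mu = mu + lr * grad
        cur = similarity(mu)
        if cur - prev <= 1e-8:  # increment of the similarity value is (near) 0
            break
        prev = cur
    return mu
```

Here `similarity` stands for the combined E(F, T(M; μ)); any smooth similarity of the parameters can be plugged in to exercise the loop.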
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An image processing method, comprising:
obtaining a first image and a second image;
performing image transformation on the second image based on a transformation model to obtain a third image;
obtaining image similarity values of the third image and the first image at least based on object regions contained in the third image and the first image respectively; the third image and the first image respectively comprise one or more object regions which are identified after the focus region in the image is dissected;
adjusting model parameters of the transformation model according to the image similarity value, so that the transformation model after parameter adjustment is used for carrying out image transformation on the second image again to obtain an updated third image until the image similarity values of the updated third image and the first image meet a similarity condition, and the updated third image and the first image form image registration;
obtaining an image similarity value of the third image and the first image based on at least object regions included in the third image and the first image, respectively, including:
obtaining a first object area in the first image and obtaining a second object area in a third image, wherein the first object area and the second object area belong to an image area of the same target object;
obtaining a region similarity value between a second object region in the third image and a first object region in the first image;
obtaining an image similarity value of the third image and the first image based on at least the region similarity value;
before obtaining an image similarity value for the third image and the first image based at least on the region similarity value, the method further comprises:
obtaining mutual information values of the third image and the first image;
wherein obtaining an image similarity value of the third image and the first image based on at least the region similarity value comprises:
and obtaining the image similarity value of the third image and the first image according to the mutual information value and the region similarity value.
2. The method of claim 1, obtaining a region similarity value between a second object region in the third image and a first object region in the first image, comprising:
obtaining a region overlap value of the first object region and the second object region;
obtaining a region center distance value of the first object region and the second object region;
and obtaining a region similarity value between the first object region and the second object region according to the region overlap value and the region center distance value.
3. The method of claim 1, obtaining an image similarity value for the third image and the first image from the mutual information value and the region similarity value, comprising:
obtaining a first product of the mutual information value and a first coefficient and obtaining a second product of the region similarity value and a second coefficient;
obtaining a sum of the first product and the second product as an image similarity value of the third image and the first image.
4. The method of claim 1, obtaining a first object region in the first image and obtaining a second object region in a third image, comprising:
identifying a first object region in the first image and a second object region in the third image, which belong to image regions of a same target object contained in both the first image and the third image, using a semantic segmentation model.
5. The method according to claim 1, the transformation model comprising at least a global deformation based transformation model, and/or the transformation model comprising at least a local deformation based transformation model.
6. The method of claim 1, the updated image similarity values of the third image and the first image satisfying a similarity condition, comprising:
the increment value of the image similarity value is 0.
7. An image processing apparatus comprising:
an image obtaining unit for obtaining a first image and a second image;
the image transformation unit is used for carrying out image transformation on the second image based on a transformation model to obtain a third image;
a similarity obtaining unit configured to obtain image similarity values of the third image and the first image based on at least object regions included in the third image and the first image, respectively; the third image and the first image respectively comprise one or more object regions which are identified after the focus region in the image is dissected;
a parameter adjusting unit, configured to adjust a model parameter of the transformation model according to the image similarity value, so that the transformation model after parameter adjustment is used by the image transformation unit to perform image transformation on the second image again to obtain an updated third image, until the image similarity values of the updated third image and the first image obtained by the similarity obtaining unit satisfy a similarity condition, where the updated third image is in image registration with the first image;
obtaining an image similarity value of the third image and the first image based on at least object regions included in the third image and the first image, respectively, including:
obtaining a first object area in the first image and obtaining a second object area in a third image, wherein the first object area and the second object area belong to an image area of the same target object;
obtaining a region similarity value between a second object region in the third image and a first object region in the first image;
obtaining an image similarity value of the third image and the first image based on at least the region similarity value;
before obtaining the image similarity values of the third image and the first image based on at least the region similarity value, the method further includes:
obtaining mutual information values of the third image and the first image;
wherein obtaining an image similarity value of the third image and the first image based on at least the region similarity value comprises:
and obtaining the image similarity value of the third image and the first image according to the mutual information value and the region similarity value.
8. An electronic device, comprising:
the memory is used for storing an application program and data generated by the running of the application program;
a processor for executing the application to implement: obtaining a first image and a second image; performing image transformation on the second image based on a transformation model to obtain a third image; obtaining image similarity values of the third image and the first image at least based on object regions contained in the third image and the first image respectively; the third image and the first image respectively comprise one or more object regions which are identified after the focus region in the image is dissected; adjusting model parameters of the transformation model according to the image similarity value, so that the parameter-adjusted transformation model is used for carrying out image transformation on the second image again to obtain an updated third image until the image similarity values of the updated third image and the first image meet a similarity condition, and the updated third image is in image registration with the first image;
obtaining an image similarity value of the third image and the first image based on at least object regions included in the third image and the first image, respectively, including:
obtaining a first object area in the first image and obtaining a second object area in a third image, wherein the first object area and the second object area belong to an image area of the same target object;
obtaining a region similarity value between a second object region in the third image and a first object region in the first image;
obtaining an image similarity value of the third image and the first image based on at least the region similarity value;
before obtaining an image similarity value of the third image and the first image based on at least the region similarity value, further comprising:
obtaining mutual information values of the third image and the first image;
wherein obtaining an image similarity value for the third image and the first image based on at least the region similarity value comprises:
and obtaining the image similarity value of the third image and the first image according to the mutual information value and the region similarity value.
CN202010166003.8A 2020-03-11 2020-03-11 Image processing method and device and electronic equipment Active CN111260546B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010166003.8A CN111260546B (en) 2020-03-11 2020-03-11 Image processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111260546A CN111260546A (en) 2020-06-09
CN111260546B true CN111260546B (en) 2022-09-23

Family

ID=70953149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010166003.8A Active CN111260546B (en) 2020-03-11 2020-03-11 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111260546B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116416289B (en) * 2023-06-12 2023-08-25 湖南大学 Multimode image registration method, system and medium based on depth curve learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105122298A (en) * 2013-03-29 2015-12-02 皇家飞利浦有限公司 Image registration
CN109767460A (en) * 2018-12-27 2019-05-17 上海商汤智能科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN109919987A (en) * 2019-01-04 2019-06-21 浙江工业大学 A kind of 3 d medical images registration similarity calculating method based on GPU

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7596283B2 (en) * 2004-04-12 2009-09-29 Siemens Medical Solutions Usa, Inc. Fast parametric non-rigid image registration based on feature correspondences
CN103514591A (en) * 2012-06-15 2014-01-15 深圳市蓝韵实业有限公司 ORB registration based DR image mosaic method and system thereof
EP3092618B1 (en) * 2014-01-06 2018-08-29 Koninklijke Philips N.V. Articulated structure registration in magnetic resonance images of the brain
CN106611411B (en) * 2015-10-19 2020-06-26 上海联影医疗科技有限公司 Method for segmenting ribs in medical image and medical image processing device
CN106934821B (en) * 2017-03-13 2020-06-23 中国科学院合肥物质科学研究院 Conical beam CT and CT image registration method based on ICP algorithm and B spline
CN110610202B (en) * 2019-08-30 2022-07-26 联想(北京)有限公司 Image processing method and electronic equipment


Also Published As

Publication number Publication date
CN111260546A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
US8849005B2 (en) Coronary artery motion modeling
US9561004B2 (en) Automated 3-D orthopedic assessments
US7646936B2 (en) Spatially variant image deformation
Song et al. Lung CT image registration using diffeomorphic transformation models
JP5749735B2 (en) Bone suppression in X-ray radiographs
JP5377310B2 (en) Reduction of cardiac motion artifacts in chest CT imaging
JP2008511395A (en) Method and system for motion correction in a sequence of images
Zeng et al. Liver segmentation in magnetic resonance imaging via mean shape fitting with fully convolutional neural networks
Zhou et al. Interactive medical image segmentation using snake and multiscale curve editing
Dong et al. Accelerated nonrigid image registration using improved Levenberg–Marquardt method
Yang et al. Dscgans: Integrate domain knowledge in training dual-path semi-supervised conditional generative adversarial networks and s3vm for ultrasonography thyroid nodules classification
CN115830163A (en) Progressive medical image cross-mode generation method and device based on deterministic guidance of deep learning
CN111260546B (en) Image processing method and device and electronic equipment
Wei et al. Morphology-preserving smoothing on polygonized isosurfaces of inhomogeneous binary volumes
CN111402221B (en) Image processing method and device and electronic equipment
Habijan et al. Generation of artificial CT images using patch-based conditional generative adversarial networks
Rhee et al. Scan-based volume animation driven by locally adaptive articulated registrations
Zhuang et al. Efficient contour-based annotation by iterative deep learning for organ segmentation from volumetric medical images
Szmul et al. Supervoxels for graph cuts-based deformable image registration using guided image filtering
Chang et al. Deformable registration of lung 3DCT images using an unsupervised heterogeneous multi-resolution neural network
Leonardi et al. 3D reconstruction from CT-scan volume dataset application to kidney modeling
US20220375099A1 (en) Segmentating a medical image
Sang et al. 4D-CBCT registration with a FBCT-derived plug-and-play feasibility regularizer
Al Abboodi et al. Supervised Transfer Learning for Multi Organs 3D Segmentation With Registration Tools for Metal Artifact Reduction in CT Images
WO2020137677A1 (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant