CN113506331A - Method, apparatus, computer device and storage medium for registering tissue and organ

Method, apparatus, computer device and storage medium for registering tissue and organ

Info

Publication number
CN113506331A
CN113506331A (application CN202110728649.5A)
Authority
CN
China
Prior art keywords
image
target organ
mask
registered
registration
Prior art date
Legal status
Pending
Application number
CN202110728649.5A
Other languages
Chinese (zh)
Inventor
何少文
Current Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Original Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority to CN202110728649.5A
Publication of CN113506331A
Priority to EP22791147.6A (EP4318393A1)
Priority to PCT/CN2022/088607 (WO2022223042A1)
Priority to US18/492,743 (US20240050172A1)
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T3/18
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4023Decimation- or insertion-based scaling, e.g. pixel or line decimation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Abstract

The application relates to a method, an apparatus, a computer device and a storage medium for registering a tissue organ. The method acquires a first processed image of a first image and a second processed image of a second image; rigidly registers the target organ in the first processed image with the target organ in the second processed image to obtain a rigid registration result; elastically registers the target organ in the rigid registration result with the target organ in the second processed image to obtain an elastic registration result; and processes the first image according to the rigid registration result and the elastic registration result to obtain a registered image. Because registration is performed on the basis of the transformation relationship of the feature points inside the target organ, the rich information of the target organ contained in the first image can be registered into the second image.

Description

Method, apparatus, computer device and storage medium for registering tissue and organ
Technical Field
The present application relates to the field of medical image processing technology, and in particular, to a method, an apparatus, a computer device, and a storage medium for tissue organ registration.
Background
During thoracoabdominal interventional procedures, preoperative diagnosis is generally performed with contrast-enhanced CT scanning, which provides relevant anatomical information such as organs, blood vessels and tumors. Owing to clinical constraints, only plain (non-contrast) CT scanning is usually performed during the operation, and because of the environment and the patient's body state, the internal organs and tissues in the intraoperative plain CT image are shifted relative to the preoperative enhanced CT image. The anatomical information acquired before the operation therefore cannot be used directly during the operation, which brings great difficulty and challenge to the interventional procedure. Preoperative-to-intraoperative registration is consequently required, so that the anatomical information obtained before the operation is registered to the intraoperative image and the difficulty and challenge of the interventional procedure are reduced.
Conventional methods for registering the preoperative enhanced CT image to the intraoperative plain CT image extract features from the preoperative enhanced CT image and the intraoperative plain CT image, and register the features of the enhanced image to those of the plain-scan image.
However, such registration methods require the two sets of data to be registered to carry symmetrical information, whereas the preoperative enhanced CT image contains a large amount of contrast-highlighted information that the intraoperative plain CT image lacks. As a result, the registration of detailed information inside the organ, such as blood vessels and tumors, is poor and cannot meet the high-accuracy requirement of actual interventional procedures.
Disclosure of Invention
In view of the above, there is a need to provide a method, an apparatus, a computer device and a storage medium for registering tissue organs, which can improve the accuracy of registration.
In a first aspect, a method of tissue organ registration, the method comprising:
acquiring a first processing image of the first image and a second processing image of the second image; the first image and the second image comprise target organs of the same object at different periods;
rigidly registering a target organ in the first processing image and a target organ in the second processing image to obtain a rigid registration result;
elastically registering the target organ in the rigid registration result and the target organ in the second processing image to obtain an elastic registration result;
and processing the first image according to the rigid registration result and the elastic registration result to obtain a registered image.
In one embodiment, the processing the first image according to the rigid registration result and the elastic registration result to obtain a registered image includes:
constructing a mechanical model of the target organ based on the target organ in the rigid registration result;
solving the mechanical model by taking the difference between the elastic registration result and the rigid registration result as a boundary condition to obtain a deformation field of each point in the target organ;
and processing the first image according to the rigid transformation matrix in the rigid registration result and the deformation field of each point in the target organ to obtain a registered image.
In one embodiment, the processing the first image according to the rigid transformation matrix in the rigid registration result and the deformation fields of the points inside the target organ to obtain a registered image includes:
carrying out translation and/or rotation transformation on the first image based on the rigid transformation matrix to obtain a transformed first image;
and translating and/or rotating each point in the target organ in the transformed first image based on the deformation field of each point in the target organ to obtain the registered image.
In one embodiment, the method further comprises:
carrying out interpolation processing on the deformation fields of all points in the target organ to obtain the deformation fields of all pixel points in the target organ;
the translating, based on the deformation field of each point inside the target organ, each point inside the target organ in the transformed first image to obtain the registered image includes:
and translating each pixel point inside the target organ in the transformed first image based on the deformation field of each pixel point inside the target organ to obtain the registered image.
In one embodiment, the acquiring a first processed image of the first image and a second processed image of the second image includes:
acquiring the first image and the second image;
extracting a first mask image of a target organ in the first image and a second mask image of the target organ in the second image;
and performing surface meshing on the target organ in the first mask image to obtain the first processed image, and performing surface meshing on the target organ in the second mask image to obtain the second processed image.
In one embodiment, the first image is an enhanced image, the second image is a scout image, and before the acquiring the first processed image of the first image and the second processed image of the second image, the method further comprises:
acquiring a third image; the third image is an enhanced type image;
and carrying out image transformation on the third image to obtain the second image.
In one embodiment, the image transforming the third image to obtain the second image includes:
removing highlight information from the third image to obtain a second image;
or inputting the third image into a preset image transformation network to obtain the second image.
In a second aspect, a method of registration verification of a tissue organ, the method comprising:
extracting a target organ and an internal detail mask from a third image to obtain a third mask image corresponding to the third image; the third image is an enhanced type image;
verifying the registered image according to the third mask image; the registered image is an image obtained by registering a first image and a second image according to the method of the first aspect; the second image is an image obtained by performing image transformation on the third image.
In one embodiment, the verifying the registered image according to the third mask image includes:
extracting a target organ and an internal detail mask from the registered image to obtain a fourth mask image corresponding to the registered image;
calculating the number of repeated pixel points in the region where the target organ is located in the third mask image and the region where the target organ is located in the fourth mask image;
obtaining a verification value according to the number of the repeated pixel points, the number of the pixel points in the region where the target organ is located in the third mask image and the number of the pixel points in the region where the target organ is located in the fourth mask image;
if the verification value is larger than a preset threshold value, the registered image is determined to be verified to be passed, and if the verification value is not larger than the preset threshold value, the registered image is determined to be verified to be failed.
In a third aspect, a registration apparatus for a tissue organ, the apparatus comprising:
an acquisition module for acquiring a first processed image of the first image and a second processed image of the second image; the first image and the second image comprise target organs of the same object at different periods;
a rigid registration module for rigidly registering a target organ in the first processed image and a target organ in the second processed image to obtain a rigid registration result;
an elastic registration module for elastically registering the target organ in the rigid registration result and the target organ in the second processed image to obtain an elastic registration result;
and a processing module for processing the first image according to the rigid registration result and the elastic registration result to obtain a registered image.
In a fourth aspect, a registration verification apparatus for a tissue organ, the apparatus comprising:
the segmentation module is used for extracting a target organ and an internal detail mask from a third image to obtain a third mask image corresponding to the third image; the third image is an enhanced type image;
the verification module is used for verifying the registered image according to the third mask image; the registered image is an image obtained by registering the first image and the second image according to the device of the third aspect.
In a fifth aspect, a computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the methods of the first and second aspects.
In a sixth aspect, a computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the methods of the first and second aspects.
With the method, apparatus, computer device and storage medium for tissue organ registration, a first processed image of a first image and a second processed image of a second image are acquired; the target organ in the first processed image is rigidly registered with the target organ in the second processed image to obtain a rigid registration result; the target organ in the rigid registration result is elastically registered with the target organ in the second processed image to obtain an elastic registration result; and the first image is processed according to the rigid registration result and the elastic registration result to obtain a registered image. The method registers images of different types: rigid registration yields the approximate registration position of the target organ to be registered, elastic registration yields the registration positions of its surface feature points, and from these two results the transformation relationship of each feature point inside the target organ between the first image and the second image can be computed. Registering the first image on the basis of this transformation relationship registers the rich information of the target organ contained in the first image into the second image. Because registration is driven by the transformation of the feature points inside the target organ rather than by pixel information, the inaccuracy of conventional pixel-based registration under asymmetric pixel information between the enhanced image and the plain-scan image is avoided, so the registration result obtained by the registration method provided in the present application is more accurate and highly robust.
Drawings
FIG. 1 is a diagram of an application environment of a method for tissue organ registration in one embodiment;
FIG. 2 is a schematic flow chart diagram of a method for tissue organ registration according to one embodiment;
FIG. 3 is a schematic flow chart of one implementation of S104 in the embodiment of FIG. 2;
FIG. 4 is a schematic flow chart of one implementation of S203 in the embodiment of FIG. 3;
FIG. 5 is a schematic flow chart of one implementation of S203 in the embodiment of FIG. 3;
FIG. 6 is a schematic flow chart diagram illustrating a method for tissue organ registration in one embodiment;
FIG. 7 is a schematic flow chart illustrating one implementation of S401 in the embodiment of FIG. 6;
FIG. 8 is a schematic flow chart diagram illustrating a method for verifying registration of a tissue organ according to one embodiment;
FIG. 9 is a schematic flow chart illustrating one implementation of S602 in the embodiment of FIG. 8;
FIG. 10 is a schematic flow chart diagram illustrating a method for verifying registration of a tissue organ according to one embodiment;
FIG. 11 is a block diagram of an embodiment of a registration apparatus for tissue organs;
FIG. 12 is a block diagram of an embodiment of a device for registering tissue organs;
FIG. 13 is a block diagram of an embodiment of a registration apparatus for tissue organs;
FIG. 14 is a block diagram of an embodiment of a device for registering tissue organs;
FIG. 15 is a block diagram of an embodiment of a device for registering tissue organs;
FIG. 16 is a block diagram of an embodiment of a device for registering tissue organs;
FIG. 17 is a block diagram of an embodiment of a device for verifying registration of a tissue organ;
FIG. 18 is a block diagram of an apparatus for verifying registration of a tissue organ according to an embodiment;
FIG. 19 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In practical applications, when a thoracoabdominal interventional procedure is to be performed, preoperative diagnosis generally uses contrast-enhanced CT scanning to obtain a preoperative enhanced image, which provides clear anatomical information about organs, blood vessels, tumors and the like and allows blood vessels and tumors to be segmented accurately. Because of clinical constraints, only plain CT scanning is usually performed during the operation, so the intraoperative plain CT image lacks the vascular and tumor anatomy required by the interventional procedure. Blood vessels are important tissues to be avoided when planning the operative path of an interventional procedure, while the tumor is the target of the intervention and is the tissue and lesion that must be identified for operative path planning and intraoperative treatment guidance. Meanwhile, owing to the environment, the patient's body state and respiratory motion, the internal organs and tissues in the intraoperative plain CT image are shifted relative to the preoperative enhanced CT image, so the rich information obtained before the operation cannot be used directly during the operation, which brings great difficulty and challenge to the interventional procedure. Preoperative-to-intraoperative registration is therefore required to register the rich preoperative information to the intraoperative image and reduce the difficulty and challenge of the procedure. Conventional voxel-based or feature-based registration requires the two sets of registered data to carry symmetrical information; however, the preoperative image of the interventional procedure contains a large amount of contrast-highlighted information that the intraoperative plain-scan image lacks, so during registration the detailed information inside the organ, such as blood vessels and tumors, is registered poorly and cannot meet the high-accuracy requirement of actual interventional procedures. To solve these technical problems, the present application provides a registration method for a tissue organ which, based on a biomechanical model and surface elastic registration, realistically simulates the deformation of both the surface and the interior of the organ during registration, uses the simulated deformation to accurately register the detailed information inside the organ, such as blood vessels and tumors, and thereby meets the high-accuracy requirement of actual interventional procedures. The following embodiments describe the registration method provided in the present application in detail.
The registration method for a tissue organ provided by the present application can be applied to the application environment shown in fig. 1, in which the terminal 102 and the server 104 are connected through wireless or wired communication. The terminal 102 is configured to acquire two sets of image data to be registered, that is, image data containing the target organ at different periods, and the server 104 is configured to acquire the two sets of image data to be registered from the terminal 102 and process them, so as to complete the registration of the target organ in the two sets of image data. The terminal 102 may be any type of image scanning device, such as a Computed Tomography (CT) device or a Magnetic Resonance (MR) device; the server 104 may be, but is not limited to, various personal computers, laptops, smartphones, tablets and portable wearable devices, and may also be implemented by an independent server or a server cluster composed of a plurality of servers.
Those skilled in the art will appreciate that the application environment shown in fig. 1 is only a block diagram of a portion of the structure associated with the present application and does not constitute a limitation on the application environment to which the present application is applied; a particular application environment may include more or fewer components than shown in the drawing, combine certain components, or have a different arrangement of components.
In one embodiment, as shown in fig. 2, a method for registering a tissue organ is provided, which is described by taking the application of the method to the server in fig. 1 as an example, and includes the following steps:
s101, a first processed image of the first image and a second processed image of the second image are obtained.
For example, the first image and the second image are different types of image data of an organ acquired for the same patient at different times; the first image may be enhanced image data and the second image may be plain-scan image data. The target organ may be any type of tissue organ, such as the lung, liver or spleen, which is not limited herein. Specifically, the first and second images are regional or slice images corresponding to the target organ/tissue at different times and may be images of any modality (e.g., CT, MR, ultrasound, etc.) scanned at different times. The first processed image of the first image may be a mesh image obtained by meshing the tissue organ in the first image, or a mask image obtained by performing mask extraction on the first image.
In this embodiment, a CT device may be used to perform an enhanced scan and a plain scan on the same object in advance to obtain the corresponding enhanced image data and plain-scan image data, and the server obtains these data, that is, the first image and the second image, by interfacing with the CT device. For example, the server may obtain enhanced image data of the thoracoabdominal region of patient A before the operation as the first image, and obtain plain-scan image data of the thoracoabdominal region of patient A during the operation as the second image. Alternatively, the server may obtain the image data of the target organ of the same object at different times by other means, such as directly downloading the first image and the second image corresponding to the same object from the Internet. After acquiring the first image and the second image, the server may process them in the same manner, for example by surface reconstruction and mesh division, to obtain the corresponding mesh images, namely the first processed image corresponding to the first image and the second processed image corresponding to the second image; optionally, the server may instead perform mask extraction on the first image and the second image respectively to obtain the corresponding mask images as the first processed image and the second processed image.
S102, carrying out rigid registration on the target organ in the first processing image and the target organ in the second processing image to obtain a rigid registration result.
Wherein the rigid registration result comprises a rigid transformation matrix and the image after rigid registration. The rigid transformation matrix represents the moving distance of each point on the target organ when the position of the target organ in the first processed image is moved to the position of the target organ in the second processed image in the rigid registration process. The rigidly registered image is the image of the target organ in the first processed image after horizontal or vertical movement or deflection according to the position of the target organ in the second processed image, or the image of the target organ in the second processed image after horizontal or vertical movement or deflection according to the position of the target organ in the first processed image.
In this embodiment, after the server acquires the first processed image and the second processed image, rigid registration may be performed between the target organ in the first processed image and the target organ in the second processed image to obtain a rigid registration result comprising the rigid transformation matrix and the rigidly registered image. Specifically, during rigid registration the server may translate and deflect the target organ in the first processed image in different directions, taking the position of the target organ in the second processed image as the reference, so that the position of the target organ in the first processed image is aligned with that in the second processed image. Optionally, the server may instead take the position of the target organ in the first processed image as the reference and translate and deflect the target organ in the second processed image so that the two positions are aligned.
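For illustration, a minimal sketch of how the rigid registration of the two surface point sets could be carried out is given below, using a nearest-neighbour, Kabsch-style iterative alignment in Python (NumPy/SciPy). The function name, the use of simple closest-point correspondences and the fixed iteration count are assumptions made for the sketch, not the specific algorithm of this application.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_register(src_pts, dst_pts, iters=20):
    """Align src_pts (Nx3 surface points of the first processed image) to
    dst_pts (Mx3 surface points of the second processed image).
    Returns the accumulated 4x4 rigid transformation matrix and the moved points."""
    T = np.eye(4)
    src = src_pts.astype(float).copy()
    tree = cKDTree(dst_pts)
    for _ in range(iters):
        _, idx = tree.query(src)                 # closest-point correspondences
        matched = dst_pts[idx]
        c_src, c_dst = src.mean(axis=0), matched.mean(axis=0)
        H = (src - c_src).T @ (matched - c_dst)  # cross-covariance of centred points
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = c_dst - R @ c_src
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T, src
```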
S103, elastically registering the target organ in the rigid registration result and the target organ in the second processing image to obtain an elastic registration result.
After the server obtains the rigid registration result, the rigidly registered image can be obtained from it, and the target organ in the rigidly registered image and the target organ in the second processed image can be further elastically registered to obtain an elastic registration result comprising the elastically registered image. Specifically, during elastic registration the server may translate the feature points on the surface of the target organ in the rigidly registered image and in the second processed image in the front-back or depth direction, so that the positions of the surface feature points of the target organ in the rigidly registered image coincide with the positions of the surface feature points of the target organ in the second processed image.
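As an illustrative sketch only (the variable names and the nearest-neighbour-plus-local-averaging scheme are assumptions, not the elastic registration algorithm of this application), the surface elastic registration step could be approximated as follows:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_elastic_register(src_surface, dst_surface, smooth_k=10):
    """Estimate a displacement for each rigidly registered surface vertex towards
    the target surface and smooth it over the k nearest neighbours.
    Returns the displaced vertices and the per-vertex surface displacements."""
    _, idx = cKDTree(dst_surface).query(src_surface)
    disp = dst_surface[idx] - src_surface            # raw closest-point displacement
    _, nbrs = cKDTree(src_surface).query(src_surface, k=smooth_k)
    disp = disp[nbrs].mean(axis=1)                   # local averaging as a crude regulariser
    return src_surface + disp, disp
```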
And S104, processing the first image according to the rigid registration result and the elastic registration result to obtain a registered image.
After the server obtains the rigid registration result and the elastic registration result, the rigid registration result contains the positional transformation of the target organ from the first image to the second image, and the elastic registration result contains the transformation of the surface feature points of the target organ from the first image to the second image. By analyzing how the target organ changes successively through the rigid registration result and the elastic registration result, the transformation relationship of each feature point inside the target organ from the first image to the second image can be obtained. Each feature point of the target organ in the first image is then transformed on the basis of this relationship, and the transformed image is determined as the registered image, thereby registering the rich information of the first image into the second image.
In the registration method for a tissue organ, a first processed image of the first image and a second processed image of the second image are acquired; the target organ in the first processed image is rigidly registered with the target organ in the second processed image to obtain a rigid registration result; the target organ in the rigid registration result is elastically registered with the target organ in the second processed image to obtain an elastic registration result; and the first image is processed according to the rigid registration result and the elastic registration result to obtain a registered image. Rigid registration yields the approximate registration position of the target organ, elastic registration yields the registration positions of its surface feature points, and from these the transformation relationship of each feature point inside the target organ between the first image and the second image can be computed. Registering the first image on the basis of this transformation relationship registers the rich information of the target organ contained in the first image into the second image, and, because registration is driven by the transformation of the internal feature points rather than by pixel information, the inaccuracy of conventional pixel-based registration under asymmetric pixel information between the enhanced image and the plain-scan image is avoided; the registration result is therefore more accurate and highly robust.
In an embodiment, an implementation manner of the foregoing S104 is provided, and as shown in fig. 3, the foregoing S104 "processing the first image according to the rigid registration result and the elastic registration result to obtain a registered image" includes:
s201, constructing a mechanical model of the target organ based on the target organ in the rigid registration result.
After the server obtains the rigid registration result, the approximate contour, volume and similar data of the target organ in the first image can be obtained, and based on these data the computer device can construct the mechanical model of the target organ in simulation software according to a corresponding mechanical model construction algorithm.
S202, solving the mechanical model by taking the difference between the elastic registration result and the rigid registration result as a boundary condition to obtain the deformation field of each point in the target organ.
After the server obtains the rigid registration result and the elastic registration result, because the rigid registration result reflects the approximate positional matching of the target organ as a whole while the elastic registration result reflects the positional matching of the points on the surface of the target organ, the difference between the elastic registration result and the rigid registration result reflects the displacement of each point on the surface of the target organ. Taking this difference as the boundary condition of the mechanical model and solving the model, for example by the finite element method, yields the deformation field of each point inside the target organ.
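Assuming the global stiffness matrix of the organ mesh has already been assembled by an external finite element tool (the assembly itself is outside this sketch), solving the mechanical model with the surface displacements prescribed as boundary conditions could look like the following sketch; all names are illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_interior_displacements(K, surface_dofs, surface_disp, n_dofs):
    """Solve K u = 0 with the surface displacements prescribed as Dirichlet
    boundary conditions.  K is the assembled global stiffness matrix of the
    organ mesh; surface_disp holds, per constrained degree of freedom, the
    difference between the elastic and rigid registration results."""
    K = sp.csr_matrix(K)
    free = np.setdiff1d(np.arange(n_dofs), surface_dofs)
    u = np.zeros(n_dofs)
    u[surface_dofs] = surface_disp
    # Partitioned system: K_ff u_f = -K_fs u_s
    rhs = -K[free][:, surface_dofs] @ surface_disp
    u[free] = spla.spsolve(K[free][:, free].tocsc(), rhs)
    return u        # displacement of every mesh node (dofs ordered as in K)
```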
And S203, processing the first image according to the rigid transformation matrix in the rigid registration result and the deformation field of each point in the target organ to obtain a registered image.
When the server obtains the rigid transformation matrix in the rigid registration result and the deformation fields of the points inside the target organ, the points on the target organ in the first image can be roughly registered according to the rigid transformation matrix to obtain a primary registration image, and then the internal points of the target organ in the image after primary registration are subjected to detail registration according to the deformation fields of the points inside the target organ to finally obtain the registered image.
In the method, the mechanical model of the target organ is solved with the difference between the elastic registration result and the rigid registration result as the boundary condition, so that each point of the target organ is accurately registered across different types of images. This provides a biomechanics-model-based registration approach: the mechanical model of the target organ is solved with finite elements to obtain the deformation field of each point inside the target organ, and registration is then performed on the basis of this deformation field, so that the registered image contains the rich feature information of the first image. The registration method provided in the present application can therefore be applied to the accurate registration of the preoperative enhanced image and the intraoperative plain-scan image, so that the intraoperative plain-scan image contains more of the highlighted information of the preoperative enhanced image. This alleviates the problem that operations guided by a conventional plain-scan image, which contains little highlighted information, have a high failure rate; accordingly, when the registration method provided in the present application is applied to preoperative and intraoperative image registration, the success rate of the operation can be improved.
Further, as shown in fig. 4, the step of S203 processing the first image according to the rigid transformation matrix in the rigid registration result and the deformation fields of the points inside the target organ to obtain a registered image includes:
s301, performing translation and/or rotation transformation on the first image based on the rigid transformation matrix to obtain a transformed first image.
Specifically, when the server coarsely aligns the target organ in the first image according to the rigid transformation matrix, translation and/or rotation transformation may be performed on each point on the target organ in the first image to obtain the transformed first image.
S302, based on the deformation field of each point in the target organ, each point in the target organ in the transformed first image is translated and/or rotated to obtain a registered image.
Specifically, when the server performs fine registration on each point inside the target organ in the first image according to the deformation field of each point inside the target organ, the server may perform translation and/or rotation transformation on each point inside the target organ in the first image to obtain a registered image.
In one embodiment, as shown in fig. 5, the method described in the above embodiment of fig. 4 further includes the steps of:
s303, carrying out interpolation processing on the deformation fields of all points in the target organ to obtain the deformation fields of all pixel points in the target organ.
When the server finely registers each point inside the target organ in the first image according to the deformation field of each point inside the target organ, in order to register the first image more accurately, the server may also perform interpolation processing on the deformation field of each point inside the target organ to obtain the deformation field of each pixel point inside the target organ.
Correspondingly, when executing the step S302, the server specifically executes: and based on the deformation field of each pixel point in the target organ, translating each pixel point in the target organ in the transformed first image to obtain a registered image.
Specifically, when the server performs fine registration of each point inside the target organ in the first image according to the deformation field of each pixel point inside the target organ, each pixel point inside the target organ in the first image may be translated and/or rotated to obtain the registered image. Because the pixel points inside the target organ are denser than the nodes of the established mechanical model of the target organ, the deformation field of the model nodes is interpolated so that its density matches that of the pixel points inside the target organ; the interpolated deformation field then reflects the change of each feature point inside the target organ more accurately. The registration method provided in this embodiment can therefore obtain a more accurate registration result, and the registered image contains richer highlighted information.
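A hedged sketch of the interpolation and resampling steps is shown below: the node deformation field is interpolated to every voxel and the (already rigidly transformed) volume is resampled by backward warping. The helper names, the linear interpolation over the whole grid and the warping direction are simplifying assumptions; in practice the interpolation would typically be restricted to the organ region.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import map_coordinates

def apply_deformation(volume, node_coords, node_disp):
    """Interpolate the mesh-node deformation field (node_coords, node_disp: Nx3)
    to every voxel of the already rigidly transformed volume and resample it
    with linear interpolation."""
    grid = np.indices(volume.shape).reshape(3, -1).T.astype(float)   # (Nvox, 3) voxel coords
    dense = np.stack(
        [griddata(node_coords, node_disp[:, k], grid, method="linear", fill_value=0.0)
         for k in range(3)],
        axis=-1,
    )                                                                # per-voxel deformation field
    warped = map_coordinates(volume, (grid + dense).T, order=1, mode="nearest")
    return warped.reshape(volume.shape)
```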
Optionally, as shown in fig. 6, before the server obtains the first processed image of the first image and the second processed image of the second image, the following steps are further performed:
s401, a first image and a second image are obtained.
The first image may be enhanced image data, and the second image may be plain-scan image data. Alternatively, the first image and the second image may each be other types of image data, as long as they contain the same scan object.
In this embodiment, the same object may be subjected in advance to an enhanced scan and a plain scan with a CT device to obtain the corresponding enhanced image data and plain-scan image data, and the server obtains these data, that is, the first image and the second image, by interfacing with the CT device. For example, the server may obtain enhanced image data of the thoracoabdominal region of patient A before the operation as the first image, and obtain plain-scan image data of the thoracoabdominal region of patient A during the operation as the second image. Alternatively, the server may obtain the image data of the target organ of the same subject at different times in other ways.
S402, extracting a first mask image of the target organ in the first image, and extracting a second mask image of the target organ in the second image.
In this embodiment, when the server acquires the first image and the second image, a preset surface reconstruction method may be further adopted to extract a Mask image (Mask) of the target organ in the first image to obtain the first Mask image, and extract a Mask image (Mask) of the target organ in the second image to obtain the second Mask image.
S403, performing surface meshing on the target organ in the first mask image to obtain a first processed image, and performing surface meshing on the target organ in the second mask image to obtain a second processed image.
In order to subsequently establish a mechanical model for the target organ in the first image and to perform rigid registration and elastic registration for the first image, after the server acquires the first mask image corresponding to the first image and the second mask image corresponding to the second image, the first mask image needs to be further meshed: the surface of the target organ in the first mask image is divided into a mesh to obtain the data of each feature point on the surface of the target organ. Likewise, the second mask image is meshed, and the surface of the target organ in the second mask image is divided into a mesh to obtain the data of each feature point on its surface. The mechanical model of the target organ can then be constructed based on the data of the surface feature points, and rigid registration and elastic registration of the surface points of the target organ in the first mask image and the target organ in the second mask image can be realized.
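Purely as an illustration of the surface meshing step (the marching-cubes approach and the function names are assumptions, not necessarily the surface reconstruction method used in this application), a binary organ mask could be turned into a triangulated surface as follows:

```python
import numpy as np
from skimage import measure

def mask_to_surface_mesh(mask, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangulated surface of the target organ from a binary mask.
    The vertices and faces can then be fed to the surface registration steps
    and to a tetrahedral mesh generator for the mechanical model."""
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=spacing)
    return verts, faces, normals
```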
Optionally, an implementation manner of acquiring the second image is provided, as shown in fig. 7, the implementation manner includes:
s501, acquiring a third image; the third image is an enhanced type image.
Based on the foregoing description of S101, the server may acquire a first image and a second image of the same object at different times, the first image being an enhanced image and the second image being a plain-scan image. Correspondingly, the server may also acquire a first image and a third image of the same object at different times, the third image also being an enhanced image but acquired at a different time from the first image; for example, the first image is a contrast-enhanced CT image of a patient before the operation and the third image is a contrast-enhanced CT image of the same patient during the operation.
And S502, carrying out image transformation on the third image to obtain a second image.
After the server acquires the enhanced third image, the third image needs to be further converted into a plain-scan type image, i.e., the second image.
In this embodiment, three ways of obtaining the second image, a plain-scan type image, from the enhanced third image are provided. The first way is: removing the highlighted information from the third image to obtain the second image. The second way is: inputting the third image into a preset image transformation network to obtain the second image. The third way is: performing image fusion of different phases on the third image to obtain the second image.
In the first way, the server may segment the details inside the target organ, such as blood vessels and tumors, from the enhanced image (the third image), determine from the segmented mask image (Mask) the highlighted positions of these internal details in the original enhanced image, uniformly assign to the highlighted positions the pixel values of a non-contrast region within a certain range around them, and thereby remove the highlighted information from the enhanced image, generating a plain-scan image, i.e., the second image, from the enhanced image.
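A minimal sketch of this first way is given below; it assumes the vessel/tumor mask has already been segmented and that a single median fill of the surrounding non-contrast tissue is acceptable, both of which are simplifying assumptions for illustration only:

```python
import numpy as np
from scipy import ndimage

def remove_contrast_highlights(enhanced, detail_mask, ring_iters=3):
    """Replace contrast-highlighted voxels (segmented vessel/tumor mask) with a
    typical intensity of the surrounding non-contrast tissue, approximating a
    plain-scan image."""
    detail_mask = detail_mask.astype(bool)
    ring = ndimage.binary_dilation(detail_mask, iterations=ring_iters) & ~detail_mask
    plain = enhanced.copy()
    plain[detail_mask] = np.median(enhanced[ring])   # uniform fill from the surrounding region
    return plain
```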
In the second way, the server may remove the highlighted angiographic information in the third image with a preset algorithm and generate a simulated plain-scan image. The preset algorithm may be a generative adversarial network trained on a large number of plain-scan images; after the third image is fed into the trained network, it outputs the plain-scan image corresponding to the third image, i.e., the second image.
In the third way, the computer device may acquire images of different phases in advance and then fuse them to obtain a fused image, i.e., the second image.
The method described in the above embodiment provides a way of generating a plain-scan image from an enhanced image. Because intraoperative plain-scan images of the same scanning object are often difficult to obtain in experiments, the registration may first be performed on the plain-scan image converted from the enhanced image, and the registered image may then be verified against the enhanced image before conversion, in order to verify whether the registration method provided by the present application is feasible and accurate.
In practical applications, after the server registers the first image and the second image by using the registration method described in the embodiments of fig. 1 to 7, the method described in the embodiments of fig. 8 to 9 may further be used to verify whether the registration method is feasible and whether its result is accurate. On this basis, the present application further provides a registration verification method for a tissue organ which, as shown in fig. 8, includes:
s601, extracting the target organ and the internal detail mask from the third image to obtain a third mask image corresponding to the third image.
Wherein the third image is an enhanced type image. The internal details may be detailed tissues inside the object to be detected in the third image; for example, if the object to be detected is the lung, the internal details may be pulmonary arteries, lesion sites and the like. In this process, the server first acquires the third image and extracts the target organ and the internal detail mask from it by using a corresponding mask processing algorithm, so as to obtain the third mask image corresponding to the third image, which is then used to verify the registration method.
And S602, verifying the registered image according to the third mask image.
The registered image is an image obtained by registering the first image and the second image according to the registration method described in the embodiments of fig. 1 to 7. The second image is an image obtained by performing image transformation on the third image, and the specific transformation method may refer to the method described in the foregoing embodiment of fig. 7, which is not described herein again.
In this embodiment, after the server acquires the registered image, the registered image may be further verified by using the third mask image corresponding to the third image. Since the registered image is obtained by registering the first image and the second image, and the second image is obtained by image-transforming the third image, extracting the third mask image from the third image and verifying the registered image against it can effectively and accurately verify whether the registration method described in the embodiments of fig. 1 to 7 is feasible and accurate.
Further, step S602, as shown in fig. 9, includes:
and S701, extracting a target organ and an internal detail mask from the registered image to obtain a fourth mask image corresponding to the registered image.
In this embodiment, the server may extract the target organ and the internal detail mask in the registered image by using a corresponding mask image processing algorithm, so as to obtain a fourth mask image.
S702, calculating the number of repeated pixel points in the region where the target organ is located in the third mask image and the region where the target organ is located in the fourth mask image.
After the server acquires the third mask image and the fourth mask image, it can begin to verify the fourth mask image. Specifically, the server may compare the positions of the pixel points in the region where the target organ is located in the third mask image with the positions of the pixel points in the region where the target organ is located in the fourth mask image, determine the pixel points at coincident positions as repeated pixel points, and thereby determine the number of repeated pixel points in the two regions.
S703, obtaining a verification value according to the number of repeated pixels, the number of pixels in the region where the target organ is located in the third mask image, and the number of pixels in the region where the target organ is located in the fourth mask image, executing step S704 if the verification value is greater than a preset threshold, and executing step S705 if the verification value is not greater than the preset threshold.
And S704, determining that the registered image passes verification.
And S705, determining that the registered image fails to be verified.
Further, the server may determine the number of repeated pixel points in the region where the target organ is located in the third mask image and in the region where the target organ is located in the fourth mask image, the number of pixel points in the region where the target organ is located in the third mask image, and the number of pixel points in the region where the target organ is located in the fourth mask image, and may optionally obtain the verification value using the following relational expression (1):
(Relational expression (1): the verification value Y computed from Nc, Na and Nb.)
In the above formula, Nc represents the number of repeated pixel points in the region where the target organ is located in the third mask image and the region where the target organ is located in the fourth mask image; Na represents the number of pixel points in the region where the target organ is located in the third mask image; Nb represents the number of pixel points in the region where the target organ is located in the fourth mask image; and Y represents the verification value.
The preset threshold is used to measure the degree of matching between the third mask image and the fourth mask image and is determined by the server in advance according to the actual matching requirement. After the server obtains the verification value, it may be used to verify the registered image; specifically, the verification value is compared with the preset threshold. If the verification value is greater than the preset threshold, the number of repeated pixel points in the region where the target organ is located in the third mask image and in the region where the target organ is located in the fourth mask image is large, that is, the degree of matching between the third mask image and the registered image is high, which indicates that the registration method described in the embodiments of fig. 1 to 7 is feasible and its result is accurate; in this case the registered image is determined to have passed the verification. If the verification value is not greater than the preset threshold, the number of repeated pixel points in the two regions is small, that is, the degree of matching between the third mask image and the registered image is low, which indicates that the registration method described in the embodiments of fig. 1 to 7 is not feasible, or that an error in some step of the registration process has made the registered image inaccurate; in this case the registered image is determined to have failed the verification. The method then returns to the steps of the registration method described in the embodiments of fig. 1 to 7, checks the steps in which errors may have occurred and adjusts or corrects them, registers the first image and the second image again based on the adjusted or corrected steps, and verifies the result again with the method described in the embodiments of fig. 8 to 9, until the verification is passed.
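For illustration, a sketch of the verification computation is given below; it assumes that relational expression (1) takes the Dice-style form Y = 2·Nc/(Na + Nb) and that the preset threshold is 0.9, both of which are assumptions rather than values fixed by this application:

```python
import numpy as np

def verification_value(mask_ref, mask_reg):
    """Overlap-based verification value between the organ regions of the third
    (reference) and fourth (registered) mask images."""
    a, b = mask_ref.astype(bool), mask_reg.astype(bool)
    n_c = np.count_nonzero(a & b)        # repeated (coincident) pixel points
    n_a = np.count_nonzero(a)
    n_b = np.count_nonzero(b)
    return 2.0 * n_c / (n_a + n_b)       # assumed Dice-style expression (1)

def passes_verification(mask_ref, mask_reg, threshold=0.9):
    """Pass/fail decision against the preset threshold (0.9 is an assumed value)."""
    return verification_value(mask_ref, mask_reg) > threshold
```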
The registration verification method for a tissue organ provided in the embodiments of fig. 8 to 9 can verify the registration result after the registration is completed, in order to verify the feasibility of the registration method and the accuracy of the registration result, so that a reliable and accurate registration method is available when the registration method described in the embodiments of fig. 1 to 7 is later applied in clinical medicine, for example to the registration of the preoperative enhanced image and the intraoperative plain-scan image, thereby improving the success rate of the operation.
In summary, as shown in fig. 10, the present application further provides a method for verifying registration of a tissue organ, the method comprising:
s801, acquiring a third image; the third image is an enhanced type image.
And S802, performing image transformation on the third image to obtain a second image.
S803, acquiring a first image; the first image is an enhanced image, and the first image and the third image are images of the same object at different times.
S804, extracting a first mask image of the target organ in the first image.
S805, a second mask image of the target organ in the second image is extracted.
S806, performing surface gridding division on the target organ in the first mask image to obtain a first grid image.
S807, performing surface gridding division on the target organ in the second mask image to obtain a second grid image.
And S808, carrying out rigid registration on the target organ in the first grid image and the target organ in the second grid image to obtain a rigid registration result.
And S809, elastically registering the target organ in the rigid registration result and the target organ in the second grid image to obtain an elastic registration result.
And S810, constructing a mechanical model of the target organ based on the target organ in the rigid registration result.
S811, solving the mechanical model by taking the difference between the elastic registration result and the rigid registration result as a boundary condition to obtain the deformation field of each point in the target organ.
S812, performing interpolation processing on the deformation fields of all points in the target organ to obtain the deformation fields of all pixel points in the target organ.
And S813, performing translation and/or rotation transformation on the first image based on the rigid transformation matrix to obtain a transformed first image.
S814, based on the deformation field of each pixel point in the target organ, each pixel point in the target organ in the transformed first image is translated to obtain a registered image.
And S815, extracting a target organ and an internal detail mask from the registered image to obtain a fourth mask image corresponding to the registered image.
And S816, extracting the target organ and the internal detail mask from the third image to obtain a third mask image corresponding to the third image.
And S817, calculating the number of repeated pixel points in the region where the target organ is located in the third mask image and the region where the target organ is located in the fourth mask image.
S818, obtaining a verification value according to the number of repeated pixel points, the number of pixel points in the region where the target organ is located in the third mask image and the number of pixel points in the region where the target organ is located in the fourth mask image; if the verification value is greater than a preset threshold, determining that the registered image passes the verification; if the verification value is not greater than the preset threshold, determining that the registered image fails the verification, returning to step S801, and registering the first image and the second image again until the registered image passes the verification.
The above steps have been described in the foregoing embodiments; for details, refer to the foregoing description, which is not repeated herein. In summary, the present application provides a registration verification method for a tissue organ, that is, a method for registering and then verifying different types of images: the registration method or the registration result is verified, and the registration is re-run and adjusted based on the verification result, so that the registration method provided by the present application is more reliable in practical application and further improves the robustness of the system.
It should be understood that, although the various steps in the flowcharts of fig. 2-10 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-10 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 11, there is provided a registration apparatus of a tissue organ, including:
an obtaining module 11, configured to obtain a first processed image of the first image and a second processed image of the second image; the first image and the second image contain target organs of the same object at different times.
And a rigid registration module 12, configured to perform rigid registration on the target organ in the first processed image and the target organ in the second processed image to obtain a rigid registration result.
And an elastic registration module 13, configured to perform elastic registration on the target organ in the rigid registration result and the target organ in the second processed image to obtain an elastic registration result.
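As an illustration of what the rigid registration module 12 could compute, the sketch below estimates a rigid transformation (rotation plus translation) between two surface point sets with known one-to-one correspondence, using the standard SVD-based Kabsch solution. The patent does not specify the rigid registration algorithm (a mesh-based method such as ICP would iterate this step with closest-point matching), so the correspondence assumption and the function names are purely illustrative.

```python
import numpy as np

def rigid_kabsch(src_pts: np.ndarray, dst_pts: np.ndarray):
    """Estimate R (3x3) and t (3,) such that R @ src + t approximates dst.

    src_pts, dst_pts: (N, 3) corresponding surface points of the target organ
    in the first and second processed (mesh) images.
    """
    src_c = src_pts.mean(axis=0)
    dst_c = dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def to_homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack R and t into a 4x4 rigid transformation matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```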
In one embodiment, as shown in fig. 12, the elastic registration module 13 includes:
a constructing unit 131, configured to construct a mechanical model of the target organ based on the target organ in the rigid registration result;
a solving unit 132, configured to solve the mechanical model by using a difference between the elastic registration result and the rigid registration result as a boundary condition, so as to obtain a deformation field of each point inside the target organ;
and the processing unit 133 is configured to process the first image according to the rigid transformation matrix in the rigid registration result and the deformation fields of the points inside the target organ, so as to obtain a registered image.
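The constructing unit 131 and solving unit 132 build a mechanical model of the organ and drive it with the surface displacements (the difference between the elastic and rigid registration results) as boundary conditions. A full finite-element solve is beyond a short example; as a deliberately simplified stand-in, the sketch below propagates prescribed surface displacements into the organ interior by iteratively solving a Laplace (harmonic) equation per displacement component on the voxel grid. The iteration count, the Jacobi scheme, and the function names are assumptions and are not taken from the patent.

```python
import numpy as np

def propagate_displacement(mask: np.ndarray,
                           surface_disp: np.ndarray,
                           surface_idx: np.ndarray,
                           n_iter: int = 200) -> np.ndarray:
    """Fill the organ interior with a smooth displacement field.

    mask:         (Z, Y, X) boolean organ mask.
    surface_disp: (N, 3) displacements prescribed at surface voxels
                  (elastic registration result minus rigid registration result).
    surface_idx:  (N, 3) integer voxel indices of those surface points.
    Returns a (Z, Y, X, 3) displacement field; a rough surrogate for solving
    the mechanical model of the target organ.
    """
    field = np.zeros(mask.shape + (3,), dtype=float)
    zi, yi, xi = surface_idx.T
    for _ in range(n_iter):
        # Jacobi iteration: each voxel takes the mean of its 6 neighbours.
        avg = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
               np.roll(field, 1, 1) + np.roll(field, -1, 1) +
               np.roll(field, 1, 2) + np.roll(field, -1, 2)) / 6.0
        field = np.where(mask[..., None], avg, 0.0)  # keep the field inside the organ
        field[zi, yi, xi] = surface_disp             # re-impose the boundary conditions
    return field
```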
In one embodiment, as shown in fig. 13, the processing unit 133 includes:
a first transformation subunit 1331, configured to perform translation and/or rotation transformation on the first image based on the rigid transformation matrix to obtain a transformed first image;
a second transformation subunit 1332, configured to perform translation and/or rotation on each point inside the target organ in the transformed first image based on the deformation field of each point inside the target organ, so as to obtain the registered image.
In an embodiment, as shown in fig. 14, the processing unit 133 further includes:
an interpolation processing unit 1333, configured to perform interpolation processing on the deformation fields of the points inside the target organ to obtain the deformation field of each pixel point inside the target organ;
correspondingly, the second transforming subunit 1332 is specifically configured to perform translation on each pixel point inside the target organ in the transformed first image based on the deformation field of each pixel point inside the target organ, so as to obtain the registered image.
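One possible realisation of the processing unit 133 (the first transformation subunit 1331, interpolation processing unit 1333, and second transformation subunit 1332 taken together) is sketched below with scipy: output voxel coordinates are mapped through the rigid 4x4 matrix and then shifted by the dense per-voxel displacement field before the first image is resampled. The use of scipy.ndimage.map_coordinates, the pull-back sampling convention, and the sign convention of the displacement are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image: np.ndarray, rigid_T: np.ndarray, disp_field: np.ndarray) -> np.ndarray:
    """Resample `image` through a rigid transform followed by a dense displacement.

    image:      (Z, Y, X) first image.
    rigid_T:    4x4 rigid transformation matrix in voxel (z, y, x) coordinates.
    disp_field: (Z, Y, X, 3) per-voxel displacement of the target organ
                (zero outside the organ), assumed to point from each output
                voxel towards the location to sample in the rigidly moved image.
    """
    zz, yy, xx = np.meshgrid(*(np.arange(s) for s in image.shape), indexing="ij")
    ones = np.ones_like(zz, dtype=float)
    coords = np.stack([zz, yy, xx, ones], axis=-1).astype(float)  # homogeneous coordinates

    # Pull-back sampling: map output voxels through the inverse rigid transform,
    # then apply the dense displacement field.
    sample = coords @ np.linalg.inv(rigid_T).T
    sample = sample[..., :3] + disp_field

    return map_coordinates(
        image,
        [sample[..., 0], sample[..., 1], sample[..., 2]],
        order=1, mode="nearest")
```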
In one embodiment, as shown in fig. 15, the obtaining module 11 includes:
an acquisition unit 111 configured to acquire the first image and the second image;
an extracting unit 112, configured to extract a first mask image of the target organ in the first image and a second mask image of the target organ in the second image;
a dividing unit 113, configured to perform surface mesh division on the target organ in the first mask image to obtain the first processed image, and perform surface mesh division on the target organ in the second mask image to obtain the second processed image.
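The dividing unit 113 performs surface meshing of the organ masks. One common way to obtain such a surface mesh is the marching-cubes algorithm; the sketch below uses scikit-image for illustration, although the patent does not mandate any particular meshing algorithm or library.

```python
import numpy as np
from skimage import measure

def mask_to_surface_mesh(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangulated surface from a binary organ mask.

    mask:    (Z, Y, X) boolean or 0/1 array, True inside the target organ.
    spacing: voxel spacing, so that vertices come out in physical units.
    Returns (verts, faces): (N, 3) vertex coordinates and (M, 3) triangle indices,
    usable as the first or second processed (grid) image.
    """
    verts, faces, _normals, _values = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces
```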
In an embodiment, as shown in fig. 16, the above-mentioned registration apparatus for tissue and organ further includes:
an acquire initial image module 14 for acquiring a third image; the third image is an enhanced type image;
and the transformation module 15 is configured to perform image transformation on the third image to obtain the second image.
In an embodiment, the transformation module is specifically configured to perform highlight information removal processing on the third image to obtain the second image; or inputting the third image into a preset image transformation network to obtain the second image.
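As a very rough illustration of the first option mentioned above (highlight information removal), the sketch below clips voxels above an intensity cutoff back toward the surrounding soft-tissue level; the cutoff value and the fill strategy are assumptions that would need tuning for real contrast-enhanced CT data, and the alternative of a preset image transformation network is not sketched here.

```python
import numpy as np

def remove_highlight(enhanced: np.ndarray, cutoff: float = 150.0) -> np.ndarray:
    """Suppress contrast-enhanced (highlight) voxels in a CT-like volume.

    enhanced: (Z, Y, X) enhanced type image (e.g. Hounsfield units).
    cutoff:   intensity above which a voxel is treated as contrast-enhanced
              (illustrative value only).
    """
    plain = enhanced.astype(float)
    bright = plain > cutoff
    # Replace bright voxels with the median of the remaining voxels, a crude
    # stand-in for estimating the underlying (non-enhanced) tissue intensity.
    if bright.any() and (~bright).any():
        plain[bright] = np.median(plain[~bright])
    return plain
```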
In one embodiment, as shown in fig. 17, there is provided a registration verification apparatus of a tissue organ, including:
the segmentation module 21 is configured to extract a target organ and an internal detail mask from a third image to obtain a third mask image corresponding to the third image; the third image is an enhanced type image;
a verification module 22, configured to verify the registered image according to the third mask image; the registered image is an image obtained by registering the first image and the second image according to the registration device as shown in fig. 11-17.
In one embodiment, as shown in fig. 18, the verification module 22 includes:
a segmentation unit 221, configured to extract a target organ and an internal detail mask from the registered image, so as to obtain a fourth mask image corresponding to the registered image;
a calculating unit 222, configured to calculate the number of repeated pixel points in the region where the target organ in the third mask image is located and the region where the target organ in the fourth mask image is located;
an obtaining unit 223, configured to obtain a verification value according to the number of repeated pixel points, the number of pixel points in the region where the target organ is located in the third mask image, and the number of pixel points in the region where the target organ is located in the fourth mask image;
a verification unit 224, configured to determine that the registered image passes verification if the verification value is greater than a preset threshold, and determine that the registered image fails verification if the verification value is not greater than the preset threshold.
For the specific definition of the registration apparatus for a tissue organ, reference may be made to the above definition of the registration method for a tissue organ, which is not repeated here. For the specific definition of the registration verification apparatus for a tissue organ, reference may be made to the above definition of the registration verification method for a tissue organ, which is likewise not repeated here. The modules in the above registration apparatus and registration verification apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the module.
The registration method of a tissue organ and the registration verification method of a tissue organ provided by the present application may be applied to the computer device shown in fig. 19; the computer device may be a server or a terminal, and its internal structure may be as shown in fig. 19. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the registration method of a tissue organ and the registration verification method of a tissue organ. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, a trackball or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 19 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution is applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a first processing image of the first image and a second processing image of the second image; the first image and the second image comprise target organs of the same object at different periods;
rigidly registering a target organ in the first processing image and a target organ in the second processing image to obtain a rigid registration result;
elastically registering the target organ in the rigid registration result and the target organ in the second processing image to obtain an elastic registration result;
and processing the first image according to the rigid registration result and the elastic registration result to obtain a registered image.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a first processing image of the first image and a second processing image of the second image; the first image and the second image comprise target organs of the same object at different periods;
rigidly registering a target organ in the first processing image and a target organ in the second processing image to obtain a rigid registration result;
elastically registering the target organ in the rigid registration result and the target organ in the second processing image to obtain an elastic registration result;
and processing the first image according to the rigid registration result and the elastic registration result to obtain a registered image.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as there is no contradiction in a combination of technical features, it should be considered to be within the scope of this specification.
The above embodiments only express several implementations of the present application, and their description is specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of tissue organ registration, the method comprising:
acquiring a first processing image of the first image and a second processing image of the second image; the first image and the second image comprise target organs of the same object at different periods;
rigidly registering a target organ in the first processing image and a target organ in the second processing image to obtain a rigid registration result;
elastically registering the target organ in the rigid registration result and the target organ in the second processing image to obtain an elastic registration result;
and processing the first image according to the rigid registration result and the elastic registration result to obtain a registered image.
2. The method of claim 1, wherein the processing the first image according to the rigid registration result and the elastic registration result to obtain a registered image comprises:
constructing a mechanical model of the target organ based on the target organ in the rigid registration result;
solving the mechanical model by taking the difference between the elastic registration result and the rigid registration result as a boundary condition to obtain a deformation field of each point in the target organ;
and processing the first image according to the rigid transformation matrix in the rigid registration result and the deformation field of each point in the target organ to obtain a registered image.
3. The method according to claim 2, wherein the processing the first image according to the rigid transformation matrix in the rigid registration result and the deformation fields of the points inside the target organ to obtain a registered image comprises:
carrying out translation and/or rotation transformation on the first image based on the rigid transformation matrix to obtain a transformed first image;
and translating and/or rotating each point in the target organ in the transformed first image based on the deformation field of each point in the target organ to obtain the registered image.
4. The method of claim 3, further comprising:
carrying out interpolation processing on the deformation fields of all points in the target organ to obtain the deformation fields of all pixel points in the target organ;
the translating, based on the deformation field of each point inside the target organ, each point inside the target organ in the transformed first image to obtain the registered image includes:
and translating each pixel point inside the target organ in the transformed first image based on the deformation field of each pixel point inside the target organ to obtain the registered image.
5. The method of any of claims 1-4, wherein said obtaining a first processed image of the first image and a second processed image of the second image comprises:
acquiring the first image and the second image;
extracting a first mask image of a target organ in the first image and extracting a second mask image of the target organ in the second image;
and performing surface meshing on the target organ in the first mask image to obtain the first processed image, and performing surface meshing on the target organ in the second mask image to obtain the second processed image.
6. The method of claim 1, wherein the first image is an enhanced type image, the second image is a scout type image, and the method further comprises, prior to acquiring the first processed image of the first image and the second processed image of the second image:
acquiring a third image, wherein the third image is an enhanced image;
and carrying out image transformation on the third image to obtain the second image.
7. The method of claim 6, wherein the image transforming the third image to obtain the second image comprises:
removing highlight information from the third image to obtain a second image;
or inputting the third image into a preset image transformation network to obtain the second image.
8. A method of registration verification of a tissue organ, the method comprising:
extracting a target organ and an internal detail mask from a third image to obtain a third mask image corresponding to the third image; the third image is an enhanced type image;
verifying the registered image according to the third mask image; the registered image is an image obtained by registering a first image and a second image according to the method of any one of claims 1-7.
9. The method of claim 8, wherein verifying the registered image from the third mask image comprises:
extracting a target organ and an internal detail mask from the registered image to obtain a fourth mask image corresponding to the registered image;
calculating the number of repeated pixel points in the region where the target organ is located in the third mask image and the region where the target organ is located in the fourth mask image;
obtaining a verification value according to the number of the repeated pixel points, the number of the pixel points in the region where the target organ is located in the third mask image and the number of the pixel points in the region where the target organ is located in the fourth mask image;
if the verification value is larger than a preset threshold value, determining that the registered image passes verification, and if the verification value is not larger than the preset threshold value, determining that the registered image fails verification.
10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
CN202110728649.5A 2021-04-23 2021-06-29 Method, apparatus, computer device and storage medium for registering tissue and organ Pending CN113506331A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202110728649.5A CN113506331A (en) 2021-06-29 2021-06-29 Method, apparatus, computer device and storage medium for registering tissue and organ
EP22791147.6A EP4318393A1 (en) 2021-04-23 2022-04-22 Surgical path processing system, method, apparatus and device, and storage medium
PCT/CN2022/088607 WO2022223042A1 (en) 2021-04-23 2022-04-22 Surgical path processing system, method, apparatus and device, and storage medium
US18/492,743 US20240050172A1 (en) 2021-04-23 2023-10-23 Surgical pathway processing system, method, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110728649.5A CN113506331A (en) 2021-06-29 2021-06-29 Method, apparatus, computer device and storage medium for registering tissue and organ

Publications (1)

Publication Number Publication Date
CN113506331A true CN113506331A (en) 2021-10-15

Family

ID=78009315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110728649.5A Pending CN113506331A (en) 2021-04-23 2021-06-29 Method, apparatus, computer device and storage medium for registering tissue and organ

Country Status (1)

Country Link
CN (1) CN113506331A (en)

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005038412A (en) * 2003-06-30 2005-02-10 Sony Corp Image verification device, image verification method, and program
US20070208234A1 (en) * 2004-04-13 2007-09-06 Bhandarkar Suchendra M Virtual Surgical System and Methods
US20120155734A1 (en) * 2009-08-07 2012-06-21 Ucl Business Plc Apparatus and method for registering two medical images
CN103700086A (en) * 2012-09-28 2014-04-02 西门子公司 Image registration verification method and system
US20140371911A1 (en) * 2013-06-17 2014-12-18 International Electronic Machines Corporation Pre-Screening for Robotic Work
US20170046833A1 (en) * 2015-08-10 2017-02-16 The Board Of Trustees Of The Leland Stanford Junior University 3D Reconstruction and Registration of Endoscopic Data
CN108135565A (en) * 2015-10-09 2018-06-08 因赛泰克有限公司 For being registrated the image system and method that simultaneously authentication image is registrated obtained using various image modes
CN105550993A (en) * 2016-01-18 2016-05-04 中国空间技术研究院 Multiple transform domain based super-resolution reconstruction method
CN105640583A (en) * 2016-03-31 2016-06-08 上海联影医疗科技有限公司 Angiography method
CN109859833A (en) * 2018-12-28 2019-06-07 北京理工大学 The appraisal procedure and device of ablative surgery therapeutic effect
CN110175958A (en) * 2019-04-24 2019-08-27 艾瑞迈迪科技石家庄有限公司 A kind of ablation interpretation of result method and system based on medical image
CN110223303A (en) * 2019-05-13 2019-09-10 清华大学 HE dyes organ pathological image dividing method, device
CN110517300A (en) * 2019-07-15 2019-11-29 温州医科大学附属眼视光医院 Elastic image registration algorithm based on partial structurtes operator
CN110473196A (en) * 2019-08-14 2019-11-19 中南大学 A kind of abdominal CT images target organ method for registering based on deep learning
CN110838104A (en) * 2019-10-30 2020-02-25 上海联影智能医疗科技有限公司 Multi-time point region of interest matching method, device and storage medium
CN110838140A (en) * 2019-11-27 2020-02-25 艾瑞迈迪科技石家庄有限公司 Ultrasound and nuclear magnetic image registration fusion method and device based on hybrid supervised learning
CN111062997A (en) * 2019-12-09 2020-04-24 上海联影医疗科技有限公司 Angiography imaging method, system, equipment and storage medium
CN111210431A (en) * 2019-12-27 2020-05-29 上海联影智能医疗科技有限公司 Blood vessel segmentation method, device, equipment and storage medium
CN111145160A (en) * 2019-12-28 2020-05-12 上海联影医疗科技有限公司 Method, device, server and medium for determining coronary artery branch where calcified area is located
CN112419377A (en) * 2020-11-20 2021-02-26 推想医疗科技股份有限公司 Method and device for determining registered image
CN112419378A (en) * 2020-11-20 2021-02-26 上海联影智能医疗科技有限公司 Medical image registration method, electronic device, and storage medium
CN112927274A (en) * 2021-02-02 2021-06-08 深圳蓝韵医学影像有限公司 Dual-energy subtraction image registration method, device and equipment and readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XINGANG LIU ET AL.: "A New Hybridized Rigid-Elastic Multiresolution Algorithm for Medical Image Registration", 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference *
CUI QIAOYU: "Medical Image Registration and Fusion Based on Multimodal Imaging", China Excellent Doctoral and Master's Dissertations Full-text Database (Master), Information Science and Technology Series *
YANG JIE ET AL.: "Medical Image Analysis, Three-Dimensional Reconstruction and Their Applications", 31 January 2015, Shanghai Jiao Tong University Press *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022223042A1 (en) * 2021-04-23 2022-10-27 武汉联影智融医疗科技有限公司 Surgical path processing system, method, apparatus and device, and storage medium
WO2024002221A1 (en) * 2022-06-30 2024-01-04 武汉联影智融医疗科技有限公司 Imaging-assisted method, system and apparatus for interventional operation, and storage medium
CN115908515A (en) * 2022-11-11 2023-04-04 北京百度网讯科技有限公司 Image registration method, and training method and device of image registration model
CN115908515B (en) * 2022-11-11 2024-02-13 北京百度网讯科技有限公司 Image registration method, training method and device of image registration model
CN116503453A (en) * 2023-06-21 2023-07-28 福建自贸试验区厦门片区Manteia数据科技有限公司 Image registration method, image registration device, computer-readable storage medium and electronic device
CN116503453B (en) * 2023-06-21 2023-09-26 福建自贸试验区厦门片区Manteia数据科技有限公司 Image registration method, image registration device, computer-readable storage medium and electronic device

Similar Documents

Publication Publication Date Title
CN113506331A (en) Method, apparatus, computer device and storage medium for registering tissue and organ
CN111161326B (en) System and method for unsupervised deep learning of deformable image registration
Schnabel et al. Validation of nonrigid image registration using finite-element methods: application to breast MR images
Haouchine et al. Image-guided simulation of heterogeneous tissue deformation for augmented reality during hepatic surgery
CN107123137B (en) Medical image processing method and equipment
Han et al. A nonlinear biomechanical model based registration method for aligning prone and supine MR breast images
US11382603B2 (en) System and methods for performing biomechanically driven image registration using ultrasound elastography
Zhang et al. 3-D reconstruction of the spine from biplanar radiographs based on contour matching using the hough transform
Haouchine et al. Monocular 3D reconstruction and augmentation of elastic surfaces with self-occlusion handling
CN110473226B (en) Training method of image processing network, computer device and readable storage medium
Shao et al. Augmented reality calibration using feature triangulation iteration-based registration for surgical navigation
Zhou et al. A real-time and registration-free framework for dynamic shape instantiation
CN109350059B (en) Combined steering engine and landmark engine for elbow auto-alignment
CN113989110A (en) Lung image registration method and device, computer equipment and storage medium
CN113129418B (en) Target surface reconstruction method, device, equipment and medium based on three-dimensional image
CN110473241B (en) Image registration method, storage medium and computer device
Sun et al. Design of the image-guided biopsy marking system for gastroscopy
JP6716228B2 (en) Medical image processing apparatus and medical image processing method
CN111566699A (en) Registration of static pre-procedural planning data to dynamic intra-procedural segmentation data
Zakkaroff et al. Patient-specific coronary blood supply territories for quantitative perfusion analysis
CN116368516A (en) Multi-modal clinical image alignment method and apparatus using joint synthesis, segmentation and registration
WO2020090445A1 (en) Region correction device, method, and program
JP7153261B2 (en) IMAGE PROCESSING DEVICE, OPERATING METHOD OF IMAGE PROCESSING DEVICE, AND IMAGE PROCESSING PROGRAM
Zeng et al. Low‐dose three‐dimensional reconstruction of the femur with unit free‐form deformation
Kitasaka et al. Lung area extraction from 3D chest X‐ray CT images using a shape model generated by a variable Bézier surface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination