CN117670960A - Image registration method, system and equipment based on lung CT-CBCT - Google Patents

Publication number: CN117670960A
Application number: CN202410138906.3A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Prior art keywords: image, tumor, images, target, region
Inventors: 袁鹏, 奚岩, 王峰, 李巍, 陈阳, 左雨杰
Assignees: Shanghai Yiying Information Technology Co ltd; Jiangsu Yiying Medical Equipment Co ltd (listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Shanghai Yiying Information Technology Co ltd and Jiangsu Yiying Medical Equipment Co ltd; priority to CN202410138906.3A.
Abstract

The application discloses an image registration method, system and equipment based on lung CT-CBCT. The method comprises the following steps: acquiring a plurality of first CT images and a plurality of second CT images; matching the respiratory phases of the first CT images and the second CT images using a similarity matching method; performing rigid registration of the lungs on a first target CT image and a second target CT image that are in the same respiratory phase; after rigid registration, segmenting the tumor region of the first target CT image and the ablation needle region of the second target CT image with a deep learning network model to obtain a tumor region image and an ablation needle region image, respectively; performing non-rigid registration on the segmented tumor region image and ablation needle region image; performing a subtraction operation between the non-rigidly registered CT image and the second target CT image; and mapping the segmented tumor region image and ablation needle region image onto the subtracted image, so as to observe whether the ablation needle region covers the tumor region.

Description

Image registration method, system and equipment based on lung CT-CBCT
Technical Field
The present application relates to the field of computer image processing technology, and in particular, to a method, system and apparatus for image registration based on lung CT-CBCT.
Background
CT is an acronym for Computed Tomography. It is a medical examination method that uses X-rays to scan parts of the body from multiple angles to obtain images of tissue structures. In conventional CT, the X-ray beam is fan-shaped; the patient lies supine while the CT machine performs axial scans to acquire images and reconstruct cross-sections. CBCT is an abbreviation for Cone-Beam CT. In CBCT, the X-ray beam is cone-shaped; the patient stands in the transillumination area while the CBCT machine rotates 360° around the scanned region to acquire images and perform a three-dimensional reconstruction. Compared with CT, CBCT has the advantages of a smaller radiation dose, faster imaging, and so on.
CT/CBCT is a mature tool in modern medicine. In lung tumor ablation scenarios, including bronchoscopic lung biopsy, a diagnostic CT scan is usually acquired before the tumor ablation operation and combined with multiple thick-slice scans during the operation, so as to determine the target position in real time. The advantage of this is that, compared with using multiple CT scans, the scheme of combining CT with CBCT greatly reduces the radiation dose while still keeping pulmonary nodules clearly visible. This, however, requires registration between the lung CT image and the CBCT image.
However, because the lungs are the core organ of the human respiratory system, they cannot remain completely stationary while multiple CT or CBCT images are taken. In addition, the difference in pixel gray level between the lungs and other organs (including subcutaneous tissue, the kidneys and the stomach) is relatively small, which makes registration difficult, so blurring and artifacts inevitably arise when registering the CT image with the CBCT image. This is especially true when the patient orientation in the CT image differs from that in the CBCT image: in this case, the lung deformation between the CT image and the CBCT image is larger and more irregular than when the patient has the same orientation in both images, and it is even more difficult to obtain a clear image of the lungs.
Disclosure of Invention
In order to solve the above technical problems, the application provides an image registration method, system and equipment based on lung CT-CBCT, which automatically register a CT image acquired before a lung tumor ablation operation with a CBCT image acquired during the operation through a series of automatic registration steps, so as to obtain a clear lung CT image with tumor and ablation needle boundaries.
Specifically, the technical scheme of the application is as follows:
in a first aspect, the present application discloses a pulmonary CT-CBCT based image registration method, comprising:
Acquiring a plurality of first CT images and a plurality of second CT images; the first CT image is a CT image before lung tumor ablation operation; the second CT image is a CBCT image in lung tumor ablation operation;
matching respiratory phases of the plurality of first CT images and the plurality of second CT images by using a similarity matching method, so as to obtain a first target CT image and a second target CT image that are in the same respiratory phase;
performing rigid registration of the lungs on the first target CT image and the second target CT image;
after rigid registration, segmenting the tumor region of the first target CT image by adopting a deep learning network model to obtain a tumor region image; and segmenting the ablation needle region of the second target CT image by adopting the deep learning network model to obtain an ablation needle region image;
performing non-rigid registration on the segmented tumor region image and the ablation needle region image, so as to obtain a third CT image with the tumor and the ablation needle;
after non-rigid registration, performing subtraction operation on the third CT image and the second target CT image to obtain a fourth CT image with pure blood vessels; the tumor region image and the ablation needle region image are mapped onto the fourth CT image to see if the ablation needle region covers the tumor region.
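The claimed steps above can be sketched, purely for illustration, as a single pipeline. All helper names (`match_phase`, `rigid`, `segment_tumor`, `segment_needle`, `nonrigid`) are hypothetical stand-ins injected as parameters, not names from the patent, and numpy arrays stand in for real CT/CBCT volumes:

```python
import numpy as np

def register_ct_cbct(pre_op_cts, intra_op_cbcts,
                     match_phase, rigid, segment_tumor,
                     segment_needle, nonrigid):
    """Illustrative pipeline of the first-aspect method (hypothetical helpers)."""
    ct, cbct = match_phase(pre_op_cts, intra_op_cbcts)  # same respiratory phase
    ct, cbct = rigid(ct, cbct)                          # rigid lung alignment
    tumor = segment_tumor(ct)                           # tumor region mask
    needle = segment_needle(cbct)                       # ablation needle mask
    third = nonrigid(tumor, needle, ct)                 # non-rigid registration
    fourth = third - cbct                               # subtraction -> vessels
    fifth = np.where(tumor, fourth.max(), fourth)       # map tumor boundary
    sixth = np.where(needle, fourth.max(), fifth)       # map needle boundary
    covered = bool(np.all(~tumor | needle))             # needle covers tumor?
    return sixth, covered
```

A caller supplies the concrete matching, registration and segmentation routines; the sketch only fixes the order of operations claimed by the method.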
In some embodiments, the matching of respiratory phases of the plurality of first CT images and the plurality of second CT images using the similarity matching method further includes:
and respectively performing pre-segmentation processing on the first CT images and the second CT images to obtain a lung region serving as an interested region.
In some embodiments, the pre-segmentation processing is performed on a plurality of first/second CT images, respectively, including:
initially screening a plurality of first/second CT images with lung expansion degree meeting target expansion requirements during respiratory movement;
analyzing the first/second CT images by using a Kmeans clustering algorithm, and identifying lung areas and other tissue areas;
a threshold range is set and a threshold segmentation algorithm is used to segment the lung region.
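The three pre-segmentation steps above can be illustrated with a minimal numpy sketch (illustrative only; the patent does not specify an implementation). An intensity-only Kmeans with k=2 separates the dark lung field from surrounding tissue on a synthetic slice, and a Hounsfield-style threshold range then refines the mask; the HU values and threshold range are assumptions for the toy example:

```python
import numpy as np

def kmeans_intensity(img, k=2, iters=20, seed=0):
    """Cluster pixel intensities into k groups (minimal Kmeans sketch)."""
    rng = np.random.default_rng(seed)
    flat = img.ravel().astype(float)
    centers = rng.choice(flat, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = flat[labels == c].mean()
    return labels.reshape(img.shape), centers

# Synthetic slice: dark lung field (~-800 HU) inside brighter tissue (~40 HU)
img = np.full((64, 64), 40.0)
img[16:48, 16:48] = -800.0
labels, centers = kmeans_intensity(img, k=2)
lung_cluster = int(np.argmin(centers))          # lung = darker cluster
lung_mask = labels == lung_cluster
# Refine with a fixed HU threshold range, as in the thresholding step
lung_mask &= (img > -1000) & (img < -300)
print(int(lung_mask.sum()))  # -> 1024
```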
In some embodiments, the rigidly registering the first target CT image with the second target CT image for the lung comprises:
by using a rigid registration method in the SimpleITK library, the first target CT image and the second target CT image in the same respiratory phase are subjected to simple rigid transformations such as translation, rotation and scaling, so that the two images are aligned in the same space, thereby realizing rigid registration.
In some embodiments, the deep learning network model is a Teacher-Student model based on knowledge distillation technology.
In some embodiments, the non-rigid registration of the segmented tumor region image with the ablation needle region image comprises:
the tumor region image is non-rigidly registered with the ablation needle region image using a diffeomorphic Demons deformation registration algorithm.
In some embodiments, said mapping said tumor region image and said ablation needle region image onto said fourth CT image comprises:
mapping the tumor area image onto the fourth CT image to obtain a fifth CT image with a tumor boundary;
and mapping the ablation needle area image to the fifth CT image to obtain a sixth CT image with a tumor boundary and an ablation needle boundary.
In a second aspect, the present application further discloses a pulmonary CT-CBCT based image registration system, which uses the pulmonary CT-CBCT based image registration method described in any one of the above embodiments to implement image registration, the system includes:
the image acquisition module acquires a plurality of first CT images and a plurality of second CT images; the first CT image is a CT image before lung tumor ablation operation; the second CT image is a CBCT image in lung tumor ablation operation;
The similarity matching module is used for matching breathing phases of the plurality of first CT images and the plurality of second CT images by using a similarity matching method; matching to obtain a first target CT image and a second CT target image with the same respiratory phase;
a first registration module for rigid registration of the lungs with the first target CT image and the second target CT image;
the region segmentation module is used for segmenting the tumor region of the first target CT image by adopting a deep learning network model after rigid registration to obtain a tumor region image, and for segmenting the ablation needle region of the second target CT image by adopting the deep learning network model to obtain an ablation needle region image;
the second registration module is used for carrying out non-rigid registration on the segmented tumor area image and the ablation needle area image to obtain a third CT image with tumor and ablation needle;
the subtraction mapping module is used for performing subtraction operation on the third CT image and the second target CT image after non-rigid registration to obtain a fourth CT image with pure blood vessels; and is further configured to map the tumor region image and the ablation needle region image onto the fourth CT image to see if the ablation needle region covers the tumor region.
In some embodiments, the pulmonary CT-CBCT-based image registration system further comprises:
and the preprocessing module is used for respectively carrying out pre-segmentation processing on the first CT images and the second CT images to obtain a lung region serving as a region of interest.
In a third aspect, the present application also discloses an apparatus comprising a pulmonary CT-CBCT based image registration system as described in any of the embodiments above.
Compared with the prior art, the application has at least one of the following beneficial effects:
1. All steps of the method are carried out in the image domain: a series of automatic registration operations are performed on the pre-operative CT image and the intra-operative CBCT image, so that a clear lung CT image with tumor and ablation needle boundaries is obtained. The registration accuracy is high, and the result can also be used to observe whether the ablation needle region covers the tumor region.
2. The registration operation in the method comprises the operations of image segmentation, respiratory matching, rigid registration, non-rigid registration, subtraction, image mapping and the like, which are automatically executed based on an artificial intelligence algorithm, so that the manual participation is reduced, the labor cost is saved, the operation is faster and the accuracy is higher.
Drawings
The above features, technical features, advantages and implementation of the present application will be further described in the following description of preferred embodiments in a clear and easily understood manner with reference to the accompanying drawings.
FIG. 1 is a schematic flow chart of one embodiment of a method provided herein;
FIG. 2 is a schematic flow chart of another method embodiment provided herein;
FIG. 3 is a schematic representation of the effect of a substantially segmented image of a lung region in an embodiment of the present application;
FIG. 4 is a schematic CT image of the lung prior to an ablative procedure and a CBCT image of the lung during a tumor ablative procedure in an embodiment of the present application;
FIG. 5 is a schematic, rigid registered image effect in an embodiment of the present application;
FIG. 6 is a non-rigid registered image effect illustrated in an embodiment of the present application;
FIG. 7 is a schematic diagram of the diffeomorphic deformation process of respiratory subtraction in an embodiment of the present application;
FIG. 8 is a region map image effect illustrated in an embodiment of the present application;
FIG. 9 is a schematic diagram of an embodiment of a system provided herein;
fig. 10 is a schematic flow chart of the operation of another system embodiment provided in the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
For simplicity of the drawing, only the parts relevant to the invention are schematically shown in each drawing, and they do not represent the actual structure thereof as a product. Additionally, in order to simplify the drawing for ease of understanding, components having the same structure or function in some of the drawings are shown schematically with only one of them, or only one of them is labeled. Herein, "a" means not only "only this one" but also "more than one" case.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In this context, it should be noted that the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected, unless explicitly stated or limited otherwise; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art in a specific context.
In particular implementations, the terminal devices described in embodiments of the present application include, but are not limited to, other portable devices such as mobile phones, laptop computers, home teaching machines, or tablet computers having a touch-sensitive surface (e.g., a touch screen display and/or a touch pad). It should also be appreciated that in some embodiments, the terminal device is not a portable communication device, but rather a desktop computer having a touch-sensitive surface (e.g., a touch screen display and/or a touch pad).
In addition, in the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the following description will explain specific embodiments of the present application with reference to the accompanying drawings. It is obvious that the drawings in the following description are only examples of the present application, and that other drawings and other embodiments may be obtained from these drawings by those skilled in the art without undue effort.
Image registration plays a very important role in tumor ablation procedures, especially multi-modality ones. In this application, the purpose of registering the lung CT and CBCT images is to align the lung regions in the two images, in order to observe the outcome of the tumor ablation procedure and provide a reference during it. Global registration takes a lot of time and affects registration accuracy: in particular, the difference in pixel gray level between the lungs and other organs (including subcutaneous tissue, the kidneys and the stomach) is small, which disturbs the overall registration, and the optimization process may converge to local minima. Therefore, it is necessary to treat the lungs as the region of interest (ROI). Some researchers have attempted to register CT images acquired before and during tumor ablation procedures using the segmented lungs as a mask; however, most of their lung segmentation techniques are manual, time consuming and not accurate enough. Although related researchers have removed the need for manual segmentation by segmenting the lungs with a deep learning algorithm, the final image registration results are still not accurate enough.
Currently, pulmonary registration between multimodal images remains a challenge, especially when the orientation in the CT image is different from that in the CBCT image. In this case, the lung deformation between the CT image and the CBCT image is more irregular and larger than the deformation of the patient with the same direction in both images. In order to solve the problem, the application provides an image registration method based on lung CT-CBCT, which has high efficiency and accuracy and can be applied to lung tumor ablation operation.
Specifically, referring to fig. 1 of the specification, an embodiment of an image registration method based on lung CT-CBCT provided in the present application is applied to a lung tumor ablation scene, and includes the following steps:
s100, acquiring a plurality of first CT images and a plurality of second CT images; the first CT image is a CT image before lung tumor ablation operation; the second CT image is a CBCT image in a lung tumor ablation operation. Specifically, in this embodiment, the first CT image, the second CT image, and all subsequently generated images may be presented in the form of three-dimensional images.
S200, matching the respiratory phases of the plurality of first CT images and the plurality of second CT images by using a similarity matching method, and obtaining by matching a first target CT image and a second target CT image in the same respiratory phase. Specifically, a template matching method is used to select the pair with the largest similarity between the segmented pre-operative CT images (first target CT image) and the intra-operative images (second target CT image).
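As a concrete (hypothetical) illustration of similarity-based phase matching, normalized cross-correlation can score each pre-operative phase against the intra-operative image, and the phase with the highest score is taken as the match. The diaphragm-position toy images are assumptions, not the patent's data:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation: 1.0 for identical images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_phase(cbct, ct_phases):
    """Return the index of the pre-operative CT whose phase best matches the CBCT."""
    scores = [ncc(cbct, ct) for ct in ct_phases]
    return int(np.argmax(scores)), scores

def make_phase(size, diaphragm_row):
    """Toy coronal slice: tissue below a diaphragm line at the given row."""
    img = np.zeros((size, size))
    img[diaphragm_row:, :] = 1.0
    return img

phases = [make_phase(32, p) for p in (10, 14, 18, 22)]  # four respiratory phases
cbct = make_phase(32, 18)                               # intra-operative snapshot
idx, scores = match_phase(cbct, phases)
print(idx)  # -> 2
```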
S300, performing rigid registration of the lungs on the first target CT image and the second target CT image. Preferably, the rigid registration is realized by using a rigid registration method in the SimpleITK library to apply simple rigid transformations such as translation, rotation and scaling to the first target CT image and the second target CT image in the same respiratory phase, so that the two images are aligned in the same space.
Specifically, a simple 3D rigid registration is performed, using a rigid registration method from the SimpleITK library, on the respiratory-phase-matched pre-operative CT image (first target CT image) and intra-operative image (second target CT image), as the image preprocessing step of the respiratory subtraction technique. In this rigid registration step, the binary images of the lungs in the first/second target CT images are registered instead of their gray-level images, because binary image registration is faster and more accurate than registration based on gray-level information.
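Why binary lung masks simplify alignment can be shown with a numpy toy (in practice a SimpleITK rigid registration would be used; this sketch is only an illustration): for binary masks, the translational component of a rigid transform can be estimated directly from centres of mass.

```python
import numpy as np

def centre_of_mass(mask):
    """Mean coordinate of the foreground pixels of a binary mask."""
    return np.argwhere(mask).mean(axis=0)

def rigid_translate(moving, fixed):
    """Estimate the integer translation aligning two binary lung masks."""
    shift = np.round(centre_of_mass(fixed) - centre_of_mass(moving)).astype(int)
    return np.roll(moving, tuple(shift), axis=(0, 1)), shift

fixed = np.zeros((32, 32), bool)
fixed[8:20, 10:22] = True
moving = np.roll(fixed, (3, -2), axis=(0, 1))   # simulated misalignment
aligned, shift = rigid_translate(moving, fixed)
print(shift, np.array_equal(aligned, fixed))
```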
S400, after rigid registration, a Teacher-Student model based on knowledge distillation is adopted to segment the tumor region of the first target CT image to obtain a tumor region image, and the same deep learning network model is adopted to segment the ablation needle region of the second target CT image to obtain an ablation needle region image. Specifically, for the rigidly registered pre-operative CT image (first target CT image) and intra-operative image (second target CT image), a semi-supervised teacher-student model built on a deep convolutional neural network is used to segment lung tumors from CT scans, taking a 3D CT scan as input and outputting a 3D segmentation of the primary tumor. The parameters of the model need to be set according to its actual training situation.
S500, performing non-rigid registration on the segmented tumor region image and ablation needle region image using a diffeomorphic Demons deformation registration algorithm, so as to obtain a third CT image with the tumor and the ablation needle. Specifically, the deformation field generated by Demons-based diffeomorphic non-rigid registration is used to non-rigidly register the pre-operative CT image and the intra-operative CT image of the segmented tumor region.
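A minimal sketch of the Demons force underlying such a registration (illustrative only; a production implementation would use a diffeomorphic Demons filter such as SimpleITK's, which additionally smooths and exponentiates the accumulated field so the deformation stays invertible):

```python
import numpy as np

def demons_force(fixed, moving, eps=1e-9):
    """Classic Demons update: u = (F - M) * grad(M) / (|grad M|^2 + (F - M)^2)."""
    diff = fixed - moving
    gy, gx = np.gradient(moving)              # image gradients of the moving image
    denom = gy**2 + gx**2 + diff**2 + eps     # eps guards flat, matched regions
    return diff * gy / denom, diff * gx / denom

# Toy example: a bright blob shifted by one pixel between the two images
fixed = np.zeros((16, 16))
fixed[6:10, 6:10] = 1.0
moving = np.roll(fixed, 1, axis=1)
uy, ux = demons_force(fixed, moving)          # per-pixel displacement update
```

In an iterative scheme this update would be smoothed, composed with the running field, and applied to warp the moving image until the two images agree.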
S600, after non-rigid registration, performing subtraction operation on the third CT image and the second target CT image to obtain a fourth CT image with pure blood vessels; the tumor region image and the ablation needle region image are mapped onto the fourth CT image to see if the ablation needle region covers the tumor region.
Specifically, mapping the tumor area image onto the fourth CT image to obtain a fifth CT image with a tumor boundary; and mapping the ablation needle area image to the fifth CT image to obtain a sixth CT image with a tumor boundary and an ablation needle boundary. Preferably, respiratory subtraction is performed on the registered CT images to obtain a fourth CT image, namely a CT subtraction image of the pure pulmonary blood vessels, and then mapping of tumor/ablation needle region images is performed. A clear three-dimensional CT image can be generated for viewing the tumor region and the ablation needle region. Finally, experiments prove that the image registration results generated by the embodiment can help interventional radiologists to adjust the probe position in tumor ablation operation.
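The subtraction, mapping and coverage-checking steps can be sketched in numpy (toy volumes and mask positions are assumptions for illustration):

```python
import numpy as np

def ablation_coverage(tumor_mask, needle_mask):
    """Fraction of the tumor region covered by the ablation needle region."""
    n = int(tumor_mask.sum())
    return float((tumor_mask & needle_mask).sum()) / n if n else 1.0

# Toy stand-ins: `third` is the non-rigidly registered CT, `cbct` the intra-op image
third = np.full((8, 8), 100.0)
cbct = np.full((8, 8), 60.0)
fourth = third - cbct                         # subtraction image ("pure vessels")
tumor = np.zeros((8, 8), bool)
tumor[2:5, 2:5] = True
needle = np.zeros((8, 8), bool)
needle[1:6, 1:6] = True
fifth = np.where(tumor, fourth.max() + 1, fourth)           # fifth CT: tumor boundary
sixth = np.where(needle & ~tumor, fourth.max() + 2, fifth)  # sixth CT: needle boundary
print(ablation_coverage(tumor, needle))  # -> 1.0
```

A coverage of 1.0 means the ablation needle region fully encloses the tumor region, which is the condition the interventional radiologist wants to verify.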
In another embodiment of the image registration method based on lung CT-CBCT, in step S200, a similarity matching method is used to match respiratory phases of the plurality of first CT images and the plurality of second CT images, which further includes:
s110, respectively performing pre-segmentation processing on the first CT images and the second CT images to obtain a lung region serving as a region of interest. In this embodiment, before pre-segmentation, the first/second CT image is a full-slice digital pathology image WSI; after pre-segmentation, the first/second CT images are segmented into lung parenchyma images.
The method specifically comprises the following steps: s111, primarily screening a plurality of first/second CT images with lung expansion degree meeting target expansion requirements during respiratory movement. In other embodiments, the first/second CT images with the greatest degree of lung distension during respiratory motion are initially screened for easier viewing.
S112, analyzing the first/second CT images by using a Kmeans clustering algorithm, and identifying lung regions and other tissue regions.
S113, setting a threshold range, and segmenting the lung region by using a threshold segmentation algorithm. Specifically, the segmented lung is used as the region of interest, and a threshold segmentation algorithm is used to segment the lung parenchyma.
Another embodiment of the image registration method based on lung CT-CBCT provided in the present application, as shown in fig. 2 of the specification, includes the following steps:
s01, acquiring CT images before tumor ablation operation and CBCT image data in tumor ablation operation of a patient, and dividing tumors and normal tissues by using a Kmeans clustering algorithm on the CT images before and CBCT images in operation of a plurality of groups of lung tumor ablation operation. In particular, the lung is set as the region of interest, and the effect of the lung region identification is shown with reference to fig. 3 of the specification. Both three-dimensional CT and CBCT images can accurately identify the patient's lungs.
CT (Computed Tomography) images a cross-section of the human body by emitting a fan-shaped beam of X-rays and then reconstructing the data of each cross-section (slice) into an image. The CT X-ray tube and detector acquire data over one revolution (sometimes less than one revolution) and then reconstruct a two-dimensional image, i.e. the image of that slice. The acquisition is repeated through multi-slice scanning (whether spiral or axial), finally yielding a multi-slice CT image, i.e. a three-dimensional CT image.
CBCT (Cone-Beam Computed Tomography) is the first choice for upright three-dimensional imaging because of its fast imaging speed and low radiation dose.
Specifically, in CBCT a cone beam rotates one revolution, similar to a three-dimensional projection, and a three-dimensional image is reconstructed by an algorithm. CBCT can reconstruct three-dimensional image data, i.e. data of different slices, from a single rotation of the cone beam, without scanning multiple slice sections. Cone-beam X-ray scanning significantly improves the utilization of X-rays: all the raw data required for reconstruction can be acquired in one 360° rotation, without the axial or spiral multi-slice scanning of traditional CT.
Kmeans clustering algorithm: pixels with similar properties in the image are clustered into the same region or image block, and the clustering result is iteratively corrected until convergence, forming an image segmentation result. Cluster analysis is an unsupervised learning method that can discover association rules from the feature data of the objects under study, making it a powerful information processing method. Clustering is performed in the feature space of the image pixels, i.e. groups of pixels with similar features are found in that space. Because there is no training sample set, clustering is an unsupervised statistical method: the algorithm iteratively classifies the image and extracts the feature values of each class. In a sense, clustering is a self-training classification algorithm: the mean of each current class is computed, the pixels are reclassified according to these means (each pixel is assigned to the class with the nearest mean), and the steps are iterated on the newly formed classes. The algorithm has good stability and robustness and is widely applied; however, it is slow, and the segmentation carries little semantic information, so the segmentation effect alone is not ideal and the number of image blocks cannot be effectively controlled. Applied to CT image data, the Kmeans clustering algorithm can divide and label tumor and normal tissue regions well according to image feature information.
S02, performing lung parenchyma segmentation on multiple groups of pre-operative CT images and intra-operative CBCT images of lung tumor ablation operations by using threshold segmentation. Specifically, a pre-operative lung CT image (binary image) and an intra-operative lung CBCT image (binary image) are shown in figure 4 of the specification. During the tumor ablation operation an ablation needle is inserted, so the CBCT image acquired at that time differs to some extent from the pre-operative CT image.
Specifically, threshold segmentation is a traditional image segmentation method; because it is simple to implement, computationally cheap and stable, it is the most basic and most widely applied segmentation technique. Its basic principle is as follows: by setting different feature thresholds, the image pixels are divided into target and background classes with different gray-level ranges. It is particularly suitable for images in which the target and background occupy different gray-level ranges, and has been applied in many fields. The choice of threshold is the key technique in image thresholding: the optimal threshold must be obtained according to some function criterion, and its selection directly affects the soundness and effect of the segmentation. Because the threshold segmentation method is simple to compute, easy to implement, and achieves a good segmentation effect on images with strong contrast between object and background, it can be chosen for segmenting the lung parenchyma in tumor ablation procedures. The ablation needle in the image is likewise segmented by thresholding.
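One standard "function criterion" for obtaining the optimal threshold is Otsu's method, which maximizes the between-class variance of the two gray-level classes. A numpy sketch (illustrative; not a method claimed by the patent):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the cut value maximizing Otsu's between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 probability for a cut after each bin
    m = np.cumsum(p * centers)        # class-0 cumulative (unnormalized) mean
    mt = m[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    var_between[~np.isfinite(var_between)] = 0.0
    return edges[int(np.argmax(var_between)) + 1]

# Bimodal toy histogram: dark lung parenchyma vs. brighter surrounding tissue
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0, 5, 500), rng.normal(200, 5, 500)])
t = otsu_threshold(img)
```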
S03, matching the respiratory phases of the segmented lung parenchyma images from the pre-operative CT and the intra-operative CBCT by using a similarity matching method.
Specifically, because the patient's respiratory movement is variable, the amplitude and excursion of each breath differ, and organ motion affects the intra-operative CT image, the CT image acquired before the tumor ablation operation and the CT image acquired during the operation cannot be identical. For this reason, template matching is adopted to perform similarity matching between the respiratory phases of the intra-operative CT image and the pre-operative CT image.
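A minimal sketch of the phase-matching idea (Python/NumPy; the normalized cross-correlation score and the toy 8×8 phase images are illustrative assumptions — the text above only specifies "template matching"):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def match_phase(cbct_slice, ct_phases):
    """Return the index of the pre-operative phase most similar to the CBCT slice."""
    scores = [ncc(cbct_slice, p) for p in ct_phases]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
phases = [rng.normal(size=(8, 8)) for _ in range(4)]  # 4 pre-operative respiratory phases
query = phases[2] + 0.05 * rng.normal(size=(8, 8))    # noisy intra-operative copy of phase 2
print(match_phase(query, phases))  # → 2
```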
The pre-operative CT image and the intra-operative CT image of the same respiratory phase are registered by a rigid registration method in the SimpleITK library, performing simple rigid-transformation 3D image registration such as translation, rotation and scaling so as to align the CT images in space, as image preprocessing before respiratory subtraction.
S04, carrying out 3D rigid registration on the matched pre-operative CT image and intra-operative CBCT image of the same respiratory phase. The effect of the rigid registration can be seen in figure 5 of the drawings; the alignment produced by this step alone is not sharp, and blurring artifacts are still visible.
Specifically, some common rigid registration approaches available in the SimpleITK library include:
1. Mean Squared Error (MSE) minimization: registration is achieved by minimizing the sum of squared gray-value differences between the two images, i.e., a transformation is sought under which the gray values of corresponding points in a given region become similar.
2. Mutual Information: mutual information is another widely used metric, especially in medical imaging. It measures the amount of information shared between two images and can serve as the objective function to be optimized during registration.
3. Normalized Cross-Correlation (NCC): this technique seeks the appropriate mapping by maximizing the normalized correlation of gray values at corresponding locations in the source and target images.
4. Multi-Resolution Registration: in many cases the resolution of the initial images differs, or the accuracy requirements on the registration result differ. SimpleITK allows setting the resolutions and step sizes used at each stage of the registration.
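The MSE criterion in item 1 can be illustrated with a toy brute-force translation search (Python/NumPy sketch; SimpleITK's `ImageRegistrationMethod` would instead use a gradient-based optimizer over a full rigid transform — this only shows the principle of minimizing the gray-value MSE):

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def register_translation(fixed, moving, search=3):
    """Exhaustively search integer shifts of the moving image that
    minimize the MSE against the fixed image (toy optimizer)."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = mse(fixed, shifted)
            if err < best:
                best, best_shift = err, (dy, dx)
    return best_shift

fixed = np.zeros((16, 16)); fixed[5:9, 5:9] = 1.0
moving = np.roll(np.roll(fixed, 2, axis=0), -1, axis=1)  # fixed, shifted by (2, -1)
print(register_translation(fixed, moving))  # → (-2, 1), undoing the shift
```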
S05, segmenting the tumor on the registered pre-operative CT image and intra-operative CBCT image by using a teacher-student model.
Specifically, the main purpose of the teacher-student model architecture is to compress a deep learning model; it is a popular method in the field of model compression. Under deep learning, the trained network often has a complex structure in order to obtain better accuracy, but for online prediction tasks such a complex structure conflicts with the requirement of fast response, which gives rise to the need for model compression. Under this framework, the teacher corresponds to the original complex deep neural network, while the student is a lightweight network; the teacher therefore has higher prediction accuracy and guides the student, whose parameters have been simplified, toward the best achievable model effect.
A semi-supervised teacher-student model based on a deep convolutional neural network is used to segment lung tumors from CT scans: a 3D CT scan is taken as input, and a 3D segmentation of the primary tumor is output.
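The teacher-student idea can be illustrated by the soft-label (knowledge distillation) loss that lets the lightweight student imitate the teacher's output distribution (Python/NumPy sketch; the temperature T = 2 and the toy logits are illustrative assumptions — this disclosure does not specify the distillation loss):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions -- the soft-label term that trains the small student
    to imitate the large teacher."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))))

teacher = [4.0, 1.0, 0.5]
aligned = [4.0, 1.0, 0.5]  # student matches the teacher exactly: loss is 0
off = [0.5, 4.0, 1.0]      # student disagrees: loss is positive
print(distillation_loss(aligned, teacher) < distillation_loss(off, teacher))  # → True
```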
S06, performing diffeomorphic Demons-based non-rigid registration on the CT images of the segmented tumor. Specifically, segmenting the lung parenchyma removes the tumor features, and the final registration result is shown in fig. 6 of the specification, in which clear lung boundaries and tumor region boundaries are visible.
Specifically, the method replaces the deformation-field update of the Demons algorithm with the Lie group composition principle. In mathematics, a Lie group is a real or complex manifold with a group structure, and the correspondence between the Lie algebra and the Lie group is: φ = exp(v),
where φ is the Lie group element and v the Lie algebra element. The formula expresses that the exponential map lifts the Lie algebra to the Lie group and that the result is a smooth map, which guarantees that the finally obtained deformation field is diffeomorphic and that the topology of the image is preserved.
Besides the reference image F and the floating image M, the input of the diffeomorphic Demons deformation registration algorithm contains the exponential map of a velocity field v. If the exponential map of the velocity field were computed exactly at every optimization iteration, a great deal of time would be consumed, so a simple and fast approximation of the exponential map is needed to improve efficiency. For a general linear group, the exponential map is given by the matrix exponential; the exponential map of the velocity field is therefore computed quickly with the Scaling and Squaring (SS) method for the matrix exponential. The algorithm proceeds as follows:
(1) Choose an integer N such that max‖2^(−N) v‖ < ε, where ε is typically chosen as 0.5.
(2) Perform a first-order explicit integration at all pixel points: φ ← 2^(−N) v.
(3) Recursively compose φ with itself N times: φ ← φ ∘ φ.
The diffeomorphic Demons deformation registration algorithm is similar in framework to the original Demons deformation registration algorithm, and the registration process can be reduced to minimizing the following energy functional:
E(c, v) = (1/σ_i²)·Sim(F, M∘c) + (1/σ_x²)·dist(c, v)² + (1/σ_T²)·Reg(v),
where σ_i, σ_x and σ_T weight the image noise, the spatial uncertainty and the regularization respectively.
In the functional, c and v respectively represent the non-regularized and the regularized diffeomorphic transformation; introducing the intermediate variable c allows the diffeomorphic Demons algorithm to be regarded as the optimization of a suitable criterion. The term Sim(F, M∘c) is the similarity measure to be minimized; the term Reg(v) is the regularization term; and dist(c, v)² measures the distance error between the transformation before and after regularization.
The deformation-field update of the diffeomorphic Demons algorithm is derived from the velocity field v: an update is computed for the mapping from F toward M∘φ and for the mapping from M∘φ toward F, and the velocity-field update is taken as their average. The update formula is:
u = (I_F − I_{M∘φ})·∇I_{M∘φ} / (‖∇I_{M∘φ}‖² + (I_F − I_{M∘φ})²),
where I_F and I_{M∘φ} are the gray values of the reference image and of the warped floating image respectively, and ∇I_{M∘φ} denotes the gray-scale gradient of the warped floating image.
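A minimal numerical sketch of the update force above, on 1D arrays (Python/NumPy; illustrative only — the actual algorithm applies this per voxel in 3D):

```python
import numpy as np

def demons_force(F, M_warp, grad_M):
    """Per-point demons update u = (F - M)·∇M / (|∇M|² + (F - M)²).
    Pushes the warped floating image toward the reference along the
    intensity gradient; zero wherever the images already agree."""
    diff = F - M_warp
    denom = grad_M ** 2 + diff ** 2
    return np.where(denom > 1e-12, diff * grad_M / np.maximum(denom, 1e-12), 0.0)

F = np.array([0.0, 0.0, 1.0, 1.0])  # reference: edge between points 1 and 2
M = np.array([0.0, 1.0, 1.0, 1.0])  # floating: the same edge, one point early
u = demons_force(F, M, np.gradient(M))
print(np.nonzero(u)[0].tolist())    # force is nonzero only where the images disagree → [1]
```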
Instead of using only the gray-level feature of the traditional Demons algorithm, the diffeomorphic Demons algorithm used here fuses the gray-level, spatial and spectral features of the image into a global spectral feature for the computation. The basic steps of the diffeomorphic Demons registration algorithm are as follows:
(1) Input the reference image F, the floating image M and the initial velocity field v; the exponential map of the velocity field is computed with the scaling-and-squaring (SS) method for the matrix exponential described above.
(2) To reduce the spectral-decomposition time, a sampling formula is first used to estimate the Laplacian matrix, and then the velocity-field update u is calculated for the mapping from F toward M∘φ and the mapping from M∘φ toward F.
(3) The average value of the two velocity-field updates is calculated.
(4) A Gaussian convolution is applied to the update, yielding the regularization effect of the fluid mapping model generated by the velocity-field update.
(5) The velocity field is updated by means of the exponential map and the composition of velocity fields; to first order, the composition is approximated as v ← v + u.
(6) A Gaussian convolution is applied to the updated velocity field v so that the finally obtained velocity field is smooth.
(7) Convergence of v is checked: if it has not converged, return to step (2) and recompute; if it has converged, the optimal transformation is output.
The final output φ = exp(v) is the transformation function from M to F, i.e., the deformation field.
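Steps (1)–(3) of the scaling-and-squaring exponential can be sketched on a 1D stationary velocity field as follows (Python/NumPy; illustrative assumptions: linear interpolation is used for the composition φ ∘ φ, and values outside the grid are clamped):

```python
import numpy as np

def exp_velocity_field(v, eps=0.5):
    """Scaling-and-squaring exponential of a stationary 1D velocity field:
    scale v down by 2**N so it is everywhere smaller than eps, take one
    first-order step, then compose the small deformation with itself N times."""
    # (1) choose N so that the scaled field is smaller than eps everywhere
    N = max(0, int(np.ceil(np.log2(max(np.abs(v).max() / eps, 1e-12)))))
    phi = v / (2 ** N)                 # (2) first-order integration: phi ≈ 2^-N v
    x = np.arange(len(v), dtype=float)
    for _ in range(N):                 # (3) recursive composition phi ∘ phi, N times
        phi = phi + np.interp(x + phi, x, phi)
    return phi

v = np.full(16, 3.0)                   # constant velocity field
phi = exp_velocity_field(v)
print(np.allclose(phi, 3.0, atol=1e-6))  # exp of a constant field is the same constant shift → True
```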
Because the shapes of the tumor and of the normal tissue do not change much during the tumor ablation operation, efficient registration can be achieved in a short time with the diffeomorphic Demons non-rigid registration method.
S07, applying the respiratory subtraction technique to the non-rigidly registered CT image and the intra-operative CBCT image, and mapping the pre-operative CT image onto the CT image obtained after respiratory subtraction.
Specifically, the respiratory subtraction in this embodiment is respiration-synchronized CT subtraction. Digital Subtraction Angiography (DSA) is a medical imaging technique that arose in the 1980s: with a digital computer, two frames of different digital images of the same part of the human body are taken and subtracted, eliminating the parts common to the two frames, so that an image of the vessels filled with contrast agent is obtained. In CT angiography (CTA), after an iodine-containing contrast agent is injected intravenously, the CT images at the extremes of each respiratory movement are processed by computer, and the pulmonary vessels can then be displayed in three dimensions, so that part of the DSA examinations can be replaced. Because the attenuation effect of lung tissue on the contrast agent is remarkable, a clean and clear vessel image can be obtained with this method. CTA also has the advantages of low invasiveness, small contrast-agent volume, short scanning time, fast data acquisition, and three-dimensional volume data analysis and display.
More specifically, a schematic of the respiratory subtraction can be found in fig. 7 of the specification: the CT subtraction image obtained by respiratory subtraction of the registered CT images is a clean image of the pure pulmonary vessels. The intra-operative CT subtraction map obtained at this point contains neither tumor nor ablation needle, since all regions other than the enhanced vessels are removed by the subtraction.
The pre-operative CT image is then used as a prior image, and the pre-operative tumor image is mapped onto the respiratory-subtraction CT image, so that the intra-operative CT image carries a clear tumor boundary; see figure 8 of the specification.
The threshold-segmented CT image of the ablation needle is added to the respiratory-subtracted image domain with the clear tumor boundary, and nearest-neighbor interpolation is carried out around the ablation-needle image domain, finally yielding a low-dose intra-operative three-dimensional CT image with a clear tumor boundary and no metal artifacts.
The purpose of applying the respiratory subtraction technique is to take the pre-operative CT as a prior image, assign different gray values to the tumor region and to the other regions, map the modified image onto the respiratory-subtraction image of the intra-operative CT, and then observe in the subtraction image whether the mapped tumor region is completely enclosed by the ablation zone produced by the ablation needle.
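A minimal image-domain sketch of this subtraction-and-mapping step (Python/NumPy; the gray values 2.0/3.0 for tumor and needle zone and the toy 4×4 arrays are illustrative assumptions):

```python
import numpy as np

def subtract_and_map(registered_ct, cbct, tumor_mask, zone_mask,
                     tumor_val=2.0, zone_val=3.0):
    """Subtract the registered pre-operative CT from the intra-operative
    CBCT (leaving only structures present in one image, e.g. enhanced
    vessels), then paint the segmented tumor and ablation-zone regions
    back on with distinct gray values."""
    fused = cbct - registered_ct
    fused[tumor_mask] = tumor_val
    fused[zone_mask] = zone_val
    return fused

ct = np.zeros((4, 4))
cbct = np.zeros((4, 4)); cbct[0, 0] = 1.0              # an "enhanced vessel" only in the CBCT
tumor = np.zeros((4, 4), bool); tumor[1:3, 1:3] = True
zone = np.zeros((4, 4), bool); zone[1:4, 1:4] = True   # ablation zone around the needle
fused = subtract_and_map(ct, cbct, tumor, zone)
covered = not np.any(tumor & ~zone)                    # does the ablation zone cover the tumor?
print(fused[0, 0], covered)  # → 1.0 True
```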
In another embodiment of the lung CT-CBCT-based image registration method provided in the present application, on the basis of any one of the foregoing embodiments, the first CT image is a preoperative 3D CT image, and the second CT image is an intra-operative 3D CBCT image. The aim of the embodiment is to realize the navigation of an ablation needle in the operation of lung tumor ablation under the guidance of an image registration technology, and the related method, system and equipment can be applied to clinic.
Specifically, the image registration method based on lung CT-CBCT provided in this embodiment includes the following steps:
step 1: the lungs were segmented from pre-operative 3D CT images and intra-operative 3D CBCT images using image segmentation based on Kmeans clustering algorithm.
Step 2: and using the segmented lung as a region of interest, and matching respiratory phases of the segmented preoperative 3D CT image and the segmented intraoperative 3D CBCT image by using a similarity matching method. The pre-operative CT image and the intra-operative CBCT image are first registered using a rigid registration method.
Step 3: tumors are segmented on the registered pre-operative 3D CT images and intra-operative 3D CBCT images using a teacher-student model. The pre-operative CT image and the intra-operative CBCT image segmented by the teacher-student model are then registered a second time using diffeomorphic Demons-based non-rigid registration.
Step 4: respiratory subtraction is carried out on the registered CT images to obtain a CT subtraction image, i.e., a clean image of the pure pulmonary vessels; since all regions other than the enhanced vessels are removed by the subtraction, the resulting intra-operative CT image contains neither tumor nor ablation needle.
Step 5: the pre-operative CT image is used as a prior image, and the pre-operative tumor is fused onto the respiratory-subtraction CT image through image fusion, so that an image with a clear tumor boundary is formed on the intra-operative CT image. The ablation needle in the intra-operative image is segmented by thresholding.
Step 6: on the basis of step 5, the threshold-segmented CT image of the ablation needle is added to the respiratory-subtracted image domain with the clear tumor boundary, and nearest-neighbor interpolation is carried out around the ablation-needle image domain, finally yielding a low-dose intra-operative three-dimensional CT image with a clear tumor boundary and no metal artifacts. All of the above operations are performed in the image domain. CT images with a high-quality intra-operative tumor boundary are obtained through respiratory subtraction and image fusion, and intra-operative image navigation is thereby established, finally realizing intra-operative three-dimensional imaging.
More preferably, in this embodiment, image navigation for the tumor ablation operation is established from the high-quality tumor-boundary CT image obtained through respiratory subtraction and image mapping; three-dimensional imaging of the tumor ablation operation is finally realized, so that the doctor can better locate the tumor during the operation.
Based on the same technical concept, the application also discloses an image registration system based on lung CT-CBCT, which can be used to implement any of the above image registration methods based on lung CT-CBCT. Specifically, an embodiment of the image registration system based on lung CT-CBCT, as shown in fig. 9 of the specification, comprises:
An image acquisition module 10 for acquiring a plurality of first CT images and a plurality of second CT images; the first CT image is a CT image before lung tumor ablation operation; the second CT image is a CBCT image in a lung tumor ablation operation.
A similarity matching module 20, configured to perform respiratory-phase matching on the plurality of first CT images and the plurality of second CT images using a similarity matching method, and to obtain by matching a first target CT image and a second target CT image in the same respiratory phase.
A first registration module 30, configured to perform rigid registration of the lungs on the first target CT image and the second target CT image.
The region segmentation module 40 is configured to segment the tumor region of the first target CT image by using a deep learning network model after rigid registration, so as to obtain a tumor region image; and adopting the deep learning network model to segment the ablation needle region of the second target CT image so as to obtain an ablation needle region image.
A second registration module 50, configured to perform non-rigid registration on the segmented tumor area image and the ablation needle area image, so as to obtain a third CT image with tumor and ablation needle.
The subtraction mapping module 60 is configured to perform subtraction operation on the third CT image and the second target CT image after non-rigid registration, so as to obtain a fourth CT image with a pure blood vessel; and is further configured to map the tumor region image and the ablation needle region image onto the fourth CT image to see if the ablation needle region covers the tumor region.
In another embodiment of the image registration system based on lung CT-CBCT provided in the present application, on the basis of the above system embodiment, the image registration system based on lung CT-CBCT further includes: and the preprocessing module is used for respectively carrying out pre-segmentation processing on the first CT images and the second CT images to obtain a lung region serving as a region of interest.
Specifically, the preprocessing module includes:
And a screening sub-module, configured to preliminarily screen out a plurality of first/second CT images whose degree of lung expansion during respiratory movement meets the target expansion requirement.
And the identification sub-module is used for analyzing the first/second CT images by using a Kmeans clustering algorithm and identifying the lung region and other tissue regions.
And the dividing sub-module is used for setting a threshold range and dividing the lung region by using a threshold dividing algorithm.
Specifically, in this embodiment, the image registration system based on lung CT-CBCT executes the step flow shown in fig. 10 of the specification to implement image registration of lung CT-CBCT. In particular:
an image acquisition module 10 is used for acquiring CT images before tumor ablation operation of a patient and CBCT image data in tumor ablation operation.
The preprocessing module is used for dividing tumor and normal tissue by using Kmeans clustering algorithm on CT before the operation of the multi-group lung tumor ablation and CBCT images in the operation. And is also used for segmenting lung parenchyma by using threshold segmentation from CT images before and CBCT images in operation of a plurality of groups of lung tumor ablation operations.
The similarity matching module 20 is used for matching respiratory phases by using a similarity matching method on lung parenchyma images of CT before and during the segmentation operation.
A first registration module 30 for 3D rigid registration of the matched pre-operative CT map and the in-operative CBCT map of the same respiratory phase.
The region segmentation module 40 is used for segmenting the tumor on the registered pre-operative CT map and intra-operative CBCT map using a teacher-student model.
A second registration module 50, for performing non-rigid registration on the CT images of the segmented tumors using the deformation field generated by diffeomorphic Demons-based non-rigid registration.
The subtraction mapping module 60 is configured to perform a respiratory subtraction technique on the CT image after non-rigid registration and the CBCT image in operation, and map the CT image before operation onto the CT image after respiratory subtraction.
Based on the same conception, the application also discloses a device comprising at least the pulmonary CT-CBCT-based image registration system as described in any of the above embodiments. In other embodiments, the apparatus further comprises: input terminals, storage media, and processors.
The image registration method, system and equipment based on lung CT-CBCT have the same technical conception, and the technical details of the three embodiments are mutually applicable, so that repetition is reduced, and the repeated description is omitted.
It will be apparent to those skilled in the art that the above-described program modules are only illustrated in the division of the above-described program modules for convenience and brevity, and that in practical applications, the above-described functional allocation may be performed by different program modules, i.e., the internal structure of the apparatus is divided into different program units or modules, to perform all or part of the above-described functions. The program modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one processing unit, where the integrated units may be implemented in a form of hardware or in a form of a software program unit. In addition, the specific names of the program modules are also only for distinguishing from each other, and are not used to limit the protection scope of the present application.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and the parts of a certain embodiment that are not described or depicted in detail may be referred to in the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative; for example, the division of the modules or units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. An image registration method based on lung CT-CBCT, which is characterized by comprising the following steps:
acquiring a plurality of first CT images and a plurality of second CT images; the first CT image is a CT image before lung tumor ablation operation; the second CT image is a CBCT image in lung tumor ablation operation;
matching respiratory phases of a plurality of first CT images and a plurality of second CT images by using a similarity matching method; matching to obtain a first target CT image and a second target CT image with the same respiratory phase;
performing rigid registration of the lungs on the first target CT image and the second target CT image;
after rigid registration, a deep learning network model is adopted to segment a tumor area of the first target CT image, so as to obtain a tumor area image; dividing an ablation needle region by adopting the deep learning network model to the second target CT image to obtain an ablation needle region image;
non-rigid registration is carried out on the tumor area image and the ablation needle area image which are obtained through segmentation, so that a third CT image with tumor and ablation needle is obtained;
after non-rigid registration, performing subtraction operation on the third CT image and the second target CT image to obtain a fourth CT image with pure blood vessels; the tumor region image and the ablation needle region image are mapped onto the fourth CT image to see if the ablation needle region covers the tumor region.
2. The pulmonary CT-CBCT based image registration method of claim 1, wherein the matching of respiratory phases for the first CT images and the second CT images using a similarity matching method further comprises:
and respectively performing pre-segmentation processing on the first CT images and the second CT images to obtain a lung region serving as an interested region.
3. A method of pulmonary CT-CBCT based image registration as in claim 2, wherein said pre-segmentation of each of said first/second CT images includes:
initially screening a plurality of first/second CT images with lung expansion degree meeting target expansion requirements during respiratory movement;
analyzing the first/second CT images by using a Kmeans clustering algorithm, and identifying lung areas and other tissue areas;
a threshold range is set and a threshold segmentation algorithm is used to segment the lung region.
4. The pulmonary CT-CBCT based image registration method of claim 1, wherein the rigidly registering the first target CT image with the second target CT image includes:
by using a rigid registration method in the SimpleITK library, performing simple rigid transformations such as translation, rotation and scaling on the first target CT image and the second target CT image with the same respiratory phase, so that the first target CT image and the second target CT image are aligned in the same space, thereby realizing rigid registration.
5. The pulmonary CT-CBCT based image registration method of claim 1, wherein the deep learning network model is a Teacher-Student model based on a knowledge distillation technique.
6. The pulmonary CT-CBCT based image registration method of claim 1, wherein the non-rigid registration of the segmented tumor region image with the ablation needle region image includes:
the tumor region image is non-rigidly registered with the ablation needle region image using a diffeomorphic Demons deformation registration algorithm.
7. The pulmonary CT-CBCT based image registration method of claim 1, wherein the mapping the tumor region image and the ablation needle region image onto the fourth CT image includes:
mapping the tumor area image onto the fourth CT image to obtain a fifth CT image with a tumor boundary;
and mapping the ablation needle area image to the fifth CT image to obtain a sixth CT image with a tumor boundary and an ablation needle boundary.
8. A pulmonary CT-CBCT based image registration system, characterized in that the system employs the pulmonary CT-CBCT based image registration method of any of claims 1-7 to achieve image registration, the system comprising:
The image acquisition module acquires a plurality of first CT images and a plurality of second CT images; the first CT image is a CT image before lung tumor ablation operation; the second CT image is a CBCT image in lung tumor ablation operation;
the similarity matching module is used for matching respiratory phases of the plurality of first CT images and the plurality of second CT images by using a similarity matching method; matching to obtain a first target CT image and a second target CT image with the same respiratory phase;
a first registration module for rigid registration of the lungs with the first target CT image and the second target CT image;
the region segmentation module is used for segmenting the tumor region of the first target CT image by adopting a deep learning network model after rigid registration to obtain a tumor region image; dividing an ablation needle region by adopting the deep learning network model to the second target CT image to obtain an ablation needle region image;
the second registration module is used for carrying out non-rigid registration on the segmented tumor area image and the ablation needle area image to obtain a third CT image with tumor and ablation needle;
the subtraction mapping module is used for performing subtraction operation on the third CT image and the second target CT image after non-rigid registration to obtain a fourth CT image with pure blood vessels; and is further configured to map the tumor region image and the ablation needle region image onto the fourth CT image to see if the ablation needle region covers the tumor region.
9. The pulmonary CT-CBCT based image registration system of claim 8, further comprising:
and the preprocessing module is used for respectively carrying out pre-segmentation processing on the first CT images and the second CT images to obtain a lung region serving as a region of interest.
10. An apparatus comprising the pulmonary CT-CBCT based image registration system of claim 8 or 9.
CN202410138906.3A 2024-02-01 2024-02-01 Image registration method, system and equipment based on lung CT-CBCT Pending CN117670960A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410138906.3A CN117670960A (en) 2024-02-01 2024-02-01 Image registration method, system and equipment based on lung CT-CBCT


Publications (1)

Publication Number Publication Date
CN117670960A true CN117670960A (en) 2024-03-08

Family

ID=90075372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410138906.3A Pending CN117670960A (en) 2024-02-01 2024-02-01 Image registration method, system and equipment based on lung CT-CBCT

Country Status (1)

Country Link
CN (1) CN117670960A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107049475A (en) * 2017-04-19 2017-08-18 纪建松 Liver cancer local ablation method and system
CN111973271A (en) * 2020-08-31 2020-11-24 北京理工大学 Preoperative ablation region simulation method and device for tumor thermal ablation
CN114469052A (en) * 2022-02-10 2022-05-13 中国人民解放军总医院第五医学中心 Quantitative calculation method and device for tumor shrinkage deformation after liver ablation


Similar Documents

Publication Publication Date Title
US11501485B2 (en) System and method for image-based object modeling using multiple image acquisitions or reconstructions
US10198872B2 (en) 3D reconstruction and registration of endoscopic data
CN109272510B (en) Method for segmenting tubular structure in three-dimensional medical image
US9514530B2 (en) Systems and methods for image-based object modeling using multiple image acquisitions or reconstructions
US10796464B2 (en) Selective image reconstruction
EP2245592B1 (en) Image registration alignment metric
US10867375B2 (en) Forecasting images for image processing
CN103942772A (en) Multimodal multi-dimensional blood vessel fusion method and system
CN113327225B (en) Method for providing airway information
Rashed et al. Probabilistic atlas prior for CT image reconstruction
CN117670960A (en) Image registration method, system and equipment based on lung CT-CBCT
CN116051553A (en) Method and device for marking inside three-dimensional medical model
Cetin et al. An automatic 3-d reconstruction of coronary arteries by stereopsis
CN113554647A (en) Registration method and device for medical images
Gu et al. Contrast-enhanced to noncontrast CT transformation via an adjacency content-transfer-based deep subtraction residual neural network
Antonsanti et al. How to register a live onto a liver? partial matching in the space of varifolds
Longuefosse et al. Lung CT Synthesis Using GANs with Conditional Normalization on Registered Ultrashort Echo-Time MRI
US20230260141A1 (en) Deep learning for registering anatomical to functional images
US20220323035A1 (en) Devices, systems, and methods for motion-corrected medical imaging
US20220398752A1 (en) Medical image registration method and apparatus
Qin et al. Three Dimensional Reconstruction of Blood Vessels and Evaluation of Vascular Stenosis Based on DSA
Haseljic et al. A review of the image segmentation and registration methods in liver motion correction in C-arm perfusion imaging
Jiang et al. Super Resolution of Pulmonary Nodules Target Reconstruction Using a Two-Channel GAN Models
Chen et al. Segmentation of liver tumors with abdominal computed tomography using fully convolutional networks
Babaee et al. 3D reconstruction of vessels from two uncalibrated mammography images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination