CN114820731A - CT image and three-dimensional body surface image registration method and related device
- Publication number: CN114820731A
- Application number: CN202210233256.1A
- Authority: CN (China)
- Prior art keywords: point, mapping, body surface, three-dimensional body surface image
- Prior art date: 2022-03-10
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/337 — Image analysis; determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving reference images or patches (G — Physics; G06 — Computing, calculating or counting; G06T — Image data processing or generation, in general)
- G06T 3/14 — Geometric image transformations in the plane of the image; transformations for image registration, e.g. adjusting or mapping for alignment of images
- G06T 7/74 — Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06T 2207/10081 — Indexing scheme for image analysis or image enhancement; image acquisition modality: computed x-ray tomography [CT]
- G06T 2207/30004 — Indexing scheme for image analysis or image enhancement; subject of image: biomedical image processing
Abstract
The application relates to the technical field of image processing, and in particular to a method and related device for registering a CT image with a three-dimensional body surface image, addressing the problem of how to effectively register a three-dimensional body surface image with a preoperative medical image so as to ensure a good fusion of the two. The method extracts a first group of feature points from the CT image and a second group of feature points from the three-dimensional body surface image, geometrically transforms the CT image based on a first mapping relation between the two groups of feature points, and projects it into the three-dimensional body surface image, obtaining a first mapping point set of the CT image. To further improve the projection accuracy, the biological tissue in the CT image is then converted to the position of the same biological tissue in the three-dimensional body surface image based on a second mapping relation between the first mapping point set and the three-dimensional body surface image. The internal structure of the organs in the CT image is thereby displayed within the three-dimensional body surface image, so that a doctor can locate a lesion more directly.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and a related apparatus for registering a CT image and a three-dimensional body surface image.
Background
An image of an organ acquired by a three-dimensional positioning device may be called a three-dimensional body surface image. Such an image can show the shape of the organ but not its internal structure, so a doctor cannot directly observe what lies below the organ surface.
Taking hepatectomy as an example: the procedure offers a small incision, little pain, fast recovery, and minimal scarring, and has quickly become the trend in modern surgery. However, the lack of depth perception information presents a greater challenge to the surgeon. Augmented reality technology can fuse model data reconstructed from preoperative medical images with intraoperative images for display, so that doctors can visually see the anatomical structures below the organ surface; this effectively compensates for the missing depth perception information and makes the surgical operation faster, more accurate, and safer. How to effectively register the three-dimensional body surface image with the preoperative medical image, so as to ensure a good fusion of the two, remains an open problem.
Disclosure of Invention
The application provides a method and related device for registering a computed tomography (CT) image with a three-dimensional body surface image, addressing the problem of how to effectively register the three-dimensional body surface image with a preoperative medical image so as to ensure a good fusion of the two.
In a first aspect, the present application provides a method for registering a computed tomography (CT) image and a three-dimensional body surface image, the method comprising:
extracting a first group of feature points at a plurality of feature positions from the CT image, and extracting a second group of feature points at the same plurality of feature positions from a three-dimensional body surface image; the three-dimensional body surface image being an image acquired by a three-dimensional positioning device;
determining a first mapping relationship between the first set of feature points and the second set of feature points; the first mapping relation is used for converting the first set of feature points to the positions of the second set of feature points;
performing geometric transformation on the CT image based on the first mapping relation to obtain a first mapping point set of the CT image;
and determining a second mapping relation between the first mapping point set and the three-dimensional body surface image, wherein the second mapping relation is used for converting the biological tissue in the CT image to the position of the same biological tissue in the three-dimensional body surface image.
In a possible embodiment, the determining a first mapping relationship between the first set of feature points and the second set of feature points includes:
determining the first mapping relationship based on the following formulas:

S = U·E·V^T,  R = V·U^T,  t = q̄ - R·p̄

wherein S represents a matrix formed from the first group of feature points and the second group of feature points, E represents the diagonal matrix of singular values of S, U and V represent the orthogonal matrices obtained by singular value decomposition of S, V^T represents the transpose of V, R represents a rotation matrix between the first group of feature points and the second group of feature points, t represents a translation vector between the first group of feature points and the second group of feature points, p̄ represents the centroid of the first group of feature points, and q̄ represents the centroid of the second group of feature points.
In a possible embodiment, the determining a second mapping relationship between the first mapping point set and the three-dimensional body surface image includes:
extracting a point pair between the first mapping point set and the point set of the three-dimensional body surface image, wherein two points in the point pair represent the same characteristic point;
determining a third mapping relation between the first mapping point set and the point set of the three-dimensional body surface image based on the acquired point pairs;
processing the first mapping point set by adopting the third mapping relation to obtain a second mapping point set;
determining the distance between two points of the same point pair in the second mapping point set and the point set of the three-dimensional body surface image;
determining a distance threshold value based on the distance of each point pair;
screening out, from the point pairs between the second mapping point set and the point set of the three-dimensional body surface image, the point pairs in which the distance between the two points is smaller than or equal to the distance threshold;
and obtaining the second mapping relation by applying a random sample consensus (RANSAC) algorithm to the screened point pairs.
In a possible embodiment, the determining a second mapping relationship between the first mapping point set and the three-dimensional body surface image includes:
extracting a point pair between the first mapping point set and the point set of the three-dimensional body surface image, wherein two points in the point pair represent the same characteristic point;
determining rigid body transformation which enables the average distance between two points in the obtained point pair to be minimum to obtain a fourth mapping relation;
processing the first mapping point set by adopting the fourth mapping relation to obtain a third mapping point set;
determining the average distance between two points in the same point pair of a third mapping point set and the point set of the three-dimensional body surface image;
if the average distance is smaller than a preset value, taking the fourth mapping relation as the second mapping relation;
and if the average distance is larger than or equal to the preset value, taking the third mapping point set as a new first mapping point set and returning to execute the step of extracting the point pairs between the first mapping point set and the point set of the three-dimensional body surface image.
In a possible embodiment, the determining a distance threshold based on the distance of the point pairs includes:
and determining the average distance of the point pairs as the distance threshold value.
In a possible embodiment, after the second mapping relationship is obtained by applying the random sample consensus algorithm to the screened point pairs, the method further includes:
verifying the accuracy of the second mapping relationship;
if the accuracy of the second mapping relation is lower than an accuracy threshold value, extracting a point pair between the first mapping point set and the point set of the three-dimensional body surface image, wherein two points in the point pair represent the same characteristic point;
determining rigid body transformation which enables the average distance between two points in the obtained point pair to be minimum to obtain a fourth mapping relation;
processing the first mapping point set by adopting the fourth mapping relation to obtain a third mapping point set;
determining the average distance between two points in the same point pair of a third mapping point set and the point set of the three-dimensional body surface image;
if the average distance is smaller than a preset value, taking the fourth mapping relation as the second mapping relation;
and if the average distance is larger than or equal to the preset value, returning the third mapping point set as a new first mapping point set to execute the step of extracting the point pairs between the first mapping point set and the point set of the three-dimensional body surface image.
In one possible embodiment, the extracting of the point pairs between the first mapped point set and the point set of the three-dimensional body surface image includes:
and extracting the point pairs between the first mapping point set and the point set of the three-dimensional body surface image by using a fast point feature histogram descriptor.
In a third aspect, the present application provides an electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement any of the methods as described in the first aspect above.
In a fourth aspect, the present application further provides a computer-readable storage medium, comprising:
instructions which, when executed by a processor of an electronic device, enable the electronic device to perform any of the methods described in the first aspect above.
In a fifth aspect, the present application further provides a computer program product comprising a computer program which, when executed by a processor, implements any of the methods described in the first aspect above.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
the method comprises the steps of extracting a first group of feature points and a second group of feature points from a CT image and a three-dimensional body surface image respectively, carrying out geometric transformation on the CT image based on a first mapping relation between the first group of feature points and the second group of feature points, projecting the CT image in the three-dimensional body surface image, and obtaining a first mapping point set of the CT image. In order to further improve the accuracy in projection, the biological tissue in the CT image is converted to the position of the same biological tissue in the three-dimensional body surface image based on the second mapping relation between the first mapping point set and the three-dimensional body surface image. Therefore, the internal structure of the organ in the CT image is displayed in the three-dimensional body surface image, so that a doctor can diagnose the position of the focus more directly, and the surgical operation is quicker, more accurate and safer.
On the basis of common knowledge in the art, the above preferred conditions may be combined arbitrarily to obtain the preferred embodiments of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed for the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be derived from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a schematic connection diagram of an electronic device, a CT imaging device and a three-dimensional positioning device according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a method for registering a computed tomography image and a three-dimensional body surface image according to an embodiment of the present disclosure;
fig. 3a is a first flowchart of determining the second mapping relationship provided in the embodiment of the present application;
fig. 3b is a schematic diagram of determining a point pair according to an embodiment of the present application;
fig. 4 is a second schematic flowchart of determining a second mapping relationship according to the embodiment of the present application;
fig. 5 is a third schematic flowchart of determining a second mapping relationship according to the embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. The embodiments described are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Also, in the description of the embodiments of the present application, unless otherwise specified, "/" indicates an "or" relationship; for example, A/B may mean A or B. "And/or" merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, "a plurality of" means two or more.
In the following, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the embodiments of the application, unless stated otherwise, "plurality" means two or more.
For ease of understanding, the present application first explains the terms involved.
Three-dimensional positioning device: in the medical field, a device with a positioning function used to build a three-dimensional model of a human organ; such devices typically include three-dimensional electromagnetic positioning systems and optical body surface imaging devices.
Iterative closest point (ICP) algorithm: a data registration method that repeatedly searches for closest points between two point sets, used to align free-form surfaces.
Random sample consensus (RANSAC) algorithm: an algorithm that estimates the parameters of a mathematical model from a set of sample data containing outliers, so as to obtain the valid sample data.
Fig. 1 schematically shows a connection diagram of an electronic device, a CT imaging device and a three-dimensional positioning device.
As shown in fig. 1, the electronic device 100 may include a processor 110 and a memory 120, and a three-dimensional positioning device 130 and a CT imaging device 140 are connected to the electronic device 100, wherein:
a CT imaging device 140 for acquiring CT images;
a three-dimensional positioning device 130 for acquiring a three-dimensional body surface image;
the memory 120 is configured to store data required for executing a processing method of an electronic image, and may include software programs, application interface data, and the like;
the processor 110 is configured to execute the registration method of the computed tomography image and the three-dimensional body surface image provided by the embodiment of the application.
The electronic device 100 is connected to the CT imaging device 140 and the three-dimensional positioning device 130.
To further illustrate the technical solutions provided by the embodiments of the present application, a detailed description is given below with reference to the accompanying drawings and specific embodiments. Although the embodiments of the present application provide the method operation steps shown in the following embodiments or figures, the methods may include more or fewer steps based on conventional or non-inventive labor. For steps with no necessary logical causal relationship, the order of execution is not limited to that provided in the embodiments of the present application.
Fig. 2 is a schematic flow chart of a registration method of an electronic computed tomography image and a three-dimensional body surface image according to the present application.
In step 201, a first set of feature points at a plurality of feature positions is extracted from the CT image, and a second set of feature points at a plurality of feature positions is extracted from the three-dimensional body surface image. The three-dimensional body surface image is an image acquired by adopting three-dimensional positioning equipment.
In the embodiment of the application, to simplify the calculation process without affecting the calculation result, feature points are selected at the feature positions that are easiest to locate in both the CT image and the optical body surface image. For example, the first group and the second group of feature points are extracted at five feature positions on the human body surface: the xiphoid process, the navel, the pubis, the left hip bone, and the right hip bone. A small number of feature positions thus suffices, which improves calculation efficiency.
In this embodiment, the coordinates of the first group of feature points are given in the coordinate system of the CT image, while the coordinates of the second group are given in the optical coordinate system of the three-dimensional positioning device. Because the two coordinate systems differ, the first group of feature points cannot be converted directly to the positions of the second group. Therefore, in step 202, a first mapping relationship between the first group and the second group of feature points is determined, the first mapping relationship being used to convert the first group of feature points to the positions of the second group.
In the embodiment of the present application, the first mapping relationship is determined based on formula (1):

S = U·E·V^T,  R = V·U^T,  t = q̄ - R·p̄   (1)

wherein S represents the matrix formed from the first group of feature points and the second group of feature points (the cross-covariance matrix of the centroid-subtracted points), E represents the diagonal matrix of singular values of S, U and V represent the orthogonal matrices obtained by singular value decomposition of S, V^T represents the transpose of V, R represents the rotation matrix between the first group of feature points and the second group of feature points, t represents the translation vector between the first group of feature points and the second group of feature points, p̄ represents the centroid of the first group of feature points, and q̄ represents the centroid of the second group of feature points.
Taking t and R as the first mapping relationship, in step 203, the CT image is geometrically transformed based on the first mapping relationship to obtain a first mapping point set of the CT image.
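For illustration, the rough-registration step of formula (1) and step 203 can be sketched in a few lines of numpy. This is a minimal sketch of the standard SVD-based rigid fit, not the patent's own implementation, and all function and variable names are illustrative:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate R and t mapping src onto dst via SVD, as in formula (1).

    src, dst: (N, 3) arrays of corresponding points, e.g. the first and
    second groups of feature points described above.
    """
    p_bar = src.mean(axis=0)               # centroid of the first point group
    q_bar = dst.mean(axis=0)               # centroid of the second point group
    S = (src - p_bar).T @ (dst - q_bar)    # 3x3 cross-covariance matrix
    U, E, Vt = np.linalg.svd(S)            # S = U @ diag(E) @ Vt
    R = Vt.T @ U.T                         # rotation matrix
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_bar - R @ p_bar                  # translation vector
    return R, t

def apply_transform(points, R, t):
    """Geometrically transform a point set, e.g. the CT point cloud (step 203)."""
    return points @ R.T + t
```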
In step 204, a second mapping relationship between the first set of mapping points and the three-dimensional body surface image is determined, wherein the second mapping relationship is used for transforming the biological tissue in the CT image to the same position of the biological tissue in the three-dimensional body surface image.
In the embodiment of the present application, the first mapping relationship is obtained from only a few specific feature positions, so a certain error remains; the first mapping point set obtained from it therefore amounts to only a coarse match. To obtain the mapping between the three-dimensional body surface image and the CT image accurately, the present application further provides a fine matching process. In practice, the steps shown in fig. 3a may be used to determine a second mapping relationship between the first mapping point set and the three-dimensional body surface image, the second mapping relationship being more accurate than the first.
In step 301, pairs of points between a first mapped point set and a point set of a three-dimensional body surface image are extracted, where two points in a pair of points represent the same feature point.
In the embodiment of the present application, if the first mapping point is a point A, the other point of its pair is denoted A1. To find the point A1 in the point set of the three-dimensional body surface image that corresponds to point A, the transformed point A′ of point A in the coordinate system of the body surface point set is first computed from the rotation matrix and translation vector (i.e., the first mapping relationship), and the Euclidean distances between A′ and all points in the body surface point set are calculated. The point with the smallest Euclidean distance to A′ is taken as A1.
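A sketch of this closest-point search, assuming the helpers above; the KD-tree is simply an efficient way to run the exhaustive Euclidean comparison the text describes, and the names are illustrative:

```python
from scipy.spatial import cKDTree

def extract_point_pairs(mapped_pts, surface_pts):
    """Pair each mapped CT point (A') with its Euclidean nearest
    neighbour (A1) in the body surface point set."""
    tree = cKDTree(surface_pts)
    dists, idx = tree.query(mapped_pts)   # nearest surface point per point
    return idx, dists                     # mapped_pts[i] pairs with surface_pts[idx[i]]
```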
In another embodiment of the present application, a fast point feature histogram descriptor may instead be used to extract the point pairs between the first mapping point set and the point set of the three-dimensional body surface image. As shown in fig. 3b, any point in the first mapping point set is taken as a center point (the star-shaped point in fig. 3b), and the several points closest to it are obtained from the point set of the three-dimensional body surface image as first proximity points; in fig. 3b these are the five black circular points A to E. Circles of radius r are then constructed around each first proximity point (circles 1 to 5 in fig. 3b), and the points other than the first proximity points that fall inside each circle are taken as second proximity points (the white points in fig. 3b). The center point then forms binary trees with the first proximity points and with the second proximity points determined from them, and a first or second proximity point that appears repeatedly across the binary trees can form a point pair with the center point. For example, in fig. 3b circle 5 also contains the first proximity point A that defines circle 1; when the binary trees are built, points A and D each fall inside two circles and are therefore counted twice, while point E appears three times, so point E and the center point are selected as a point pair.
If only one closest point is obtained as a first proximity point, that first proximity point and the center point are taken as a point pair.
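The circle/binary-tree matching scheme above is the patent's own construction. As an off-the-shelf stand-in for the fast point feature histogram step, descriptors can be computed with a point cloud library and matched by nearest neighbour in feature space. The sketch below uses open3d under that assumption; the radius parameters are illustrative and must be tuned to the data units:

```python
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

def fpfh_point_pairs(mapped_pts, surface_pts, radius=5.0):
    """Match points by nearest neighbour in FPFH descriptor space."""
    def fpfh(points):
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=30))
        feat = o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=2 * radius, max_nn=100))
        return np.asarray(feat.data).T    # one 33-dim descriptor per point

    _, idx = cKDTree(fpfh(surface_pts)).query(fpfh(mapped_pts))
    return idx                            # mapped_pts[i] pairs with surface_pts[idx[i]]
```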
In step 302, a third mapping relationship between the first mapping point set and the point set of the three-dimensional body surface image is determined based on the acquired point pairs.
In step 303, the first set of mapping points is processed using the third mapping relationship to obtain a second set of mapping points.
In step 304, the distance between two points of the same pair of points in the set of points of the second mapped point set and the three-dimensional body surface image is determined.
In step 305, a distance threshold is determined based on the distance of each point pair. For convenience of calculation, the average distance of each point pair is used as a distance threshold value.
In one possible embodiment, the distances between two points in all the point pairs may be calculated, and an average distance may be calculated based on the distances of the respective point pairs, the average distance being used as the distance threshold. It is also possible to remove the maximum and minimum values of the distances after calculating the distances of all the point pairs, calculate an average distance based on the distances of the remaining point pairs, and use the average distance as the distance threshold. The manner in which the distance threshold is determined is not limited in this application.
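Either threshold variant reduces to a few lines; a minimal sketch, with the trimming choice as the only difference and all names illustrative:

```python
import numpy as np

def distance_threshold(dists, trim=False):
    """Mean pair distance, optionally dropping the extremes first (step 305)."""
    d = np.sort(np.asarray(dists, dtype=float))
    if trim and d.size > 2:
        d = d[1:-1]          # remove the maximum and minimum distances
    return float(d.mean())
```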
In step 306, from the point pairs in the second mapping point set and the point set of the three-dimensional body surface image, a point pair is screened out, wherein the distance between the two points is smaller than or equal to the distance threshold value.
In step 307, the second mapping relationship is obtained by applying the random sample consensus (RANSAC) algorithm to the screened point pairs.
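Steps 301 to 307 compose into a compact RANSAC stage. A sketch assuming the estimate_rigid_transform helper above; the iteration count and inlier tolerance are illustrative and would be tuned to the data:

```python
import numpy as np

def ransac_rigid(src, dst, n_iters=1000, inlier_tol=2.0, seed=None):
    """RANSAC rigid fit over screened point pairs (src[i] pairs with dst[i])."""
    rng = np.random.default_rng(seed)
    best = (np.eye(3), np.zeros(3), -1)
    for _ in range(n_iters):
        pick = rng.choice(len(src), size=3, replace=False)    # minimal sample
        R, t = estimate_rigid_transform(src[pick], dst[pick])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = int((err <= inlier_tol).sum())              # consensus size
        if inliers > best[2]:
            best = (R, t, inliers)
    return best[0], best[1]      # the second mapping relationship (R, t)
```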
Besides obtaining the second mapping relationship with the RANSAC algorithm as in fig. 3a, in another embodiment of the present application the ICP-based steps shown in fig. 4 may be used to determine the second mapping relationship between the first mapping point set and the three-dimensional body surface image, as follows:
in step 401, a pair of points between the first mapped point set and a point set of the three-dimensional body surface image is extracted, two points in the pair of points representing the same feature point.
In this embodiment of the present application, to ensure that the matching error of each point pair is minimal after the first mapping point set is converted into the three-dimensional body surface image, step 402 determines the rigid body transformation that minimizes the average distance between the two points of each obtained point pair, yielding a fourth mapping relationship.
In step 403, the first mapping point set is processed by using the fourth mapping relationship to obtain a third mapping point set.
In step 404, it is determined whether the average distance between two points in the same pair is smaller than a predetermined value in the third mapping point set and the point set of the three-dimensional body surface image. If the value is smaller than the preset value, go to step 405, otherwise go to step 406.
In step 405, the fourth mapping relationship is used as the second mapping relationship.
In step 406, the third set of mapping points is treated as a new first set of mapping points and step 401 is performed back.
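The ICP variant of steps 401 to 406 is a loop over the same building blocks. A sketch assuming the helpers above; the preset value and iteration cap are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(mapped_pts, surface_pts, preset=1.0, max_iters=50):
    """Iterate pairing (401), rigid fit (402), and transform (403) until the
    mean pair distance falls below the preset value (404-405)."""
    tree = cKDTree(surface_pts)
    pts = np.asarray(mapped_pts, dtype=float).copy()
    R_all, t_all = np.eye(3), np.zeros(3)
    for _ in range(max_iters):
        _, idx = tree.query(pts)                                 # step 401
        R, t = estimate_rigid_transform(pts, surface_pts[idx])   # step 402
        pts = pts @ R.T + t                                      # step 403
        R_all, t_all = R @ R_all, R @ t_all + t                  # accumulate
        if tree.query(pts)[0].mean() < preset:                   # step 404
            break                                                # step 405
    return R_all, t_all          # the second mapping relationship
```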
Beyond the two embodiments above, when determining the second mapping relationship, either the steps of fig. 3a or those of fig. 4 may be used on their own. Alternatively, after the second mapping relationship has been determined with the RANSAC algorithm of fig. 3a, its accuracy may be further verified by the method shown in fig. 5, with the ICP method used to re-determine the second mapping relationship when the result is not accurate enough.
In step 501, it is verified whether the accuracy of the second mapping is below an accuracy threshold. If the accuracy is below the accuracy threshold, step 502 is performed.
In step 502, a pair of points between the first mapped point set and a point set of the three-dimensional body surface image is extracted, two points in the pair of points representing the same feature point.
In step 503, a rigid body transformation that minimizes the average distance between two points in the acquired point pair is determined, resulting in a fourth mapping relationship.
In step 504, the first set of mapping points is processed using the fourth mapping relationship to obtain a third set of mapping points.
In step 505, in the third mapping point set and the point set of the three-dimensional body surface image, it is determined whether the average distance between two points in the same point pair is smaller than a preset value, if the average distance is smaller than the preset value, step 506 is executed, otherwise, step 507 is executed.
In step 506, the fourth mapping relationship is used as the second mapping relationship.
In step 507, the third set of mapping points is returned to performing step 502 as the new first set of mapping points.
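The fig. 5 flow then chains the two stages. The patent does not fix a verification metric, so the accuracy check below (inlier fraction against the surface) is a placeholder assumption, as are all names; the earlier helper sketches are assumed available:

```python
from scipy.spatial import cKDTree

def accuracy_of(R, t, src, surface_pts, tol=2.0):
    """Placeholder verification: fraction of points within tol of the surface."""
    d, _ = cKDTree(surface_pts).query(src @ R.T + t)
    return float((d <= tol).mean())

def register_fine(mapped_pts, surface_pts, acc_threshold=0.9):
    """RANSAC first (fig. 3a); fall back to ICP (fig. 4) if it verifies poorly."""
    idx, _ = extract_point_pairs(mapped_pts, surface_pts)
    R, t = ransac_rigid(mapped_pts, surface_pts[idx])
    if accuracy_of(R, t, mapped_pts, surface_pts) < acc_threshold:   # step 501
        R, t = icp_rigid(mapped_pts, surface_pts)                    # steps 502-507
    return R, t
```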
Therefore, after the first mapping point set is obtained through the rotation matrix and translation vector, the second mapping relationship between the first mapping point set and the point set of the three-dimensional body surface image is established, so that the points of the first mapping point set correspond one to one with points of the body surface point set. The biological tissue in the CT image is thereby converted to the position of the same biological tissue in the three-dimensional body surface image, which makes it convenient to display the organs in the CT image, and the structures below their surfaces, within the three-dimensional body surface image.
In an exemplary embodiment, the present application further provides a computer-readable storage medium, such as the memory 120, comprising instructions executable by the processor 110 of the electronic device 100 to perform the above-described method for registering a computed tomography image and a three-dimensional body surface image. Alternatively, the computer-readable storage medium may be a non-transitory computer-readable storage medium, for example, a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program which, when executed by the processor 110, implements the registration method of the computed tomography image and the three-dimensional body surface image as provided herein.
Having described the registration method and related apparatus for a computed tomography image and a three-dimensional body surface image according to an exemplary embodiment of the present application, an electronic device according to an exemplary embodiment of the present application is described next.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or program product. Accordingly, various aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
In some possible implementations, an electronic device according to the present application may include at least one processor and at least one memory. The memory stores program code which, when executed by the processor, causes the processor to perform the steps of the registration method according to the various exemplary embodiments of the present application described above in this specification, for example the steps shown in fig. 2.
The electronic device 100 according to this embodiment of the present application is described below with reference to fig. 6. The electronic device 100 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the electronic apparatus 100 is represented in the form of a general electronic apparatus. The components of the electronic device 100 may include, but are not limited to: the at least one processor 110, the at least one memory 120, and a bus 133 that connects the various system components (including the memory 120 and the processor 110).
The memory 120 may include readable media in the form of volatile memory, such as a random access memory (RAM) 1201 and/or a cache memory 1202, and may further include a read-only memory (ROM) 1203.
In some possible embodiments, aspects of the registration method provided by the present application may also be implemented in the form of a program product including program code; when the program product runs on a computer device, the program code causes the computer device to perform the steps of the method according to the various exemplary embodiments of the present application described above in this specification.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of the embodiments of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on an electronic device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of a remote electronic device, the remote electronic device may be connected to the consumer electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, through the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the application, the features and functions of two or more units described above may be embodied in one unit; conversely, the features and functions of one unit described above may be further divided and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and block diagrams, and combinations of flows and blocks in the flow diagrams and block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A method of registering a computed tomography image with a three-dimensional body surface image, the method comprising:
extracting a first group of feature points at a plurality of feature positions from a computed tomography (CT) image, and extracting a second group of feature points at the same plurality of feature positions from a three-dimensional body surface image; the three-dimensional body surface image being an image acquired by a three-dimensional positioning device;
determining a first mapping relationship between the first set of feature points and the second set of feature points; the first mapping relation is used for converting the first set of feature points to the positions of the second set of feature points;
performing geometric transformation on the CT image based on the first mapping relation to obtain a first mapping point set of the CT image;
and determining a second mapping relation between the first mapping point set and the three-dimensional body surface image, wherein the second mapping relation is used for converting the biological tissue in the CT image to the position of the same biological tissue in the three-dimensional body surface image.
2. The method of claim 1, wherein determining the first mapping relationship between the first set of feature points and the second set of feature points comprises:
determining the first mapping relationship based on the following formulas:

S = U·E·V^T,  R = V·U^T,  t = q̄ - R·p̄

wherein S represents a matrix formed from the first group of feature points and the second group of feature points, E represents the diagonal matrix of singular values of S, U and V represent the orthogonal matrices obtained by singular value decomposition of S, V^T represents the transpose of V, R represents a rotation matrix between the first group of feature points and the second group of feature points, t represents a translation vector between the first group of feature points and the second group of feature points, p̄ represents the centroid of the first group of feature points, and q̄ represents the centroid of the second group of feature points.
3. The method of claim 1, wherein determining a second mapping relationship between the first set of mapping points and the three-dimensional body surface image comprises:
extracting a point pair between the first mapping point set and the point set of the three-dimensional body surface image, wherein two points in the point pair represent the same characteristic point;
determining a third mapping relation between the first mapping point set and the point set of the three-dimensional body surface image based on the acquired point pairs;
processing the first mapping point set by adopting the third mapping relation to obtain a second mapping point set;
determining the distance between two points of the same point pair in the second mapping point set and the point set of the three-dimensional body surface image;
determining a distance threshold value based on the distance of each point pair;
screening out, from the point pairs between the second mapping point set and the point set of the three-dimensional body surface image, the point pairs in which the distance between the two points is smaller than or equal to the distance threshold;
and obtaining the second mapping relation by applying a random sample consensus algorithm to the screened point pairs.
4. The method of claim 1, wherein determining a second mapping relationship between the first set of mapping points and the three-dimensional body surface image comprises:
extracting a point pair between the first mapping point set and the point set of the three-dimensional body surface image, wherein two points in the point pair represent the same characteristic point;
determining rigid body transformation which enables the average distance between two points in the obtained point pair to be minimum to obtain a fourth mapping relation;
processing the first mapping point set by adopting the fourth mapping relation to obtain a third mapping point set;
determining the average distance between two points in the same point pair of a third mapping point set and the point set of the three-dimensional body surface image;
if the average distance is smaller than a preset value, taking the fourth mapping relation as the second mapping relation;
and if the average distance is larger than or equal to the preset value, taking the third mapping point set as a new first mapping point set and returning to execute the step of extracting the point pairs between the first mapping point set and the point set of the three-dimensional body surface image.
5. The method of claim 3, wherein determining a distance threshold based on the distance of the respective point pair comprises:
and determining the average distance of the point pairs as the distance threshold value.
6. The method according to claim 3, wherein after the second mapping relationship is obtained by applying the random sample consensus algorithm to the screened point pairs, the method further comprises:
verifying the accuracy of the second mapping relationship;
if the accuracy of the second mapping relation is lower than an accuracy threshold value, extracting a point pair between the first mapping point set and the point set of the three-dimensional body surface image, wherein two points in the point pair represent the same characteristic point;
determining rigid body transformation which enables the average distance between two points in the obtained point pair to be minimum to obtain a fourth mapping relation;
processing the first mapping point set by adopting the fourth mapping relation to obtain a third mapping point set;
determining the average distance between two points in the same point pair of a third mapping point set and the point set of the three-dimensional body surface image;
if the average distance is smaller than a preset value, taking the fourth mapping relation as the second mapping relation;
and if the average distance is larger than or equal to the preset value, returning the third mapping point set as a new first mapping point set to execute the step of extracting the point pairs between the first mapping point set and the point set of the three-dimensional body surface image.
7. The method of claim 4 or 6, wherein the extracting of the point pairs between the first mapped point set and the point set of the three-dimensional body surface image comprises:
and extracting the point pairs between the first mapping point set and the point set of the three-dimensional body surface image by using a fast point feature histogram descriptor.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement any of the methods recited in claims 1-7.
9. A computer program product, comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-7.
10. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210233256.1A CN114820731B (en) | 2022-03-10 | 2022-03-10 | Registration method and related device for CT image and three-dimensional body surface image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210233256.1A CN114820731B (en) | 2022-03-10 | 2022-03-10 | Registration method and related device for CT image and three-dimensional body surface image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114820731A true CN114820731A (en) | 2022-07-29 |
CN114820731B CN114820731B (en) | 2024-10-15 |
Family
ID=82529306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210233256.1A Active CN114820731B (en) | 2022-03-10 | 2022-03-10 | Registration method and related device for CT image and three-dimensional body surface image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114820731B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050096515A1 (en) * | 2003-10-23 | 2005-05-05 | Geng Z. J. | Three-dimensional surface image guided adaptive therapy system |
CN105078514A (en) * | 2014-04-22 | 2015-11-25 | 重庆海扶医疗科技股份有限公司 | Construction method and device of three-dimensional model, image monitoring method and device |
CN106469445A (en) * | 2015-08-18 | 2017-03-01 | 青岛海信医疗设备股份有限公司 | A kind of calibration steps of 3-D view, device and system |
CN106361366A (en) * | 2016-11-02 | 2017-02-01 | 上海联影医疗科技有限公司 | Multimode image registration method and system |
WO2021091282A1 (en) * | 2019-11-06 | 2021-05-14 | 재단법인대구경북과학기술원 | Three-dimensional diagnostic system |
WO2022027251A1 (en) * | 2020-08-04 | 2022-02-10 | 深圳迈瑞生物医疗电子股份有限公司 | Three-dimensional display method and ultrasonic imaging system |
CN113808177A (en) * | 2021-09-26 | 2021-12-17 | 深圳市精迈医疗科技有限公司 | Registration method and system for medical image and three-dimensional pathological image |
Non-Patent Citations (3)
Title |
---|
Tian Jinwen et al., "Image Matching Navigation and Positioning Technology", Huazhong University of Science and Technology Press, Wuhan, 31 January 2021, pages 138-139 *
Zhao Xingdong et al., "Principles of Mine 3D Laser Digital Surveying and Its Engineering Applications", Metallurgical Industry Press, Beijing, 31 January 2016, pages 75-78 *
Chen Ruifeng; Fang Luping; Pan Qing; Cao Ping; Gao Kun: "Design and Implementation of a Multimodal Medical Image Fusion Ultrasound Examination System", Computer Engineering, No. 04, 15 April 2015 (2015-04-15) *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116977381A (en) * | 2023-07-14 | 2023-10-31 | 武汉人工智能研究院 | Image registration method, device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114820731B (en) | 2024-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP2011110429A (en) | System and method for measurement of object of interest in medical image | |
CN112861961B (en) | Pulmonary blood vessel classification method and device, storage medium and electronic equipment | |
US10610124B2 (en) | Localization of fibrous neural structures | |
US11132793B2 (en) | Case-adaptive medical image quality assessment | |
CN114098744A (en) | Automatic identification and processing of anatomical structures in anatomical maps | |
CN115005981A (en) | Surgical path planning method, system, equipment, medium and surgical operation system | |
CN107481254A (en) | Processing method, device, medium and the electronic equipment of medical image | |
CN109859213A (en) | Bone critical point detection method and device in joint replacement surgery | |
CN111062390A (en) | Region-of-interest labeling method, device, equipment and storage medium | |
CN113610826A (en) | Puncture positioning method and device, electronic device and storage medium | |
CN114820731B (en) | Registration method and related device for CT image and three-dimensional body surface image | |
CN112863625A (en) | Follow-up analysis of patients | |
CN113888566B (en) | Target contour curve determination method and device, electronic equipment and storage medium | |
CN113409333B (en) | Three-dimensional image cutting method and electronic equipment | |
CN113610824A (en) | Puncture path planning method and device, electronic device and storage medium | |
CN115861283A (en) | Medical image analysis method, device, equipment and storage medium | |
Pandey et al. | Standardized evaluation of current ultrasound bone segmentation algorithms on multiple datasets | |
CN116747017A (en) | Cerebral hemorrhage operation planning system and method | |
US20190251691A1 (en) | Information processing apparatus and information processing method | |
JP2022519634A (en) | Patient-specific cortical surface tessellation to dipole patch | |
CN112365959B (en) | Method and device for modifying annotation of three-dimensional image | |
CN111476768B (en) | Image registration method, image registration device, path planning method, path planning device, path planning system and medium | |
CN115546174B (en) | Image processing method, device, computing equipment and storage medium | |
CN114463323B (en) | Focal region identification method and device, electronic equipment and storage medium | |
CN118402867B (en) | Bone surface registration guide device, bone surface registration apparatus, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||