WO2015035462A1 - Point feature based 2d-3d registration - Google Patents
- Publication number
- WO2015035462A1 (PCT/AU2014/000909)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- volume
- features
- model
- descriptors
- image
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
Definitions
- This invention relates to image registration and more particularly to registering a 2D image with the corresponding slice of a 3D volume.
- Image registration of 2D images and 3D volumes has many industrial and medical applications. Industrial applications include grain structure analysis and data fusion for fault detection.
- Available methods for 2D-3D image registration can be categorized into two groups, namely global optimization based registration and point feature-based registration.
- Presently available registration techniques for the registration of a Mega-pixel 2D image to a Giga-voxel 3D volume require many hundreds of CPU hours utilizing state of the art CPU technology.
- Major drawbacks of global optimization are the flatness of the similarity measure maxima (caused by the self-similarity of images) and high computational complexity. Global techniques are also not as easily parallelizable as point feature-based techniques.
- global registration techniques suffer from a local maxima problem.
- feature-based methods are not suitable for high accuracy registration as feature misregistration can perturb the estimated model. Further, the accuracy of existing methods for 2D-3D feature matching is quite low as existing descriptors do not account for 2D-3D transformations.
- the invention relates generally to a repeatable keypoint detection methodology for 2D images and 3D volumes, to a novel way of forming descriptors for matching 2D keypoints with 3D keypoints, and to modelling 2D-to-3D transformations for registration up to an affinity under low feature matching accuracy.
- a method for identifying the placement and orientation of a 2D image produced from a slice of a 3D volume comprising the following steps:
- a model is computed from a small number of 2D-3D feature matches and is used to guide the matching of further features.
- the model is computed from one 2D-3D match.
- the model is computed from two 2D-3D matches.
- the method of guiding the match of further features is to consider only further matches that are consistent with the model as regards some or all the following properties:
- a method of identifying scale invariant features of a 3D volume defined by a plurality of voxels comprising: locating voxel amplitude extrema in a plurality of filtered volumes produced from said volume by:
- the method comprises producing said filtered volumes.
- producing a filtered volume comprises subtracting two blurred volumes derived from the initial volume, to produce a difference volume.
- producing a plurality of difference volumes comprises producing a plurality of blurred volumes from the initial volume and subtracting each blurred volume from a blurred volume produced previously as recited in claim 8 of the appended claims.
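The blur-and-subtract construction described above can be sketched as follows. This is an illustrative NumPy/SciPy sketch rather than the patented implementation; the function name, the base scale `sigma0`, and the per-level scale ratio `k` are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_volumes(volume, sigma0=1.6, k=2 ** (1 / 3), n_levels=4):
    """Build difference-of-Gaussian volumes by successively blurring the
    input volume and subtracting each blurred volume from the blurred
    volume produced previously, as the claim describes."""
    blurred = gaussian_filter(volume.astype(np.float32), sigma0)
    dogs = []
    for i in range(1, n_levels + 1):
        # incremental blur so the total blur of this level is sigma0 * k**i
        sigma_prev, sigma_next = sigma0 * k ** (i - 1), sigma0 * k ** i
        sigma_inc = np.sqrt(sigma_next ** 2 - sigma_prev ** 2)
        blurred_next = gaussian_filter(blurred, sigma_inc)
        dogs.append(blurred_next - blurred)  # one difference volume
        blurred = blurred_next               # reuse as next level's input
    return dogs
```

Reusing each blurred volume as the input of the next level avoids re-blurring from the original volume at every scale, which is why the claims recite subtracting from a "blurred volume produced previously".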
- producing subregion volume descriptors for a voxel extremum further comprises producing a plurality of 2D descriptors, associated with a set of planes within a subregion around each voxel.
- producing a plurality of 2D descriptors associated with a set of planes further comprises selecting a set of planes through a 3D keypoint with different orientations and selecting a 2D keypoint descriptor for each plane.
- an apparatus for computing scale invariant features of a volume defined by a plurality of voxels comprising a processor circuit configured to locate voxel extrema in a plurality of filtered volumes by:
- said processor circuit is configured to produce said filtered volumes.
- said processor circuit in order to produce a filtered volume, is configured to compute two blurred volumes from the initial volume and to subtract them to produce a difference volume.
- the processor circuit is configured to successively blur and subtract as recited in claim 14 of the appended claims and is configured to use a blurred volume produced in a preceding blurring function as said original volume in successive blurring functions.
- a method for matching 2D features of an image defined by a plurality of pixels and 3D features of a volume defined by a plurality of voxels comprising:
- a single 2D-3D correspondence is selected at a time, and used to guide the selection of further correspondences, consistent with the plurality of compatible transformation models.
- 2D image features are matched with descriptors belonging to 3D features such that different 2D image descriptors are matched with the 3D features associated with planes of similar orientation.
- the method of matching 2D features comprises the following steps.
- the method further comprises producing the best model under low feature matching accuracy.
- in another embodiment, the transformation model is computed from 2D-3D correspondences, and specifies a hypothesized position of the 2D image with respect to the 3D volume.
- the method comprises producing the transformation model by selecting 3 2D-3D correspondences at a time, successively, until the best model is found.
- an apparatus for matching of 3D features of a volume defined by a plurality of voxels with 2D features of an image defined by a plurality of pixels comprising a processor circuit configured to:
- said processor circuit is configured to produce said scale invariant 2D features for each plane. In another embodiment, said processor circuit is configured to evaluate the transformation model by selecting 3 2D-3D correspondences at a time until the best model is found.
- a sixth aspect of the present invention there is provided a method for outlier rejection of matches, the method comprising the steps of:
- the method comprises producing the mapping model.
- producing the mapping model comprises performing registration.
- an apparatus for outlier rejection of matches comprising:
- said processor circuit is configured for producing the mapping model.
- the present invention addresses one or more of the deficiencies identified in the prior art through novel techniques for repeatable keypoint detection for 2D images and 3D volumes, a novel way of descriptor formation for matching 2D keypoints and 3D keypoints and model estimation for registration up to an affinity under low feature matching accuracy.
- the algorithm is based on Branch-and-Bound for rigid model estimation with guaranteed convergence.
- handling image scale is also addressed by resolving the ambiguity of 2D image scale and 3D image scale by showing that the 2D scale and the 3D scale do not represent similar image and volume neighborhoods.
- a novel feature descriptor based on image curvature founded on mathematically sound principles for improving feature matching accuracy. We indicate that the 2D-3D registration accuracy improves under this novel feature descriptor.
- This invention uses the Difference of Gaussian (DoG) as our detector measure for detecting repeatable keypoints among 2D-3D images.
- DoG: Difference of Gaussian
- 3D volumes we extend the DoG approach with a 3D Gaussian kernel.
- the scale-space of 3D volumes is generated using its Laplacian approximation obtained as the difference of adjacent volumes in the scale-space.
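The Laplacian approximation mentioned above is the standard identity relating the difference of adjacent Gaussian scale-space levels to the scale-normalized Laplacian; it is stated here for reference, with $V$ the volume, $G$ the Gaussian kernel, and $k$ the scale ratio between adjacent levels (symbols are illustrative, not from the source):

```latex
D(\mathbf{x},\sigma)
  = \big(G(\mathbf{x},k\sigma) - G(\mathbf{x},\sigma)\big) * V(\mathbf{x})
  \approx (k-1)\,\sigma^{2}\,\nabla^{2}G(\mathbf{x},\sigma) * V(\mathbf{x})
```

This is why subtracting adjacent blurred volumes yields a repeatable, scale-covariant detector response without explicitly computing second derivatives.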
- the method and the apparatus may further involve generating descriptors for keypoints.
- a 3D keypoint descriptor cannot be directly matched with a 2D keypoint descriptor, so the apparatus hypothesizes that a descriptor computed on a patch around a 2D keypoint and a 2D descriptor computed on a plane passing through the 3D keypoint, with orientation sufficiently close to that of the 2D keypoint patch, should match if the two keypoints are correspondences.
- Sampling planes through a 3D keypoint to approximate all possible orientations produces matchable descriptors with the 2D image.
- descriptors are generated by sampling planes through each 3D keypoint using bilinear or higher-order interpolation. In this way, one 3D keypoint has many descriptors.
- the scale of a 3D keypoint is not equivalent to the scale of a 2D keypoint for computing descriptors, as shown later in this document.
- the apparatus uses a novel image model. This is done by representing any plane in R^3 by a normal vector and a point on the plane. By selecting a coordinate frame oriented with the normal vector, a point on the 2D plane can be related to the corresponding point in the 3D space via an affine transformation, as described later in this document.
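The plane parameterization above can be written compactly. With $\mathbf{n}$ the unit normal, $\mathbf{p}$ a point on the plane, and $\mathbf{u}, \mathbf{v}$ an orthonormal in-plane basis, a 2D point $\mathbf{x} = (x_1, x_2)^{\top}$ maps to 3D affinely (a sketch of the relation implied by the text; the symbols are illustrative):

```latex
\mathbf{X}
  = \mathbf{p} + x_1\mathbf{u} + x_2\mathbf{v}
  = \underbrace{[\,\mathbf{u}\;\;\mathbf{v}\,]}_{A \in \mathbb{R}^{3\times 2}}\,\mathbf{x} + \mathbf{p},
\qquad
\mathbf{u} \perp \mathbf{n},\;
\mathbf{v} \perp \mathbf{n},\;
\mathbf{u} \perp \mathbf{v}.
```

Because the map is affine in $\mathbf{x}$, three non-collinear 2D-3D correspondences suffice to determine it exactly, which is consistent with the three-point model estimation recited elsewhere in this document.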
- the invention accordingly comprises several steps, and the relation of one or more of such steps with each of the others, and the apparatus embodying features of construction, combinations of elements and arrangement of parts that are adapted to effect such steps, is exemplified in the following detailed disclosure, and the scope of the invention will be indicated in the claims.
- an overbar designation may appear to the right of the relevant character rather than strictly above it; such notation is to be interpreted as having the overbar directly above the character, e.g., above the "X" denoting the keypoint.
Brief Description of the Figures
- FIG. 1 shows an exemplary arrangement of a preferred embodiment of the present invention
- FIG. 2 explains the 2D to 3D feature matching process implemented on the processor circuit
- FIG. 3 depicts two different means of implementing the RANSAC operation of the inventive methods.
- FIG. 4 depicts a circle restricting 2D feature detection to the vicinity of the 3D point to resolve scale ambiguity.
- the 2D detector, applied on each orientation plane, estimates the corresponding 2D scale for the 3D keypoint. Detection is allowed to occur within a radius of the keypoint to counter misregistration of the 3D keypoint detection.
- the CT volume is designated [40];
- the feature circle (the circle restricting 2D feature detection in the vicinity of the 3D point to solve scale ambiguity) is [41];
- the 3D keypoint is [42];
- the extracted patch is [43] and the 2D feature detection of the extracted plane is designated [44].
- Image registration of 2D images and 3D volumes has many industrial and medical applications.
- Industrial applications include grain structure analysis and data fusion for fault detection.
- State of the art 2D-3D registration techniques are computationally expensive. For example, registering a Mega-pixel 2D image to a Giga-voxel 3D volume takes many hundreds of CPU hours utilizing state of the art CPU technology.
- the present invention introduces repeatable keypoint detection for 2D images and 3D volumes, a novel way of descriptor formation for matching 2D keypoints and 3D keypoints and model estimation for registration up to an affinity under low feature matching accuracy.
- the invention relates to an affine registration estimation based on an algorithm which requires three true positive matches of feature points, and extends this to estimate the model based on only a single true positive match, which we call the one-point algorithm.
- handling image scale is also addressed by resolving the ambiguity of 2D image scale and 3D image scale.
- the Applicant demonstrates that 2D scale and 3D scale do not represent similar image and volume neighborhoods.
- the Applicant compares the inventive technique with state of the art global registration techniques, such as correlation-based registration, and indicates the superior performance of the inventive method, which is several hundred times faster.
- the arrangement in FIG. 1 shows an exemplary arrangement of a preferred embodiment.
- the apparatus in the arrangement of FIG. 1 includes a 3D scanner [1], shown generally, which can be any 3D source such as a micro CT scanner scanning an object [2], and a 2D scanner [3], shown generally, which can be any 2D source such as an electron microscope scanning any section of the object [2].
- the 2D section [4] captured from the 2D scanner [3] is sent to the computer [5] along with the 3D scan [6].
- the computer [5] is programmed to solve the problem of registering 2D-3D images by detecting repeatable point features in 2D-3D images, describing them with descriptors, successfully matching them, finding correspondences and estimating registration by rejecting outliers using RANSAC.
- the block diagram in FIG. 2 explains the 2D to 3D feature matching process implemented on the processor circuit.
- the 3D scanner generates a 3D volume [7] of the object of interest.
- the 3D point feature detector [8] comprises a 3D DoG point feature detector formulated by extrema detection, a stability test and interpolation steps.
- extrema detection: for the 3D case, scale is not used when detecting extrema, since using scale in extrema detection considerably reduces the number of keypoints, leading to lower repeatability. The detected extrema are chosen as keypoints after the stability test and interpolation to sub-voxel accuracy. Interpolation is performed by fitting the keypoint extremum response to a Taylor series.
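The extrema detection and Taylor-fit refinement steps above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function names are assumptions, and for brevity the Hessian of the Taylor fit is approximated by its diagonal rather than the full second-derivative matrix:

```python
import numpy as np

def local_extrema_3d(dog):
    """Flag voxels strictly greater (or smaller) than all 26 neighbours
    within a single DoG volume (scale is not used for detection)."""
    core = dog[1:-1, 1:-1, 1:-1]
    is_max = np.ones(core.shape, bool)
    is_min = np.ones(core.shape, bool)
    for dz in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dz == dy == dx == 0:
                    continue
                nb = dog[1 + dz:dog.shape[0] - 1 + dz,
                         1 + dy:dog.shape[1] - 1 + dy,
                         1 + dx:dog.shape[2] - 1 + dx]
                is_max &= core > nb
                is_min &= core < nb
    return np.argwhere(is_max | is_min) + 1  # +1 undoes the border trim

def refine_keypoint(dog, p):
    """Sub-voxel refinement: fit a quadratic Taylor expansion around the
    extremum and solve grad(D) + H @ offset = 0 for the offset."""
    z, y, x = p
    g = np.array([dog[z + 1, y, x] - dog[z - 1, y, x],
                  dog[z, y + 1, x] - dog[z, y - 1, x],
                  dog[z, y, x + 1] - dog[z, y, x - 1]]) / 2.0
    # central-difference Hessian (diagonal terms only, for brevity)
    H = np.diag([dog[z + 1, y, x] - 2 * dog[z, y, x] + dog[z - 1, y, x],
                 dog[z, y + 1, x] - 2 * dog[z, y, x] + dog[z, y - 1, x],
                 dog[z, y, x + 1] - 2 * dog[z, y, x] + dog[z, y, x - 1]])
    offset = -np.linalg.solve(H, g) if np.linalg.det(H) else np.zeros(3)
    return np.array(p, float) + offset
```

A stability test (e.g. thresholding the DoG response at the refined location) would sit between these two steps in a full pipeline.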
- region extractor [9]: for each generated 3D keypoint, the region extractor [9] resamples planes through that point, with orientations chosen among all possible orientations. Subsequently, 2D features are computed on each resampled plane to produce descriptors matchable with the 2D image.
- region extractor: for the 3D case, we use bilinear interpolation to compute sampled planes through each 3D keypoint before computing descriptors.
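Resampling a plane through a 3D keypoint can be sketched as below. This is an illustrative sketch under stated assumptions: the function name and patch parameters are not from the source, and SciPy's `map_coordinates` with `order=1` performs linear interpolation within the volume, standing in for the bilinear interpolation the text describes:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_plane(volume, center, normal, half_size=16, step=1.0):
    """Resample a square 2D patch on the plane through `center` with the
    given `normal`, using linear interpolation (order=1)."""
    normal = np.asarray(normal, float)
    normal /= np.linalg.norm(normal)
    # build two in-plane axes orthogonal to the normal
    helper = np.array([1.0, 0.0, 0.0])
    if abs(normal @ helper) > 0.9:       # avoid a near-parallel helper
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, helper); u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    r = np.arange(-half_size, half_size + 1) * step
    uu, vv = np.meshgrid(r, r, indexing="ij")
    # 3D coordinates of every sample in the patch
    pts = (np.asarray(center, float)[:, None, None]
           + u[:, None, None] * uu + v[:, None, None] * vv)
    return map_coordinates(volume, pts, order=1, mode="nearest")
```

A 2D detector and descriptor would then run on the returned patch, one patch per sampled orientation.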
- One 3D keypoint has many descriptors (typically one per differently oriented sample plane). For convenience, we take a nearly equally spaced set of radially outward vectors on a sphere as the possible orientations of the sample planes.
- the 2D scanner [10] scans for interest points on all selected planes and descriptors are generated for each interest point.
- the processor circuit is programmed to associate [13] these descriptors with their corresponding keypoints.
- the 2D scanner generates a 2D image [14] section of the object of interest.
- the 2D point feature detector [15] detects interest points of the 2D image and 2D descriptors [16] are extracted from the image. These descriptors of the 2D image are then matched [17] with descriptors generated for the 3D volume.
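Matching the 2D image descriptors against the pool of 3D plane descriptors can be sketched with a k-d tree nearest-neighbour query. The ratio test used to prune ambiguous matches is an assumption of this sketch, not something the source specifies:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(desc2d, desc3d, ratio=0.8):
    """Match each 2D descriptor to its nearest 3D plane descriptor,
    keeping only matches that pass a nearest/second-nearest ratio test."""
    tree = cKDTree(desc3d)
    d, idx = tree.query(desc2d, k=2)       # two nearest neighbours each
    keep = d[:, 0] < ratio * d[:, 1]       # unambiguous matches only
    return np.flatnonzero(keep), idx[keep, 0]
```

Each retained pair is a candidate 2D-3D correspondence; the 3D descriptor's associated plane orientation travels with the match into the model-estimation stage.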
- the system employs an image model [18] with RANSAC for outlier rejection.
- a feature point detector, such as the DoG feature point extractor, assigns an intrinsic scale to any feature point, determined by the response of the detector to filtered versions of the image computed by convolution with different kernels.
- the first is the result of slicing the filtered 3D image
- the second is the result of filtering the sliced 2D image.
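The distinction between the two results can be demonstrated numerically: blurring the 3D volume and then slicing mixes intensity from neighbouring slices, so it is not the same as slicing first and blurring in 2D with the same kernel width. A small illustrative demo (all names and parameters are this sketch's assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_ambiguity_demo(sigma=2.0, seed=1):
    """Return the max absolute difference between (a) a slice of the
    3D-filtered volume and (b) the 2D-filtered slice, for the same sigma."""
    rng = np.random.default_rng(seed)
    vol = rng.normal(size=(32, 32, 32))
    slice_of_filtered = gaussian_filter(vol, sigma)[16]   # filter, then slice
    filtered_slice = gaussian_filter(vol[16], sigma)      # slice, then filter
    return np.abs(slice_of_filtered - filtered_slice).max()
```

The returned difference is clearly non-zero, which is the source of the 2D/3D scale ambiguity the document resolves.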
- KNN: K Nearest Neighbor
- any 3D keypoint will have only a closely oriented sample plane corresponding to the 2D keypoint patch.
- as the sample plane orientation deviates from that of the 2D patch, the patch structure changes; thus, the descriptor computed on it changes. This reduces the descriptor matching accuracy.
- the block diagram shows an alternate processor circuit implementation which selects 3 points [22] to generate the model and performs RANSAC [23].
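The three-point hypothesise-and-verify loop can be sketched as follows. A 2D-to-3D affine map X = A x + t (A a 3x2 matrix, t a 3-vector) has 9 unknowns, so three correspondences determine it exactly. This is an illustrative sketch of standard RANSAC under that model, not the patented algorithm; function names, iteration count, and inlier tolerance are assumptions:

```python
import numpy as np

def affine_from_3(pts2d, pts3d):
    """Solve X = A x + t exactly from three 2D-3D correspondences
    (3 points x 3 equations = 9 unknowns)."""
    M = np.hstack([pts2d, np.ones((3, 1))])   # rows [x, y, 1]
    sol = np.linalg.solve(M, pts3d)           # rows: a1, a2, t
    return sol[:2].T, sol[2]                  # A (3x2), t (3,)

def ransac_affine(pts2d, pts3d, n_iter=500, tol=2.0, seed=0):
    """Sample 3 correspondences, fit the affine model, count inliers,
    and keep the model with the largest consensus set."""
    rng = np.random.default_rng(seed)
    best, best_inl = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(pts2d), 3, replace=False)
        try:
            A, t = affine_from_3(pts2d[idx], pts3d[idx])
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) minimal sample
        proj = pts2d @ A.T + t
        inl = np.sum(np.linalg.norm(proj - pts3d, axis=1) < tol)
        if inl > best_inl:
            best, best_inl = (A, t), inl
    return best, best_inl
```

The one-point and two-point variants described earlier reduce the sample size by constraining the model with the keypoints' intrinsic scale and plane orientation, which shrinks the number of RANSAC iterations needed.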
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361876786P | 2013-09-12 | 2013-09-12 | |
US61/876,786 | 2013-09-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015035462A1 true WO2015035462A1 (en) | 2015-03-19 |
Family
ID=52664838
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/AU2014/000909 WO2015035462A1 (en) | 2013-09-12 | 2014-09-12 | Point feature based 2d-3d registration |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2015035462A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080175463A1 (en) * | 2007-01-17 | 2008-07-24 | Mediguide Ltd. | Method and a system for registering a 3d pre acquired image coordinates system with a medical positioning system coordinate system and with a 2d image coordinate system |
US20130051647A1 (en) * | 2011-08-23 | 2013-02-28 | Siemens Corporation | Automatic Initialization for 2D/3D Registration |
US20130076865A1 (en) * | 2010-06-18 | 2013-03-28 | Canon Kabushiki Kaisha | Position/orientation measurement apparatus, processing method therefor, and non-transitory computer-readable storage medium |
Non-Patent Citations (1)
Title |
---|
FLITTON G ET AL.: "Object Recognition Using 3D SIFT in Complex CT Volumes", PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE, September 2010 (2010-09-01), pages 11.1 - 11.12 *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109727240A (en) * | 2018-12-27 | 2019-05-07 | 深圳开立生物医疗科技股份有限公司 | A kind of three-dimensional ultrasound pattern blocks tissue stripping means and relevant apparatus |
CN109727240B (en) * | 2018-12-27 | 2021-01-19 | 深圳开立生物医疗科技股份有限公司 | Method and related device for stripping shielding tissues of three-dimensional ultrasonic image |
CN112633304A (en) * | 2019-09-23 | 2021-04-09 | 中国科学院沈阳自动化研究所 | Robust fuzzy image matching method |
CN112633304B (en) * | 2019-09-23 | 2023-07-25 | 中国科学院沈阳自动化研究所 | Robust fuzzy image matching method |
US11321862B2 (en) | 2020-09-15 | 2022-05-03 | Toyota Research Institute, Inc. | Systems and methods for multi-camera modeling with neural camera networks |
US20220245843A1 (en) * | 2020-09-15 | 2022-08-04 | Toyota Research Institute, Inc. | Systems and methods for generic visual odometry using learned features via neural camera models |
US11494927B2 (en) | 2020-09-15 | 2022-11-08 | Toyota Research Institute, Inc. | Systems and methods for self-supervised depth estimation |
US11508080B2 (en) | 2020-09-15 | 2022-11-22 | Toyota Research Institute, Inc. | Systems and methods for generic visual odometry using learned features via neural camera models |
US11615544B2 (en) | 2020-09-15 | 2023-03-28 | Toyota Research Institute, Inc. | Systems and methods for end-to-end map building from a video sequence using neural camera models |
CN116402675A (en) * | 2023-03-23 | 2023-07-07 | 中国地质科学院地质力学研究所 | Image registration method based on shale component calibration |
CN116402675B (en) * | 2023-03-23 | 2023-11-28 | 中国地质科学院地质力学研究所 | Image registration method based on shale component calibration |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015035462A1 (en) | Point feature based 2d-3d registration | |
EP2491531B1 (en) | Alignment of an ordered stack of images from a specimen. | |
JP5940453B2 (en) | Method, computer program, and apparatus for hybrid tracking of real-time representations of objects in a sequence of images | |
US20150262346A1 (en) | Image processing apparatus, image processing method, and image processing program | |
US9767383B2 (en) | Method and apparatus for detecting incorrect associations between keypoints of a first image and keypoints of a second image | |
US11145080B2 (en) | Method and apparatus for three-dimensional object pose estimation, device and storage medium | |
US20110038540A1 (en) | Method and apparatus extracting feature points and image based localization method using extracted feature points | |
JP2008107860A (en) | Method for estimating transformation between image pair, method for expressing image, device for this method, controller, and computer program | |
US11676301B2 (en) | System and method for efficiently scoring probes in an image with a vision system | |
CN111738045B (en) | Image detection method and device, electronic equipment and storage medium | |
US8582810B2 (en) | Detecting potential changed objects in images | |
JPWO2014203687A1 (en) | Image processing method, image processing apparatus, and image processing program | |
CN109785370A (en) | A kind of weak texture image method for registering based on space time series model | |
Lundin et al. | Automatic registration of 2D histological sections to 3D microCT volumes: Trabecular bone | |
Yetiş et al. | Adaptive vision based condition monitoring and fault detection method for multi robots at production lines in industrial systems | |
CN110458177B (en) | Method for acquiring image depth information, image processing device and storage medium | |
Yasein et al. | A feature-based image registration technique for images of different scale | |
Kanuki et al. | Automatic compensation of radial distortion by minimizing entropy of histogram of oriented gradients | |
Petrou et al. | Super-resolution in practice: the complete pipeline from image capture to super-resolved subimage creation using a novel frame selection method | |
CN111476821B (en) | Target tracking method based on online learning | |
Kang et al. | Checkerboard Corner Localization Accelerated with Deep False Detection for Multi-camera Calibration | |
CN115294358A (en) | Feature point extraction method and device, computer equipment and readable storage medium | |
Amankwah | Image registration by automatic subimage selection and maximization of combined mutual information and spatial information | |
CN115937115A (en) | Cloth detection method, device, equipment and medium | |
Thein et al. | Feature Matching Approach for 3D Reconstruction from Multiple Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14843682 Country of ref document: EP Kind code of ref document: A1 |
|
WPC | Withdrawal of priority claims after completion of the technical preparations for international publication |
Ref document number: 61/876,786 Country of ref document: US Date of ref document: 20160217 Free format text: WITHDRAWN AFTER TECHNICAL PREPARATION FINISHED |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14843682 Country of ref document: EP Kind code of ref document: A1 |