CN116402867A - Three-dimensional reconstruction image alignment method for fusing SIFT and RANSAC - Google Patents
- Publication number: CN116402867A (application CN202310328243.7A)
- Authority
- CN
- China
- Prior art keywords
- points
- feature point
- screening
- image
- grid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33 — Image registration using feature-based methods
- G06T7/35 — Image registration using statistical methods
- G06V10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
- G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/757 — Matching configurations of points or features
- G06T2207/10004 — Still image; photographic image
- G06T2207/10012 — Stereo images
- Y02T10/40 — Engine management systems
Abstract
The invention discloses a three-dimensional reconstruction image alignment method fusing SIFT and RANSAC, relating to the field of computer vision. The original image and the target image are each divided into grids of the same size; feature point detection is performed on every grid of both images based on the scale-invariant feature transform (SIFT) algorithm to obtain feature point descriptors for each grid of the original image and of the target image, and coarse-screening feature point matching is performed on these descriptors. Secondary screening is then performed on the coarse-screening matches, and the secondary screening result is finely screened based on the random sample consensus (RANSAC) algorithm to complete image alignment.
Description
Technical Field
The invention relates to the field of computer vision, and in particular to a method for aligning three-dimensional reconstruction images that fuses SIFT and RANSAC.
Background
In computer vision, three-dimensional reconstruction is one of the key technologies for perceiving the physical environment, with applications in spaceflight, remote sensing and mapping, smart cities, digital cultural heritage, autonomous driving, virtual reality, digital twins and other scenarios. Three-dimensional reconstruction refers to recovering and rebuilding three-dimensional objects or scenes in a form that is convenient for a computer to represent and process. In practice, reconstruction is the inverse of imaging objects, scenes and human bodies in three-dimensional space: three-dimensional objects, scenes and dynamic human bodies are restored from two-dimensional images. Three-dimensional reconstruction technology thus builds, inside a computer, a virtual reality that expresses the objective world.
In essence, image-based three-dimensional reconstruction takes discrete two-dimensional images of a three-dimensional scene or object, acquired by cameras or video equipment, as its basic data, processes them into three-dimensional information, and generates a realistic scene or object. The panoramic imagery is then organized into a virtual scene space through a suitable spatial model, in which a user can move forward and backward, look around, and zoom near and far, observing the three-dimensional scene from all directions. In this reconstruction process, two-dimensional image alignment of each scene and object is the foundation of reconstruction, but existing image alignment methods suffer from high mismatch rates and inaccurate alignment caused by foreground interference, large parallax between images, and other factors.
Disclosure of Invention
The invention mainly addresses inaccurate image alignment caused by foreground interference, large parallax between images and similar factors, and provides a three-dimensional reconstruction image alignment method fusing SIFT and RANSAC that improves the accuracy and speed of feature point matching and completes image alignment more efficiently.
The specific scheme provided by the invention is as follows:
the invention provides a three-dimensional reconstruction image alignment method fusing SIFT and RANSAC: the original image and the target image are each divided into grids of the same size; feature point detection is performed on every grid of the original image and of the target image based on the scale-invariant feature transform (SIFT) algorithm to obtain feature point descriptors for each grid of both images, and coarse-screening feature point matching is performed on these descriptors;
secondary screening is performed on the coarse-screening matches: the slope of the matching line segment between two horizontal feature points in the calibrated and rectified images is taken as the standard slope, and the slope of each matching line segment in the coarse-screening result is compared against it; a match whose slope equals the standard slope is regarded as valid, otherwise the matching line segment is rejected;
fine screening of the secondary screening result is performed based on the random sample consensus (RANSAC) algorithm to complete image alignment: an estimation model is fitted from inliers, corresponding feature points of the original image and the target image are screened with the model, and image alignment is completed from the corresponding feature points.
Further, in the method for aligning three-dimensional reconstruction images fusing SIFT and RANSAC, feature point detection with the scale-invariant feature transform (SIFT) algorithm comprises: searching image positions over all scale spaces within each grid of the original image and the target image respectively, identifying scale- and rotation-invariant key points with a difference-of-Gaussian function, locating the key points, and determining the feature points from the key points.
Further, in the method for aligning three-dimensional reconstruction images fusing SIFT and RANSAC, obtaining the feature point description based on the scale-invariant feature transform (SIFT) algorithm comprises: determining the main direction of the feature points, comparing the key points' feature vectors pairwise to obtain pairs of mutually matched feature points, establishing the correspondence between scenes in the original image and the target image, and generating the feature point description of every grid.
Further, in the method for aligning three-dimensional reconstruction images fusing SIFT and RANSAC, the fine screening of the secondary screening result based on the random sample consensus algorithm RANSAC comprises: randomly assuming a set of inliers as the initial value and fitting an estimation model from them; testing the corresponding feature points of the original image and the target image against the model, and regarding points that fit the model as inliers, thereby expanding the inlier set until enough inliers are obtained for the application;
and optimizing an affine transformation matrix based on the random sample consensus algorithm RANSAC, so that the transformation matrix connects the corresponding feature points and completes image alignment.
The invention also provides a device for aligning three-dimensional reconstruction images fusing SIFT and RANSAC, comprising a coarse screening module, a secondary screening module and a fine screening alignment module.
The coarse screening module divides the original image and the target image into grids of the same size, performs feature point detection on every grid of both images based on the scale-invariant feature transform (SIFT) algorithm to obtain feature point descriptors for each grid of the original image and of the target image, and performs coarse-screening feature point matching on these descriptors.
The secondary screening module performs secondary screening on the coarse-screening matches: the slope of the matching line segment between two horizontal feature points in the calibrated and rectified images is taken as the standard slope, and the slope of each matching line segment in the coarse-screening result is compared against it; a match whose slope equals the standard slope is regarded as valid, otherwise the matching line segment is rejected.
The fine screening alignment module finely screens the secondary screening result based on the random sample consensus (RANSAC) algorithm to complete image alignment: an estimation model is fitted from inliers, corresponding feature points of the original image and the target image are screened with the model, and image alignment is completed from the corresponding feature points.
Further, in the device for aligning three-dimensional reconstruction images fusing SIFT and RANSAC, the coarse screening module performs feature point detection based on the scale-invariant feature transform (SIFT) algorithm by: searching image positions over all scale spaces within each grid of the original image and the target image respectively, identifying scale- and rotation-invariant key points with a difference-of-Gaussian function, locating the key points, and determining the feature points from the key points.
Further, in the device for aligning three-dimensional reconstruction images fusing SIFT and RANSAC, the coarse screening module obtains the feature point description based on the scale-invariant feature transform (SIFT) algorithm by: determining the main direction of the feature points, comparing the key points' feature vectors pairwise to obtain pairs of mutually matched feature points, establishing the correspondence between scenes in the original image and the target image, and generating the feature point description of every grid.
Further, in the device for aligning three-dimensional reconstruction images fusing SIFT and RANSAC, the fine screening alignment module finely screens the secondary screening result based on the random sample consensus algorithm RANSAC by: randomly assuming a set of inliers as the initial value and fitting an estimation model from them; testing the corresponding feature points of the original image and the target image against the model, and regarding points that fit the model as inliers, thereby expanding the inlier set until enough inliers are obtained for the application;
and optimizing an affine transformation matrix based on the random sample consensus algorithm RANSAC, so that the transformation matrix connects the corresponding feature points and completes image alignment.
The invention has the advantages that:
the invention provides a three-dimensional reconstruction image alignment method fusing SIFT and RANSAC, which comprises the steps of dividing an image into grids, representing different depth planes of the image by corresponding a homography matrix to each grid, extracting SIF characteristics of each grid in two images, and obtaining a rough matching result; performing secondary filtering through calibrating the slope of the connection line between the calibration value and the correction value; the method of the invention improves the accuracy and the speed of feature point matching and more efficiently completes image alignment.
Drawings
FIG. 1 is a schematic diagram of the application flow of the method of the present invention.
Detailed Description
The feature point extraction method used is the scale-invariant feature transform, SIFT (Scale Invariant Feature Transform). SIFT searches for key points over different scale spaces and computes their orientations; the resulting feature points are scale-invariant local feature descriptors. SIFT has good stability and invariance: it adapts to rotation, scaling and brightness changes, and is robust to a certain extent against viewpoint change, affine transformation and noise. It is also highly distinctive, allowing fast and accurate matching against massive feature databases.
The random sample consensus algorithm (RANdom SAmple Consensus, RANSAC) iteratively estimates the parameters of a mathematical model from a set of observed data containing outliers. RANSAC is widely used in computer vision and mathematics, for example in line fitting, plane fitting, computing transformation matrices between images or point clouds, and computing the fundamental matrix.
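As a minimal, self-contained illustration of the RANSAC idea just described (a sketch, not the patent's implementation), the following Python code fits a line y = a·x + b to points contaminated with outliers; the tolerance `tol`, iteration count and seed are assumed parameters chosen for the example:

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Fit y = a*x + b to 2-D points containing outliers via RANSAC.

    Each iteration samples a minimal set (two points), fits the model
    exactly, and counts the points that agree with it within `tol`.
    Returns ((a, b), inliers) for the model with the largest consensus.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:          # vertical sample: cannot express as y = a*x + b
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

Once a sampled pair happens to consist of two true inliers, the exact line is recovered and the consensus set jumps to its maximum; the same sample-fit-vote logic carries over to estimating transformation matrices between images.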
The present invention will be further described below with reference to the accompanying drawings and specific examples, which are illustrative rather than limiting, so that those skilled in the art can better understand and practice the invention.
The invention provides a three-dimensional reconstruction image alignment method fusing SIFT and RANSAC: the original image and the target image are each divided into grids of the same size; feature point detection is performed on every grid of the original image and of the target image based on the scale-invariant feature transform (SIFT) algorithm to obtain feature point descriptors for each grid of both images, and coarse-screening feature point matching is performed on these descriptors;
secondary screening is performed on the coarse-screening matches: the slope of the matching line segment between two horizontal feature points in the calibrated and rectified images is taken as the standard slope, and the slope of each matching line segment in the coarse-screening result is compared against it; a match whose slope equals the standard slope is regarded as valid, otherwise the matching line segment is rejected;
fine screening of the secondary screening result is performed based on the random sample consensus (RANSAC) algorithm to complete image alignment: an estimation model is fitted from inliers, corresponding feature points of the original image and the target image are screened with the model, and image alignment is completed from the corresponding feature points.
The three-dimensional reconstruction image alignment method fusing SIFT and RANSAC addresses mismatches and slow computation in image alignment under foreground interference, large parallax between images, and similar conditions.
In specific applications, some embodiments of the method may follow the procedure below:
step 1: dividing grids with the same size for an original image and a target image, respectively carrying out feature point detection on each grid of the original image and the target image based on a scale invariant feature transform SIFT algorithm to obtain feature point description of each grid of the original image and feature point description of each grid of the target image, and carrying out coarse screening feature point matching according to the feature point description of each grid of the original image and the feature point description of each grid of the target image;
further, in the step 1, feature point detection is performed based on a scale invariant feature transform SIFT algorithm, including: searching image positions on all scale spaces under each grid of an original image and a target image respectively, identifying key points with scales and rotation invariance, such as corner points, edge points, bright points of a dark area and dark points of the bright area through Gaussian differential functions, positioning the key points, determining characteristic points according to the key points, determining positions and scales on each candidate position, and distributing one or more directions to each key point position based on the local gradient direction of the image according to the stability degree of the key points;
further, the feature point description obtained by the scale-invariant feature transform SIFT algorithm in the step 1 includes: determining the main direction of the feature points, comparing the feature vectors of the key points in pairs to obtain a plurality of pairs of feature points matched with each other, establishing the corresponding relation between scenes in the original image and the target image, and generating feature point descriptions of all grids.
Step 2: secondary screening is performed on the coarse-screening matches: the slope of the matching line segment between two horizontal feature points in the calibrated and rectified images is taken as the standard slope, and the slope of each matching line segment in the coarse-screening result is compared against it; a match whose slope equals the standard slope is regarded as valid, otherwise the matching line segment is rejected.
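The slope-based secondary screen of step 2 might look like the following sketch. The patent compares slopes for equality; this version adds a small tolerance `tol` as a practical assumption for floating-point coordinates (for rectified stereo pairs the standard slope is typically 0, i.e. horizontal match lines):

```python
def slope_screen(matches, standard_slope=0.0, tol=1e-2):
    """Secondary screen: keep a match only if the slope of the line segment
    joining its two points is (approximately) the standard slope.

    `matches` is a list of ((x1, y1), (x2, y2)) matched point pairs;
    vertical segments have undefined slope and are rejected.
    """
    kept = []
    for (x1, y1), (x2, y2) in matches:
        if x2 == x1:
            continue
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope - standard_slope) <= tol:
            kept.append(((x1, y1), (x2, y2)))
    return kept
```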
Step 3: fine screening of the secondary screening result is performed based on the random sample consensus (RANSAC) algorithm to complete image alignment: an estimation model is fitted from inliers, corresponding feature points of the original image and the target image are screened with the model, and image alignment is completed from the corresponding feature points.
Further, the fine screening of the secondary screening result based on the random sample consensus algorithm RANSAC in step 3 comprises: randomly assuming a set of inliers as the initial value and fitting an estimation model from them; testing the corresponding feature points of the original image and the target image against the model, and regarding points that fit the model as inliers, thereby expanding the inlier set until enough inliers are obtained for the application; re-optimizing the estimation model from all assumed inliers and evaluating it by the error rate of the inliers against the model;
and optimizing an affine transformation matrix based on the random sample consensus algorithm RANSAC, so that the transformation matrix connects the corresponding feature points and completes image alignment.
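As a sketch of the final alignment step, the code below recovers an exact 2-D affine transformation from three point correspondences and applies it to a point; production code would instead estimate the transform from many RANSAC inliers in a least-squares sense (e.g. OpenCV's `cv2.estimateAffine2D`, which itself runs RANSAC):

```python
def solve3(M, v):
    """Solve a 3x3 linear system M @ x = v by Gauss-Jordan elimination
    with partial pivoting."""
    A = [row[:] + [val] for row, val in zip(M, v)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

def affine_from_3(src, dst):
    """Exact 2-D affine (a, b, tx, c, d, ty) mapping three source points
    onto three target points: x' = a*x + b*y + tx, y' = c*x + d*y + ty.
    The x' and y' equations decouple into two 3x3 linear systems."""
    M = [[x, y, 1.0] for x, y in src]
    a, b, tx = solve3(M, [x for x, _ in dst])
    c, d, ty = solve3(M, [y for _, y in dst])
    return a, b, tx, c, d, ty

def apply_affine(params, pt):
    """Map a point through the affine transform returned by affine_from_3."""
    a, b, tx, c, d, ty = params
    x, y = pt
    return (a * x + b * y + tx, c * x + d * y + ty)
```

Applying the optimized transform to one image's feature points brings them onto the other image's coordinate frame, which is what "connecting the corresponding feature points" amounts to.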
The invention also provides a device for aligning three-dimensional reconstruction images fusing SIFT and RANSAC, comprising a coarse screening module, a secondary screening module and a fine screening alignment module.
The coarse screening module divides the original image and the target image into grids of the same size, performs feature point detection on every grid of both images based on the scale-invariant feature transform (SIFT) algorithm to obtain feature point descriptors for each grid of the original image and of the target image, and performs coarse-screening feature point matching on these descriptors.
The secondary screening module performs secondary screening on the coarse-screening matches: the slope of the matching line segment between two horizontal feature points in the calibrated and rectified images is taken as the standard slope, and the slope of each matching line segment in the coarse-screening result is compared against it; a match whose slope equals the standard slope is regarded as valid, otherwise the matching line segment is rejected.
The fine screening alignment module finely screens the secondary screening result based on the random sample consensus (RANSAC) algorithm to complete image alignment: an estimation model is fitted from inliers, corresponding feature points of the original image and the target image are screened with the model, and image alignment is completed from the corresponding feature points.
The information exchanged and the execution flow between the modules of the device follow the same conception as the method embodiments of the present invention; for details, refer to the description of the method embodiments, which is not repeated here.
Similarly, the device divides the image into grids, each grid corresponding to a homography matrix so as to represent a different depth plane of the image; SIFT features are extracted from each grid of the two images to obtain a coarse matching result; secondary filtering is performed using the slope of the line connecting calibrated and rectified points; and the RANSAC algorithm is used to screen the matching points and compute the transformation matrix, connecting the matching points of the two images to align them.
It should be noted that not all the steps and modules in the above processes and the structures of the devices are necessary, and some steps or modules may be omitted according to actual needs. The execution sequence of the steps is not fixed and can be adjusted as required. The system structure described in the above embodiments may be a physical structure or a logical structure, that is, some modules may be implemented by the same physical entity, or some modules may be implemented by multiple physical entities, or may be implemented jointly by some components in multiple independent devices.
The above-described embodiments are merely preferred embodiments for fully explaining the present invention, and the scope of the present invention is not limited thereto. Equivalent substitutions and modifications will occur to those skilled in the art based on the present invention, and are intended to be within the scope of the present invention. The protection scope of the invention is subject to the claims.
Claims (8)
1. A three-dimensional reconstruction image alignment method fusing SIFT and RANSAC, characterized in that the original image and the target image are each divided into grids of the same size; feature point detection is performed on every grid of the original image and of the target image based on the scale-invariant feature transform (SIFT) algorithm to obtain feature point descriptors for each grid of both images, and coarse-screening feature point matching is performed on these descriptors;
secondary screening is performed on the coarse-screening matches: the slope of the matching line segment between two horizontal feature points in the calibrated and rectified images is taken as the standard slope, and the slope of each matching line segment in the coarse-screening result is compared against it; a match whose slope equals the standard slope is regarded as valid, otherwise the matching line segment is rejected;
fine screening of the secondary screening result is performed based on the random sample consensus (RANSAC) algorithm to complete image alignment: an estimation model is fitted from inliers, corresponding feature points of the original image and the target image are screened with the model, and image alignment is completed from the corresponding feature points.
2. The method for aligning three-dimensional reconstruction images fusing SIFT and RANSAC according to claim 1, wherein feature point detection with the scale-invariant feature transform (SIFT) algorithm comprises: searching image positions over all scale spaces within each grid of the original image and the target image respectively, identifying scale- and rotation-invariant key points with a difference-of-Gaussian function, locating the key points, and determining the feature points from the key points.
3. The method for aligning three-dimensional reconstruction images fusing SIFT and RANSAC according to claim 2, wherein obtaining the feature point description based on the scale-invariant feature transform (SIFT) algorithm comprises: determining the main direction of the feature points, comparing the key points' feature vectors pairwise to obtain pairs of mutually matched feature points, establishing the correspondence between scenes in the original image and the target image, and generating the feature point description of every grid.
4. The method for aligning three-dimensional reconstruction images fusing SIFT and RANSAC according to claim 1, wherein the fine screening of the secondary screening result based on the random sample consensus (RANSAC) algorithm comprises: randomly selecting a group of inlier points as initial values, fitting an estimation model from the inlier points, and testing the corresponding feature points of the original image and the target image against the estimation model; if a corresponding feature point fits the estimation model, it is regarded as an inlier point and the inlier set is expanded; once enough inlier points have been obtained for the application conditions,
an affine transformation matrix is optimized based on the random sample consensus (RANSAC) algorithm, so that the transformation matrix relates the corresponding feature points and the alignment of the images is completed.
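The RANSAC loop of claim 4 can be sketched as below. The iteration count, the inlier threshold, and the final least-squares refit over all inliers are illustrative choices of this sketch, not parameters stated in the patent.

```python
import random
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine transform mapping src -> dst; src, dst are
    # (N, 2) arrays with N >= 3. Returns a (3, 2) matrix M such that
    # [x, y, 1] @ M approximates the target point.
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def ransac_affine(src, dst, iters=200, thresh=2.0, seed=0):
    # Repeatedly fit an affine model to 3 random correspondences, count
    # the points the model explains (inliers), keep the best consensus
    # set, then refit the model on all of its inliers.
    rng = random.Random(seed)
    idx = list(range(len(src)))
    best_inliers = []
    for _ in range(iters):
        sample = rng.sample(idx, 3)
        M = fit_affine(src[sample], dst[sample])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ M
        err = np.linalg.norm(pred - dst, axis=1)
        inliers = [i for i in idx if err[i] < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    M = fit_affine(src[best_inliers], dst[best_inliers])  # refit on consensus
    return M, best_inliers

# Five points related by a pure translation (5, -3) plus one gross outlier.
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5], [3, 7]], float)
dst = src + np.array([5.0, -3.0])
dst[5] = [100.0, 100.0]            # corrupt the last correspondence
M, inliers = ransac_affine(src, dst)
print(len(inliers))  # 5
```

The recovered translation sits in the last row of `M`, since the model is applied as `[x, y, 1] @ M`.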
5. A device for aligning three-dimensional reconstruction images fusing SIFT and RANSAC, characterized by comprising a coarse screening module, a secondary screening module and a fine screening alignment module, wherein:
the coarse screening module divides the original image and the target image into grids of equal size, performs feature point detection on each grid of the original image and the target image based on the scale-invariant feature transform (SIFT) algorithm to obtain the feature point description of each grid of the original image and the feature point description of each grid of the target image, and performs coarse screening feature point matching according to the feature point descriptions of the grids of the two images;
the secondary screening module performs secondary screening according to the coarse screening feature point matching result: the slope of the matching line segment between two horizontally corresponding feature points in the calibrated and rectified images is set as a standard slope; the slope of each matching line segment in the coarse screening matching result is compared with the standard slope; if they are equal, the match is regarded as a valid match, otherwise the matching line segment is rejected;
the fine screening alignment module performs fine screening on the secondary screening result based on the random sample consensus (RANSAC) algorithm to complete image alignment: an estimation model is fitted from inlier points, the corresponding feature points of the original image and the target image are screened with the estimation model, and image alignment is completed according to the corresponding feature points.
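The equal-size grid division performed by the coarse screening module can be sketched as follows; feature detection would then run on each cell independently. Dropping the trailing pixels that do not fill a whole cell is a simplification of this sketch, not something the patent specifies.

```python
import numpy as np

def split_into_grids(img, rows, cols):
    # Divide an image into rows x cols equally sized grid cells;
    # trailing pixels that do not fill a whole cell are dropped.
    h, w = img.shape[:2]
    ch, cw = h // rows, w // cols
    cells = []
    for r in range(rows):
        for c in range(cols):
            cells.append(img[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw])
    return cells

img = np.zeros((100, 120))
cells = split_into_grids(img, 4, 4)
print(len(cells), cells[0].shape)  # 16 (25, 30)
```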
6. The device for aligning three-dimensional reconstruction images fusing SIFT and RANSAC according to claim 5, wherein the coarse screening module performs feature point detection based on the scale-invariant feature transform (SIFT) algorithm, comprising: searching the image positions of the original image and the target image over all scale spaces within each grid, identifying scale- and rotation-invariant key points through a difference-of-Gaussians function, locating the key points, and determining the feature points from the key points.
7. The device for aligning three-dimensional reconstruction images fusing SIFT and RANSAC according to claim 6, wherein the coarse screening module obtains the feature point description based on the scale-invariant feature transform (SIFT) algorithm, comprising: determining the dominant orientation of the feature points, comparing the feature vectors of the key points pairwise to obtain multiple pairs of mutually matched feature points, establishing the correspondence between scenes in the original image and the target image, and generating the feature point descriptions of all grids.
8. The device for aligning three-dimensional reconstruction images fusing SIFT and RANSAC according to claim 5, wherein the fine screening alignment module performs fine screening on the secondary screening result based on the random sample consensus (RANSAC) algorithm, comprising: randomly selecting a group of inlier points as initial values, fitting an estimation model from the inlier points, and testing the corresponding feature points of the original image and the target image against the estimation model; if a corresponding feature point fits the estimation model, it is regarded as an inlier point and the inlier set is expanded; once enough inlier points have been obtained for the application conditions, an affine transformation matrix is optimized based on the random sample consensus (RANSAC) algorithm, so that the transformation matrix relates the corresponding feature points and the alignment of the images is completed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310328243.7A CN116402867A (en) | 2023-03-28 | 2023-03-28 | Three-dimensional reconstruction image alignment method for fusing SIFT and RANSAC |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116402867A (en) | 2023-07-07 |
Family
ID=87019291
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310328243.7A Pending CN116402867A (en) | 2023-03-28 | 2023-03-28 | Three-dimensional reconstruction image alignment method for fusing SIFT and RANSAC |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116402867A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117150698A (en) * | 2023-11-01 | 2023-12-01 | 广东新禾道信息科技有限公司 | Digital twinning-based smart city grid object construction method and system |
CN117150698B (en) * | 2023-11-01 | 2024-02-23 | 广东新禾道信息科技有限公司 | Digital twinning-based smart city grid object construction method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110322500B (en) | Optimization method and device for instant positioning and map construction, medium and electronic equipment | |
CN109242913B (en) | Method, device, equipment and medium for calibrating relative parameters of collector | |
US10334168B2 (en) | Threshold determination in a RANSAC algorithm | |
Schindler et al. | Line-based structure from motion for urban environments | |
US9942535B2 (en) | Method for 3D scene structure modeling and camera registration from single image | |
CN107481279B (en) | Monocular video depth map calculation method | |
US8953847B2 (en) | Method and apparatus for solving position and orientation from correlated point features in images | |
Choi et al. | Depth analogy: Data-driven approach for single image depth estimation using gradient samples | |
JP6744747B2 (en) | Information processing apparatus and control method thereof | |
CN110349212B (en) | Optimization method and device for instant positioning and map construction, medium and electronic equipment | |
CN111612731B (en) | Measuring method, device, system and medium based on binocular microscopic vision | |
Eichhardt et al. | Affine correspondences between central cameras for rapid relative pose estimation | |
EP3185212B1 (en) | Dynamic particle filter parameterization | |
US20230394833A1 (en) | Method, system and computer readable media for object detection coverage estimation | |
CN107067441B (en) | Camera calibration method and device | |
CN116402867A (en) | Three-dimensional reconstruction image alignment method for fusing SIFT and RANSAC | |
CN111161348B (en) | Object pose estimation method, device and equipment based on monocular camera | |
CN108447092B (en) | Method and device for visually positioning marker | |
CN112102404B (en) | Object detection tracking method and device and head-mounted display equipment | |
CN117870659A (en) | Visual inertial integrated navigation algorithm based on dotted line characteristics | |
Seetharaman et al. | A piecewise affine model for image registration in nonrigid motion analysis | |
CN116894876A (en) | 6-DOF positioning method based on real-time image | |
CN112767457A (en) | Principal component analysis-based plane point cloud matching method and device | |
CN116843754A (en) | Visual positioning method and system based on multi-feature fusion | |
CN112991419B (en) | Parallax data generation method, parallax data generation device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||