CN117036488B - Binocular vision positioning method based on geometric constraint - Google Patents
- Publication number: CN117036488B (application CN202311280755.7A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
Abstract
The invention discloses a binocular vision positioning method based on geometric constraint, comprising the following steps: S1, solving the normalized image feature point position coordinates; S2, solving a rotation matrix and a translation vector; S3, calculating the distance between the query camera and the target point; and S4, constructing a geometric constraint relation. By establishing a geometric constraint relation from a single database image and a single query image, the invention avoids solving scale coefficients and thereby resolves the scale ambiguity problem common in 2D-2D indoor positioning; by reducing the number of matched images, it lowers the uncertainty and computational complexity of image retrieval and improves the stability, accuracy, and real-time performance of the positioning algorithm. The global position of the query camera can be obtained from the geometric constraint conditions, enabling accurate estimation of the query camera position; the method therefore offers clear advantages in resolving scale ambiguity, improving positioning accuracy, and obtaining a global position estimate.
Description
Technical Field
The invention relates to the technical field of visual positioning methods, in particular to a binocular visual positioning method based on geometric constraint.
Background
With the rapid development of the internet and the popularization of wearable devices, the demand for self-positioning continues to grow. People currently spend roughly 80% of their time indoors, so indoor positioning has attracted great attention. A visual indoor positioning system works much as a person estimates position by sight: the user captures a query image with a handheld smart mobile terminal and uploads it to a network server. The server estimates the user's position from the query image and sends the position information back to the user's terminal. Because it perceives the user's surroundings from images alone, a visual positioning system has significant advantages over other positioning systems for estimating a user's location in complex indoor environments: it avoids problems such as signal interference and transmission limitations that other sensors may suffer, and provides more reliable and accurate positioning results. These advantages have driven intensive research on, and wide application of, indoor visual positioning. Existing indoor visual positioning systems generally involve two main phases: an offline phase and an online phase.
Before visual localization is performed (i.e., in the offline phase), the indoor scene must be modeled and a visual map created. The map creation device in the offline stage is a 3D stereoscopic vision acquisition device with two RGB color lenses; a typical example is the ZED 2i binocular stereo depth camera from STEREOLABS. The device simultaneously captures binocular stereo images of the indoor scene and computes the disparity between them with a stereo matching algorithm, from which the depth of each pixel is inferred. Based on the triangulation principle and the known camera intrinsic and extrinsic parameters, the three-dimensional position of each pixel relative to the camera is estimated, and these spatial positions are assembled into a point cloud representing the geometric structure of the indoor scene. Stable feature points are then extracted from the point cloud with a feature extraction and description algorithm, and the map is built by establishing local and global feature descriptors, finally yielding a three-dimensional dense map (3D Dense Map) containing high-density geometric information. During visual feature map creation, the keyframe database images must be saved together with the shooting pose (position and attitude) of each frame. The offline map creation method is not discussed in this invention; the visual map is assumed to exist, with the poses of the database images and the spatial positions of the database image pixels as known conditions. The visual map created offline is stored on the server side and supports visual positioning.
During actual visual localization (i.e., the online stage), the query image is uploaded to the server side and the database image matching it is retrieved from the visual map. Once a database image matching the query image is obtained, precise positioning can be performed. Precise positioning methods generally fall into three categories: 2D-2D, 3D-2D, and 3D-3D. Among them, the 2D-2D method is the most commonly used in indoor visual positioning: it estimates the user's position from two-dimensional image information alone, typically through position estimation based on epipolar constraints. From the epipolar geometry constraint relationship, the relative position between the query camera and the database camera can be estimated. Note, however, that the epipolar geometry constraint established between one query image and one database image yields only the relative position of the query camera; its absolute position cannot be obtained because of the Scale Ambiguity problem. A common remedy is to establish multiple epipolar geometry constraints using several matching database images, thereby avoiding the scale coefficients altogether; but not every query image can retrieve multiple matching database images. Another common method solves the scale coefficients in the epipolar constraint by iteratively re-weighted least squares over the spatial positions of the matched feature points: weighting the feature point positions adjusts their importance in the scale estimate and reduces the influence of outliers. However, because of noise and matching errors, this approach cannot guarantee that each iteration converges to an accurate result; it only approaches the optimal solution as closely as possible.
Disclosure of Invention
The invention aims to provide a binocular vision positioning method based on geometric constraint, which can solve the problem of scale ambiguity, improve positioning accuracy, reduce influence of abnormal disturbance factors and has obvious advantages in the aspect of global position estimation capability, so as to solve the problems in the background technology.
In order to achieve the above purpose, the present invention provides the following technical solutions: a binocular vision positioning method based on geometric constraints, comprising:
S1, solving the normalized image feature point position coordinates;
S2, solving a rotation matrix and a translation vector;
S3, calculating the distance between the query camera and the target point;
S4, constructing a geometric constraint relation.
Preferably, the step S1 of solving the normalized image feature point coordinates is as follows: SIFT (Scale-Invariant Feature Transform) feature points are extracted from the left and right images of the query camera and from the database image matched to the query image, giving the feature point position matrices $X_l$, $X_r$ and $X_d$ of the query camera's left image, right image and the database image. Then, using the BF (Brute Force) feature point matching algorithm, the matched feature points between the query camera left image $I_l$ and right image $I_r$, between the left image and the matching database image $I_d$, and between the right image and the matching database image are obtained; the coordinate matrices of the feature points commonly matched across the three views (left image, right image and matching database image) are denoted $P_l$, $P_r$ and $P_d$, respectively. The matched feature point positions must be normalized with the camera intrinsic matrices, yielding the normalized position coordinate matrices $\tilde{P}_l$, $\tilde{P}_r$ and $\tilde{P}_d$:

$\tilde{P}_l = K_l^{-1} P_l$ (1)

$\tilde{P}_r = K_r^{-1} P_r$ (2)

$\tilde{P}_d = K_d^{-1} P_d$ (3)

where $K_l$, $K_r$ and $K_d$ are the intrinsic matrices of the query camera's left lens, right lens and the database camera, and the feature point coordinates are expressed in homogeneous form.
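The normalization of formulas (1)-(3) can be sketched as follows; the intrinsic matrix values and pixel coordinates below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def normalize_points(pts_px, K):
    """Map pixel coordinates (N x 2) to normalized image coordinates
    by applying the inverse of the camera intrinsic matrix K (3 x 3)."""
    pts_h = np.hstack([pts_px, np.ones((pts_px.shape[0], 1))])  # homogeneous N x 3
    return (np.linalg.inv(K) @ pts_h.T).T  # rows are [x, y, 1], normalized

# illustrative intrinsics: focal length 800 px, principal point (320, 240)
K_l = np.array([[800.0, 0.0, 320.0],
                [0.0, 800.0, 240.0],
                [0.0, 0.0, 1.0]])
pts = np.array([[320.0, 240.0],
                [400.0, 300.0]])
print(normalize_points(pts, K_l))
```

The same helper would be applied with $K_r$ and $K_d$ for formulas (2) and (3).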
Preferably, the step S2 of solving the rotation matrix and translation vector is as follows: each pair of normalized positions $\tilde{p}_l \in \tilde{P}_l$, $\tilde{p}_d \in \tilde{P}_d$ and the essential matrix $E_1$ satisfy the epipolar constraint relation:

$\tilde{p}_d^{\top} E_1\, \tilde{p}_l = 0$ (4)

and each pair of normalized positions $\tilde{p}_r \in \tilde{P}_r$, $\tilde{p}_d \in \tilde{P}_d$ and the essential matrix $E_2$ satisfy the epipolar constraint relation:

$\tilde{p}_d^{\top} E_2\, \tilde{p}_r = 0$ (5)

From the epipolar constraints (4) and (5) between the two query camera images and the database camera image, the essential matrices $E_1$ and $E_2$ can be obtained. $E_1$ and $E_2$ respectively reflect the relative positional relationships of the query camera's left and right images with respect to the database camera; these relationships are described by the rotation matrices $R_1$, $R_2$ and translation vectors $t_1$, $t_2$, related to the essential matrices by:

$E_1 = [t_1]_{\times} R_1$ (6)

$E_2 = [t_2]_{\times} R_2$ (7)

where $[t_1]_{\times}$ and $[t_2]_{\times}$ are the antisymmetric (skew-symmetric) matrices of the vectors $t_1$ and $t_2$. By applying singular value decomposition (SVD) to $E_1$ and $E_2$, the rotation matrices $R_1$, $R_2$ and translation vectors $t_1$, $t_2$ between the cameras can be solved.

The rotation matrix $R_1$ is a $3 \times 3$ matrix composed of elements $r^{(1)}_{ij}$:

$R_1 = \begin{pmatrix} r^{(1)}_{11} & r^{(1)}_{12} & r^{(1)}_{13} \\ r^{(1)}_{21} & r^{(1)}_{22} & r^{(1)}_{23} \\ r^{(1)}_{31} & r^{(1)}_{32} & r^{(1)}_{33} \end{pmatrix}$ (8)

The translation vector $t_1$ is a $3 \times 1$ vector composed of elements $t^{(1)}_x$, $t^{(1)}_y$, $t^{(1)}_z$:

$t_1 = \left( t^{(1)}_x,\; t^{(1)}_y,\; t^{(1)}_z \right)^{\top}$ (9)

The rotation matrix $R_2$ is a $3 \times 3$ matrix composed of elements $r^{(2)}_{ij}$:

$R_2 = \begin{pmatrix} r^{(2)}_{11} & r^{(2)}_{12} & r^{(2)}_{13} \\ r^{(2)}_{21} & r^{(2)}_{22} & r^{(2)}_{23} \\ r^{(2)}_{31} & r^{(2)}_{32} & r^{(2)}_{33} \end{pmatrix}$ (10)

The translation vector $t_2$ is a $3 \times 1$ vector composed of elements $t^{(2)}_x$, $t^{(2)}_y$, $t^{(2)}_z$:

$t_2 = \left( t^{(2)}_x,\; t^{(2)}_y,\; t^{(2)}_z \right)^{\top}$ (11)
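The SVD decomposition behind formulas (6)-(7) can be sketched as below. This is the standard four-candidate essential matrix decomposition rather than the patent's specific implementation, and the function names are illustrative; in practice the correct candidate is selected by a positive-depth (cheirality) check on the matched points:

```python
import numpy as np

def skew(t):
    """Antisymmetric (skew-symmetric) matrix [t]_x of a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Split an essential matrix E = [t]_x R into its four (R, t) candidates
    via singular value decomposition."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:   # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]  # translation direction, recovered up to scale and sign
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

Building a synthetic $E$ from a known rotation and translation and decomposing it recovers the original pose (up to the sign of $t$) among the four candidates.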
preferably, the step S3 of calculating the distance between the query camera and the target point: binocular ranging is a principle of simulating biological binocular ranging, a left picture and a right picture are obtained through a binocular camera, the obtained images are transmitted to a computer for analysis and calculation of parallax, and then three-dimensional space information of a target object is obtained; assume thatIs the object to be measured, is->、/>Is the optical center of the left and right cameras, < >>Is the distance between the optical centers of the left and right cameras, also called the baseline distance, +>Is the focal length of the camera +.>Is->Coordinates of points in the left and right camera image coordinate system,/->Is->Point-to-camera projectionShadow distance;
the formula can be obtained according to the principle of similar triangles:
(12)
further, the expression of the distance D between the object to be measured and the camera can be deduced as shown in formula (13):
(13)
in the formula (13) of the present invention,and->Respectively->The dot is on the abscissa of the pixel points in the left and right images, < +.>Is the parallax between the left and right cameras, i.e. the difference of the image positions of the target point in the left and right cameras,/>Focal length->And baseline distance->Obtained by calibration.
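Formula (13) can be illustrated with a minimal sketch; the focal length, baseline, and pixel coordinates below are assumed for illustration only:

```python
def stereo_depth(x_l, x_r, f, b):
    """Distance D from formula (13): D = f*b / (x_l - x_r),
    where x_l - x_r is the disparity between the left and right images."""
    disparity = x_l - x_r
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f * b / disparity

# assumed example: f = 800 px, baseline b = 0.12 m, disparity 656 - 640 = 16 px
print(stereo_depth(656.0, 640.0, 800.0, 0.12))  # 800 * 0.12 / 16 = 6.0 m
```

Note the inverse relationship: halving the disparity doubles the estimated distance, which is why depth precision degrades for far-away points.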
Preferably, the step S4 of constructing the geometric constraint relation is as follows:

Let $C_1 = (x_1, y_1)$ and $C_2 = (x_2, y_2)$ denote the position coordinates of the query camera's left and right lenses in the world coordinate system. From the baseline length $b$ of the query camera:

$\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} = b$ (14)

From the relative positional relationships between the left and right lenses of the query camera and the database camera obtained in S2 by solving the rotation matrices and translation vectors, the positional relationship between the left camera $C_1$ and the database camera $C_0 = (x_0, y_0)$ follows from the proportional relationship:

$(x_1,\; y_1) = (x_0 + \Delta x_1,\; y_0 + \Delta y_1)$ (15)

Similarly, the positional relationship between the right camera $C_2$ and the database camera $C_0$:

$(x_2,\; y_2) = (x_0 + \Delta x_2,\; y_0 + \Delta y_2)$ (16)

where $\Delta x_1$, $\Delta y_1$, $\Delta x_2$, $\Delta y_2$ respectively represent the offsets of the left camera $C_1$ and the right camera $C_2$ from the database camera $C_0$ along the $x$ axis and the $y$ axis.

From the distance between the query camera and the target point computed in S3, the distance between the binocular camera and the spatial point $P = (x_P, y_P)$ is measured and recorded as $D$. Let the straight line determined by cameras $C_1$ and $C_2$ be $a x + b y + c = 0$; then the distance from point $P$ to this line satisfies:

$\dfrac{|a x_P + b y_P + c|}{\sqrt{a^2 + b^2}} = D$ (17)

where $a = y_2 - y_1$, $b = x_1 - x_2$, $c = x_2 y_1 - x_1 y_2$. Solving the simultaneous formulas (14), (15), (16) and (17) yields $C_1$ and $C_2$, i.e. the position of the query camera in the world coordinate system.
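A hedged 2D sketch of jointly solving the constraints (14)-(17) follows. It assumes the translation directions from the database camera to the two lenses (from S2) are known up to unknown scales $s_1$, $s_2$, which the baseline and point-to-line distance constraints then determine; all function and variable names are illustrative, not the patent's:

```python
import numpy as np

def point_line_dist(C1, C2, P):
    # distance from P to the line a*x + b*y + c = 0 through C1 and C2 (formula (17))
    a, b, c = C2[1] - C1[1], C1[0] - C2[0], C2[0] * C1[1] - C1[0] * C2[1]
    return abs(a * P[0] + b * P[1] + c) / np.hypot(a, b)

def locate_query_camera(C0, t1_dir, t2_dir, baseline, P, D, s0=(1.0, 1.0)):
    """Solve (14)-(17) for the two lens positions: C1 = C0 + s1*t1_dir,
    C2 = C0 + s2*t2_dir with unknown scales (s1, s2), found by a small
    Newton iteration with a numeric Jacobian."""
    C0, P = np.asarray(C0, float), np.asarray(P, float)
    t1, t2 = np.asarray(t1_dir, float), np.asarray(t2_dir, float)

    def residuals(s):
        C1, C2 = C0 + s[0] * t1, C0 + s[1] * t2           # (15), (16)
        return np.array([np.linalg.norm(C1 - C2) - baseline,  # (14)
                         point_line_dist(C1, C2, P) - D])     # (17)

    s = np.asarray(s0, float)
    for _ in range(60):
        r = residuals(s)
        if np.linalg.norm(r) < 1e-12:
            break
        J, h = np.empty((2, 2)), 1e-7     # forward-difference Jacobian
        for j in range(2):
            sp = s.copy()
            sp[j] += h
            J[:, j] = (residuals(sp) - r) / h
        s = s - np.linalg.solve(J, r)
    return C0 + s[0] * t1, C0 + s[1] * t2
```

With a synthetic ground truth (database camera at the origin, lenses at known positions, $D$ measured to a point off the lens baseline), the iteration recovers the lens positions from a nearby initial guess; a real implementation would also need outlier handling and a good starting scale.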
Compared with the prior art, the invention has the following beneficial effects:

The method optimizes existing binocular vision positioning based on multiple geometric constraints. By using the epipolar geometry constraint established from one query image and one database image, it avoids solving the scale coefficient and thereby resolves the scale ambiguity problem. Compared with the traditional approach of building epipolar geometry constraints from several matching database images, it uses only one database image, which reduces the dependence on matched images and the computational complexity, while the multiple geometric constraint conditions still yield an accurate visual positioning result and improve positioning precision. The multiple constraints also reduce the influence of noise and matching errors, giving a more stable and robust positioning algorithm. Moreover, the method obtains the global position of the query camera, not merely its relative position, enabling accurate estimation of the query camera position. In summary, the method has clear advantages in resolving scale ambiguity, improving positioning accuracy, reducing the influence of matching errors, and global position estimation, and it overcomes the scale ambiguity and accuracy limitations common to 2D-2D fine visual positioning methods.
Drawings
FIG. 1 is a binocular vision ranging schematic diagram in the present invention;
FIG. 2 is a schematic diagram of a multiple geometry constraint in accordance with the present invention;
fig. 3 is a flowchart of the algorithm of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to the algorithm flowchart of fig. 3, a binocular vision positioning method based on geometric constraint comprises solving the normalized image feature point coordinates, solving the rotation matrix and translation vector, calculating the distance between the query camera and the target point, and constructing the geometric constraint relation; the specific steps are as follows:
S1, solving the feature point position coordinates of the normalized image;
S2, solving a rotation matrix and a translation vector;
S3, calculating the distance between the query camera and the target point;
S4, constructing a geometric constraint relation.
The step S1 of solving the normalized image feature point coordinates: SIFT (Scale-Invariant Feature Transform) feature points are extracted from the left and right images of the query camera and from the database image matched to the query image, giving the feature point position matrices $X_l$, $X_r$ and $X_d$ of the query camera's left image, right image and the database image; then, using the BF (Brute Force) feature point matching algorithm, the matched feature points between the query camera left image $I_l$ and right image $I_r$, between the left image and the matching database image $I_d$, and between the right image and the matching database image are obtained; the coordinate matrices of the feature points commonly matched across the three views (left image, right image and matching database image) are denoted $P_l$, $P_r$ and $P_d$, respectively; the matched feature point positions are normalized with the camera intrinsic matrices to obtain the normalized position coordinate matrices $\tilde{P}_l$, $\tilde{P}_r$ and $\tilde{P}_d$:

$\tilde{P}_l = K_l^{-1} P_l$ (1)

$\tilde{P}_r = K_r^{-1} P_r$ (2)

$\tilde{P}_d = K_d^{-1} P_d$ (3)
S2, solving the rotation matrix and translation vector:

Each pair of normalized positions $\tilde{p}_l \in \tilde{P}_l$, $\tilde{p}_d \in \tilde{P}_d$ and the essential matrix $E_1$ satisfy the epipolar constraint relation:

$\tilde{p}_d^{\top} E_1\, \tilde{p}_l = 0$ (4)

and each pair of normalized positions $\tilde{p}_r \in \tilde{P}_r$, $\tilde{p}_d \in \tilde{P}_d$ and the essential matrix $E_2$ satisfy the epipolar constraint relation:

$\tilde{p}_d^{\top} E_2\, \tilde{p}_r = 0$ (5)

From the epipolar constraints (4) and (5) between the two query camera images and the database camera image, the essential matrices $E_1$ and $E_2$ can be obtained. $E_1$ and $E_2$ respectively reflect the relative positional relationships of the query camera's left and right images with respect to the database camera; these relationships are described by the rotation matrices $R_1$, $R_2$ and translation vectors $t_1$, $t_2$, related to the essential matrices by:

$E_1 = [t_1]_{\times} R_1$ (6)

$E_2 = [t_2]_{\times} R_2$ (7)

where $[t_1]_{\times}$ and $[t_2]_{\times}$ are the antisymmetric (skew-symmetric) matrices of the vectors $t_1$ and $t_2$. By applying singular value decomposition (SVD) to $E_1$ and $E_2$, the rotation matrices $R_1$, $R_2$ and translation vectors $t_1$, $t_2$ between the cameras can be solved.

The rotation matrix $R_1$ is a $3 \times 3$ matrix composed of elements $r^{(1)}_{ij}$:

$R_1 = \begin{pmatrix} r^{(1)}_{11} & r^{(1)}_{12} & r^{(1)}_{13} \\ r^{(1)}_{21} & r^{(1)}_{22} & r^{(1)}_{23} \\ r^{(1)}_{31} & r^{(1)}_{32} & r^{(1)}_{33} \end{pmatrix}$ (8)

The translation vector $t_1$ is a $3 \times 1$ vector composed of elements $t^{(1)}_x$, $t^{(1)}_y$, $t^{(1)}_z$:

$t_1 = \left( t^{(1)}_x,\; t^{(1)}_y,\; t^{(1)}_z \right)^{\top}$ (9)

The rotation matrix $R_2$ is a $3 \times 3$ matrix composed of elements $r^{(2)}_{ij}$:

$R_2 = \begin{pmatrix} r^{(2)}_{11} & r^{(2)}_{12} & r^{(2)}_{13} \\ r^{(2)}_{21} & r^{(2)}_{22} & r^{(2)}_{23} \\ r^{(2)}_{31} & r^{(2)}_{32} & r^{(2)}_{33} \end{pmatrix}$ (10)

The translation vector $t_2$ is a $3 \times 1$ vector composed of elements $t^{(2)}_x$, $t^{(2)}_y$, $t^{(2)}_z$:

$t_2 = \left( t^{(2)}_x,\; t^{(2)}_y,\; t^{(2)}_z \right)^{\top}$ (11)
S3, calculating the distance between the query camera and the target point:

Binocular ranging imitates the principle of biological binocular distance perception; left and right pictures are acquired by the binocular camera, the images are passed to a computer to analyze and compute the disparity, and the three-dimensional spatial information of the target object is then obtained. The schematic diagram is shown in fig. 1: let $P$ be the object to be measured, $O_l$ and $O_r$ the optical centers of the left and right cameras, $b$ the distance between the optical centers, also called the baseline distance, $f$ the focal length of the camera, $x_l$ and $x_r$ the coordinates of point $P$ in the left and right camera image coordinate systems, and $D$ the projection distance from point $P$ to the camera.

As shown in fig. 1, by the principle of similar triangles:

$\dfrac{b - (x_l - x_r)}{b} = \dfrac{D - f}{D}$ (12)

from which the expression for the distance $D$ between the measured object and the camera is derived as in formula (13):

$D = \dfrac{f\, b}{x_l - x_r} = \dfrac{f\, b}{d}$ (13)

In formula (13), $x_l$ and $x_r$ are the pixel abscissas of point $P$ in the left and right images respectively; $d = x_l - x_r$ is the disparity between the left and right cameras, i.e. the difference of the target point's image positions in the two cameras; the focal length $f$ and baseline distance $b$ are obtained by calibration.
S4, constructing a geometric constraint relation;
as shown in the figure 2 of the drawings,、/>respectively representing the position coordinates of the left camera and the right camera under the world coordinate system, and inquiring the base line of the camera according to +.>The length can be known as follows:
(14)
according to the S2, solving the relative position relationship between the left lens and the right lens of the query camera and the database camera, which are obtained by the rotation matrix and the translation vector, and obtaining the left camera according to the proportional relationshipAnd database camera->Positional relationship between:
(15)
similarly, right cameraAnd database camera->Positional relationship between:
(16)
wherein,、/>、/>、/>respectively represent left camera->And right camera->And database camera->Edge->Shaft and->An offset of the shaft;
according to the S3, calculating the distance between the query camera and the target point, and calculating the binocular camera and the spatial pointPThe distance between them can be measured and is recorded asDLet the straight line equation determined between cameras C1 and C2 bePoint thenPTo the straight line:
(17)
wherein,,/>,/>the simultaneous formulas (14), (15), (16), (17) can be solved>、/>I.e. the position of the query camera in the world coordinate system.
The method takes as inputs the query camera left image, the query camera right image, the database image matching the query image, the shooting position matrix of the database camera, the spatial positions of the database image pixels, the intrinsic matrices of the query camera's two lenses and of the database camera, and the baseline length of the query camera (i.e. the left-right lens spacing). Specifically, the method first determines the feature point coordinates of the query camera left image, right image and the database camera matching image with a three-view feature matching algorithm, then solves the relative positional relationships between the database camera and the query camera's left and right lenses based on epipolar geometric constraints. Next, the projection distance from the binocular camera to the target point is computed by the triangulation principle, and the global coordinates of the corresponding database image feature points are obtained. Finally, the absolute positions of the query camera's left and right lenses are computed by solving a set of nonlinear equations.
The known conditions of the invention (i.e. input variables) are: the query camera left image $I_l$ and right image $I_r$; the matching database image $I_d$ of the query image; the shooting position matrix $C_0$ of the database camera; the pixel spatial position coordinate matrix $S_d$ of the database image; the intrinsic matrix $K_l$ of the query camera's left lens, the intrinsic matrix $K_r$ of its right lens, and the intrinsic matrix $K_d$ of the database camera; and the baseline length $b$ of the query camera (i.e. the left-right lens spacing).

The variables to be solved are the positions $C_1$, $C_2$ of the query camera's left and right lenses.

(Explanation of the known conditions: the color image taken by the left camera of the query device is $I_l$ and the image taken by the right camera is $I_r$; the basic idea of visual positioning is to estimate the shooting position of the query camera and thus locate the user. The matching database image $I_d$ is obtained through a retrieval algorithm; it has a certain visual feature similarity to the query image, and a certain number of visual feature matching points exist between the query image and the database image. The shooting position $C_0$ of the database camera is the absolute position estimate associated with the matching database image $I_d$. The pixel spatial position coordinate matrix $S_d$ of the database image $I_d$ is an $n \times 3$ matrix of three-dimensional position coordinates, where $n$ is the total number of pixels of the matching database image; the three-dimensional coordinates stored in $S_d$ correspond one-to-one to the pixels of the matching database image, so the spatial position of each pixel in the matching database image can be looked up through $S_d$. The query camera intrinsic matrices $K_l$, $K_r$, the database camera intrinsic matrix $K_d$ and the query camera baseline length $b$ must be obtained by camera calibration before positioning.)
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (1)
1. A binocular vision positioning method based on geometric constraints, comprising:
S1, solving the normalized image feature point position coordinates;
S2, solving a rotation matrix and a translation vector;
S3, calculating the distance between the query camera and the target point;
S4, constructing a geometric constraint relation;
the S1, solving the feature point position coordinates of the normalized image: SIFT feature point extraction is performed on the left and right images of the query camera and on the database image matched with the query image, giving the SIFT feature point position matrices $P_l$, $P_r$ and $P_d$ of the query camera left image, the query camera right image and the database image. A brute-force (BF) feature point matching algorithm then yields the matched feature points between the query camera left image and right image, between the left image and the matched database image, and between the right image and the matched database image. The coordinate matrices of the feature points commonly matched across the three views (query camera left image, right image and matched database image) are denoted $Q_l$, $Q_r$ and $Q_d$ respectively. These matched feature point positions, taken in homogeneous pixel coordinates, are normalized to obtain the normalized position coordinate matrices $\hat{Q}_l$, $\hat{Q}_r$ and $\hat{Q}_d$:

$$\hat{Q}_l = K_l^{-1} Q_l \qquad (1)$$

$$\hat{Q}_r = K_r^{-1} Q_r \qquad (2)$$

$$\hat{Q}_d = K_d^{-1} Q_d \qquad (3)$$

wherein $K_l$ is the internal reference (intrinsic) matrix of the query camera's left camera, $K_r$ is the internal reference matrix of the right camera, and $K_d$ is the internal reference matrix of the database camera;
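As a minimal sketch of the normalization in Eqs. (1)-(3), assuming pixel coordinates are lifted to homogeneous form before applying the inverse intrinsic matrix (function name and intrinsic values are illustrative, not from the patent):

```python
import numpy as np

def normalize_points(K, pts):
    """Map pixel coordinates to normalized image coordinates by
    multiplying with the inverse intrinsic matrix, as in Eqs. (1)-(3)."""
    pts = np.asarray(pts, dtype=float)
    pts_h = np.column_stack([pts, np.ones(len(pts))])  # homogeneous coords
    return (np.linalg.inv(K) @ pts_h.T).T[:, :2]

# Hypothetical intrinsics: focal length 700 px, principal point (320, 240)
K_l = np.array([[700.0,   0.0, 320.0],
                [  0.0, 700.0, 240.0],
                [  0.0,   0.0,   1.0]])
normalized = normalize_points(K_l, [[320.0, 240.0], [1020.0, 940.0]])
```

A point at the principal point maps to the origin of the normalized image plane, which makes the normalization easy to sanity-check.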
the S2, solving the rotation matrix and translation vector: the normalized position matrices $\hat{Q}_l$, $\hat{Q}_d$ and the essential matrix $E_l$ satisfy the epipolar constraint relation:

$$\hat{Q}_d^{\top} E_l \hat{Q}_l = 0 \qquad (4)$$

the normalized position matrices $\hat{Q}_r$, $\hat{Q}_d$ and the essential matrix $E_r$ satisfy the epipolar constraint relation:

$$\hat{Q}_d^{\top} E_r \hat{Q}_r = 0 \qquad (5)$$

according to the epipolar constraint relations between the two images of the query camera and the database camera image shown in formulas (4) and (5) respectively, the essential matrices $E_l$ and $E_r$ can be obtained. $E_l$ and $E_r$ respectively reflect the relative positional relations between the query camera's left and right images and the database camera image; these relations are described by the rotation matrices $R_l$, $R_r$ and translation vectors $t_l$, $t_r$. The relationship between each essential matrix and its rotation matrix and translation vector is:

$$E_l = [t_l]_{\times} R_l \qquad (6)$$

$$E_r = [t_r]_{\times} R_r \qquad (7)$$

wherein $[t_l]_{\times}$ and $[t_r]_{\times}$ are the antisymmetric (skew-symmetric) matrices of the vectors $t_l$ and $t_r$; by applying singular value decomposition (Singular Value Decomposition, SVD) to $E_l$ and $E_r$, the rotation matrices $R_l$, $R_r$ and translation vectors $t_l$, $t_r$ between the cameras can be solved;
the rotation matrix $R_l$ is a $3 \times 3$ matrix composed of elements $a_{ij}$:

$$R_l = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \qquad (8)$$

the translation vector $t_l$ is a $3 \times 1$ vector composed of elements $t_{l,i}$:

$$t_l = \begin{bmatrix} t_{l,1} & t_{l,2} & t_{l,3} \end{bmatrix}^{\top} \qquad (9)$$

the rotation matrix $R_r$ is a $3 \times 3$ matrix composed of elements $b_{ij}$:

$$R_r = \begin{bmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{bmatrix} \qquad (10)$$

the translation vector $t_r$ is a $3 \times 1$ vector composed of elements $t_{r,i}$:

$$t_r = \begin{bmatrix} t_{r,1} & t_{r,2} & t_{r,3} \end{bmatrix}^{\top} \qquad (11);$$
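The SVD step of S2 can be sketched as follows. This is the standard textbook factorization $E = [t]_{\times}R$, not code from the patent: decomposition yields four $(R, t)$ candidates, and in practice the physically valid one is selected by a cheirality check (triangulated points must lie in front of both cameras):

```python
import numpy as np

def decompose_essential(E):
    """Recover the four candidate (R, t) pairs from an essential matrix
    via SVD, following the standard E = [t]x R factorization."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    # Force proper rotations (det = +1); E is only defined up to sign
    if np.linalg.det(R1) < 0:
        R1 = -R1
    if np.linalg.det(R2) < 0:
        R2 = -R2
    t = U[:, 2]  # translation direction, up to sign and scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Build a synthetic E from a known rotation and unit translation
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 0.2, 0.1])
t_true /= np.linalg.norm(t_true)
t_cross = np.array([[0.0,       -t_true[2],  t_true[1]],
                    [t_true[2],  0.0,       -t_true[0]],
                    [-t_true[1], t_true[0],  0.0]])
candidates = decompose_essential(t_cross @ R_true)
```

Because the essential matrix fixes translation only up to scale, the metric baseline used later in S3/S4 must come from calibration, which is consistent with the claim's statement that $f$ and $b$ are calibrated.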
the S3, calculating the distance between the query camera and the target point: binocular ranging imitates the principle of biological binocular distance perception; a left picture and a right picture are obtained by the binocular camera, the acquired images are transmitted to a computer for analysis and parallax calculation, and the three-dimensional spatial information of the target object is then obtained. Suppose $P$ is the point to be measured, $O_l$ and $O_r$ are the optical centers of the left and right cameras, $b$ is the distance between the optical centers of the left and right cameras (also called the baseline distance), $f$ is the focal length of the camera, $x_l$ and $x_r$ are the coordinates of point $P$ in the left and right camera image coordinate systems, and $Z$ is the projection distance from point $P$ to the camera;

according to the similar triangle principle, the following formula holds:

$$\frac{b - (x_l - x_r)}{b} = \frac{Z - f}{Z} \qquad (12)$$

from which the expression for the distance $Z$ between the measured object and the camera can be derived, as shown in formula (13):

$$Z = \frac{f\,b}{x_l - x_r} = \frac{f\,b}{d} \qquad (13)$$

in formula (13), $x_l$ and $x_r$ are respectively the pixel abscissas of point $P$ in the left and right images, $d = x_l - x_r$ is the parallax between the left and right cameras, i.e. the difference of the image positions of the target point in the two cameras, and the focal length $f$ and the baseline distance $b$ are obtained by calibration;
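Formula (13) reduces to a one-line computation; the numeric values below are illustrative, not from the patent:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Eq. (13): distance Z = f*b/d, with parallax d = x_left - x_right."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("target must have positive parallax")
    return focal_px * baseline_m / disparity

# Example: f = 700 px, baseline 0.12 m, pixel abscissas 350 and 320
Z = depth_from_disparity(350.0, 320.0, 700.0, 0.12)  # 2.8 m
```

Note the units: with the focal length in pixels and the baseline in meters, the disparity in pixels cancels and $Z$ comes out in meters.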
the S4, constructing the geometric constraint relation:

let $C_l = (x_1, y_1)$ and $C_r = (x_2, y_2)$ respectively denote the position coordinates of the query camera's left and right cameras in the world coordinate system; since the query camera's baseline has length $b$, it follows that:

$$(x_1 - x_2)^2 + (y_1 - y_2)^2 = b^2 \qquad (14)$$

according to the relative positional relations between the query camera's left and right lenses and the database camera obtained from the rotation matrices and translation vectors solved in the S2, the positional relationship between the left camera $C_l$ and the database camera $C_d = (x_d, y_d)$ is obtained from the proportional relationship:

$$x_1 = x_d + \Delta x_l, \qquad y_1 = y_d + \Delta y_l \qquad (15)$$

similarly, the positional relationship between the right camera $C_r$ and the database camera $C_d$:

$$x_2 = x_d + \Delta x_r, \qquad y_2 = y_d + \Delta y_r \qquad (16)$$

wherein $\Delta x_l$, $\Delta y_l$, $\Delta x_r$, $\Delta y_r$ respectively represent the offsets of the left camera $C_l$ and the right camera $C_r$ relative to the database camera $C_d$ along the $x$ axis and the $y$ axis;

according to the distance between the query camera and the target point calculated in the S3, the measured distance between the binocular camera and the spatial point $P = (x_p, y_p)$ is denoted $s$; let the straight line determined by the cameras $C_l$ and $C_r$ be $Ax + By + C = 0$, then the distance from point $P$ to this line is:

$$s = \frac{\lvert A x_p + B y_p + C \rvert}{\sqrt{A^2 + B^2}} \qquad (17)$$

wherein $A = y_2 - y_1$, $B = x_1 - x_2$, $C = x_2 y_1 - x_1 y_2$; solving the simultaneous formulas (14), (15), (16) and (17) yields $(x_1, y_1)$ and $(x_2, y_2)$, i.e. the position of the query camera in the world coordinate system.
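The constraint system of S4 can be checked numerically. The sketch below (function and variable names are illustrative, not from the patent) evaluates the residual of each constraint for a candidate camera position; these are the quantities a nonlinear least-squares solver would drive to zero:

```python
import numpy as np

def point_line_distance(p, c1, c2):
    """Eq. (17): distance from point p to the line through c1 and c2,
    with A = y2 - y1, B = x1 - x2, C = x2*y1 - x1*y2."""
    A = c2[1] - c1[1]
    B = c1[0] - c2[0]
    C = c2[0] * c1[1] - c1[0] * c2[1]
    return abs(A * p[0] + B * p[1] + C) / np.hypot(A, B)

def constraint_residuals(c_l, c_r, c_d, off_l, off_r, baseline, target, s):
    """Residuals of Eqs. (14)-(17); all vanish at the true solution."""
    r_baseline = np.linalg.norm(np.subtract(c_l, c_r)) - baseline   # (14)
    r_left = np.subtract(c_l, np.add(c_d, off_l))                   # (15)
    r_right = np.subtract(c_r, np.add(c_d, off_r))                  # (16)
    r_dist = point_line_distance(target, c_l, c_r) - s              # (17)
    return [r_baseline, *r_left, *r_right, r_dist]

# A self-consistent synthetic configuration (illustrative values):
# database camera at the origin, left/right cameras on the line y = 2,
# target 1 m away from that line
res = constraint_residuals(c_l=(1.0, 2.0), c_r=(1.5, 2.0), c_d=(0.0, 0.0),
                           off_l=(1.0, 2.0), off_r=(1.5, 2.0),
                           baseline=0.5, target=(1.25, 3.0), s=1.0)
```

With consistent inputs every residual is zero, confirming that Eqs. (14)-(16) already pin down $(x_1, y_1)$ and $(x_2, y_2)$ and that Eq. (17) serves as the consistency check against the measured distance $s$.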
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311280755.7A CN117036488B (en) | 2023-10-07 | 2023-10-07 | Binocular vision positioning method based on geometric constraint |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117036488A CN117036488A (en) | 2023-11-10 |
CN117036488B true CN117036488B (en) | 2024-01-02 |
Family
ID=88630256
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102506757A (en) * | 2011-10-10 | 2012-06-20 | 南京航空航天大学 | Self-positioning method of binocular stereo measuring system in multiple-visual angle measurement |
CN106023230A (en) * | 2016-06-02 | 2016-10-12 | 辽宁工程技术大学 | Dense matching method suitable for deformed images |
CN114812558A (en) * | 2022-04-19 | 2022-07-29 | 中山大学 | Monocular vision unmanned aerial vehicle autonomous positioning method combined with laser ranging |
CN115830116A (en) * | 2022-12-05 | 2023-03-21 | 北京眸星科技有限公司 | Robust visual odometer method |
CN116309820A (en) * | 2023-01-04 | 2023-06-23 | 长春理工大学 | Monocular vision positioning method, monocular vision positioning system and application of monocular vision positioning system |
CN116817920A (en) * | 2023-06-29 | 2023-09-29 | 杭州师范大学 | Visual positioning method and device for plane mobile robot without three-dimensional map model |
Non-Patent Citations (2)
Title |
---|
A novel method for fast positioning of non-standardized ground control points in drone image; Zheng Zhu et al.; MDPI; full text *
Research on visual SLAM in complex dynamic scenes; Huang Guan; Master's Electronic Journals; full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109308719B (en) | Binocular parallax estimation method based on three-dimensional convolution | |
Ruchay et al. | Fusion of information from multiple Kinect sensors for 3D object reconstruction | |
JP2010513907A (en) | Camera system calibration | |
CN111429571B (en) | Rapid stereo matching method based on spatio-temporal image information joint correlation | |
CN107038753B (en) | Stereoscopic vision three-dimensional reconstruction system and method | |
CN112634379B (en) | Three-dimensional positioning measurement method based on mixed vision field light field | |
CN117456114B (en) | Multi-view-based three-dimensional image reconstruction method and system | |
WO2021195939A1 (en) | Calibrating method for external parameters of binocular photographing device, movable platform and system | |
CN116129037B (en) | Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof | |
CN113240749B (en) | Remote binocular calibration and ranging method for recovery of unmanned aerial vehicle facing offshore ship platform | |
CN112132900B (en) | Visual repositioning method and system | |
Ann et al. | Study on 3D scene reconstruction in robot navigation using stereo vision | |
Gadasin et al. | Reconstruction of a Three-Dimensional Scene from its Projections in Computer Vision Systems | |
CN107452036B (en) | A kind of optical tracker pose calculation method of global optimum | |
CN117036488B (en) | Binocular vision positioning method based on geometric constraint | |
CN114255279A (en) | Binocular vision three-dimensional reconstruction method based on high-precision positioning and deep learning | |
CN116721149A (en) | Weed positioning method based on binocular vision | |
CN112381721A (en) | Human face three-dimensional reconstruction method based on binocular vision | |
CN116777973A (en) | Heterogeneous image binocular stereoscopic vision ranging method and system based on deep learning | |
CN114935316B (en) | Standard depth image generation method based on optical tracking and monocular vision | |
Zakharov et al. | An algorithm for 3D-object reconstruction from video using stereo correspondences | |
Kwon et al. | Vergence control of binocular stereoscopic camera using disparity information | |
Nguyen et al. | Real-time obstacle detection for an autonomous wheelchair using stereoscopic cameras | |
CN111739068B (en) | Light field camera relative pose estimation method | |
CN114693764A (en) | Matching image acquisition algorithm based on binocular camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||