CN112651427B - Image point rapid and efficient matching method for wide-baseline optical intersection measurement - Google Patents
- Publication number: CN112651427B
- Application number: CN202011409685.7A
- Authority: CN (China)
- Prior art keywords: image point, image, point, base station, matching
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
The invention provides a fast and efficient image point matching method for wide-baseline optical intersection measurement, which addresses the large computational load, low matching efficiency and high mismatching rate of traditional image point group matching methods for wide-baseline optical intersection measurement. The method comprises the following steps: 1) image point ordering: the image points of the two stations are sorted by the angle between each image point's optical-center ray and the coordinate plane, and matched in that order, eliminating a large number of repeated calculations; 2) image point matching: the skew-line distance between the rays formed by each image point and its base station is computed and compared with a threshold to decide whether two image points are homonymous (corresponding) image points. As matching proceeds, the search range shrinks and matching becomes progressively faster, enabling fast, ordered matching of the homonymous image points of the two cameras. A simplified skew-line distance calculation is also provided, which reduces the computational complexity of the line-to-line distance and effectively improves both the matching speed and the accuracy of wide-baseline optical intersection measurement of image point groups.
Description
Technical Field
The invention belongs to the field of optical intersection measurement of the three-dimensional coordinates of spatial point-group targets, and particularly relates to a fast and efficient image point matching method for wide-baseline optical intersection measurement, suitable for fast image point matching between two photoelectric theodolite cameras.
Background
In range optical intersection measurement and industrial non-contact three-dimensional measurement, a spatial coordinate measurement system consisting of two or more photoelectric theodolite cameras distributed along a wide baseline measures the three-dimensional coordinates of spatial point-group targets by intersection. Such a system has the advantages of simple structure, low cost, high precision and high reliability, and has become an important means of three-dimensional measurement of spatial point groups. A single measuring-station camera can only measure the projected coordinate position of a target on its image plane; the spatial three-dimensional coordinates of the target can be obtained only by intersection calculation between the image-plane position coordinates of two or more photoelectric theodolite cameras (base stations). Intersection measurement of the three-dimensional coordinates of a spatial point-group target therefore requires solving the problem of matching homonymous image points across measuring stations: any point of a multi-target group in space is imaged by two photoelectric theodolite cameras (denoted photoelectric theodolite camera 1 and photoelectric theodolite camera 2); given the position of its image point in the image plane of camera 1, the corresponding image point position in the image plane of camera 2 must be found accurately so that intersection can yield the spatial coordinates.
At present, optical intersection measurement of spatial point-group targets generally matches the point-group targets by a skew-line mismatching-point elimination method, which has the advantage of being vivid and intuitive. However, the method follows the principle of "first intersect the two measuring stations pairwise, then use the skew-line distances of the spatial intersection points to eliminate mismatching points". Because the intersection is exhaustive, the amount of calculation grows geometrically as the number of targets in the field of view increases, and the method has the following problems:
1) If the number of correct matching point pairs between the two photoelectric theodolite cameras is n, the number of intersection calculations is on the order of n×n; the amount of repeated calculation is huge and the matching efficiency is low;
2) Exhaustive intersection produces a large number of mismatching points, n(n-1) in total, so the amount of calculation required to eliminate them is also huge;
3) The mismatching points are difficult to reject, so the correct-matching rate is low.
Disclosure of Invention
The invention provides a fast and efficient image point matching method for wide-baseline optical intersection measurement, aiming to solve the technical problems of large computational load, low matching efficiency and high mismatching rate of existing image point group matching methods for wide-baseline optical intersection measurement.
In order to achieve the above purpose, the technical scheme provided by the invention is as follows:
the quick and efficient image point matching method for wide-baseline optical intersection measurement is characterized by comprising the following steps of:
1) Image point ordering
The image points in the target image data set S1 of base station 1 are sorted, from largest to smallest, by the angle between the line joining each image point to the optical center of base station 1 and the coordinate plane O1XY, giving the image point group S1′ = {P′11, P′12, …, P′1n}; the straight lines joining the image points P′11, P′12, …, P′1n to the optical center of base station 1 are denoted la1, la2, …, lan;
The image points in the target image data set S2 of base station 2 are sorted, from largest to smallest, by the angle between the line joining each image point to the optical center of base station 2 and the coordinate plane O2XY, giving the image point group S2′ = {P′21, P′22, …, P′2m}; the straight lines joining the image points P′21, P′22, …, P′2m to the optical center of base station 2 are denoted lb1, lb2, …, lbm;
Here the coordinate plane O1XY takes the optical center O1 of base station 1 as the origin of the coordinate system, the line from the optical center O1 of base station 1 toward the optical center O2 of base station 2 as the X axis, and the direction perpendicular to the X axis, taken clockwise, as the Y axis;
the coordinate plane O2XY takes the optical center O2 of base station 2 as the origin of the coordinate system, the line from the optical center O2 of base station 2 toward the optical center O1 of base station 1 as the X axis, and the direction perpendicular to the X axis, taken clockwise, as the Y axis;
2) Image point matching
2.1) Search for the matching point of the first image point P′11 of image point group S1′
Starting from the straight line la1 corresponding to the first image point P′11 of image point group S1′, compute in turn the skew-line distance between la1 and the straight lines lb1, lb2, …, lbm corresponding to the image points P′21, P′22, …, P′2m of image point group S2′; the distance between line la1 and line lbi is denoted da1bi, i = 1, 2, …, m. Each time a skew-line distance da1bi is obtained, it is compared with the threshold E: if da1bi ≤ E, the image point P′11 corresponding to line la1 and the image point P′2i corresponding to line lbi form a correct matching point pair; if da1bi > E, the skew-line distance calculation proceeds to the next image point in S2′, until a correct matching point pair is found or all image points in S2′ have been traversed;
2.2) Search for the matching point of the second image point P′12 of image point group S1′
If the first image point P′11 found its correct matching point P′2i, then the next image point P′12 of S1′ is matched by the method of step 2.1), starting from the image point P′2(i+1) that follows the matched point;
if the first image point P′11 found no correct matching point, then the next image point P′12 of S1′ is matched by the method of step 2.1), starting from the first image point P′21 of image point group S2′;
2.3) Search for the matching points of the remaining image points of image point group S1′
The image point matching calculation of step 2.2) is applied in turn to the remaining image points of image point group S1′ against image point group S2′, until all image points in S1′ have been traversed; each image point P′1s in S1′ and its matched image point in S2′ form a correct matching point pair;
where s = 1, 2, …, n.
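Steps 2.1) through 2.3) can be sketched in code as follows. This is a minimal illustration, not the patent's implementation: lines are represented generically as (point, direction) pairs and the skew-line distance is computed with the standard cross-product formula rather than the patent's simplified z-coordinate expression.

```python
import math

def skew_line_distance(line1, line2):
    """Distance between two 3D lines, each given as (point, direction)."""
    (p, d1), (q, d2) = line1, line2
    # common normal of the two direction vectors (cross product)
    n = (d1[1]*d2[2] - d1[2]*d2[1],
         d1[2]*d2[0] - d1[0]*d2[2],
         d1[0]*d2[1] - d1[1]*d2[0])
    w = (q[0] - p[0], q[1] - p[1], q[2] - p[2])
    norm = math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
    if norm < 1e-12:
        # parallel lines: distance is |w x d1| / |d1|
        c = (w[1]*d1[2] - w[2]*d1[1],
             w[2]*d1[0] - w[0]*d1[2],
             w[0]*d1[1] - w[1]*d1[0])
        return math.sqrt(c[0]**2 + c[1]**2 + c[2]**2) / math.sqrt(
            d1[0]**2 + d1[1]**2 + d1[2]**2)
    # projection of w onto the common normal
    return abs(w[0]*n[0] + w[1]*n[1] + w[2]*n[2]) / norm

def ordered_match(lines_a, lines_b, threshold):
    """Steps 2.1)-2.3): match two sorted line lists; after a match at
    index j, the search for the next image point resumes at j + 1."""
    matches, start = [], 0
    for i, la in enumerate(lines_a):
        for j in range(start, len(lines_b)):
            if skew_line_distance(la, lines_b[j]) <= threshold:
                matches.append((i, j))
                start = j + 1
                break
    return matches
```

Because both lists are sorted by the same angle criterion, each search resumes where the previous match left off, so the total number of distance evaluations stays close to n + m instead of the n·m of exhaustive matching.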
Further, step 1) also includes correcting the image points in the target image data set S1; the specific process is as follows:
a) The coordinates p(x, y) of each image point of the target image data set S1 in the image physical coordinate system are corrected with a radial distortion correction model to obtain the distortion-corrected coordinates P′(x′, y′);
the image physical coordinate system takes the upper-left corner of the image as the origin, the horizontal direction as the X axis, and the vertical direction as the Y axis;
b) The azimuth angle A10 and pitch angle E10 of base station 1 are combined with the coordinates P′(x′, y′) to obtain the synthesized azimuth angle A1 and synthesized pitch angle E1:
where x′ is the x-direction coordinate of the image point and y′ is the y-direction coordinate of the image point;
pix is the single-pixel size of the base station camera; f is the focal length of the base station lens;
A10 is the azimuth angle, at base station 1, of the image point at the center of the image plane of base station 1;
E10 is the pitch angle, at base station 1, of the image point at the center of the image plane of base station 1;
c) The corrected image points of the target image data set S1 are represented as (A1, E1);
Further, in step a), the radial distortion correction model is expressed as follows:
x = x′(1 + k1r² + k2r⁴)
y = y′(1 + k1r² + k2r⁴)
where r² = x′² + y′²;
k1 and k2 are the radial distortion coefficients.
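As written, the model gives the distorted coordinates (x, y) in terms of the corrected ones (x′, y′), so correction requires inverting it; fixed-point iteration is a common way to do this for small distortion. The sketch below is an illustration only, and the coefficient values used in the usage example are invented, not taken from the patent.

```python
def undistort(x, y, k1, k2, iterations=10):
    """Invert x = x'(1 + k1*r^2 + k2*r^4) with r^2 = x'^2 + y'^2 by
    fixed-point iteration, starting from the distorted coordinates."""
    xp, yp = x, y  # initial guess: corrected ~= distorted
    for _ in range(iterations):
        r2 = xp * xp + yp * yp
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xp, yp = x / factor, y / factor
    return xp, yp
```

For the small distortion coefficients typical of a calibrated lens the iteration converges in a handful of steps; applying the forward model to the result reproduces the measured coordinates.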
Further, step 1) also includes correcting the image points in the target image data set S2; the correction process is the same as that for the image points in the target image data set S1.
Further, the skew-line distance in step 2) is calculated as follows:
Define the straight line l1 as the line through the optical center O1 of base station 1 and any image point M of image point group S1, and the straight line l2 as the line through the optical center O2 of base station 2 and any image point N of image point group S2. The distance d between the skew lines l1 and l2 is calculated as:
d = |z1 − z2|
where
(x0, y0) are the coordinates of the projection point of base station 2 onto the XO1Y plane of base station 1;
(A1m, E1m) are the corrected azimuth and pitch angles of image point M at base station 1;
(A2n, E2n) are the corrected azimuth and pitch angles of image point N at base station 2;
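The expressions for z1 and z2 appear in the patent as formula images and are not reproduced in this text. A common ingredient of any such computation is converting an azimuth/pitch pair (A, E) into a 3D ray direction; the sketch below assumes a conventional axis layout (azimuth measured in the horizontal plane from the X axis toward the Y axis, pitch measured upward out of that plane), which may differ from the patent's exact convention.

```python
import math

def ray_direction(azimuth_deg, pitch_deg):
    """Unit direction of a sight-line ray from azimuth A and pitch E.
    Assumed convention: A rotates about the vertical axis from X toward Y;
    E elevates out of the horizontal XY plane."""
    a = math.radians(azimuth_deg)
    e = math.radians(pitch_deg)
    return (math.cos(e) * math.cos(a),
            math.cos(e) * math.sin(a),
            math.sin(e))
```

Together with each station's optical-center position, this direction defines the ray l1 or l2 whose skew-line distance is then evaluated.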
Meanwhile, the invention also provides a second fast and efficient image point matching method for wide-baseline optical intersection measurement, characterized by comprising the following steps:
1) Image point ordering
The image points in the target image data set S1 of base station 1 are sorted, from largest to smallest, by the angle between the line joining each image point to the optical center of base station 1 and the coordinate plane O1XY, giving the image point group S1′ = {P′11, P′12, …, P′1n}; the straight lines joining the image points P′11, P′12, …, P′1n to the optical center of base station 1 are denoted la1, la2, …, lan;
The image points in the target image data set S2 of base station 2 are sorted, from largest to smallest, by the angle between the line joining each image point to the optical center of base station 2 and the coordinate plane O2XY, giving the image point group S2′ = {P′21, P′22, …, P′2m}; the straight lines joining the image points P′21, P′22, …, P′2m to the optical center of base station 2 are denoted lb1, lb2, …, lbm;
Here the coordinate plane O1XY takes the optical center O1 of base station 1 as the origin of the coordinate system, the line from the optical center O1 of base station 1 toward the optical center O2 of base station 2 as the X axis, and the direction perpendicular to the X axis, taken clockwise, as the Y axis;
the coordinate plane O2XY takes the optical center O2 of base station 2 as the origin of the coordinate system, the line from the optical center O2 of base station 2 toward the optical center O1 of base station 1 as the X axis, and the direction perpendicular to the X axis, taken clockwise, as the Y axis;
2) Image point matching
2.1) Search for the matching point of the first image point P′11 of image point group S1′
2.1.1) Compute the skew-line distances between the straight line la1 corresponding to the first image point P′11 of image point group S1′ and the straight lines lb1, lb2, …, lbm corresponding to all the image points P′21, P′22, …, P′2m of image point group S2′, to obtain a skew-line distance data set,
where da1bi is the skew-line distance between line la1 and line lbi, i = 1, 2, …, m;
2.1.2) Traverse all the elements of the data set and compare each with the threshold E: if da1bi ≤ E, the image point P′11 corresponding to line la1 and the image point P′2i corresponding to line lbi form a correct matching point pair; if the whole of S2′ is traversed without finding a skew-line distance less than or equal to the threshold E, then the image point P′11 corresponding to line la1 has no correct matching point in image point group S2′;
2.2) Search for the matching point of the second image point P′12 of image point group S1′
If the first image point P′11 found its correct matching point P′2i, then the next image point P′12 of S1′ is matched by the method of step 2.1), starting from the image point P′2(i+1) that follows the matched point;
if the first image point P′11 found no correct matching point, then the next image point P′12 of S1′ is matched by the method of step 2.1), starting from the first image point P′21 of image point group S2′;
2.3) Search for the matching points of the remaining image points of image point group S1′
The image point matching calculation of step 2.2) is applied in turn to the remaining image points of image point group S1′ against image point group S2′, until all image points in S1′ have been traversed; each image point P′1s in S1′ and its matched image point in S2′ form a correct matching point pair;
where s = 1, 2, …, n.
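The second method's step 2.1) differs from the first in that all m skew-line distances for one image point are computed before any comparison against the threshold. A minimal sketch with a caller-supplied distance function (the line representation and `dist` are placeholders, not the patent's formula):

```python
def batch_match_one(la, lines_b, start, threshold, dist):
    """Method-2 matching for a single line la: first build the full
    skew-line distance data set over the remaining lines of station 2,
    then scan it for the first distance within the threshold."""
    distances = [dist(la, lb) for lb in lines_b[start:]]
    for k, d in enumerate(distances):
        if d <= threshold:
            return start + k  # index of the correct matching point
    return None  # no correct matching point in S2'
```

Compared with the first method's early exit, this compute-then-scan variant does more distance evaluations per image point but produces the complete distance data set, which can be convenient when the distances are computed in a vectorized batch.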
Further, step 1) also includes correcting the image points in the target image data set S1; the specific process is as follows:
a) The coordinates p(x, y) of each image point of the target image data set S1 in the image physical coordinate system are corrected with a radial distortion correction model to obtain the distortion-corrected coordinates P′(x′, y′);
the image physical coordinate system takes the upper-left corner of the image as the origin, the horizontal direction as the X axis, and the vertical direction as the Y axis;
b) The azimuth angle A10 and pitch angle E10 of base station 1 are combined with the coordinates P′(x′, y′) to obtain the synthesized azimuth angle A1 and synthesized pitch angle E1:
where x′ is the x-direction coordinate of the image point and y′ is the y-direction coordinate of the image point;
pix is the single-pixel size of the base station camera; f is the focal length of the base station lens;
A10 is the azimuth angle, at base station 1, of the image point at the center of the image plane of base station 1;
E10 is the pitch angle, at base station 1, of the image point at the center of the image plane of base station 1;
c) The corrected image points of the target image data set S1 are represented as (A1, E1);
Further, in step a), the radial distortion correction model is expressed as follows:
x = x′(1 + k1r² + k2r⁴)
y = y′(1 + k1r² + k2r⁴)
where r² = x′² + y′²;
k1 and k2 are the radial distortion coefficients;
further, in step 1): also comprises the step of collecting the target image data set S 2 The image points in (a) are corrected, and the correction process is carried out with the target image data set S 1 The middle image point correction process is the same.
Further, the skew-line distance in step 2) is calculated as follows:
Define the straight line l1 as the line through the optical center O1 of base station 1 and any image point M of image point group S1, and the straight line l2 as the line through the optical center O2 of base station 2 and any image point N of image point group S2. The distance d between the skew lines l1 and l2 is calculated as:
d = |z1 − z2|
where
(x0, y0) are the coordinates of the projection point of base station 2 onto the XO1Y plane of base station 1;
(A1m, E1m) are the corrected azimuth and pitch angles of image point M at base station 1;
(A2n, E2n) are the corrected azimuth and pitch angles of image point N at base station 2.
Compared with the prior art, the invention has the advantages that:
1. Unlike the traditional exhaustive skew-line matching method, the invention adopts a constraint principle based on ordered image point projection: the image points of the two stations are sorted by the angle to the horizontal coordinate plane and matched in that order, realizing fast, ordered matching of the homonymous image points of the two cameras. As matching proceeds there are fewer target points left to match and matching becomes faster, effectively improving both the matching speed and the accuracy of wide-baseline optical intersection measurement of image point groups.
2. The method performs image point matching based on the combined principle of skew-line vector intersection, nearest-neighbor distance judgment and projection-ordering constraint, improving matching speed and efficiency. This improves the real-time performance and degree of automation of optical intersection measurement of point-group targets, overcomes the low accuracy and efficiency of existing multi-target matching methods, and solves the problems of fast, real-time matching of multi-target image points in wide-baseline multi-station photoelectric theodolite measurement.
3. The method differs from conventional multi-target optical intersection measurement, which first performs exhaustive matching intersection and then eliminates wrong matching point pairs. Based on the combined principle of skew-line vector intersection, nearest-neighbor distance judgment and projection-ordering constraint, the invention improves matching accuracy and efficiency; compared with the existing exhaustive matching method, it greatly reduces the amount of calculation and increases the matching speed.
4. The invention further provides a normalized fast matching method based on image point ordering and skew-line nearest-neighbor matching. Through the correspondence of the image point projections ordered in the vertical direction, the method avoids the redundant calculation of mutual traversal between image points in conventional matching methods, reduces the amount of calculation, and increases the matching speed of multiple target image points; as the matching process advances, the search range gradually shrinks and the matching speed increases.
5. The method completes the ordered matching of image points with an accelerating matching speed, greatly increasing the matching speed and thereby realizing fast and accurate matching of the image point groups of the two cameras.
Drawings
FIG. 1 is a flow chart of the fast and efficient image point matching method for wide-baseline optical intersection measurement of the present invention;
FIG. 2 is a schematic diagram of two-station intersection in the method of the present invention;
FIG. 3 is a projection of the skew lines onto the XO1Y plane in the method of the present invention;
FIG. 4 is a schematic diagram of the target images of the two base stations in the second embodiment of the fast and efficient image point matching method for wide-baseline optical intersection measurement of the present invention;
FIG. 5 is a diagram of the spatial point matching correspondence in the second embodiment of the fast and efficient image point matching method for wide-baseline optical intersection measurement of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Based on nearest-neighbor judgment of line distances and the corresponding constraint principle of ordered image point projection, the invention converts the problem of matching two image points into the problem of judging the distance between the two skew lines formed by the two image points and their base stations, thereby reducing the complexity of the matching algorithm and effectively improving matching accuracy and efficiency.
Example 1
As shown in fig. 1, a method for fast and efficient matching of image points in wide baseline optical intersection measurement includes the following steps:
step one, image point correction and ordering
1.1) Distortion correction technique
In a wide-baseline two-station theodolite measurement system, the distance between the photoelectric theodolite cameras of the two (or more) stations ranges from hundreds of meters to tens of kilometers, and the distortion of the photoelectric theodolite cameras themselves affects the measured image point data.
The lens distortion of a photoelectric theodolite camera mainly comprises three types: radial distortion, pincushion distortion and barrel distortion, of which radial distortion affects imaging far more than the other two. Therefore, in the intersection measurement system, the invention mainly corrects the radial distortion of the camera lens. The radial distortion correction is specifically as follows:
A spatial target point is imaged on the image plane of the photoelectric theodolite camera, giving the coordinates p(x, y) of the image point in the image physical coordinate system, which takes the upper-left corner of the image as the origin, the horizontal direction as the X axis, and the vertical direction as the Y axis. The coordinates, which contain distortion errors, are processed with the radial distortion correction model; the calculation formulas are:
x = x′(1 + k1r² + k2r⁴)
y = y′(1 + k1r² + k2r⁴)
where r² = x′² + y′²; (x′, y′) are the corrected coordinates;
k1 and k2 are the radial distortion coefficients, obtainable by a camera calibration method.
The two photoelectric theodolite cameras are denoted photoelectric theodolite camera 1 (base station 1, or camera 1) and photoelectric theodolite camera 2 (base station 2, or camera 2). The spatial target points are projected onto the image planes of the two photoelectric theodolite cameras of the two-station intersection measurement system, giving the target image data set S1 of base station 1 and the target image data set S2 of base station 2; the image point coordinate positions p(x, y) can be obtained by image extraction software.
The coordinates p(x, y) of all image points of the base station 1 target image data set S1 and the base station 2 target image data set S2 are substituted into the radial distortion correction model formula, and the corrected image point coordinates P′(x′, y′) are obtained after distortion correction.
1.2) Calculating the synthesized angles
The current azimuth angle A10 and pitch angle E10 of base station 1 are combined with the image point coordinates P′(x′, y′) to obtain the synthesized azimuth angle A1 and synthesized pitch angle E1:
where x′ is the x-direction coordinate of the image point (also called miss distance x);
y′ is the y-direction coordinate of the image point (also called miss distance y); pix is the single-pixel size of the camera; f is the focal length of the camera lens;
A10 is the azimuth angle, at base station 1, of the image point at the center of the image plane of base station 1;
E10 is the pitch angle, at base station 1, of the image point at the center of the image plane of base station 1.
The synthesized angles of the multiple target image points obtained by base station 1 and base station 2 are calculated respectively, giving the target point set S1″ corresponding to base station 1 and the target point set S2″ corresponding to base station 2;
1.3 Image point ordering
Target image dataset S 1 Corresponding target point set S after image point correction 1 The image point in the "image point,according to the connection line of the image point and the optical center of the base station 1 and the coordinate system O 1 The included angles of the XY planes are ordered from big to small to obtain an image point group S 1 ′={P′ 11 ,P′ 12 ,…,P′ 1n Image point P' 11 ,P′ 12 ,,P′ 1n The straight line connecting the optical center of the base station 1 is denoted as l a1 、l a2 、…、l an The method comprises the steps of carrying out a first treatment on the surface of the Wherein, P' 11 For example, P' 11 Image point group S representing base station 1 1 The first image point in' the azimuth angle of this image point is A 11 The pitch angle corresponding to the image point is E 11 Then the pixel group S 1 ′={P′ 11 ,P′ 12 ,…,P′ 1n }={(A 11 ,E 11 ),(A 12 ,E 12 ),…,(A 1n ,E 1n ) }, i.e. image point P' 11 ,P′ 12 ,…,P′ 1n Respectively denoted as (A) 11 ,B 11 ),(A 12 ,B 12 ),…,(A 1n ,B 1n );
Target image dataset S 2 Corresponding target point set S' after image point correction 2 The image point in (2) is connected with the optical center line of the base station according to the image point and the coordinate system O 2 The included angles of the XY planes are ordered from big to small to obtain an image point group S 2 ′={P′ 21 ,P′ 22 ,…,P′ 2m Image point P' 21 ,P′ 22 ,…,P′ 2m The straight line connecting the optical center of the base station 2 is denoted as l b1 、l b2 、…、l bm The method comprises the steps of carrying out a first treatment on the surface of the Wherein, P' 21 For example, P' 21 Image point group S representing base station 2 2 The first image point in' the azimuth angle of this image point is A 21 The pitch angle corresponding to the image point is E 21 Then the pixel group S 2 ′={P′ 21 ,P′ 22 ,…,P′ 2m }={(A 21 ,E 21 ),(A 22 ,E 22 ),…,(A 2m ,E 2m ) }, i.e. image point P' 21 ,P′ 22 ,…,P′ 2m Respectively denoted as (A) 21 ,B 21 ),(A 22 ,B 22 ),…,(A 2m ,B 2m );
Here, coordinate system O1-XY takes the optical center O1 of base station 1 as origin; the X axis is parallel to the line connecting the optical center O1 of base station 1 and the optical center O2 of base station 2, and the Y axis is perpendicular to the X axis in the clockwise direction;
coordinate system O2-XY takes the optical center O2 of base station 2 as origin; the X axis is parallel to the line connecting the optical center O2 of base station 2 and the optical center O1 of base station 1, and the Y axis is perpendicular to the X axis in the clockwise direction;
n and m are the numbers of target points acquired by base station 1 and base station 2, respectively. In theory m = n, but in an actual scene, owing to noise and other factors, the numbers of target points acquired by the two base stations may be unequal at a given moment.
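The ordering of step one can be sketched as follows (a minimal illustration assuming each corrected image point is stored as an (azimuth, pitch) pair, with the pitch angle E taken as the angle between the optical-center ray and the XY plane; the function name and data layout are illustrative, not from the patent):

```python
# Sort a station's corrected image points in descending order of the
# angle between the optical-center ray and the XY plane (the pitch angle E).
def order_image_points(points):
    """points: list of (A, E) tuples; returns a new list sorted by E, descending."""
    return sorted(points, key=lambda p: p[1], reverse=True)

s1 = [(0.10, 0.30), (0.12, 0.45), (0.11, 0.20)]
s1_prime = order_image_points(s1)
# s1_prime == [(0.12, 0.45), (0.10, 0.30), (0.11, 0.20)]
```

The same ordering is applied independently to both stations, so corresponding targets tend to occupy similar positions in the two ordered groups.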
Step two, image point matching
Image point matching uses ordered-projection correspondence together with nearest-neighbor judgment. The technique rests on the intersection-measurement principle that the two rays formed by homonymous image points and the position coordinates of their base stations are coplanar, i.e., the distance between the two rays is theoretically zero. In an actual scene, however, because of errors, the two rays formed by homonymous image points and their base stations are generally skew lines. Therefore, by computing the distances between the ray of a given image point of base station 1 and the rays of all image points of base station 2, each ray formed with its respective base station, one can judge whether two image points are homonymous. The image point matching technique mainly comprises the following steps:
2.1) For the image point P′11 corresponding to the first straight line l_a1 of the image point group S1′ of base station 1, find the matching point in the image point group S2′ of base station 2
Compute in turn the skew-line distances between the straight line l_a1 corresponding to the first image point P′11 of image point group S1′ and the straight lines l_b1, l_b2, …, l_bm corresponding to the image points P′21, P′22, …, P′2m of image point group S2′. The distance between line l_a1 and line l_bi is denoted d_a1bi, i = 1, 2, …, m. Each time a skew-line distance d_a1bi is obtained, judge whether it is less than or equal to the threshold E: if d_a1bi ≤ E, the image point P′11 corresponding to line l_a1 and the image point P′2i corresponding to line l_bi are a correct matching point pair. Otherwise, traverse all image points of S2′; if every skew-line distance d_a1bi is greater than the threshold E, the first image point P′11 of image point group S1′ finds no correct matching point in image point group S2′;
2.2) Find the matching point for the second image point P′12 of the image point group S1′ of base station 1
If the first image point P′11 found no correct matching point, take the next image point P′12 of S1′ and compute skew-line distances starting from the first image point P′21 of image point group S2′, by the same calculation principle as step 2.1), until a correct matching point pair is found or all image points of S2′ have been traversed;
if the first image point P′11 found the correct matching point P′2i, then for the next image point P′12 of S1′, compute skew-line distances starting from the image point P′2(i+1) following the correct matching point, by the same calculation principle as step 2.1), until a correct matching point pair is found or all image points of S2′ have been traversed;
2.3) Find matching points for the remaining image points of image point group S1′
Proceed by the method of step 2.2), and so on, until all image points of image point group S1′ have been traversed, or until an image point P′1s of S1′ and the image point P′2m of S2′ form a correct matching point pair (i.e., all image points of S2′ have been traversed); in the latter case, the image points of S1′ after P′1s, up to P′1n, have no matching points in base station 2;
it can be seen that, as matching proceeds, ever fewer image points of the image point group S2′ of base station 2 remain to be searched when finding matching points for the image points of base station 1, so the amount of computation keeps decreasing.
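Steps 2.1) to 2.3) can be sketched as a single loop. This is an illustrative reading only: skew_distance is a hypothetical helper returning the skew-line distance between the rays through an S1′ image point and an S2′ image point, and eps stands for the threshold E:

```python
def match_ordered(s1, s2, skew_distance, eps):
    """First-fit matching of ordered image point groups.

    s1, s2: image point groups S1', S2' (already ordered as in step one).
    skew_distance: function giving the skew-line distance between the rays
                   through an S1' point and an S2' point (assumed given).
    eps: distance threshold for accepting a matching point pair.
    Returns a list of (i, j) index pairs of correct matches.
    """
    matches = []
    start = 0                     # resume after the last matched S2' point
    for i, p1 in enumerate(s1):
        if start >= len(s2):      # S2' exhausted: remaining S1' points unmatched
            break
        for j in range(start, len(s2)):
            if skew_distance(p1, s2[j]) <= eps:
                matches.append((i, j))
                start = j + 1     # next S1' point searches past this match
                break
    return matches
```

The resume index start is what makes the searchable part of S2′ shrink as matching proceeds, which is the source of the decreasing computation described above.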
In this embodiment, as shown in FIG. 2, the skew-line distance calculation takes one point from each of the target image datasets of base station 1 and base station 2 at the same moment, i.e., a point taken arbitrarily from each target scan map. Suppose the point (A11, E11) is taken from the corrected image point group S1′ and the point (A21, E21) from the image point group S2′. A spatial rectangular coordinate system O1-XYZ is established with base station 1 as origin, as shown in FIG. 2.
From spatial geometry, a spatial ray is uniquely determined by the position coordinates of a base station together with the azimuth angle A and pitch angle E of a target point. In FIG. 2, the optical center O1 of base station 1 and the point (A11, E11) form the spatial ray l1, and the optical center O2 of base station 2 and the point (A21, E21) form the spatial ray l2.
If the points (A11, E11) and (A21, E21) are image points of the same spatial target point on the target scans of base station 1 and base station 2, then l1 and l2 must intersect at a point (i.e., meet), and the meeting point must lie directly above the intersection point M′ of the projections l′1 and l′2. If l1 and l2 intersect, the distance Δz between point M and point N is 0, i.e., in FIG. 2 point M coincides with point N. On this principle one can judge whether two points on the camera images of the two base stations meet; if they meet, the two points are corresponding matching points.
In an actual scene, however, because of observation errors and the like, the two spatial rays formed by corresponding matching points and their respective base stations do not always intersect; some rays that theoretically intersect form skew lines in space. In the actual calculation, whether rays l1 and l2 intersect can be judged by whether their distance is sufficiently small.
In FIG. 2, judging whether l1 and l2 meet first requires computing the distance between l1 and l2. Computing the distance between two skew lines is complex, however, so the distance Δz between points M and N can be used instead as the judgment basis: when the angular change between the skew lines is small, the distance between the two skew lines is very close to the distance between M and N.
Points M and N are obtained as follows: through the intersection of the projections l′1 and l′2 of rays l1 and l2 on the XO1Y plane, draw a straight line l_m parallel to the Z axis; l_m intersects l2 at point N and intersects l1 at point M. Clearly, if l1 and l2 intersect, the distance Δz between point M and point N is 0.
To calculate the distance between M and N, the coordinates of M′ must be obtained first. FIG. 3 shows the projections of the lines of the spatial rectangular coordinate system of FIG. 2 on the XO1Y plane.
Let the coordinates of point M′ be (x, y), and let O′2 be the projection of base station 2 on the XO1Y plane, with coordinates (x0, y0).
Note Δx = x − x0 and Δy = y − y0; the coordinates of the M′ point then follow from the intersection of the projections l′1 and l′2.

Through the projection intersection M′ of the two rays l1 and l2 on the XO1Y plane, draw a straight line l_m parallel to the Z axis; l_m intersects l1 and l2 at points M(x1, y1, z1) and N(x2, y2, z2), respectively, whose coordinates follow from the azimuth and pitch angles of the two rays.

Let Δz denote the distance between the two points M and N. Because x1 = x2 = x and y1 = y2 = y, the distance between point M and point N is Δz = |z1 − z2|.
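The construction above can be sketched numerically. The code below is a sketch under assumed conventions (base station 1 at the origin, azimuth measured in the XO1Y plane from the X axis, pitch measured from that plane, angles in radians); it solves for the projection intersection M′ and returns Δz = |z1 − z2|:

```python
import math

def skew_line_dz(o2_xy, a1, e1, a2, e2):
    """Δz between points M and N: the vertical line through the projection
    intersection M' meets ray l1 at M and ray l2 at N.

    o2_xy: (x0, y0), projection of base station 2 on the XO1Y plane.
    a1, e1: azimuth and pitch of the image point at base station 1.
    a2, e2: azimuth and pitch of the image point at base station 2.
    """
    x0, y0 = o2_xy
    # Projected rays: (t*cos a1, t*sin a1) and (x0 + s*cos a2, y0 + s*sin a2).
    # Their intersection M' gives horizontal ranges t (from O1) and s (from O2).
    det = math.sin(a1 - a2)
    if abs(det) < 1e-12:
        raise ValueError("projected rays are parallel; no intersection M'")
    t = (y0 * math.cos(a2) - x0 * math.sin(a2)) / det
    s = (y0 * math.cos(a1) - x0 * math.sin(a1)) / det
    # Heights of M (on l1) and N (on l2) above the XO1Y plane.
    z1 = t * math.tan(e1)
    z2 = s * math.tan(e2)
    return abs(z1 - z2)
```

For two rays that truly meet at a target point, Δz is 0; in practice Δz is compared against the threshold.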
Step three, obtaining the three-dimensional coordinates of the spatial target point
From the correct matching point pairs thus determined, combined with the skew-line distance calculation method, the three-dimensional coordinates of the spatial target point corresponding to each matched image point pair can be computed.
As shown in FIG. 2, suppose the image point M(A11, E11) of image point group S1′ and the image point N(A21, E21) of image point group S2′ are correctly matched image points. Then the three-dimensional coordinates of the target point determined by image point M(A11, E11) and image point N(A21, E21) satisfy:

x = x1 = x2

y = y1 = y2
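Under the same assumed conventions as the Δz sketch, the coordinates of the target point for a matched pair can be reconstructed as below. Note that the z component is not reproduced in this text; the midpoint of z1 and z2 is used here purely as an illustrative assumption, not as the patent's stated formula:

```python
import math

def triangulate(o2_xy, a1, e1, a2, e2):
    """Return (x, y, z) of the target point for a correctly matched pair.

    Assumed conventions: base station 1 at the origin; o2_xy is the projection
    of base station 2 on the XO1Y plane; azimuths are measured in-plane from
    the X axis and pitches from the plane, in radians.
    """
    x0, y0 = o2_xy
    det = math.sin(a1 - a2)                              # projected-ray system
    t = (y0 * math.cos(a2) - x0 * math.sin(a2)) / det    # range from O1 to M'
    s = (y0 * math.cos(a1) - x0 * math.sin(a1)) / det    # range from O2 to M'
    x, y = t * math.cos(a1), t * math.sin(a1)            # x = x1 = x2, y = y1 = y2
    z1, z2 = t * math.tan(e1), s * math.tan(e2)
    return x, y, (z1 + z2) / 2.0                         # midpoint height: an assumption
```

For an error-free matched pair the two heights coincide, so the midpoint reduces to the exact target height.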
Unlike the traditional exhaustive skew-line matching method, the method of this embodiment performs group-target image point matching based on skew-line vector intersection combined with nearest-neighbor distance judgment and a projection-ordering constraint. The image points of the two stations are sorted and matched by their angle with the coordinate plane, which eliminates a large number of repeated calculations, and the matching speed grows as matching proceeds; the homonymous image points of the two cameras can be matched in order. A simplified skew-line distance solution is also introduced, which effectively reduces the complexity of computing distances between straight lines and improves both the speed and the accuracy of matching image point groups in wide-baseline optical intersection measurement.
Embodiment two

This embodiment differs from embodiment one only in step two, image point matching:
Image point matching uses ordered-projection correspondence together with nearest-neighbor judgment. The technique rests on the intersection-measurement principle that the two rays formed by homonymous image points and the position coordinates of their base stations are coplanar, i.e., the distance between the two rays is theoretically zero. In an actual scene, however, because of errors, the two rays formed by homonymous image points and their base stations are generally skew lines. Therefore, by computing the distances between the ray of a given image point of base station 1 and the rays of all image points of base station 2, each ray formed with its respective base station, one can judge whether two image points are homonymous. The image point matching technique mainly comprises the following steps:
2.1) For the image point P′11 corresponding to the first straight line l_a1 of the image point group S1′ of base station 1, find the matching point in the image point group S2′ of base station 2
2.2.1) Compute the skew-line distances between the straight line l_a1 corresponding to the first image point P′11 of image point group S1′ and the straight lines l_b1, l_b2, …, l_bm corresponding to all image points P′21, P′22, …, P′2m of image point group S2′, obtaining the skew-line distance dataset {d_a1b1, d_a1b2, …, d_a1bm}, where d_a1bi is the distance between line l_a1 and line l_bi, i = 1, 2, …, m;
2.2.2) From the dataset select the smallest element d_a1bi. If it satisfies the distance-threshold screening condition d_a1bi ≤ ε, the image point P′11 corresponding to line l_a1 and the image point P′2i corresponding to line l_bi are the correct matching point pair; the qualifying image point pair to be registered is thus selected from the dataset;
if no data element of the skew-line distance dataset satisfies the matching screening condition, then no image point of image point group S2′ is a correct match for image point P′11. Remove P′11 from the data point set S1 and select the next image point P′12 from image point group S1′.
2.2) Find the matching point for the second image point P′12 of the image point group S1′ of base station 1

If the first image point P′11 found no correct matching point, take the next image point P′12 of S1′ and compute skew-line distances starting from the first image point P′21 of image point group S2′, with the same calculation and screening principles as step 2.1), until a correct matching point pair is found or all image points of S2′ have been traversed;
if the first image point P′11 found the correct matching point P′2i, then for the next image point P′12 of S1′, compute skew-line distances starting from the image point P′2(i+1) following the correct matching point, with the same calculation and screening principles as step 2.1), until a correct matching point pair is found or all image points of S2′ have been traversed;
2.3) Find matching points for the remaining image points of image point group S1′
Proceed by the method of step 2.2), and so on, until all image points of image point group S1′ have been traversed, or until an image point P′1s of S1′ and the image point P′2m of S2′ form a correct matching point pair (i.e., all image points of S2′ have been traversed); in the latter case, the image points of S1′ after P′1s, up to P′1n, have no matching points in base station 2;
it can be seen that, as matching proceeds, ever fewer image points of the image point group S2′ of base station 2 remain to be searched when finding matching points for the image points of base station 1, so the amount of computation keeps decreasing.
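Embodiment two's screening, which takes the minimum distance first and then thresholds it, can be sketched alongside the first-fit loop of embodiment one. As before, skew_distance is a hypothetical helper and eps stands for the threshold ε:

```python
def match_min_screening(s1, s2, skew_distance, eps):
    """Match each S1' point to the nearest remaining S2' candidate if within eps.

    Candidates start after the most recent match, so the searchable part
    of S2' shrinks as matching proceeds.
    """
    matches = []
    start = 0
    for i, p1 in enumerate(s1):
        if start >= len(s2):
            break
        # distances to all remaining candidates, then take the smallest element
        dists = [(skew_distance(p1, s2[j]), j) for j in range(start, len(s2))]
        d_min, j_min = min(dists)
        if d_min <= eps:          # threshold screening on the smallest element
            matches.append((i, j_min))
            start = j_min + 1
    return matches
```

Compared with the first-fit rule of embodiment one, taking the minimum over all remaining candidates makes the match robust when several distances fall under the threshold.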
As shown in FIG. 4, suppose the image point group S1′ of base station 1 and the image point group S2′ of base station 2 hold the image point information of target points at the same moment acquired by base station 1 and base station 2, respectively; S1′ and S2′ contain the same number of image points, with 4 target image points in each scan map. Denote S1′ = {a1, a2, a3, a4} and S2′ = {b1, b2, b3, b4}. O1 and O2 are the optical center positions of base station 1 and base station 2, respectively.
Base station 1 and the 4 image points of scan map S1′ generate 4 rays, which can be expressed as the ray set L1.
Base station 2 and the 4 image points of scan map S2′ generate 4 rays, which can be expressed as the ray set L2.
a) Select the ray of L1 passing through a1 as the main ray; it forms four groups of skew lines with the 4 rays of L2, respectively. Compute the distances of the four groups of skew lines with the skew-line distance calculation method of embodiment one, obtaining a distance dataset. In theory, the two rays corresponding to a correct matching point pair intersect, i.e., they are coplanar and their distance d is 0.

b) Compute the distances of the four groups of skew lines formed by the main ray and the 4 rays of L2, as shown in FIG. 5.

From the skew-line distance dataset obtained by screening, take the smallest element; if it satisfies the distance-threshold screening condition d ≤ ε, the corresponding points a1 and bi are the correct matching point pair, and the qualifying image point pair to be registered is selected from the dataset.
If no data element of the skew-line distance dataset satisfies the matching screening condition, then no point of S2′ is a correct match for point a1. Remove a1 from the camera 1 data point set S1, select the next point a2, return to the previous step, and continue the skew-line distance calculation and screening.
Suppose a data element of the skew-line distance dataset satisfies the matching screening condition, so that the correct matching point of point a1 in S2′ is b2. Remove a1 from the camera 1 data point set S1 and select the next point a2; now, in the camera 2 point set S2′, the skew-line distances to image point a2 of camera 1 need only be computed starting from b3. Thus, as matching proceeds, the number of matchable point groups on the image plane of camera 2 decreases and the amount of computation shrinks, which improves the matching speed from the algorithmic point of view.
And so on, until the matching calculation has been completed for all image points of the camera 1 image point group S1′.
After the matching calculation, as shown in FIG. 5, O is the coordinate origin, which can be set arbitrarily according to the measurement task; the X axis is the north direction, the Y axis the west direction, and the Z axis the vertical height direction. Image point a1 and image point b1, image point a2 and image point b2, image point a3 and image point b3, and image point a4 and image point b4 are correct matching point pairs, which determine the real spatial target points P1, P2, P3, and P4, respectively.
The foregoing description of the preferred embodiments of the present invention is merely illustrative, and the technical solution of the present invention is not limited thereto; any modification made by those skilled in the art based on the main technical concept of the present invention falls within the protection scope of the present invention.
Claims (4)
1. A fast and efficient image point matching method for wide-baseline optical intersection measurement, characterized by comprising the following steps:
1) Image point ordering
Sort the image points of the target image dataset S1 of base station 1 in descending order of the angle between the line connecting each image point to the optical center of base station 1 and the XY plane of coordinate system O1-XY, obtaining the image point group S1′ = {P′11, P′12, …, P′1n}; denote the straight lines connecting the image points P′11, P′12, …, P′1n to the optical center of base station 1 as l_a1, l_a2, …, l_an;
Sort the image points of the target image dataset S2 of base station 2 in descending order of the angle between the line connecting each image point to the optical center of base station 2 and the XY plane of coordinate system O2-XY, obtaining the image point group S2′ = {P′21, P′22, …, P′2m}; denote the straight lines connecting the image points P′21, P′22, …, P′2m to the optical center of base station 2 as l_b1, l_b2, …, l_bm;
wherein coordinate system O1-XY takes the optical center O1 of base station 1 as origin; the X axis is parallel to the line connecting the optical center O1 of base station 1 and the optical center O2 of base station 2, and the Y axis is perpendicular to the X axis in the clockwise direction;
coordinate system O2-XY takes the optical center O2 of base station 2 as origin; the X axis is parallel to the line connecting the optical center O2 of base station 2 and the optical center O1 of base station 1, and the Y axis is perpendicular to the X axis in the clockwise direction;
2) Image point matching
2.1) Find the matching point for the first image point P′11 of image point group S1′
Compute in turn the skew-line distances between the straight line l_a1 corresponding to the first image point P′11 of image point group S1′ and the straight lines l_b1, l_b2, …, l_bm corresponding to the image points P′21, P′22, …, P′2m of image point group S2′; the distance between line l_a1 and line l_bi is denoted d_a1bi, i = 1, 2, …, m. Each time a skew-line distance d_a1bi is obtained, judge whether it is less than or equal to the threshold E: if d_a1bi ≤ E, the image point P′11 corresponding to line l_a1 and the image point P′2i corresponding to line l_bi are a correct matching point pair; if d_a1bi is greater than the threshold E, take the next image point of S2′ and compute its skew-line distance; continue until a correct matching point pair is found or all image points of S2′ have been traversed;
wherein the skew-line distance is calculated as follows:
define: the optical center O1 of base station 1 and any image point M of image point group S1′ form the straight line l1, and the optical center O2 of base station 2 and any image point N of image point group S2′ form the straight line l2; the distance d between the skew lines l1 and l2 is calculated as

d = |z1 − z2|

wherein (x0, y0) are the projection coordinates of base station 2 on the XO1Y plane of base station 1;

(A1m, E1m) are the corrected azimuth and pitch angles of image point M at base station 1;

(A2n, E2n) are the corrected azimuth and pitch angles of image point N at base station 2;
2.2) Find the matching point for the second image point P′12 of image point group S1′
If the first image point P′11 found the correct matching point P′2i, perform image point matching for the next image point P′12 of S1′ by the method of step 2.1), starting from the image point P′2(i+1) following the correct matching point;
if the first image point P′11 found no correct matching point, perform image point matching for the next image point P′12 of S1′ by the method of step 2.1), starting from the first image point P′21 of image point group S2′;
2.3) Find matching points for the remaining image points of image point group S1′
Perform the image point matching calculation between the remaining image points of image point group S1′ and image point group S2′ in turn by the method of step 2.2), until all image points of S1′ have been traversed, or an image point P′1s of S1′ and the image point P′2m of S2′ form a correct matching point pair;

where s = 1, 2, …, n.
2. The fast and efficient image point matching method for wide-baseline optical intersection measurement according to claim 1, characterized in that step 1) further comprises correcting the image points of the target image dataset S1, the specific process being:
a) Correct the coordinates P(x, y) of each image point of the target image dataset S1 in the image physical coordinate system with a radial distortion correction model, obtaining the distortion-corrected coordinates P′(x′, y′);
wherein the image physical coordinate system takes the upper-left corner of the image as origin, the horizontal direction as the X axis, and the vertical height direction as the Y axis;
b) Synthesize the azimuth angle A10 and pitch angle E10 of base station 1 with the coordinates P′(x′, y′) to obtain the synthesized azimuth angle A1 and synthesized pitch angle E1;

wherein x′ is the coordinate of the image point in the x direction; y′ is the coordinate of the image point in the y direction;

pix is the single-pixel size of the base station; f is the focal length of the base station lens;

A10 is the azimuth angle at base station 1 of the image point at the center of the image plane of base station 1;

E10 is the pitch angle at base station 1 of the image point at the center of the image plane of base station 1;

c) The corrected image points of the target image dataset S1 are expressed as (A1, E1).
3. The fast and efficient image point matching method for wide-baseline optical intersection measurement according to claim 2, characterized in that, in step a), the radial distortion correction model is expressed as follows:
x = x′(1 + k1·r² + k2·r⁴)

y = y′(1 + k1·r² + k2·r⁴)

wherein r² = x′² + y′²;

k1 and k2 are the radial distortion coefficients.
4. The fast and efficient image point matching method for wide-baseline optical intersection measurement according to claim 3, characterized in that step 1) further comprises correcting the image points of the target image dataset S2, the correction process being the same as for the target image dataset S1.
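The radial model of claim 3 maps the corrected coordinates (x′, y′) to the measured ones (x, y), so applying the correction of claim 2 requires inverting it. A minimal fixed-point sketch (coefficient values illustrative):

```python
def undistort(x, y, k1, k2, iters=10):
    """Invert the radial model x = x'(1 + k1*r^2 + k2*r^4), r^2 = x'^2 + y'^2,
    by fixed-point iteration; returns the corrected coordinates (x', y')."""
    xp, yp = x, y                          # initial guess: no distortion
    for _ in range(iters):
        r2 = xp * xp + yp * yp
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xp, yp = x / factor, y / factor
    return xp, yp
```

With small distortion coefficients the iteration converges in a few steps; round-tripping through the forward model recovers the corrected coordinates.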
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011409685.7A CN112651427B (en) | 2020-12-03 | 2020-12-03 | Image point rapid and efficient matching method for wide-baseline optical intersection measurement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112651427A CN112651427A (en) | 2021-04-13 |
CN112651427B true CN112651427B (en) | 2024-04-12 |