CN110044374B - Image feature-based monocular vision mileage measurement method and odometer - Google Patents

Image feature-based monocular vision mileage measurement method and odometer

Info

Publication number
CN110044374B
CN110044374B (application CN201810044762.XA)
Authority
CN
China
Prior art keywords
image
camera
coordinate system
points
point
Prior art date
Legal status
Active
Application number
CN201810044762.XA
Other languages
Chinese (zh)
Other versions
CN110044374A (en)
Inventor
樊晓东
孟俊华
王飞
唐文平
高成
Current Assignee
Kuanyan Beijing Technology Development Co ltd
Original Assignee
Kuanyan Beijing Technology Development Co ltd
Priority date
Filing date
Publication date
Application filed by Kuanyan Beijing Technology Development Co ltd filed Critical Kuanyan Beijing Technology Development Co ltd
Priority to CN201810044762.XA priority Critical patent/CN110044374B/en
Publication of CN110044374A publication Critical patent/CN110044374A/en
Application granted granted Critical
Publication of CN110044374B publication Critical patent/CN110044374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a method for measuring mileage by monocular vision based on image features, and an odometer. The method comprises the following steps: (1) calibrating the camera; (2) calculating the 2D feature points of two adjacent frames shot along the advancing direction; (3) matching the 2D feature points to find the corresponding feature points in the two frames; (4) calculating the 3D coordinates of the corresponding feature points, and solving the camera pose from the 3D and 2D coordinates of the corresponding feature points to obtain the relative displacement of the camera; (5) performing the same operations on the subsequent frames, and finally accumulating all the displacements to obtain the mileage. Compared with a method based on binocular vision, measuring mileage with monocular vision requires simpler equipment and reduces cost; compared with methods based on sift or Harris corners, the image features are computed faster, have rotation and scale invariance, and can be processed in real time.

Description

Image feature-based monocular vision mileage measurement method and odometer
Technical Field
The invention relates to the technical field of image processing, in particular to a method for measuring mileage by monocular vision based on image features, and to an odometer.
Background
During subway operation, apparent defects of the tunnel structure, which is mainly built of concrete, such as water leakage, cracks and spalling, as well as deformation of the tunnel cross-section, are unavoidable, and their long-term development has irreversible negative effects on tunnel safety. Maintenance of the tunnel structure during subway operation is therefore a necessary means of ensuring the long-term safe operation of the tunnel. The position control of the sensors during detection directly influences the validity of the acquired detection data. At present, the positions of the sensors used to detect most subway tunnel defects are set in advance, so for different tunnel cross-section environments the validity of the data cannot be improved, nor the difficulty of software analysis reduced, by adjusting the positions. In recent years, with the rapid development of computer technology, automatic control theory, embedded development, chip design and sensor technology, automatic tunnel defect detection has become feasible: scene images or image sequences are extracted in real time from a running detection vehicle and processed, effective features of the detected target are extracted, and real-time pose information of the spatial target is acquired, providing support for locating the position and mileage of subsequent tunnel defect images. However, because of the technical limitations of a monocular camera, it is very difficult to obtain the three-dimensional coordinate information of a moving target. Monocular vision can generate depth in three ways. One is perspective geometry: a vanishing point serves as the reference and depth is inferred from the displacement of the target; this mode requires limiting conditions, such as a fixed camera, a fixed background and a constant speed of the person, under which the faster the target moves, the closer it is to the camera.
Another way is through the focal length, by measuring the blurring of the same scene shot at different focal lengths; this method does not work well over the whole image, but the values it produces are relatively accurate. Binocular vision relies on the parallax effect, which is the main cause of the three-dimensional stereoscopic impression; at present, monocular vision also mainly relies on finding a reference object and exploiting parallax to generate three-dimensional depth information. Monocular pose estimation is a three-dimensional scene-structure problem: the triangular geometric relationship between corresponding feature points must be formed by inter-frame motion. Once the triangle is established, the pose and the three-dimensional coordinates of the feature points are solved simultaneously, which is a classic three-dimensional scene-structure problem, so there is no chicken-and-egg problem. Many solutions exist for the three-dimensional scene structure; in the simplest, the essential matrix is estimated and then decomposed to obtain the rotation R and the displacement T of the camera. In binocular stereo vision, because the baseline is fixed and known, the three-dimensional coordinates of the feature points can be triangulated directly, and the inter-frame motion is then a motion-parameter fit between two sets of three-dimensional points. The disadvantage of the binocular approach is that the baseline is fixed and, owing to the size limitations of the carrier, generally not very wide, so the accuracy of the triangulated reconstruction is generally not very high.
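The fixed-baseline triangulation mentioned above can be made concrete. The following is an illustrative sketch only, not part of the claimed method; the focal length and baseline values are hypothetical:

```python
# Binocular depth from disparity under the pinhole model: for a matched
# point pair seen by two cameras with focal length f (pixels) and a fixed,
# known baseline B (metres), depth is Z = f * B / d, where d is the
# disparity in pixels. A narrow baseline B makes Z sensitive to disparity
# error, which is the accuracy limitation discussed above.
def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Triangulated depth of one matched point pair."""
    if disparity_pixels <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_pixels * baseline_m / disparity_pixels

# Hypothetical rig: 700 px focal length, 0.12 m baseline, 10 px disparity.
z = depth_from_disparity(700.0, 0.12, 10.0)  # 8.4 m
```

The narrower the baseline relative to the depth, the smaller the disparity and the worse the reconstruction accuracy, which motivates the monocular approach of the invention.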
Therefore, there is a need to develop a monocular-vision-based mileage calculation method whose image features are computed faster than those of the sift and Harris-corner methods, have rotation and scale invariance, and can be processed in real time.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method for measuring mileage by monocular vision based on image features, with simple equipment and low cost; compared with methods based on sift or Harris corners, its image features are computed faster, have rotation and scale invariance, and can be processed in real time.
In order to solve the above technical problem, the invention adopts the following technical scheme. The method for measuring mileage by monocular vision based on image features specifically comprises the following steps:
(1) calibrating the camera to obtain its internal and external parameters;
(2) calculating the 2D feature points of two adjacent frames shot along the advancing direction;
(3) matching the 2D feature points to find the corresponding feature points in the two frames;
(4) calculating the 3D coordinates of the corresponding feature points, and solving the camera pose from the 3D and 2D coordinates of the corresponding feature points to obtain the relative displacement of the camera;
(5) repeating steps (2) to (4) for the subsequent frames in sequence, calculating for each frame the displacement of the camera relative to the previous frame, and finally accumulating all the displacements to obtain the mileage.
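The accumulation of step (5) can be sketched as follows; the function name and the displacement values are hypothetical, and the per-frame displacement vectors are assumed to come from step (4):

```python
import math

def accumulate_mileage(displacements):
    """Step (5): sum the norms of the camera's frame-to-frame displacements.

    `displacements` is a sequence of (dx, dy, dz) vectors, one per
    consecutive frame pair, as produced by step (4).
    """
    return sum(math.sqrt(dx * dx + dy * dy + dz * dz) for dx, dy, dz in displacements)

# Hypothetical per-frame displacements in metres:
mileage = accumulate_mileage([(0.5, 0.0, 0.0), (0.3, 0.4, 0.0), (0.0, 0.0, 1.2)])
# 0.5 + 0.5 + 1.2 = 2.2 m
```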
By adopting the technical scheme, the mileage is measured by adopting monocular vision, compared with a method based on binocular vision, the method has the advantages of simple equipment and low cost; comparison is based on sift, harri s The angular point method has the advantages of higher image feature calculation speed, rotation scale invariance and real-time processing.
The invention is further improved in that step (1) comprises the following steps:
1-1, obtaining the conversion relations among the image coordinate system, the camera coordinate system and the world coordinate system from the pinhole imaging model;
1-2, shooting a number of images of the checkerboard calibration plate from different viewing angles, extracting the corner points on the calibration-plate images, and obtaining the pixel coordinates and physical coordinates of the corner points from the known checkerboard square size, so as to obtain the homography matrix H of every calibration-plate image;
1-3, solving the internal and external parameters;
1-4, minimizing the reprojection error with the Levenberg-Marquardt algorithm to optimize the internal and external parameters of the camera.
Preferably, the 2D feature points calculated in step (2) are the orb feature points of the two successive frames, obtained as follows: construct an image pyramid and extract key points on each layer according to the fast algorithm; select point pairs around the key points according to the brief algorithm and generate a descriptor by comparing pixel values; adjust the descriptor according to the angle between the key point and the grey-level centroid, so that the descriptor is rotation invariant; finally the orb descriptor is obtained.
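The grey-centroid orientation described above can be sketched as follows, assuming a square intensity patch centred on the key point (the function name is illustrative, not from the patent):

```python
import numpy as np

def centroid_angle(patch):
    """Orientation of a key point from the grey-level centroid of its patch.

    The first-order image moments m10 and m01 locate the intensity centroid;
    the angle of the vector from the patch centre to that centroid is the
    key point orientation. Rotating the brief point pairs by this angle
    before the pixel comparisons makes the descriptor rotation invariant.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m10 = float(np.sum((xs - cx) * patch))  # moment in x about the centre
    m01 = float(np.sum((ys - cy) * patch))  # moment in y about the centre
    return float(np.arctan2(m01, m10))
```

For example, a patch whose mass lies to the right of the centre yields an angle of 0, and one whose mass lies below the centre yields pi/2.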
The 2D feature point matching of step (3) specifically comprises the following steps:
3-1, establishing a k-d tree for the feature point set of the image: select the dimension k with the maximum variance in the data set; then select the feature point whose value on dimension k is the median m as the splitting node; points whose value on dimension k is smaller than m are divided into the left subspace, and points whose value is larger than m into the right subspace; the same operation is performed on the left and right subspaces respectively until they can no longer be divided, giving the k-d tree;
3-2, performing the feature matching search with the bbf search algorithm: a binary search is performed starting from the root node of the k-d tree, and the nodes along the query path are ordered by their respective distances to the query point; during backtracking, searching starts from the tree node with the highest priority, and when all nodes have been checked or the running-time limit is exceeded, the point with the shortest distance is taken as the nearest matching feature point.
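Steps 3-1 and 3-2 can be sketched as follows. This is a simplified illustration: the node layout, the function names and the `max_checks` cut-off (standing in for the running-time limit) are assumptions, not details from the patent:

```python
import heapq

def build_kdtree(points):
    """Step 3-1: recursively split on the dimension of maximum variance,
    with the point whose value on that dimension is the median as the node."""
    if not points:
        return None
    k = len(points[0])
    def variance(axis):
        vals = [p[axis] for p in points]
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)
    axis = max(range(k), key=variance)
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid]),
            "right": build_kdtree(points[mid + 1:])}

def bbf_nearest(tree, query, max_checks=200):
    """Step 3-2: best-bin-first search. Unexplored branches are kept in a
    priority queue ordered by their distance to the query; the search stops
    when every node has been checked or `max_checks` visits are exceeded,
    and the closest point seen so far is returned."""
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, query))
    best, best_d = None, float("inf")
    heap = [(0.0, 0, tree)]  # (branch distance, tiebreak, node)
    tiebreak, checks = 1, 0
    while heap and checks < max_checks:
        _, _, node = heapq.heappop(heap)
        while node is not None:
            checks += 1
            d = dist2(node["point"])
            if d < best_d:
                best, best_d = node["point"], d
            axis = node["axis"]
            diff = query[axis] - node["point"][axis]
            near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
            if far is not None:
                heapq.heappush(heap, (diff * diff, tiebreak, far))
                tiebreak += 1
            node = near
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
nearest = bbf_nearest(build_kdtree(pts), (9, 2))
```

With a small point set the cut-off never triggers and the search is exact; on large descriptor sets the cut-off trades accuracy for bounded running time, as the text describes.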
The step (4) specifically comprises the following steps:
4-1, according to the pixel size and the physical size of the image, establishing a coordinate system with the upper left corner of the image as the origin and the area shot by the image as a plane, to obtain the 3D coordinates of the feature points;
4-2, from the camera intrinsics, the 3D coordinates of the feature points in the previous frame and the 2D coordinates of the corresponding feature points in the next frame, solving the pose of the camera when shooting the next frame through the coordinate conversion relationship, and hence the displacement of the camera between the positions where the two frames were shot.
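Because the shot area is modelled as a plane in step 4-1, one way to realise step 4-2 is to decompose a plane-to-image homography with the known intrinsics, in the spirit of formula (7). This is a hedged sketch, not the patent's exact solver; the intrinsic values and the pose below are hypothetical round-trip data:

```python
import numpy as np

def extrinsics_from_homography(A, H):
    """Recover [r1 r2 t] from a plane-to-image homography, using
    H = lambda * A * [r1 r2 t] with lambda fixed by ||r1|| = 1.
    The returned t is the camera displacement relative to the plane."""
    M = np.linalg.inv(A) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])
    r1 = lam * M[:, 0]
    r2 = lam * M[:, 1]
    r3 = np.cross(r1, r2)       # complete the rotation matrix
    t = lam * M[:, 2]
    return np.column_stack([r1, r2, r3]), t

# Round-trip check with hypothetical intrinsics and pose:
A = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
ang = 0.1
R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
              [np.sin(ang),  np.cos(ang), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.2, -0.1, 1.5])
H = A @ np.column_stack([R[:, 0], R[:, 1], t])
R_rec, t_rec = extrinsics_from_homography(A, H)
```

The difference between the `t` vectors recovered for two successive frames gives the per-frame displacement accumulated in step (5).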
The conversion relations among the image coordinate system, the camera coordinate system and the world coordinate system in step 1-1 are specifically as follows:
(1.11) A point whose coordinates in the world coordinate system are $[X_w, Y_w, Z_w]^T$ has coordinates $[X_c, Y_c, Z_c]^T$ in the camera coordinate system; the two are related by the corresponding rotation and translation,

$$[X_c, Y_c, Z_c]^T = R\,[X_w, Y_w, Z_w]^T + T,$$

where $R$ is the rotation matrix and $T$ is the displacement between the two coordinate origins, so that

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}. \quad (1)$$

(1.12) After the point is imaged by the camera, its coordinates in the system expressed in the physical size of the image are $[x, y]$; by similar triangles,

$$x = \frac{f X_c}{Z_c}, \qquad y = \frac{f Y_c}{Z_c},$$

where $f$ is the focal length of the camera, i.e.

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}. \quad (2)$$

(1.13) The relationship between the image pixel coordinate system and the image physical-size coordinate system is given by formula (3). If the coordinates of the point expressed in image pixels are $[u, v]$, the correspondence is

$$u = \frac{x}{d_x} + u_0, \qquad v = \frac{y}{d_y} + v_0,$$

where $(u_0, v_0)$ is the centre of the image in pixels, $d_x$ is the physical size of a pixel along the x-axis and $d_y$ its physical size along the y-axis, i.e.

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}. \quad (3)$$

(1.14) Combining the relations (1), (2) and (3):

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/d_x & 0 & u_0 & 0 \\ 0 & f/d_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}. \quad (4)$$

(1.15) Adding the skewness parameter $C$, finally

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/d_x & C & u_0 & 0 \\ 0 & f/d_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}. \quad (5)$$

(1.16) Since the checkerboard is a plane, set $Z_w = 0$ and let $A$ denote the camera matrix,

$$A = \begin{bmatrix} f/d_x & C & u_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}.$$

With $r_1, r_2, r_3$ the column vectors of $R$ and $t$ the translation column vector, formula (5) can then be written as

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A \begin{bmatrix} r_1 & r_2 & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ 1 \end{bmatrix}. \quad (6)$$

To solve the homography matrix $H$, several images of the checkerboard calibration plate are shot from different viewing angles and the corner points on the calibration-plate images are extracted. Since the size of the checkerboard squares is known, the pixel coordinates and physical coordinates of the corner points can be obtained, and by the least-squares method the homography matrix $H = [h_1\ h_2\ h_3]$ of every calibration-plate image can be found. According to formula (6), with $\lambda$ denoting a constant,

$$[h_1\ h_2\ h_3] = \lambda A [r_1\ r_2\ t]. \quad (7)$$

Let $\alpha$, $\beta$ and $\gamma$ be the rotation angles about the x-axis, y-axis and z-axis respectively; then the rotation matrix is

$$R = \begin{bmatrix} \cos\gamma\cos\beta + \sin\gamma\sin\alpha\sin\beta & \sin\gamma\cos\alpha & -\cos\gamma\sin\beta + \sin\gamma\sin\alpha\cos\beta \\ -\sin\gamma\cos\beta + \cos\gamma\sin\alpha\sin\beta & \cos\gamma\cos\alpha & \sin\gamma\sin\beta + \cos\gamma\sin\alpha\cos\beta \\ \cos\alpha\sin\beta & -\sin\alpha & \cos\alpha\cos\beta \end{bmatrix},$$

so that

$$r_1 = \begin{bmatrix} \cos\gamma\cos\beta + \sin\gamma\sin\alpha\sin\beta \\ -\sin\gamma\cos\beta + \cos\gamma\sin\alpha\sin\beta \\ \cos\alpha\sin\beta \end{bmatrix}, \qquad r_2 = \begin{bmatrix} \sin\gamma\cos\alpha \\ \cos\gamma\cos\alpha \\ -\sin\alpha \end{bmatrix}.$$

This gives $\|r_1\|^2 = (\cos\gamma\cos\beta + \sin\gamma\sin\alpha\sin\beta)^2 + (-\sin\gamma\cos\beta + \cos\gamma\sin\alpha\sin\beta)^2 + (\cos\alpha\sin\beta)^2 = 1$ and $\|r_2\|^2 = (\sin\gamma\cos\alpha)^2 + (\cos\gamma\cos\alpha)^2 + (-\sin\alpha)^2 = 1$, so

$$\|r_1\| = \|r_2\| = 1. \quad (8)$$

Calculating the dot product,

$$r_1 \cdot r_2 = (\cos\gamma\cos\beta + \sin\gamma\sin\alpha\sin\beta)(\sin\gamma\cos\alpha) + (-\sin\gamma\cos\beta + \cos\gamma\sin\alpha\sin\beta)(\cos\gamma\cos\alpha) + (\cos\alpha\sin\beta)(-\sin\alpha) = 0. \quad (9)$$

From formulas (7), (8) and (9) above, since $r_1 = \frac{1}{\lambda}A^{-1}h_1$ and $r_2 = \frac{1}{\lambda}A^{-1}h_2$, the orthogonality (9) gives

$$h_1^T A^{-T} A^{-1} h_2 = 0, \quad (10)$$

and the equal norms (8) give

$$h_1^T A^{-T} A^{-1} h_1 = h_2^T A^{-T} A^{-1} h_2. \quad (11)$$

A system of equations is established from formulas (10) and (11), and the homography-matrix values of the groups obtained in step 1-2 are substituted into it to obtain the intrinsic matrix $A$. Let

$$B = A^{-T} A^{-1} = \begin{bmatrix} B_{11} & B_{12} & B_{13} \\ B_{12} & B_{22} & B_{23} \\ B_{13} & B_{23} & B_{33} \end{bmatrix},$$

and with $h_i = [h_{i1}, h_{i2}, h_{i3}]^T$ there is

$$h_i^T B h_j = v_{ij}^T b,$$

where $b = [B_{11}, B_{12}, B_{22}, B_{13}, B_{23}, B_{33}]^T$ and $v_{ij} = [h_{i1}h_{j1},\ h_{i1}h_{j2} + h_{i2}h_{j1},\ h_{i2}h_{j2},\ h_{i3}h_{j1} + h_{i1}h_{j3},\ h_{i3}h_{j2} + h_{i2}h_{j3},\ h_{i3}h_{j3}]^T$. Therefore formulas (10) and (11) can be written as

$$\begin{bmatrix} v_{12}^T \\ (v_{11} - v_{22})^T \end{bmatrix} b = 0. \quad (12)$$

Substituting the values of all the homography matrices, $b$ is solved, and then the value of every element of the intrinsic matrix $A$ and the external parameters are solved.
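The linear system built from formulas (10) and (11) can be sketched in code. This is an illustrative implementation with Zhang-style closed-form extraction of A; the synthetic views and intrinsic values below are hypothetical round-trip data, not from the patent:

```python
import numpy as np

def v_ij(H, i, j):
    """Row vector v_ij built from columns h_i, h_j of a homography H
    (indices 0-based here; the text uses 1-based h_1, h_2)."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def intrinsics_from_homographies(Hs):
    """Stack the two constraints (10) and (11) for every calibration-plate
    view, solve V b = 0 for b = [B11, B12, B22, B13, B23, B33] by SVD,
    then extract the entries of A in closed form."""
    V = []
    for H in Hs:
        V.append(v_ij(H, 0, 1))                  # formula (10)
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))  # formula (11)
    _, _, vt = np.linalg.svd(np.asarray(V))
    b = vt[-1]
    if b[0] < 0:                                 # B = A^{-T}A^{-1} has B11 > 0
        b = -b
    B11, B12, B22, B13, B23, B33 = b
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
    alpha = np.sqrt(lam / B11)                            # f/d_x
    beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))    # f/d_y
    gamma = -B12 * alpha ** 2 * beta / lam                # skew C
    u0 = gamma * v0 / beta - B13 * alpha ** 2 / lam
    return np.array([[alpha, gamma, u0], [0.0, beta, v0], [0.0, 0.0, 1.0]])

# Round-trip check on synthetic data: build H = A [r1 r2 t] for three
# hypothetical poses and recover A.
A_true = np.array([[800.0, 0.0, 320.0], [0.0, 820.0, 240.0], [0.0, 0.0, 1.0]])
def rot_x(a):
    return np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
def rot_y(a):
    return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
views = [(rot_x(0.3), [0.1, 0.2, 2.0]),
         (rot_y(-0.4), [-0.2, 0.1, 2.5]),
         (rot_x(0.2) @ rot_y(0.3), [0.0, -0.1, 3.0])]
Hs = [A_true @ np.column_stack([R[:, 0], R[:, 1], t]) for R, t in views]
A_rec = intrinsics_from_homographies(Hs)
```

At least three distinct views are needed for the stacked system to determine b up to scale; in practice the linear estimate is then refined by the Levenberg-Marquardt optimisation of step 1-4.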
The invention also provides a monocular vision odometer based on image features, which calculates the mileage using the above method for measuring mileage by monocular vision based on image features.
Compared with the prior art, the invention has the following beneficial effects: compared with a method based on binocular vision, the equipment is simple and the cost low; compared with methods based on sift or Harris corners, the image features are computed faster, have rotation and scale invariance, and can be processed in real time.
Drawings
FIG. 1 is a diagram of the pinhole imaging model used in the image-feature-based monocular vision mileage measurement method of the present invention;
Fig. 2 is a schematic diagram of two adjacent frames shot in succession in the image-feature-based monocular vision mileage measurement method of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention.
Example 1: the image feature-based monocular vision mileage measuring method is applied to vehicle-mounted tunnel detection equipment, and specifically comprises the following steps:
(1) Firstly, calibrating a camera to obtain parameters of the camera;
(2) Sequentially calculating 2D characteristic points of front and rear frames in the advancing direction of the vehicle;
(3) Matching the 2D characteristic points to find out corresponding characteristic points;
(4) Calculating the 3D coordinates of the characteristic points, and calculating the posture according to the 3D coordinates and the 2D coordinates of the characteristic points to obtain relative displacement;
(5) Sequentially adopting the same method for the subsequent measured frames, calculating the displacement of the camera relative to the previous frame when shooting each frame, and finally accumulating all the displacements to obtain the mileage;
as shown in fig. 1, the calibrating the camera by using the checkerboard calibration algorithm in step (1), and the acquiring the camera internal reference specifically includes the following steps:
(1.1) according to the pinhole imaging model, the transformation relation under an image coordinate system, a camera coordinate system and a world coordinate system is as follows:
(1.11) in the world coordinate system, the coordinate of a certain point is [ X ] w ,Y w ,Z w ]Passing under the camera coordinate system, the corner point coordinate is [ X ] c ,Y c ,Z c ]By the corresponding transformation relation of rotation and translation,
Figure GDA0001601006210000061
Figure GDA0001601006210000071
where R is the rotation matrix and T is the displacement of two coordinate origins, then
Figure GDA0001601006210000072
(1.12) after the point is imaged by the camera, the point is [ x, y ] in the coordinate system represented by the physical size of the image]According to the similar trigonometric relationship, there are
Figure GDA0001601006210000073
Where f is the focal length of the camera, i.e.
Figure GDA0001601006210000074
(1.13) the relationship between the image pixel size coordinate system and the image physical size coordinate system is shown in formula (3), and the coordinate system of the point expressed in the image pixel size is [ u, v [ ]]Then, there is a corresponding relationship:
Figure GDA0001601006210000075
wherein (u) 0 ,v 0 ) Center of image pixel, d x Is the physical dimension of a pixel in the x-axis direction, d y Is the physical size of a pixel in the y-axis direction
Figure GDA0001601006210000076
(1.14) by combining the relationships of the above formulae (1), (2) and (3):
Figure GDA0001601006210000077
Figure GDA0001601006210000078
(1.15) consideration of the addition of the skewness parameter C, ultimately
Figure GDA0001601006210000079
(1.16) since the checkerboard calibration plate is a plane, Z is set w =0, let a denote the camera matrix,
Figure GDA0001601006210000081
r 1 ,r 2 ,r 3 for a column vector of R, t is a translation column vector, then equation (5) can be written as
Figure GDA0001601006210000082
And (1.2) solving the homography matrix H, shooting a plurality of chessboard pattern calibration plates under different visual angles, and extracting angular points on the images of the calibration plates. And the size of the checkerboard is known, so that the pixel coordinates and the physical coordinates of the corner points can be obtained. By the least square method, the homography matrix H of all calibration plate images can be found.
(1.3) homography matrix H = [ H = 1 h 2 h 3 ]According to the formula (6), let λ denote a constant, and [ h ] can be obtained 1 h 2 h 3 ]=λA[r 1 r 2 t]; (7)
Let alpha, beta and gamma be the rotation angles in the directions of x-axis, y-axis and z-axis, respectively, then the rotation matrix
Figure GDA0001601006210000083
Figure GDA0001601006210000084
Can obtain
Figure GDA0001601006210000085
And
Figure GDA0001601006210000086
to obtain | | | r 1 ||=(cosγ cosβ+sinγ sinα sinβ) 2 +(-sinγ cosβ+cosγ sinαs inβ) 2 +(cosα sinβ) 2 =1, and | | | r 2 ||=(sinγ cosα) 2 +(cosγ cosα) 2 +(-sinα) 2 =1, so | | | r 1 ||=||r 2 ||=1。 (8)
Calculating r 1 ·r 2 =(cosγ cosβ+sinγ sinα sinβ)(sinγ cosα)+(-sinγ cosβ+cosγ sinα sinβ)(cosγ cosα)+(cosα sinβ)(-sinα)=0, (9)
From the above equations (7), (8) and (9), it is possible to obtain:
Figure GDA0001601006210000087
Figure GDA0001601006210000088
then h is obtained 1 T A -T A -1 h 1 =h 2 T A -T A -1 h 2 。 (11)
(1.4) solving internal and external parameters: establishing an equation set according to the formula (10) and the formula (11), and substituting the homography matrix values obtained in the step (1.2) into the equation set to obtain an internal reference matrix A;
order to
Figure GDA0001601006210000089
Is provided with h i =[h i1 ,h i2 ,h i3 ] T Then there is
Figure GDA00016010062100000810
Wherein B = [ B = 11 ,B 12 ,B 22 ,B 13 ,B 23 ,B 33 ] T ,v ij =[h i1 h j1 ,h i1 h j2 +h i2 h j1 ,h i2 h j2 ,h i3 h j1 +h i1 h j3 ,h i3 h j2 +h i2 h j3 ,h i3 h j3 ] T (ii) a Therefore, the above equations (10) and (11) can be written as
Figure GDA0001601006210000091
Taking in all the values of the homography matrix, solving b, and then solving each element value and external parameter in the internal parameter matrix A;
(1.5) solving the minimized projection error through a Levenberg-Marquardt algorithm to optimize internal and external parameters of the camera; extracting pixel points with larger difference values with the pixel points in the surrounding area as key points according to a fast algorithm in the step 2); and selecting point pairs around the key points according to a brief algorithm, and generating descriptors by comparing pixel values.
Example 2: the method for measuring the mileage based on the monocular vision of the image characteristics specifically comprises the following steps:
1) Calibrating the camera according to a checkerboard calibration algorithm to obtain camera internal parameters;
2) Calculating image characteristics of the front frame and the rear frame: firstly, constructing an image pyramid, and extracting pixel points with larger difference values with pixel points in surrounding areas on each layer as key points; selecting point pairs around the key points, and generating descriptors by comparing pixel values; adjusting the descriptor according to an included angle between the key point and the gray scale centroid, so that the descriptor has rotation invariance; finally, obtaining a descriptor of the image characteristics;
3) Matching the feature points on the front frame and the rear frame to obtain corresponding feature points: establishing a k-d tree for a feature point set on an image: selecting a dimension k having a maximum variance in the dataset; then selecting a characteristic point with a value of a median m on a k dimension as a split node; dividing the value of the dimension k smaller than m to obtain a left subspace, and dividing the value of the dimension k larger than m to a right subspace; respectively carrying out the operations on the left subspace and the right subspace until the left subspace and the right subspace can not be divided, and obtaining a k-d tree; and (3) performing feature matching search by using a bbf search algorithm: starting from the root node of the k-d tree, performing binary search, and sequencing the nodes on the query path according to the respective distances from the query points; when backtracking is carried out, starting from a tree node with a high priority, and when all nodes are checked or exceed the running time limit, taking the best result found at present as a nearest neighbor matching feature point;
4) Calculating the 3D coordinates of the matched feature points: according to the pixel size and the physical size of an image, establishing a coordinate system by taking the upper left corner of the image as a coordinate origin and taking the area shot by the image as a plane, and obtaining a 3D coordinate of a characteristic point;
5) Calculating the moving distance of the camera relative to the previous frame when shooting the next frame according to the 3D coordinates and the 2D coordinates of the feature points;
6) And sequentially calculating the poses of the cameras under all the frames to obtain the displacement and obtain the mileage.
As shown in fig. 1, the camera is calibrated in step 1) with the checkerboard calibration algorithm; the camera intrinsics are obtained exactly as in steps 1-1 to 1-4 of the disclosure above (formulas (1) to (11)), followed by Levenberg-Marquardt minimization of the reprojection error to optimize the internal and external parameters of the camera. In step 2), pixel points whose values differ strongly from those of the surrounding area are extracted as key points according to the fast algorithm, point pairs are selected around the key points according to the brief algorithm, and the descriptor is generated by comparing pixel values. The step 5) specifically comprises: using the camera intrinsics obtained in step 1), the 3D coordinates of the feature points obtained in step 4) and the 2D pixel coordinates of the corresponding feature points in the next frame, the pose of the camera when shooting the next frame is solved; its displacement component is the displacement of the camera between the positions where the two frames were shot. The step 6) specifically comprises: the operations of steps 2) to 5) are repeated in sequence on the images shot along the advancing direction, and the displacement components of the camera between successive frames are accumulated in turn to obtain the mileage.
Example 3: The image-feature-based monocular vision mileage measurement method specifically comprises the following steps:
1) Calibrating the camera according to a checkerboard calibration algorithm to obtain camera internal parameters;
2) Calculating image features for two successive frames: first construct an image pyramid and, on each layer, extract pixels whose intensity differs markedly from that of the surrounding pixels as key points; select point pairs around each key point and generate a descriptor by comparing pixel values; adjust the descriptor according to the angle between the key point and the gray-scale centroid so that it is rotation invariant; finally obtain the descriptor of the image features;
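The gray-scale-centroid step above can be sketched in plain Python. The square `patch` of intensities is a hypothetical stand-in for the region around a key point; the orientation is the angle of the vector from the patch centre to its intensity centroid, computed from the image moments m10 and m01:

```python
import math

def centroid_orientation(patch):
    """Angle between a key point (patch centre) and the gray-scale
    centroid of its surrounding patch, used to orient descriptors.
    `patch` is a square list of lists of pixel intensities."""
    h, w = len(patch), len(patch[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    m10 = m01 = 0.0
    for y, row in enumerate(patch):
        for x, val in enumerate(row):
            m10 += (x - cx) * val   # intensity-weighted x offset
            m01 += (y - cy) * val   # intensity-weighted y offset
    return math.atan2(m01, m10)     # orientation in radians

# A patch brighter on its right side: the centroid lies to the
# right of the centre, so the orientation is 0 rad.
patch = [[0, 0, 10],
         [0, 0, 10],
         [0, 0, 10]]
angle = centroid_orientation(patch)
```

Rotating the point pairs of the descriptor by this angle before comparison is what makes the resulting descriptor rotation invariant.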
3) Matching the feature points of the two frames to obtain corresponding feature points. Build a k-d tree for the feature point set of an image: select the dimension k with the largest variance in the data set; choose the feature point whose value on dimension k is the median m as the splitting node; points whose value on dimension k is less than m are assigned to the left subspace and points whose value is greater than m to the right subspace; repeat the operation on both subspaces until they can no longer be divided, yielding the k-d tree. Then perform the feature-matching search with the bbf search algorithm: starting from the root node of the k-d tree, perform a binary search, ordering the nodes on the query path by their distance to the query point; during backtracking, start from the tree node with the highest priority, and when all nodes have been checked or the running-time limit is exceeded, take the best result found so far as the nearest-neighbor matching feature point;
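The k-d tree construction and best-bin-first search of step 3) can be sketched as follows. This is a simplified version: the splitting axis cycles with depth rather than being chosen by maximum variance, and the squared distance to the splitting plane serves as the backtracking priority:

```python
import heapq

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree, splitting each level on the
    median of the current axis (axes cycle with depth here)."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def bbf_nearest(tree, query, max_checks=200):
    """Best-bin-first search: descend toward the query, keep the
    unexplored far branches in a priority queue ordered by distance
    to the splitting plane, and stop after max_checks node visits."""
    best, best_d = None, float("inf")
    heap = [(0.0, 0, tree)]   # (priority, tiebreak, node)
    tiebreak, checks = 0, 0
    while heap and checks < max_checks:
        _, _, node = heapq.heappop(heap)
        while node is not None:
            checks += 1
            d = sum((a - b) ** 2 for a, b in zip(node["point"], query))
            if d < best_d:
                best, best_d = node["point"], d
            axis = node["axis"]
            diff = query[axis] - node["point"][axis]
            near, far = ((node["left"], node["right"]) if diff < 0
                         else (node["right"], node["left"]))
            if far is not None:
                tiebreak += 1
                heapq.heappush(heap, (diff * diff, tiebreak, far))
            node = near
    return best
```

With a generous `max_checks` the search is exhaustive and exact; in the odometer, capping it trades a small matching-accuracy loss for bounded running time, as the text describes.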
4) Calculating the 3D coordinates of the matched feature points: according to the pixel size and the physical size of an image, establishing a coordinate system by taking the upper left corner of the image as a coordinate origin and taking the area shot by the image as a plane, and obtaining a 3D coordinate of a characteristic point;
5) Calculating the moving distance of the camera relative to the previous frame when shooting the next frame according to the 3D coordinates and the 2D coordinates of the feature points;
6) And sequentially calculating the poses of the cameras under all the frames to obtain the displacement and obtain the mileage.
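Step 6) reduces to summing the norms of the per-frame-pair camera displacement vectors. A minimal sketch, with made-up displacement values purely for illustration:

```python
import math

def accumulate_mileage(displacements):
    """Mileage = sum of the norms of the camera displacement vectors
    between consecutive frames (step 6). Each entry is the
    translation (dx, dy, dz) between two successive shots."""
    return sum(math.sqrt(dx * dx + dy * dy + dz * dz)
               for dx, dy, dz in displacements)

# Three frame-to-frame translations, mostly along the travel axis.
mileage = accumulate_mileage([(0.5, 0.0, 0.0),
                              (0.3, 0.4, 0.0),
                              (1.0, 0.0, 0.0)])
```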
As shown in fig. 1, calibrating the camera with the checkerboard calibration algorithm in step 1) to obtain the camera intrinsics specifically includes the following steps:
(1.1) according to the pinhole imaging model, the transformation relation under an image coordinate system, a camera coordinate system and a world coordinate system is as follows:
(1.11) In the world coordinate system a point has coordinates [Xw, Yw, Zw]; in the camera coordinate system the same point has coordinates [Xc, Yc, Zc]. The two are related by a rotation and a translation:
[Xc, Yc, Zc]^T = R [Xw, Yw, Zw]^T + T
where R is the rotation matrix and T is the displacement between the two coordinate origins, so that in homogeneous form
[Xc, Yc, Zc, 1]^T = [R T; 0^T 1] [Xw, Yw, Zw, 1]^T (1)
(1.12) After the point is imaged by the camera, its coordinates in the coordinate system expressed in the physical size of the image are [x, y]. By similar triangles,
x / Xc = y / Yc = f / Zc
where f is the focal length of the camera, i.e.
x = f Xc / Zc, y = f Yc / Zc (2)
(1.13) The relationship between the image pixel coordinate system and the image physical-size coordinate system is given by equation (3). With the point expressed in pixel coordinates as [u, v], the correspondence is
u = x / dx + u0, v = y / dy + v0
where (u0, v0) is the centre of the image pixels, dx is the physical size of one pixel along the x axis and dy along the y axis, i.e. in homogeneous form
[u, v, 1]^T = [1/dx 0 u0; 0 1/dy v0; 0 0 1] [x, y, 1]^T (3)
(1.14) Combining the relationships of equations (1), (2) and (3):
Zc [u, v, 1]^T = [1/dx 0 u0; 0 1/dy v0; 0 0 1] [f 0 0 0; 0 f 0 0; 0 0 1 0] [R T; 0^T 1] [Xw, Yw, Zw, 1]^T
= [fx 0 u0 0; 0 fy v0 0; 0 0 1 0] [R T; 0^T 1] [Xw, Yw, Zw, 1]^T (4)
with fx = f/dx and fy = f/dy.
(1.15) Adding the skewness parameter C finally gives
Zc [u, v, 1]^T = [fx C u0 0; 0 fy v0 0; 0 0 1 0] [R T; 0^T 1] [Xw, Yw, Zw, 1]^T (5)
(1.16) Since the checkerboard calibration plate is a plane, set Zw = 0, and let A denote the camera matrix
A = [fx C u0; 0 fy v0; 0 0 1].
With r1, r2, r3 the column vectors of R and t the translation column vector, equation (5) can be written as
Zc [u, v, 1]^T = A [r1 r2 t] [Xw, Yw, 1]^T (6)
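The projection chain of equations (1) through (6) can be sketched numerically. The intrinsic values below (fx = fy = 800 pixels, principal point (320, 240), zero skew) are illustrative assumptions, not values from the patent:

```python
def project_point(A, R, t, Pw):
    """Project a world point via Zc [u, v, 1]^T = A [R|t] Pw
    (equations (4)-(6)): A is the 3x3 intrinsic matrix, R the 3x3
    rotation, t the translation, Pw = [Xw, Yw, Zw]."""
    # Camera coordinates: Pc = R * Pw + t  (equation (1))
    Pc = [sum(R[i][j] * Pw[j] for j in range(3)) + t[i] for i in range(3)]
    # Homogeneous pixel coordinates A * Pc, then divide by Zc
    uvw = [sum(A[i][j] * Pc[j] for j in range(3)) for i in range(3)]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

A = [[800, 0, 320],
     [0, 800, 240],
     [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # identity pose
t = [0, 0, 0]
u, v = project_point(A, R, t, [0.1, 0.05, 2.0])
```

A point on the optical axis, e.g. [0, 0, 1], projects to the principal point (320, 240), as equation (4) requires.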
(1.2) Solving the homography matrix H: the checkerboard calibration plate is photographed from several different viewing angles and the corner points are extracted from each calibration-plate image; since the checkerboard square size is known, the pixel coordinates and physical coordinates of the corner points are obtained, and the homography matrix H of each calibration-plate image is then solved by the least-squares method;
(1.3) Writing the homography matrix as H = [h1 h2 h3] and letting λ denote a constant, equation (6) gives [h1 h2 h3] = λ A [r1 r2 t]; (7)
Let α, β and γ be the rotation angles about the x, y and z axes respectively; the rotation matrix is then
R = [cosγ cosβ + sinγ sinα sinβ, sinγ cosα, -sinβ cosγ + cosβ sinα sinγ;
-sinγ cosβ + cosγ sinα sinβ, cosγ cosα, sinβ sinγ + cosβ sinα cosγ;
cosα sinβ, -sinα, cosβ cosα]
whose first two columns are
r1 = [cosγ cosβ + sinγ sinα sinβ, -sinγ cosβ + cosγ sinα sinβ, cosα sinβ]^T
and
r2 = [sinγ cosα, cosγ cosα, -sinα]^T.
This gives ||r1||^2 = (cosγ cosβ + sinγ sinα sinβ)^2 + (-sinγ cosβ + cosγ sinα sinβ)^2 + (cosα sinβ)^2 = 1 and ||r2||^2 = (sinγ cosα)^2 + (cosγ cosα)^2 + (-sinα)^2 = 1, so ||r1|| = ||r2|| = 1 (8)
Calculating the dot product, r1 · r2 = (cosγ cosβ + sinγ sinα sinβ)(sinγ cosα) + (-sinγ cosβ + cosγ sinα sinβ)(cosγ cosα) + (cosα sinβ)(-sinα) = 0, (9)
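Equations (8) and (9) can be checked numerically by building r1 and r2 from the trigonometric expansions above and verifying unit length and orthogonality; the angle values below are arbitrary:

```python
import math

def rotation_first_two_columns(alpha, beta, gamma):
    """First two columns r1, r2 of the rotation matrix, using the
    same expansion as in equations (8) and (9)."""
    sa, ca = math.sin(alpha), math.cos(alpha)
    sb, cb = math.sin(beta), math.cos(beta)
    sg, cg = math.sin(gamma), math.cos(gamma)
    r1 = (cg * cb + sg * sa * sb, -sg * cb + cg * sa * sb, ca * sb)
    r2 = (sg * ca, cg * ca, -sa)
    return r1, r2

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# For any angles the two columns stay unit length and orthogonal,
# which is exactly what equations (8) and (9) assert.
r1, r2 = rotation_first_two_columns(0.3, -0.7, 1.1)
```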
From the above equations (7), (8) and (9): r1 = (1/λ) A^-1 h1 and r2 = (1/λ) A^-1 h2, so r1 · r2 = 0 gives
h1^T A^-T A^-1 h2 = 0, (10)
and ||r1|| = ||r2|| gives
h1^T A^-T A^-1 h1 = h2^T A^-T A^-1 h2 (11)
(1.4) Solving the internal and external parameters: establish a system of equations from equations (10) and (11), and substitute the homography matrices obtained in step (1.2) to solve for the intrinsic matrix A.
Let
B = A^-T A^-1 = [B11 B12 B13; B12 B22 B23; B13 B23 B33]
and let h_i = [h_i1, h_i2, h_i3]^T denote the i-th column of H; then
h_i^T B h_j = v_ij^T b
where b = [B11, B12, B22, B13, B23, B33]^T and v_ij = [h_i1 h_j1, h_i1 h_j2 + h_i2 h_j1, h_i2 h_j2, h_i3 h_j1 + h_i1 h_j3, h_i3 h_j2 + h_i2 h_j3, h_i3 h_j3]^T. Equations (10) and (11) can therefore be written as
[v_12^T; (v_11 - v_22)^T] b = 0.
Substituting the v_ij values from all the homography matrices, solve for b; each element value of the intrinsic matrix A, and then the external parameters, follow.
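The identity h_i^T B h_j = v_ij^T b, which lets equations (10) and (11) be stacked into the linear system V b = 0, can be verified numerically; the H and b values below are arbitrary illustrative numbers:

```python
def v_ij(H, i, j):
    """Row vector v_ij built from columns i and j of the homography H
    (1-based indices, matching the text), so that h_i^T B h_j = v_ij . b
    for a symmetric B encoded as b = [B11, B12, B22, B13, B23, B33]."""
    hi = [H[r][i - 1] for r in range(3)]
    hj = [H[r][j - 1] for r in range(3)]
    return [hi[0] * hj[0],
            hi[0] * hj[1] + hi[1] * hj[0],
            hi[1] * hj[1],
            hi[2] * hj[0] + hi[0] * hj[2],
            hi[2] * hj[1] + hi[1] * hj[2],
            hi[2] * hj[2]]

def quad_form(H, i, j, b):
    """h_i^T B h_j evaluated directly from the symmetric B in b."""
    B11, B12, B22, B13, B23, B33 = b
    B = [[B11, B12, B13], [B12, B22, B23], [B13, B23, B33]]
    hi = [H[r][i - 1] for r in range(3)]
    hj = [H[r][j - 1] for r in range(3)]
    return sum(hi[r] * B[r][c] * hj[c] for r in range(3) for c in range(3))

# Any H and symmetric B satisfy the identity.
H = [[2.0, -1.0, 0.5],
     [0.3, 1.5, -0.2],
     [0.1, 0.4, 1.0]]
b = [1.2, -0.3, 0.9, 0.2, -0.1, 1.1]
lhs = quad_form(H, 1, 2, b)
rhs = sum(v * w for v, w in zip(v_ij(H, 1, 2), b))
```

In the calibration itself, b is of course the unknown: stacking v_12^T and (v_11 - v_22)^T from every calibration image forms V, and b is the null vector of V.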
(1.5) Minimizing the reprojection error with the Levenberg-Marquardt algorithm to optimize the internal and external camera parameters. In step 2), pixels whose intensity differs markedly from that of the surrounding pixels are extracted as key points by the fast algorithm; point pairs are selected around each key point according to the brief algorithm, and descriptors are generated by comparing their pixel values. Step 5) specifically comprises: using the camera intrinsics obtained in step 1), the 3D coordinates of the feature points obtained in step 4), and the 2D pixel coordinates of the same feature points in the next frame, the camera pose at the next frame is solved; its displacement component represents the camera displacement between the positions at which the two frames were captured. Step 6) specifically comprises: repeating steps 2) to 5) on successive images captured along the direction of travel, and accumulating the inter-frame displacement components to obtain the mileage. The image-feature-based monocular visual odometer is used for tunnel detection. In step (3), the most significant feature detection result on the target is used as the initial condition for feature matching: the code of the target surface visible in the field of view is determined, and feature matching starts from that surface code as the initial state.
Example 4
A monocular visual odometer based on image features is mounted on a vehicle-borne detection platform for tunnel detection; the program it carries uses the methods of embodiments 1 to 3.
The following table shows the pose six parameters of the monocular vision measuring camera:
TABLE 1 pose six parameters of monocular vision measuring camera
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A monocular vision mileage measurement method based on image features is characterized by specifically comprising the following steps:
(1) Calibrating a camera to obtain internal and external parameters of the camera;
(2) Calculating 2D feature points of two adjacent frames captured along the direction of travel;
(3) Matching the 2D feature points to find out corresponding feature points in the two frames of images;
(4) Calculating the 3D coordinates of the corresponding feature points in the two frames of images, and calculating the camera pose according to the 3D coordinates and the 2D coordinates of the corresponding feature points to obtain the relative displacement of the camera;
(5) Repeating the steps (1) to (4) for subsequent frames in sequence, calculating the displacement of the camera relative to the previous frame when shooting each frame, and finally accumulating all the displacements to obtain the mileage;
the step (1) comprises the following steps:
1-1, obtaining a conversion relation among an image coordinate system, a camera coordinate system and a world coordinate system according to a pinhole imaging model;
1-2, photographing the checkerboard calibration plate from a plurality of different viewing angles, extracting the corner points on the calibration-plate images, and obtaining the pixel coordinates and physical coordinates of the corner points from the checkerboard size, thereby obtaining the homography matrix H of each calibration-plate image;
1-3, solving internal and external parameters;
1-4, solving a minimized projection error through a Levenberg-Marquardt algorithm, and optimizing internal and external parameters of the camera;
the 2D feature points, that is, orb feature points in step (2), specifically calculating orb feature points of two frames of images before and after the first frame of image, includes the following steps: constructing an image pyramid, extracting key points from each layer according to a fast algorithm, selecting point pairs around the key points according to a brief algorithm, generating a descriptor by comparing pixel values, adjusting the descriptor according to an included angle between the key points and a gray scale centroid, enabling the descriptor to have rotation invariance, and finally obtaining an orb descriptor;
the 2D feature point matching in the step (3) specifically comprises the following steps:
3-1, establishing a k-d tree for the feature point set of the image: select the dimension k with the largest variance in the data set; choose the feature point whose value on dimension k is the median m as the splitting node; points whose value on dimension k is less than m are assigned to the left subspace and points whose value is greater than m to the right subspace; repeat the operation on both subspaces until they can no longer be divided, yielding the k-d tree;
3-2, performing the feature-matching search with the bbf search algorithm: perform a binary search starting from the root node of the k-d tree, ordering the nodes on the query path by their distance to the query point; during backtracking, start from the tree node with the highest priority, and when all nodes have been checked or the running-time limit is exceeded, take the point with the shortest distance as the nearest-neighbor matching feature point;
the monocular visual odometer of the image features is used for tunnel detection; in step (3), the most significant feature detection result on the target is used as the initial condition for feature matching: the code of the target surface visible in the field of view is determined, and feature matching starts from that surface code as the initial state;
the step (4) specifically comprises the following steps:
4-1, establishing a coordinate system by taking the upper left corner of the image as a coordinate origin and taking the area shot by the image as a plane according to the pixel size and the physical size of the image to obtain a 3D coordinate of the characteristic point;
and 4-2, solving the pose of the camera when shooting the next frame by utilizing a coordinate conversion relation according to camera internal parameters, the 3D coordinates of the feature points in the previous frame image and the 2D coordinates of the feature points in the next frame image, and further solving the displacement of the camera between the positions of shooting the previous frame and the next frame.
2. The method for measuring mileage by monocular vision based on image features as claimed in claim 1, wherein the transformation relation among the image coordinate system, the camera coordinate system and the world coordinate system in step 1-1 is specifically:
(1.11) In the world coordinate system a point has coordinates [Xw, Yw, Zw]; in the camera coordinate system the same point has coordinates [Xc, Yc, Zc]. The two are related by a rotation and a translation:
[Xc, Yc, Zc]^T = R [Xw, Yw, Zw]^T + T
where R is the rotation matrix and T is the displacement between the two coordinate origins, so that in homogeneous form
[Xc, Yc, Zc, 1]^T = [R T; 0^T 1] [Xw, Yw, Zw, 1]^T (1)
(1.12) After the point is imaged by the camera, its coordinates in the coordinate system expressed in the physical size of the image are [x, y]. By similar triangles,
x / Xc = y / Yc = f / Zc
where f is the focal length of the camera, i.e.
x = f Xc / Zc, y = f Yc / Zc (2)
(1.13) Let the point be [u, v] in the coordinate system expressed in image pixel size; the relationship between the image pixel coordinate system and the image physical-size coordinate system is then
u = x / dx + u0, v = y / dy + v0
where (u0, v0) is the centre of the image pixels, dx is the physical size of one pixel along the x axis and dy along the y axis, i.e. in homogeneous form
[u, v, 1]^T = [1/dx 0 u0; 0 1/dy v0; 0 0 1] [x, y, 1]^T (3)
(1.14) Combining the relationships of the above equations (1), (2) and (3):
Zc [u, v, 1]^T = [1/dx 0 u0; 0 1/dy v0; 0 0 1] [f 0 0 0; 0 f 0 0; 0 0 1 0] [R T; 0^T 1] [Xw, Yw, Zw, 1]^T
= [fx 0 u0 0; 0 fy v0 0; 0 0 1 0] [R T; 0^T 1] [Xw, Yw, Zw, 1]^T (4)
with fx = f/dx and fy = f/dy.
(1.15) Adding the skewness parameter C finally gives
Zc [u, v, 1]^T = [fx C u0 0; 0 fy v0 0; 0 0 1 0] [R T; 0^T 1] [Xw, Yw, Zw, 1]^T (5)
(1.16) Since the checkerboard calibration plate is a plane, set Zw = 0, and let A denote the camera matrix
A = [fx C u0; 0 fy v0; 0 0 1].
With r1, r2, r3 the column vectors of R and t the translation column vector, equation (5) is written as
Zc [u, v, 1]^T = A [r1 r2 t] [Xw, Yw, 1]^T (6)
3. The method for measuring mileage by monocular vision based on image features of claim 2, wherein the homography matrix is written as H = [h1 h2 h3]; letting λ denote a constant, equation (6) gives
[h1 h2 h3] = λ A [r1 r2 t] (7);
Let α, β and γ be the rotation angles about the x, y and z axes respectively; the rotation matrix is then
R = [cosγ cosβ + sinγ sinα sinβ, sinγ cosα, -sinβ cosγ + cosβ sinα sinγ;
-sinγ cosβ + cosγ sinα sinβ, cosγ cosα, sinβ sinγ + cosβ sinα cosγ;
cosα sinβ, -sinα, cosβ cosα]
whose first two columns are
r1 = [cosγ cosβ + sinγ sinα sinβ, -sinγ cosβ + cosγ sinα sinβ, cosα sinβ]^T
and
r2 = [sinγ cosα, cosγ cosα, -sinα]^T.
This gives ||r1||^2 = (cosγ cosβ + sinγ sinα sinβ)^2 + (-sinγ cosβ + cosγ sinα sinβ)^2 + (cosα sinβ)^2 = 1 and ||r2||^2 = (sinγ cosα)^2 + (cosγ cosα)^2 + (-sinα)^2 = 1,
so ||r1|| = ||r2|| = 1 (8);
Calculating the dot product, r1 · r2 = (cosγ cosβ + sinγ sinα sinβ)(sinγ cosα) + (-sinγ cosβ + cosγ sinα sinβ)(cosγ cosα) + (cosα sinβ)(-sinα) = 0 (9);
From the above equations (7), (8) and (9): r1 = (1/λ) A^-1 h1 and r2 = (1/λ) A^-1 h2, so r1 · r2 = 0 gives
h1^T A^-T A^-1 h2 = 0 (10);
and ||r1|| = ||r2|| gives
h1^T A^-T A^-1 h1 = h2^T A^-T A^-1 h2 (11).
4. The method for measuring mileage by monocular vision based on image features of claim 3, wherein the specific process of solving the internal and external parameters of the camera is: establish a system of equations from equations (10) and (11), and substitute the homography matrices obtained in step (1-2) to solve for the intrinsic matrix A.
Let
B = A^-T A^-1 = [B11 B12 B13; B12 B22 B23; B13 B23 B33]
and let h_i = [h_i1, h_i2, h_i3]^T denote the i-th column of H; then
h_i^T B h_j = v_ij^T b
where b = [B11, B12, B22, B13, B23, B33]^T and v_ij = [h_i1 h_j1, h_i1 h_j2 + h_i2 h_j1, h_i2 h_j2, h_i3 h_j1 + h_i1 h_j3, h_i3 h_j2 + h_i2 h_j3, h_i3 h_j3]^T. Equations (10) and (11) can therefore be written as
[v_12^T; (v_11 - v_22)^T] b = 0.
Substituting the v_ij values from all the homography matrices, solve for b, and then recover each element value of the intrinsic matrix A and the external parameters.
5. A monocular visual odometer based on image characteristics, characterized by: the monocular vision odometer adopts the image characteristic-based monocular vision mileage measuring method of any one of claims 1 to 4 to calculate mileage.
CN201810044762.XA 2018-01-17 2018-01-17 Image feature-based monocular vision mileage measurement method and odometer Active CN110044374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810044762.XA CN110044374B (en) 2018-01-17 2018-01-17 Image feature-based monocular vision mileage measurement method and odometer

Publications (2)

Publication Number Publication Date
CN110044374A CN110044374A (en) 2019-07-23
CN110044374B true CN110044374B (en) 2022-12-09

Family

ID=67273048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810044762.XA Active CN110044374B (en) 2018-01-17 2018-01-17 Image feature-based monocular vision mileage measurement method and odometer

Country Status (1)

Country Link
CN (1) CN110044374B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112449176B (en) * 2019-09-03 2023-05-26 浙江舜宇智能光学技术有限公司 Test method and test system of lifting type camera device
CN110687929B (en) * 2019-10-10 2022-08-12 辽宁科技大学 Aircraft three-dimensional space target searching system based on monocular vision and motor imagery
CN110929567B (en) * 2019-10-17 2022-09-27 北京全路通信信号研究设计院集团有限公司 Monocular camera monitoring scene-based target position and speed measuring method and system
CN111990314A (en) * 2020-08-25 2020-11-27 中国水产科学研究院渔业机械仪器研究所 System and method for quantitative observation of fish behaviors
CN112066876B (en) * 2020-08-27 2021-07-02 武汉大学 Method for rapidly measuring object size by using mobile phone
CN111922510B (en) * 2020-09-24 2021-10-01 武汉华工激光工程有限责任公司 Laser visual processing method and system
CN112798812B (en) * 2020-12-30 2023-09-26 中山联合汽车技术有限公司 Target speed measuring method based on monocular vision
CN114764005A (en) * 2021-03-11 2022-07-19 深圳市科卫泰实业发展有限公司 Monocular vision odometer method for unmanned aerial vehicle
CN113223163A (en) * 2021-04-28 2021-08-06 Oppo广东移动通信有限公司 Point cloud map construction method and device, equipment and storage medium
CN113223007A (en) * 2021-06-28 2021-08-06 浙江华睿科技股份有限公司 Visual odometer implementation method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663767A (en) * 2012-05-08 2012-09-12 北京信息科技大学 Method for calibrating and optimizing camera parameters of vision measuring system
CN105354273A (en) * 2015-10-29 2016-02-24 浙江高速信息工程技术有限公司 Method for fast retrieving high-similarity image of highway fee evasion vehicle
CN106920259A (en) * 2017-02-28 2017-07-04 武汉工程大学 A kind of localization method and system
CN106952299A (en) * 2017-03-14 2017-07-14 大连理工大学 A kind of 3 d light fields Implementation Technology suitable for Intelligent mobile equipment
CN107580175A (en) * 2017-07-26 2018-01-12 济南中维世纪科技有限公司 A kind of method of single-lens panoramic mosaic

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984037B (en) * 2014-04-30 2017-07-28 深圳市墨克瑞光电子研究院 The mobile robot obstacle detection method and device of view-based access control model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of 3D Information Reconstruction Based on Monocular Vision; Gao Cheng; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology; 2012-03-15; pp. 5-41 *

Also Published As

Publication number Publication date
CN110044374A (en) 2019-07-23

Similar Documents

Publication Publication Date Title
CN110044374B (en) Image feature-based monocular vision mileage measurement method and odometer
Chen et al. High-accuracy multi-camera reconstruction enhanced by adaptive point cloud correction algorithm
CN106408609B (en) A kind of parallel institution end movement position and posture detection method based on binocular vision
CN103714571B (en) A kind of based on photogrammetric single camera three-dimensional rebuilding method
JP5618569B2 (en) Position and orientation estimation apparatus and method
JP5832341B2 (en) Movie processing apparatus, movie processing method, and movie processing program
CN105021124B (en) A kind of planar part three-dimensional position and normal vector computational methods based on depth map
CN107588721A (en) The measuring method and system of a kind of more sizes of part based on binocular vision
Tamas et al. Targetless calibration of a lidar-perspective camera pair
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
US20140111507A1 (en) 3-dimensional shape reconstruction device using depth image and color image and the method
CN105043350A (en) Binocular vision measuring method
CN104424630A (en) Three-dimension reconstruction method and device, and mobile terminal
CN105043250B (en) A kind of double-visual angle data alignment method based on 1 common indicium points
Ahmed et al. Pothole 3D reconstruction with a novel imaging system and structure from motion techniques
CN112184811B (en) Monocular space structured light system structure calibration method and device
CN102788572A (en) Method, device and system for measuring attitude of engineering machinery lifting hook
Zhang et al. Relative orientation based on multi-features
CN104167001B (en) Large-visual-field camera calibration method based on orthogonal compensation
CN116188558B (en) Stereo photogrammetry method based on binocular vision
JP6410231B2 (en) Alignment apparatus, alignment method, and computer program for alignment
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
CN102881040A (en) Three-dimensional reconstruction method for mobile photographing of digital camera
CN114092564B (en) External parameter calibration method, system, terminal and medium for non-overlapping vision multi-camera system
CN109493378B (en) Verticality detection method based on combination of monocular vision and binocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200703

Address after: No.5014-112, 5 / F, No.36, Haidian Street, Haidian District, Beijing 100080

Applicant after: Kuanyan (Beijing) Technology Development Co.,Ltd.

Address before: Huaxi Securities Building, No. 9 Yuhuatai East Road, Yuhua District of Nanjing City, Jiangsu province 210012 Room 203

Applicant before: NANJING HUOYANHOU INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant