CN113688917B - Binocular video image matching method based on DEM constraint - Google Patents

Binocular video image matching method based on DEM constraint

Info

Publication number
CN113688917B
CN113688917B (application CN202111004645.9A; published as CN113688917A)
Authority
CN
China
Prior art keywords: point, points, image, candidate, matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111004645.9A
Other languages
Chinese (zh)
Other versions
CN113688917A (en)
Inventor
马黎明
戚浩平
彭震
沈鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202111004645.9A priority Critical patent/CN113688917B/en
Publication of CN113688917A publication Critical patent/CN113688917A/en
Application granted granted Critical
Publication of CN113688917B publication Critical patent/CN113688917B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a binocular video image matching method based on DEM constraint, which aims to improve the correctness of homonymous-point matching in binocular image matching.

Description

Binocular video image matching method based on DEM constraint
Technical Field
The invention belongs to the field of image matching, and particularly relates to a binocular video image matching method.
Background
Binocular image matching is a key technology of binocular vision systems; the accuracy and speed of feature-point matching directly affect the performance of downstream technologies such as real-scene reconstruction.
At present, feature matching mainly relies on algorithms such as SIFT, SURF, FAST and ORB. The SIFT algorithm is widely applied to binocular image matching under scene transformations such as scale, rotation and translation, but its computation is complex and its efficiency is low. The SURF algorithm improves on SIFT and completes the same tasks more efficiently. The FAST algorithm has emerged in recent years, with high computational efficiency and fast detection. The matching accuracy of these existing feature matching algorithms, however, still needs to be improved.
Disclosure of Invention
In order to solve the technical problems mentioned in the Background, the invention provides a binocular video image matching method based on DEM constraint, which maintains detection efficiency while solving the problem of low matching accuracy.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
a binocular video image matching method based on DEM constraint comprises the following steps:
(1) Images shot by two cameras at the same moment from different angles are taken as the left and right images respectively; feature points of the distortion-corrected left and right images are extracted with the FAST operator; the homonymous epipolar lines of the two images are solved by the coplanarity-condition method; the feature points of the left image are then matched on the right image, the matching range covering the homonymous epipolar line and the feature points on the adjacent epipolar lines above and below it; three candidate matching points are obtained, and the best of the three candidate matching points, together with the left-image feature point, is taken as a pair of candidate homonymous points;
(2) The elevations of all candidate homonymous points are calculated and recorded by the stereo-pair forward intersection method; the aggregation degree of candidate homonymous points around every pixel is judged, and pixels whose aggregation degree exceeds a set threshold are taken as aggregation points; the elevations of the aggregation points are resampled, and bilinear interpolation with the aggregation-point elevation values yields a grid DEM;
(3) The elevation values of the three candidate matching points obtained in step (1) are calculated in order of decreasing correlation coefficient and compared with the values at the same positions in the DEM; if the absolute value of the difference is less than or equal to a threshold σ, the corresponding candidate matching point is determined to be the best matching point.
Further, the specific process of step (1) is as follows:
(101) Given a left-image threshold t_1 and a right-image threshold t_2, the FAST operator with a radius of 2 pixels is applied to the left and right images respectively, extracting the gray values of the 12 pixels around each detection point; if the differences between the gray values of at least 8 of those pixels and that of the detection point are all greater than the corresponding threshold t_1 (t_2), or all less than -t_1 (-t_2), the point is regarded as a feature point; feature-point position recording matrices M and M', of the same size as the left and right images, are set;
(102) Let S and S' be the photographing centers of the left and right images; all pixels in a certain column of the left image are set as the starting points a of the left epipolar lines; according to the coplanarity condition that the vectors SS', Sa and Sb lie in the same plane, another point b of each left epipolar line is solved, giving the left epipolar-line vector ab; then, according to the coplanarity condition that SS', Sa and S'a' lie in the same plane, a point a' on the homonymous epipolar line of the right image is solved, and another point b' on that epipolar line is solved in the same way, giving the right epipolar-line vector a'b';
(103) According to the matrices M and M', the positions of the feature points on all epipolar lines are found; a 13 × 13 gray matrix centered on each feature point of a left epipolar line is extracted, and the 13 × 13 gray matrices of all feature points on the three right-image epipolar lines (the homonymous epipolar line and the adjacent epipolar lines above and below it) are extracted correspondingly; the correlation coefficients of the left and right gray matrices are calculated in turn, and a threshold t_3 is set; the point pair whose correlation coefficient is the largest and greater than t_3 is taken as a pair of candidate homonymous points, and the three points with the largest correlation coefficients, in decreasing order, are recorded as candidate matching points.
Further, the threshold t_1 is 50, the threshold t_2 is 30, and the threshold t_3 is 0.6.
Further, the specific process of step (2) is as follows:
(201) According to the known interior and exterior orientation elements of the left and right images and the image-point coordinates of the candidate homonymous points, the ground-point coordinates are solved by the stereo-pair forward intersection method to obtain the elevation values of all candidate homonymous points, and an elevation record matrix M_h is generated;
(202) The pixels are traversed; the elevation values of all candidate homonymous points within a 21 × 21-pixel window centered on each pixel are extracted; if the number of candidate homonymous points in the window is greater than a threshold t_4, the pixel is regarded as an aggregation point, the candidate homonymous points are classified by the K-means method, and the mean elevation of the most numerous class is used to resample the aggregation point;
(203) The DEM grid density is set according to the relief of the ground covered by the left and right images; bilinear interpolation of the aggregation-point elevations yields the grid-point elevations, and the grid DEM is finally generated.
Further, in step (202), the threshold t_4 is 59.
Further, in step (3), the threshold σ =0.5m.
Further, if none of the three candidate matching points in step (3) meets the condition for the best matching point, the left-image feature point is considered to have no best matching point.
The above technical scheme brings the following beneficial effects:
according to the method, the FAST operator is adopted to accelerate the calculation efficiency, the multi-core line constraint is adopted to increase the matching number of the feature points, the DEM constraint is adopted to ensure the matching correctness of the feature points, and finally the effect of improving the matching number and ensuring the matching correctness is achieved.
Drawings
FIG. 1 is an overall process flow diagram of the present invention;
FIG. 2 is a schematic diagram of the FAST operator with a radius of 2 pixels in the embodiment;
FIG. 3 is a diagram illustrating a feature point search range in an embodiment;
fig. 4 is a schematic diagram of bilinear interpolation in an embodiment.
Detailed Description
The technical scheme of the invention is explained in detail below with reference to the accompanying drawings.
The invention designs a binocular video image matching method based on DEM constraint, which comprises the following steps as shown in figure 1:
step 1, extracting feature points of the image subjected to distortion correction by using a FAST operator, solving homonymous epipolar lines of the two images by using a coplanar condition method, finally matching the feature points of the left image on the right image, wherein the matched object comprises the homonymous epipolar lines and the feature points on the upper and lower adjacent epipolar lines, obtaining three candidate matching points, and taking the optimal candidate matching point and the left image matching point as a pair of candidate homonymous points. The specific implementation steps are as follows:
the invention adopts images shot by two cameras from different angles at the same time as left and right images respectively, the size of the images is 1920 x 1080 pixels, and the inner and outer orientation elements of the two cameras are known.
Distortion correction is applied to the images, as shown in formula (1):

x' = x(1 + k_1 r^2), \qquad y' = y(1 + k_1 r^2), \qquad r^2 = x^2 + y^2 \qquad (1)

where x' and y' are the coordinates after distortion correction, x and y are the image-point coordinates before correction, and k_1 is the radial distortion coefficient.
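For illustration, formula (1) as reconstructed above transcribes directly into code; the following is a minimal sketch, assuming the one-parameter radial model and coordinates measured from the principal point (the function name and array layout are illustrative, not from the patent):

```python
import numpy as np

def undistort_points(xy, k1):
    """Radial distortion correction of formula (1).

    xy : (N, 2) array of image-point coordinates relative to the principal point.
    k1 : radial distortion coefficient.
    """
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)  # r^2 = x^2 + y^2
    return xy * (1.0 + k1 * r2)                  # x' = x(1 + k1*r^2), same for y'
```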
Owing to the limitation of the FAST operator, a border two pixels wide around the image is excluded from the feature-matching range. As shown in FIG. 2, a circle of radius 2 pixels is taken around the detection point (x_i, y_j), and the gray values of the 12 pixels on it are extracted. The left-image threshold t_1 is set to 50 and the right-image threshold t_2 to 30; if the differences between the gray values of 8 or more of the 12 pixels and that of the detection point are all greater than t_1 (or t_2), or all less than -t_1 (or -t_2), the point is judged to be a feature point. A matrix M of the same size as the left image is set to record the feature-point positions: according to the positions of the left-image feature points, the pixel values at the same positions of M are set to 255 and the rest to 0. The feature-point position matrix M' of the right image is obtained in the same way.
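A sketch of this detection step follows; the 12-pixel ring layout and all names are assumptions, and, like the text, the 8-of-12 test here does not require the 8 pixels to be contiguous:

```python
import numpy as np

# Assumed 12-pixel ring of radius 2 around a candidate, as (row, col) offsets
RING = [(0, 2), (1, 2), (2, 1), (2, 0), (2, -1), (1, -2),
        (0, -2), (-1, -2), (-2, -1), (-2, 0), (-2, 1), (-1, 2)]

def fast_features(gray, t):
    """Return the 0/255 feature-point position matrix M (8-of-12 FAST rule)."""
    h, w = gray.shape
    M = np.zeros((h, w), dtype=np.uint8)
    g = gray.astype(np.int32)
    for i in range(2, h - 2):                       # skip the 2-pixel border
        for j in range(2, w - 2):
            d = np.array([g[i + di, j + dj] - g[i, j] for di, dj in RING])
            if (d > t).sum() >= 8 or (d < -t).sum() >= 8:
                M[i, j] = 255                       # record feature position
    return M

# e.g. M = fast_features(left_gray, 50); M_prime = fast_features(right_gray, 30)
```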
The photographing centers of the left and right images are S and S' respectively, and the line connecting them is the baseline B. Each image carries a coordinate system whose X axis is positive from top to bottom and whose Y axis is positive from left to right. In the invention, the column Y = 0 is taken as the initial column and its pixels as the starting points of the epipolar lines; an arbitrary point a(x_a, 0) serves as the example below.
The coordinates of point a in the image-space auxiliary coordinate system are calculated as shown in formula (2):

\begin{bmatrix} u_a \\ v_a \\ w_a \end{bmatrix} =
\begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}
\begin{bmatrix} x_a \\ y_a \\ -f \end{bmatrix} \qquad (2)

where u_a, v_a, w_a are the coordinate values of point a in the image-space auxiliary coordinate system, a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2, c_3 are the direction cosines of the angles between the axes of the left image-space coordinate system and those of the image-space auxiliary coordinate system, and f is the perpendicular distance from the left photographing center S to the left image.
By the coplanarity condition that the baseline vector SS' and the image-ray vectors Sa and Sb lie in the same plane,

\begin{vmatrix} B_u & B_v & B_w \\ u_a & v_a & w_a \\ u_b & v_b & w_b \end{vmatrix} = 0 \qquad (3)

where (B_u, B_v, B_w) is the baseline vector of formula (5) below, the coordinates (x_b, y_b) of the end point b of the epipolar line are obtained; x_b is the ordinate of point b and y_b its abscissa. The invention solves at the right boundary of the left image, i.e. it takes y_b = 1919 and computes the coordinate (x_b, 1919) of b. If no valid result is obtained, the abscissa is decreased by 1, and the process is repeated until a valid result is obtained. The starting point a and the end point b of the epipolar line are thus determined, giving the left epipolar-line vector ab.
Since homonymous epipolar lines are coplanar, the coplanarity condition that SS', Sa and S'a' lie in the same plane is used: setting y_{a'} = 0, the coordinates of the starting point a' of the right-image epipolar line corresponding to the left epipolar-line starting point a are obtained. Likewise, from the coplanarity condition that SS', Sb and S'b' lie in the same plane, the coordinates of the end point b' of the right epipolar line are obtained, and hence the right epipolar-line vector a'b'.
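A compact sketch of this coplanarity solve, under the convention of formula (2), is given below; R is the image's rotation matrix, B the baseline vector in the auxiliary system and f the principal distance, and all interface names are assumptions:

```python
import numpy as np

def aux_coords(x, y, f, R):
    # formula (2): image-space coordinates (x, y, -f) rotated into the
    # image-space auxiliary coordinate system
    return R @ np.array([x, y, -f], dtype=float)

def epipolar_endpoint(a_xy, B, R, f, y_b=1919.0):
    """Solve x_b so that SS', Sa and Sb are coplanar (formula (3)), y_b fixed."""
    Sa = aux_coords(a_xy[0], a_xy[1], f, R)
    n = np.cross(B, Sa)                       # normal of the epipolar plane
    r1, r2, r3 = R[:, 0], R[:, 1], R[:, 2]    # Sb = x_b*r1 + y_b*r2 - f*r3
    x_b = (f * (n @ r3) - y_b * (n @ r2)) / (n @ r1)   # from n . Sb = 0
    return np.array([x_b, y_b])
```

Decreasing y_b by 1 and re-solving, as the text describes, handles the case where the result falls outside the image.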
All pixels through which the left and right epipolar lines pass are searched in the matrices M and M', and the positions of the feature points on each epipolar line are recorded.
The feature points are then matched epipolar line by epipolar line, in the order of the left-image epipolar lines. Centered on each feature-point position, the gray values of the left image within a 13 × 13-pixel window are extracted and written in matrix form. As shown in FIG. 3, all feature points on the right epipolar line corresponding to the left epipolar line of the feature point, and on the two epipolar lines adjacent to that right epipolar line, are searched, and the corresponding 13 × 13 gray matrices are extracted.
The matrix correlation coefficients between the gray matrix of each left-image feature point and the gray matrices of all feature points extracted on the three right-image epipolar lines are calculated in turn, as shown in formula (4):

\rho = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}(A_{ij}-\bar{A})(B_{ij}-\bar{B})}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}(A_{ij}-\bar{A})^2}\,\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}(B_{ij}-\bar{B})^2}} \qquad (4)

where m = 13 and n = 13, the matrices A and B are the gray matrices extracted from the left and right images respectively, and \bar{A} and \bar{B} are their means.
After all the correlation coefficients are calculated, a threshold t_3 = 0.6 is taken; the feature-point pair whose correlation coefficient is the largest and greater than t_3 is a pair of candidate homonymous points, and the three points with the largest correlation coefficients, in decreasing order, are recorded as candidate matching points.
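A sketch of the correlation test of formula (4) and the candidate bookkeeping follows; right_wins, mapping a right-image feature position to its 13 × 13 gray window, is an assumed data structure:

```python
import numpy as np

def ncc(A, B):
    """Correlation coefficient of two equal-size gray windows, formula (4)."""
    a, b = A - A.mean(), B - B.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def match_one(left_win, right_wins, t3=0.6):
    """Top-3 candidate matching points for one left feature window.

    Returns (top3, best): the three highest-correlation right points in
    decreasing order, and the candidate homonymous point if the largest
    coefficient exceeds t3 (else None).
    """
    scored = sorted(((ncc(left_win, w), p) for p, w in right_wins.items()),
                    reverse=True)
    top3 = [p for _, p in scored[:3]]
    best = top3[0] if scored and scored[0][0] > t3 else None
    return top3, best
```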
Step 2: the elevations of all candidate homonymous points are calculated and recorded by the stereo-pair forward intersection method; the aggregation degree of candidate homonymous points around every pixel is judged, pixels with a high aggregation degree are taken as aggregation points, the elevations of the aggregation points are resampled, and bilinear interpolation with the aggregation-point elevation values yields a grid DEM. The specific implementation steps are as follows:
and knowing the internal and external orientation elements of the two photos, and knowing the coordinates of the candidate homonymy points on the two photos, and solving the ground coordinates of the candidate homonymy points by adopting a stereo pair front intersection method.
Analogously to formula (2), the coordinates c_1(u_1, v_1, w_1) and c_2(u_2, v_2, w_2) of a candidate homonymous-point pair in the two image-space auxiliary coordinate systems are obtained. Next, the baseline components B_u, B_v, B_w are calculated from the exterior orientation elements, as shown in formula (5):

B_u = X_{S'} - X_S, \qquad B_v = Y_{S'} - Y_S, \qquad B_w = Z_{S'} - Z_S \qquad (5)

where X_S, Y_S, Z_S and X_{S'}, Y_{S'}, Z_{S'} are the coordinates of S and S' in the left image-space auxiliary coordinate system.
The point projection coefficients, i.e. the scale factors that project the image-point coordinates to the ground-point coordinates, are calculated as shown in formula (6):

N_1 = \frac{B_u w_2 - B_w u_2}{u_1 w_2 - u_2 w_1}, \qquad N_2 = \frac{B_u w_1 - B_w u_1}{u_1 w_2 - u_2 w_1} \qquad (6)
the coordinates of the ground points are calculated, only the elevation is involved in the invention, and therefore, only the coordinate value Z is calculated, as shown in formula (7):
Figure BDA0003236832150000071
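The elevation computation of formulas (5) to (7), as reconstructed above, condenses to a few lines; this is a sketch, with c1 and c2 the auxiliary-system ray coordinates of a candidate pair and all names illustrative:

```python
import numpy as np

def elevation(c1, c2, S, S_prime):
    """Ground elevation Z by stereo-pair forward intersection, formulas (5)-(7).

    c1, c2     : (u, v, w) coordinates of the homonymous image points in the
                 two image-space auxiliary coordinate systems.
    S, S_prime : photographing-center coordinates (X, Y, Z) of S and S'.
    """
    Bu, Bv, Bw = np.asarray(S_prime, float) - np.asarray(S, float)  # formula (5)
    u1, v1, w1 = c1
    u2, v2, w2 = c2
    N1 = (Bu * w2 - Bw * u2) / (u1 * w2 - u2 * w1)                  # formula (6)
    return S[2] + N1 * w1                                           # formula (7)
```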
obtaining the elevation values of all candidate homonymous points, and generating a corresponding elevation record matrix M according to the positions of the candidate homonymous points on the left photo h . At M h All pixel points in the middle calendarAnd extracting the elevation values of all candidate homonymous points within the range of 21 × 21 pixels by taking each pixel point as the center. Setting a threshold t 4 =59, if number of homologous points in range is greater than threshold t 4 And then, a K-means classification method is adopted, K is set to be 5, all elevation values are divided into 5 types, the type with the largest quantity in the 5 types of values is selected, the average value of the types is taken as the elevation value of the pixel point, the resampling of the elevation value is realized, and the point is named as an aggregation point. If not, t 4 Then it is not elevation resampled.
The DEM grid density is set according to the relief of the ground corresponding to the left and right photographs; the actual ground in the images used by the method is relatively flat, so a grid of two rows and five columns is set. As shown in FIG. 4, the four aggregation points nearest to each grid point are found, and the grid-point elevations are interpolated from the aggregation-point elevations by the bilinear interpolation of formula (8); the grid-point elevation values are recorded in the DEM matrix M_D:

f(x,y) = \frac{(x_2-x)(y_2-y)\,f(x_1,y_1) + (x-x_1)(y_2-y)\,f(x_2,y_1) + (x_2-x)(y-y_1)\,f(x_1,y_2) + (x-x_1)(y-y_1)\,f(x_2,y_2)}{(x_2-x_1)(y_2-y_1)} \qquad (8)

where f(x, y) is the elevation of the point to be interpolated and f(x_1, y_1), f(x_2, y_1), f(x_1, y_2), f(x_2, y_2) are the elevation values of the four surrounding points.
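Formula (8) transcribes directly; this sketch assumes the four nearest aggregation points form an axis-aligned rectangle, as in FIG. 4:

```python
def bilinear(x, y, x1, y1, x2, y2, f11, f21, f12, f22):
    """Bilinear interpolation of formula (8); fij is the elevation f(xi, yj)."""
    return ((x2 - x) * (y2 - y) * f11 + (x - x1) * (y2 - y) * f21 +
            (x2 - x) * (y - y1) * f12 + (x - x1) * (y - y1) * f22) \
           / ((x2 - x1) * (y2 - y1))
```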
Step 3: the elevation values of the three candidate matching points obtained in Step 1 are calculated in order of decreasing correlation coefficient and compared with the DEM values; if the absolute value of the difference is less than or equal to 0.5 m, the candidate matching point is determined to be the best matching point. The specific implementation steps are as follows:
computing matrix M h And if the absolute value of the difference between the elevation value of the middle candidate homonymous point and the elevation value of the same position in the DEM is less than or equal to 0.5m, the best matching is determined. If the distance is larger than 0.5m, taking other two candidate matching points, calculating an elevation value by using a stereo pair forward intersection method, and comparing the absolute value of the difference with 0.5m to obtain the best matching point. If all three results are greater than 0.5m, then no best match point is considered.
Although the FAST algorithm greatly increases the number of matches, its matching accuracy drops: the matching accuracy of the FAST algorithm alone is only 55.22%, with 3,972 correct matches. Experiments show that the matching accuracy of the proposed method is 84.9% with 4,303 correct matches, so the method keeps the large number of matched points while improving accuracy.
The embodiments only illustrate the technical idea of the present invention and do not limit its scope; any modification made to the technical scheme on the basis of the technical idea of the present invention falls within the scope of the present invention.

Claims (7)

1. A binocular video image matching method based on DEM constraint is characterized by comprising the following steps:
(1) Images shot by two cameras at the same moment from different angles are taken as the left and right images respectively; feature points of the distortion-corrected left and right images are extracted with the FAST operator; the homonymous epipolar lines of the two images are solved by the coplanarity-condition method; the feature points of the left image are then matched on the right image, the matching range covering the homonymous epipolar line and the feature points on the adjacent epipolar lines above and below it; three candidate matching points are obtained, and the best of the three candidate matching points, together with the left-image feature point, is taken as a pair of candidate homonymous points;
(2) The elevations of all candidate homonymous points are calculated and recorded by the stereo-pair forward intersection method; the aggregation degree of candidate homonymous points around every pixel is judged, and pixels whose aggregation degree exceeds a set threshold are taken as aggregation points; the elevations of the aggregation points are resampled, and bilinear interpolation with the aggregation-point elevation values yields a grid DEM;
(3) The elevation values of the three candidate matching points obtained in step (1) are calculated in order of decreasing correlation coefficient and compared with the values at the same positions in the DEM; if the absolute value of the difference is less than or equal to a threshold σ, the corresponding candidate matching point is determined to be the best matching point.
2. The binocular video image matching method based on DEM constraint as recited in claim 1, wherein the specific process of the step (1) is as follows:
(101) Given a left-image threshold t_1 and a right-image threshold t_2, the FAST operator with a radius of 2 pixels is applied to the left and right images respectively, extracting the gray values of the 12 pixels around each detection point; if the differences between the gray values of at least 8 of those pixels and that of the detection point are all greater than the corresponding threshold t_1 (t_2), or all less than -t_1 (-t_2), the point is regarded as a feature point; feature-point position recording matrices M and M', of the same size as the left and right images, are set;
(102) Let S and S' be the photographing centers of the left and right images; all pixels in a certain column of the left image are set as the starting points a of the left epipolar lines; according to the coplanarity condition that the vectors SS', Sa and Sb lie in the same plane, another point b of each left epipolar line is solved, giving the left epipolar-line vector ab; then, according to the coplanarity condition that SS', Sa and S'a' lie in the same plane, a point a' on the homonymous epipolar line of the right image is solved, and another point b' on that epipolar line is solved in the same way, giving the right epipolar-line vector a'b';
(103) According to the matrices M and M', the positions of the feature points on all epipolar lines are found; a 13 × 13 gray matrix centered on each feature point of a left epipolar line is extracted, and the 13 × 13 gray matrices of all feature points on the three right-image epipolar lines (the homonymous epipolar line and the adjacent epipolar lines above and below it) are extracted correspondingly; the correlation coefficients of the left and right gray matrices are calculated in turn, and a threshold t_3 is set; the point pair whose correlation coefficient is the largest and greater than t_3 is taken as a pair of candidate homonymous points, and the three points with the largest correlation coefficients, in decreasing order, are recorded as candidate matching points.
3. The binocular video image matching method based on DEM constraint as recited in claim 2, wherein the threshold t_1 is 50, the threshold t_2 is 30, and the threshold t_3 is 0.6.
4. The binocular video image matching method based on DEM constraint as claimed in claim 1, wherein the specific process of the step (2) is as follows:
(201) According to the known interior and exterior orientation elements of the left and right images and the image-point coordinates of the candidate homonymous points, the ground-point coordinates are solved by the stereo-pair forward intersection method to obtain the elevation values of all candidate homonymous points, and an elevation record matrix M_h is generated;
(202) The pixels are traversed; the elevation values of all candidate homonymous points within a 21 × 21-pixel window centered on each pixel are extracted; if the number of candidate homonymous points in the window is greater than a threshold t_4, the pixel is regarded as an aggregation point, the candidate homonymous points are classified by the K-means method, and the mean elevation of the most numerous class is used to resample the aggregation point;
(203) The DEM grid density is set according to the relief of the ground covered by the left and right images; bilinear interpolation of the aggregation-point elevations yields the grid-point elevations, and the grid DEM is finally generated.
5. The binocular video image matching method based on DEM constraint of claim 4, wherein in step (202), the threshold t_4 is 59.
6. The binocular video image matching method based on DEM constraints as claimed in claim 1, wherein in step (3), the threshold σ =0.5m.
7. The binocular video image matching method based on DEM constraint of claim 1, wherein if none of the three candidate matching points in step (3) meets the condition of the best matching point, the left image feature point is considered to have no best matching point.
CN202111004645.9A 2021-08-30 2021-08-30 Binocular video image matching method based on DEM constraint Active CN113688917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111004645.9A CN113688917B (en) 2021-08-30 2021-08-30 Binocular video image matching method based on DEM constraint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111004645.9A CN113688917B (en) 2021-08-30 2021-08-30 Binocular video image matching method based on DEM constraint

Publications (2)

Publication Number Publication Date
CN113688917A CN113688917A (en) 2021-11-23
CN113688917B true CN113688917B (en) 2022-11-18

Family

ID=78584018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111004645.9A Active CN113688917B (en) 2021-08-30 2021-08-30 Binocular video image matching method based on DEM constraint

Country Status (1)

Country Link
CN (1) CN113688917B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103822616B (en) * 2014-03-18 2016-01-20 武汉大学 A kind of figure segmentation retrains with topographic relief the Remote Sensing Images Matching Method combined
CN112929626B (en) * 2021-02-02 2023-02-14 辽宁工程技术大学 Three-dimensional information extraction method based on smartphone image
CN113255449A (en) * 2021-04-23 2021-08-13 东南大学 Real-time matching method of binocular video images

Also Published As

Publication number Publication date
CN113688917A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN108073857B (en) Dynamic visual sensor DVS event processing method and device
CN111915484B (en) Reference image guiding super-resolution method based on dense matching and self-adaptive fusion
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
CN110033514B (en) Reconstruction method based on point-line characteristic rapid fusion
CN111553845B (en) Quick image stitching method based on optimized three-dimensional reconstruction
CN111814792B (en) Feature point extraction and matching method based on RGB-D image
CN111325828B (en) Three-dimensional face acquisition method and device based on three-dimensional camera
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN112929626A (en) Three-dimensional information extraction method based on smartphone image
CN107220996A (en) A kind of unmanned plane linear array consistent based on three-legged structure and face battle array image matching method
CN110120012B (en) Video stitching method for synchronous key frame extraction based on binocular camera
CN111127353A (en) High-dynamic image ghost removing method based on block registration and matching
CN113255449A (en) Real-time matching method of binocular video images
CN117115359B (en) Multi-view power grid three-dimensional space data reconstruction method based on depth map fusion
CN112418250B (en) Optimized matching method for complex 3D point cloud
CN117726747A (en) Three-dimensional reconstruction method, device, storage medium and equipment for complementing weak texture scene
CN110910457B (en) Multispectral three-dimensional camera external parameter calculation method based on angular point characteristics
CN113688917B (en) Binocular video image matching method based on DEM constraint
CN112102379A (en) Unmanned aerial vehicle multispectral image registration method
CN116823895A (en) Variable template-based RGB-D camera multi-view matching digital image calculation method and system
CN111325218A (en) Hog feature detection and matching method based on light field image
CN112348853B (en) Particle filter tracking method based on infrared saliency feature fusion
CN115564888A (en) Visible light multi-view image three-dimensional reconstruction method based on deep learning
CN114820987A (en) Three-dimensional reconstruction method and system based on multi-view image sequence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant