CN111144441B - DSO photometric parameter estimation method and device based on feature matching


Info

Publication number
CN111144441B
CN111144441B (application CN201911217648.3A)
Authority
CN
China
Prior art keywords
feature
points
dso
matching
point
Prior art date
Legal status
Active
Application number
CN201911217648.3A
Other languages
Chinese (zh)
Other versions
CN111144441A (en)
Inventor
潘树国
谭涌
高旺
盛超
章辉
喻国荣
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN201911217648.3A
Publication of CN111144441A
Application granted
Publication of CN111144441B

Classifications

    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • Y02T 10/40: Engine management systems (general climate-change mitigation cross-sectional tag)


Abstract

The invention discloses a feature-matching-based method and device for estimating the photometric parameters of DSO. First, feature points are extracted from the current image frame as candidate feature points of that frame. The candidates are then matched against the feature points determined in the previous frame; a purpose-designed feature matching screening strategy removes mismatches, and a purpose-designed feature point activation strategy selects the qualifying candidates as the feature points of the current frame, thereby establishing associations between feature points across frames. Finally, based on a camera imaging function model, an error function is constructed and the photometric parameters are estimated by a nonlinear optimization method. With the proposed method, photometric parameters can be estimated from an arbitrary video sequence, and applying the estimated parameters for photometric calibration improves the positioning accuracy of DSO by about 30%.

Description

DSO photometric parameter estimation method and device based on feature matching
Technical Field
The invention relates to a visual odometry (VO) positioning method, and in particular to a photometric calibration method and device for the direct-method visual odometry system DSO (Direct Sparse Odometry), which relies on the grayscale-invariance assumption.
Background
With the rapid development of science and technology, visual positioning plays a vital role in fields such as mobile robotics, autonomous driving, virtual reality, and augmented reality. Visual odometry, one of the visual positioning technologies, has developed rapidly in recent years and is now widely used. Direct-method visual odometry estimates camera motion from pixel intensity information. Because it needs no feature points or descriptors, can exploit more of the image, and works in low-texture environments, it has gradually moved into the mainstream and become an important branch of visual odometry. Its drawback, however, is equally obvious: sensitivity to illumination changes. When large illumination changes occur in the scene, the positioning accuracy and robustness of direct-method visual odometry degrade noticeably.
In 2016, J. Engel et al. proposed a camera photometric calibration method and published the TUM-Mono dataset together with camera photometric calibration files. They also proposed the direct-method visual odometry system DSO, which introduced the concept of photometric calibration: image intensities are corrected by loading the camera's photometric calibration file before pose estimation, greatly improving positioning accuracy and robustness. However, the photometric calibration method of J. Engel et al. requires physical access to the camera, whereas in visual odometry research the camera itself is usually unavailable and only a video sequence captured by it is at hand. A method for estimating the photometric parameters of an arbitrary video sequence is therefore of great significance for direct-method visual odometry.
In 2017, Bergmann et al. proposed an online photometric parameter estimation method for auto-exposure video sequences, which uses KLT optical-flow tracking to associate pixels across frames and then estimates the corresponding photometric parameters by nonlinear optimization. However, the optical-flow method is only suitable when camera motion is small and illumination changes are mild; under aggressive motion or strong illumination changes, its tracking accuracy and robustness are low, and the estimated photometric parameters become inaccurate.
In summary, none of the existing photometric parameter estimation methods meets the application requirements of direct-method visual odometry. Research on a more accurate and more robust photometric parameter estimation method for arbitrary video sequences is therefore of clear importance.
Disclosure of Invention
Purpose of the invention: to address the low positioning accuracy and poor robustness of direct-method visual odometry under large illumination changes, a feature-matching-based photometric parameter estimation method and device are provided. Using the estimated photometric parameters for photometric calibration effectively improves the positioning accuracy and robustness of direct-method visual odometry.
Technical scheme: to achieve the above purpose, the invention adopts the following technical scheme:
a DSO photometric parameter estimation method based on feature matching comprises the following steps:
(1) Extract feature points from the current image frame and compute the corresponding descriptors; these serve as the candidate feature points of the current frame;
(2) Match the feature points of the current image frame against those of the previous frame using the descriptors, and eliminate mismatches with the designed feature matching screening strategy;
(3) According to the feature matching result, select the qualifying candidate points as the feature points of the current frame using the designed feature point activation strategy;
(4) Cycle steps (1)-(3) over the image sequence to extract feature points from each frame and establish associations between them; the spatial points corresponding to the feature points form a point set, over which the photometric parameters are estimated by a nonlinear optimization method.
In a preferred embodiment, the feature points extracted from the image frame in step (1) are SURF (Speeded-Up Robust Features) points. SURF builds on Haar-like features and the integral image, so it is stable across many kinds of images while remaining fast to compute, which benefits the real-time performance of the program. To ensure that the feature matching step has enough candidate points, the number of feature points extracted from each image frame is checked; if it falls below a preset threshold, the feature detection threshold is lowered and feature extraction is run again on the current frame.
In a preferred embodiment, the feature matching screening strategy designed in step (2) is:
a) First, cross-filter the feature point matching results;
b) Then, compute the fundamental matrix with the random sample consensus algorithm (RANSAC) and eliminate mismatches;
c) Finally, compute the distance between the pixel coordinates of matched feature points and eliminate matches whose distance exceeds a preset threshold.
In a preferred embodiment, the feature point activation strategy designed in step (3) is:
a) Take the candidate points matched with the previous frame as feature points of the current frame;
b) Take as feature points of the current frame the candidate points whose distance to the matched feature points exceeds a preset threshold.
In a preferred embodiment, the camera imaging model in step (4) is:
O=f(eV(x)L)
where O is the image output intensity, L is the radiance of the spatial point, V(x) is the optical vignetting function, e is the exposure time, and f(·) is the camera response function.
The empirical model of the camera response function is:

$$f(x) = f_0(x) + \sum_{k=1}^{n} c_k h_k(x)$$

where f_0(x) is the mean response function obtained by principal component analysis, h_k(x) are the basis functions, and c_k are the camera response function parameters to be estimated.
The optical vignetting function model is:

$$V(x) = 1 + \sum_{l=1}^{n} v_l R(x)^{2l}$$

where R(x) is the normalized distance from the pixel position to the center of the image and v_l are the optical vignetting function parameters to be estimated.
The error function model constructed from the above functions is:

$$E = \sum_{p \in P} \sum_{i \in F_p} w_p^i \left\| O_p^i - f\!\left(e_i V\!\left(x_p^i\right) L_p\right) \right\|_h$$

where p is an element of the set P of spatial points corresponding to the selected feature points, F_p is the set of image frames in which spatial point p is observed, w_p^i is the weight of spatial point p in image frame i, O_p^i is the pixel intensity value of spatial point p in image frame i, e_i is the exposure time of image frame i, x_p^i is the position of spatial point p in image frame i, L_p is the radiance of spatial point p, and ‖·‖_h is a robust kernel function.
Based on the same inventive concept, the invention further provides a feature-matching-based DSO photometric parameter estimation device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when loaded into the processor, the computer program implements the above feature-matching-based DSO photometric parameter estimation method.
Beneficial effects: the invention discloses a feature-matching-based DSO photometric parameter estimation method. Feature points are first extracted from the current image frame as candidates for the frame's photometric parameter estimation, then matched against the feature points determined in the previous frame; the designed feature matching screening strategy removes mismatches, and the designed feature point activation strategy selects qualifying candidates as the feature points of the current frame, establishing associations between feature points. Finally, based on the corresponding function models, the camera response function parameters and the optical vignetting function parameters are estimated by a nonlinear optimization method. With the proposed method, photometric parameters can be estimated from an arbitrary video sequence, and applying them for photometric calibration improves both the positioning accuracy and the robustness of DSO.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
FIG. 2 is a graph of results of extracting SURF feature points from an image frame;
FIG. 3 is a graph of the results of non-screened feature matching;
FIG. 4 is a graph of feature matching results after screening;
FIG. 5 is a schematic diagram of a feature point activation strategy;
FIG. 6 shows two adjacent frames of the V1_03_difficult dataset;
FIG. 7 is a graph of the positioning results of applying the photometric parameters estimated by the method of the present invention to DSO;
FIG. 8 is a graph of the positioning results of DSO without photometric calibration;
FIG. 9 is a graph of the positioning results of applying the photometric parameters estimated by Bergmann's method to DSO;
FIG. 10 is a graph comparing absolute errors of pose estimates for DSO in three cases;
FIG. 11 is a box plot of the pose estimation errors of DSO in the three cases.
Detailed Description
The invention will be further described with reference to the drawings and the specific examples.
As shown in FIG. 1, the embodiment of the invention discloses a feature-matching-based DSO photometric parameter estimation method, which mainly comprises the following steps:
step 1) extracting characteristic points from the current image frame and calculating corresponding descriptors, wherein the extracted characteristic points are SURF (SpeededUp Robust Features) characteristic points as candidate points of the current frame, and the method adopts the concept of a harr characteristic and an integral image, has better stability under a plurality of images, has high calculation speed, and can enhance the real-time performance of a program. The result of extracting SURF feature points for an image frame is shown in fig. 2. In order to enable the feature matching process to have enough candidate points, the number of feature points extracted from the image frame is judged, and if the number of the extracted feature points is smaller than a preset threshold value, the detection threshold value of the feature points is reduced, and feature point extraction is carried out on the current image frame again. If the current image frame is the first frame, the feature points extracted from the frame are directly used as the feature points of the frame, and then feature matching is carried out on the feature points extracted from the second frame by using descriptors.
Step 2) Match the feature points of the current image frame against those of the previous frame. Mismatches are unavoidable in feature matching, as shown in FIG. 3, so a well-designed feature matching screening strategy is critical. The designed strategy is:
a) First, cross-filter the feature point matching results: the feature points of the two images are matched in both directions, and a pair is accepted as a correct match only if the two points select each other;
b) Then, compute the fundamental matrix with the random sample consensus algorithm (RANSAC) and eliminate mismatches;
c) Finally, compute the distance between the pixel coordinates of matched feature points and eliminate matches whose distance exceeds a preset threshold.
The matches remaining after screening with this strategy are shown in FIG. 4.
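A sketch of the three-stage screening on top of OpenCV, assuming SURF descriptors (floating-point vectors, L2 distance); the pixel-distance gate max_px_dist is an illustrative assumption:

```python
import cv2
import numpy as np

def match_and_screen(desc_prev, desc_cur, kpts_prev, kpts_cur, max_px_dist=100.0):
    """Cross-checked matching, RANSAC on the fundamental matrix, then a
    pixel-distance gate (threshold value is illustrative)."""
    # a) cross filtering: crossCheck=True keeps only mutual nearest matches
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc_prev, desc_cur)

    pts_prev = np.float32([kpts_prev[m.queryIdx].pt for m in matches])
    pts_cur = np.float32([kpts_cur[m.trainIdx].pt for m in matches])

    # b) fundamental matrix via RANSAC; the inlier mask drops mismatches
    _, mask = cv2.findFundamentalMat(pts_prev, pts_cur, cv2.FM_RANSAC)
    if mask is None:  # too few matches for RANSAC to succeed
        return []
    keep = mask.ravel().astype(bool)

    # c) drop matches whose pixel displacement exceeds the preset threshold
    keep &= np.linalg.norm(pts_prev - pts_cur, axis=1) < max_px_dist
    return [m for m, k in zip(matches, keep) if k]
```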
Step 3) Select the qualifying candidate points as the feature points of the current frame. To improve the accuracy of photometric parameter estimation, the feature points of each image frame should be distributed as uniformly as possible. The designed feature point activation strategy is therefore:
a) Take the candidate points matched with the previous frame as feature points of the current frame;
b) Also activate as feature points of the current frame those candidate points whose distance to every matched feature point exceeds a preset threshold.
A schematic of the feature point activation strategy is shown in FIG. 5.
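A minimal sketch of the activation rule, assuming the matched points and the remaining candidates are given as (x, y) pixel coordinates; the spacing threshold min_dist is an illustrative value chosen to spread the points out:

```python
import numpy as np

def activate_features(matched_pts, candidate_pts, min_dist=20.0):
    """Keep every matched point, then activate candidates lying farther than
    min_dist from all points activated so far (threshold illustrative)."""
    active = [np.asarray(p, dtype=float) for p in matched_pts]
    for cand in np.asarray(candidate_pts, dtype=float):
        if not active:
            active.append(cand)
        elif np.linalg.norm(np.stack(active) - cand, axis=1).min() > min_dist:
            active.append(cand)
    return np.stack(active)
```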
Step 4) Using the feature point associations established in the previous three steps, the spatial points corresponding to the feature points form a point set; based on the designed error function model, the camera response function parameters and the optical vignetting function parameters are estimated by a nonlinear optimization method. The camera imaging model adopted in this step is:
O=f(eV(x)L)
where O is the image output intensity, L is the radiance of the point in the environment, V(x) is the optical vignetting function, e is the exposure time, and f(·) is the camera response function.
The empirical model of the camera response function is:

$$f(x) = f_0(x) + \sum_{k=1}^{n} c_k h_k(x)$$

where f_0(x) is the mean response function obtained by principal component analysis, h_k(x) are the basis functions, and c_k are the camera response function parameters to be estimated.
The optical vignetting function model is:

$$V(x) = 1 + \sum_{l=1}^{n} v_l R(x)^{2l}$$

where R(x) is the normalized distance from the pixel position to the center of the image and v_l are the optical vignetting function parameters to be estimated.
Based on the above function models, the error function model of this embodiment is:

$$E = \sum_{p \in P} \sum_{i \in F_p} w_p^i \left\| O_p^i - f\!\left(e_i V\!\left(x_p^i\right) L_p\right) \right\|_h$$

where p is an element of the set P of spatial points corresponding to the selected feature points, F_p is the set of image frames in which spatial point p is observed, w_p^i is the weight of spatial point p in image frame i, O_p^i is the pixel intensity value of spatial point p in image frame i, e_i is the exposure time of image frame i, x_p^i is the position of spatial point p in image frame i, L_p is the radiance of spatial point p, and ‖·‖_h is a robust kernel function; in this embodiment the Huber kernel is used.
The weight w_p^i of a spatial point is determined by the image gradient at the point's projection. A gradient-dependent weighting of the form used in DSO is

$$w_p^i = \frac{c^2}{c^2 + \left\| \nabla O_p^i \right\|_2^2}$$

where ∇O_p^i is the image gradient at the projection of spatial point p in image frame i and c is a constant; points with large gradients receive smaller weights.
The designed photometric parameter estimation and optimization algorithm alternates between two steps.
The idea is as follows. In the first step, the radiance L_p of each spatial point, which is independent of the other three groups of parameters, is fixed to the average gray value of all pixels onto which the point projects, and the Gauss-Newton method is used to solve for the optimal exposure times e_i, response parameters c_k, and vignetting parameters v_l. In each iteration the Jacobian matrix J and the error value SumError are updated according to the increments e_d, c_d, v_d; if the iteration diverges, small random increments e_d, c_d, v_d are generated instead. In the second step, the three groups of parameters estimated in the first step are fixed, and the Gauss-Newton method is used to solve for the optimal radiances L_p; in each iteration the Jacobian matrix K and the error value SumError are updated according to the increment L_d, and if the iteration diverges a small random increment L_d is generated.
To demonstrate the effects and advantages of the proposed method, it was verified experimentally against the open-source direct-method visual odometry system DSO. The experiments use the V1_03_difficult (abbreviated V103) and MH_04_difficult (abbreviated MH04) sequences of the EuRoC dataset, which provides ground-truth camera trajectories but no photometric calibration files. In the V103 sequence the camera moves fast, shakes noticeably, and experiences obvious illumination changes, as shown in FIG. 6; in the MH04 sequence the camera does not shake noticeably during its motion. The experimental scheme is as follows: the photometric parameters estimated by the proposed method and by Bergmann's method are each applied to DSO, and the resulting positioning accuracies are compared with each other and with that of DSO run without photometric calibration.
FIGS. 7, 8, and 9 compare the camera trajectories estimated by DSO on the V103 sequence against the ground-truth trajectory in the three cases. FIG. 7 shows DSO calibrated with the photometric parameters estimated by the proposed method ("match" denotes the proposed algorithm), FIG. 8 shows DSO without photometric calibration ("no_pc"), and FIG. 9 shows DSO calibrated with the photometric parameters estimated by Bergmann's method ("klt"). Comparing the three trajectory plots, the trajectory estimated by DSO in FIG. 7 is clearly closest to the ground truth, indicating that the photometric parameters estimated by the proposed method are more accurate and improve the positioning accuracy of DSO.
The pose estimation accuracy and robustness of DSO in the three cases are analyzed further; the results are shown in FIGS. 10 and 11.
FIG. 10 plots the absolute error between the camera poses estimated by DSO and the ground-truth poses in the three cases. As FIG. 10 shows, the pose estimation error of DSO calibrated with the photometric parameters estimated by the proposed method is markedly smaller than in the other two cases, mostly fluctuating around 0.25 m, whereas the per-frame pose error of DSO calibrated with the parameters estimated by Bergmann's method fluctuates around 0.75 m and that of uncalibrated DSO around 1 m. This further demonstrates that the photometric parameters estimated by the proposed method are accurate and significantly improve the positioning accuracy of DSO.
FIG. 11 shows box plots of the pose estimation errors of DSO in the three cases; box plots mainly summarize the dispersion and distribution density of the data. As FIG. 11 shows, both the median and the quartiles of the pose error of DSO calibrated with the proposed parameters are clearly better than in the other two cases, and the error distribution is more concentrated, mostly near 0.25 m. Compared with uncalibrated DSO, DSO calibrated with the parameters estimated by Bergmann's method reduces the pose error to some extent, but the error distribution is less concentrated and contains more outliers, indicating that those parameters cannot effectively improve the positioning robustness of DSO. This analysis shows that the proposed photometric parameter estimation method is accurate and markedly improves both the positioning accuracy and the robustness of direct-method visual odometry.
Finally, the root mean square error (RMSE) of DSO's pose estimates on the V103 and MH04 sequences is given for the three cases. To exclude chance effects, DSO was run ten times in each case and the averages were taken; the results are summarized in Table 1.
Table 1 comparison of positioning results
As Table 1 shows, the pose estimation error of DSO calibrated with the photometric parameters estimated by the proposed method is clearly better than in the other two cases, with positioning accuracy improved by about 30% on average.
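For reference, the absolute-trajectory RMSE used in such comparisons can be computed as below, assuming the estimated and ground-truth positions are already time-associated and aligned (function name and array shapes are illustrative):

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """RMSE of per-frame position errors over aligned (N, 3) trajectories;
    time association and alignment are assumed done beforehand."""
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```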
Based on the same inventive concept, the embodiment of the invention further provides a feature-matching-based DSO photometric parameter estimation device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when loaded into the processor, the computer program implements the above feature-matching-based DSO photometric parameter estimation method.

Claims (3)

1. A DSO photometric parameter estimation method based on feature matching, characterized by comprising the following steps:
(1) extracting feature points from the current image frame and computing the corresponding descriptors as candidate feature points of the current frame;
(2) matching the feature points of the current image frame against those of the previous frame using the descriptors, and eliminating mismatches with a designed feature matching screening strategy;
(3) selecting, according to the feature matching result, qualifying candidate points as the feature points of the current frame using a designed feature point activation strategy;
(4) cycling steps (1)-(3) to extract feature points from the image frames of the image sequence and establish associations between the feature points, forming a point set from the spatial points corresponding to the feature points, constructing an error function based on a camera imaging model, and estimating the photometric parameters by a nonlinear optimization method;
the feature matching screening strategy designed in the step (2) is as follows:
a) Firstly, cross filtering is carried out on the characteristic point matching result;
b) Then calculating a basic matrix by using a random sampling consistency algorithm, and eliminating mismatching;
c) Finally, calculating the distance between the pixel coordinates of the matched feature points, and eliminating the matching with the distance larger than a preset threshold value;
the feature point activation strategy designed in the step (3) is as follows:
a) Taking the feature point matched with the previous frame as the feature point of the current frame;
b) Taking candidate points with the distance exceeding a preset threshold value from the matched characteristic points as characteristic points of the current frame;
the camera imaging model in the step (4) is as follows:
O=f(eV(x)L)
where O is the image output intensity, L is the radiance at the spatial midpoint, V (x) is the optical vignetting function, e is the exposure time, and f () is the response function of the camera;
the error function model designed based on the camera imaging model is as follows:
wherein P is a certain element of a set P consisting of spatial points corresponding to the selected feature points, F p To be able to observe an image frame of a spatial point p,weight of spatial point p in image frame i, < +.>For pixel intensity values of spatial point p in image frame i, e i For the exposure time of image frame i +.>For the position of spatial point p in image frame i, L p For the radiance of the spatial point p, I h Is a robust kernel function;
the empirical model of the camera response function in the step (4) is as follows:
f in 0 (x) An average response function h obtained by a principal component analysis method k (x) Is a basis function, c k The parameters of the response function of the camera which need to be estimated are;
the optical vignetting function model in the step (4) is as follows:
wherein R (x) is the normalized distance from the pixel position to the center of the imageSeparation, v l Is an optical vignetting function parameter to be estimated.
2. The feature-matching-based DSO photometric parameter estimation method of claim 1, wherein the feature points extracted from the current image frame in step (1) are of the SURF type.
3. A DSO photometric parameter estimation device based on feature matching, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the computer program, when loaded into the processor, implements the feature-matching-based DSO photometric parameter estimation method of claim 1 or 2.
CN201911217648.3A 2019-12-03 2019-12-03 DSO photometric parameter estimation method and device based on feature matching Active CN111144441B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911217648.3A CN111144441B (en) 2019-12-03 2019-12-03 DSO photometric parameter estimation method and device based on feature matching


Publications (2)

Publication Number Publication Date
CN111144441A (en) 2020-05-12
CN111144441B (en) 2023-08-08

Family

ID=70517412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911217648.3A Active CN111144441B (en) 2019-12-03 2019-12-03 DSO photometric parameter estimation method and device based on feature matching

Country Status (1)

Country Link
CN (1) CN111144441B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111915532B * 2020-08-07 2022-02-11 北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.) Image tracking method and device, electronic equipment and computer readable medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573500A (en) * 2018-04-24 2018-09-25 西安交通大学 A kind of method of direct estimation in-vehicle camera kinematic parameter
CN109459778A (en) * 2018-10-31 2019-03-12 东南大学 Code pseudorange based on robust variance component estimation/Doppler combines speed-measuring method and its application
CN109974743A (en) * 2019-03-14 2019-07-05 中山大学 A kind of RGB-D visual odometry optimized based on GMS characteristic matching and sliding window pose figure
CN110108258A (en) * 2019-04-09 2019-08-09 南京航空航天大学 A kind of monocular vision odometer localization method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
谭涌 et al. Monocular direct-method visual odometry fusing photometric parameter estimation (融合光度参数估计的单目直接法视觉里程计). 《测绘工程》 (Engineering of Surveying and Mapping), 2021, full text. *

Also Published As

Publication number Publication date
CN111144441A (en) 2020-05-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant