CN111144441A - DSO photometric parameter estimation method and device based on feature matching - Google Patents

DSO photometric parameter estimation method and device based on feature matching

Info

Publication number
CN111144441A
Authority
CN
China
Prior art keywords
feature
dso
points
feature points
parameter estimation
Prior art date
Legal status
Granted
Application number
CN201911217648.3A
Other languages
Chinese (zh)
Other versions
CN111144441B (en)
Inventor
潘树国
谭涌
高旺
盛超
章辉
喻国荣
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN201911217648.3A
Publication of CN111144441A
Application granted
Publication of CN111144441B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Abstract

The invention discloses a DSO photometric parameter estimation method and device based on feature matching. The method first extracts feature points from the current image frame as candidate feature points of that frame, then matches them against the feature points determined in the previous frame, eliminates mismatches with a designed feature-matching screening strategy, and selects qualified candidate points as the feature points of the current frame with a designed feature-point activation strategy, thereby establishing associations between feature points. Finally, based on a camera imaging function model, an error function is constructed and the photometric parameters are estimated with a nonlinear optimization method. With the proposed method, photometric parameters can be estimated from an arbitrary video sequence, and performing photometric calibration with the estimated parameters improves the positioning accuracy of the DSO by about 30%.

Description

DSO photometric parameter estimation method and device based on feature matching
Technical Field
The present invention relates to a visual odometry (VO) positioning method, and more particularly to a photometric calibration method and apparatus for the direct-method visual odometry DSO (Direct Sparse Odometry), which is based on the grayscale-invariance assumption.
Background
With the rapid development of science and technology, visual positioning technology plays a vital role in fields such as mobile robots, autonomous driving, virtual reality, and augmented reality. As one of the visual positioning technologies, visual odometry has developed rapidly in recent years and is widely used. Direct-method visual odometry estimates camera motion from pixel intensity information; because it does not need to compute feature points and descriptors, can use more pixel information, and is suited to low-texture environments, it has gradually become mainstream and an important branch of visual odometry algorithms. Its drawback is equally obvious: sensitivity to illumination change. When the scene undergoes large illumination changes, the positioning accuracy and robustness of direct-method visual odometry drop noticeably.
In 2016, J. Engel et al. proposed a camera photometric calibration method and produced the TUM-Mono data set with camera photometric calibration files, and they also proposed the direct-method visual odometry DSO, which introduces the concept of photometric calibration: image intensities are converted by loading the camera's photometric calibration file before pose estimation, which greatly improves positioning accuracy and robustness. However, the camera photometric calibration method proposed by Engel et al. requires access to the camera itself, which is generally unavailable in visual odometry research, where usually only a video sequence captured by the camera is available. Research on methods for estimating photometric parameters from an arbitrary video sequence is therefore of great significance to direct-method visual odometry.
In 2017, Bergman et al. proposed an online photometric parameter estimation method for auto-exposure video sequences, which associates pixels by KLT optical-flow tracking and then estimates the corresponding photometric parameters with a nonlinear optimization method. However, optical flow is only suitable when camera motion and illumination changes are small; when the camera moves aggressively or the illumination changes strongly, the tracking accuracy and robustness are low, and the estimated photometric parameters become inaccurate.
In summary, none of the existing photometric parameter estimation methods fully meets the application requirements of direct-method visual odometry. Research on a more accurate and more robust photometric parameter estimation method for arbitrary video sequences is therefore of great significance.
Disclosure of Invention
Purpose of the invention: aiming at the low positioning accuracy and poor robustness of direct-method visual odometry under large illumination changes, a photometric parameter estimation method and device based on feature matching are provided; performing photometric calibration with the estimated photometric parameters can effectively improve the positioning accuracy and robustness of direct-method visual odometry.
Technical solution: to achieve the above purpose, the invention adopts the following technical solution:
a DSO photometric parameter estimation method based on feature matching comprises the following steps:
(1) extracting feature points from the current image frame and computing the corresponding descriptors, to serve as candidate feature points of the current frame;
(2) matching the feature points of the current image frame against those of the previous frame using the descriptors, and eliminating mismatches with a designed feature-matching screening strategy;
(3) according to the feature matching result, selecting candidate points that satisfy the conditions as feature points of the current frame with a designed feature-point activation strategy;
(4) repeating steps (1) to (3) to extract the feature points of the image frames in the image sequence, establishing associations between the feature points, forming a point set from the spatial points corresponding to the feature points, and estimating the photometric parameters with a nonlinear optimization method.
In a preferred embodiment, the feature points extracted from the image frames in step (1) are SURF (Speeded-Up Robust Features) feature points; SURF adopts Haar features and the integral image, is stable across multiple images, and is fast to compute, which improves the real-time performance of the program. To ensure that the feature matching process has enough candidate points, the number of feature points extracted from an image frame is checked; if it is smaller than a preset threshold, the feature-point detection threshold is lowered and feature points are extracted from the current image frame again.
In a preferred embodiment, the feature matching screening strategy designed in step (2) is:
a) first, cross filtering is applied to the feature-point matching results;
b) then, the fundamental matrix is computed with the random sample consensus (RANSAC) algorithm, and mismatches are eliminated;
c) finally, the pixel-coordinate distance of each matched feature-point pair is computed, and matches whose distance exceeds a preset threshold are eliminated.
In a preferred embodiment, the feature point activation strategy designed in step (3) is:
a) taking the feature points matched with the previous frame as feature points of the current frame;
b) taking candidate points whose distance from the matched feature points exceeds a preset threshold as feature points of the current frame.
In a preferred embodiment, the camera imaging model in step (4) is:
O=f(eV(x)L)
where O is the image output intensity, L is the radiance of the spatial point, V(x) is the optical vignetting function, e is the exposure time, and f(·) is the camera response function.
The empirical model of the camera response function is:
f(x) = f_0(x) + Σ_k c_k·h_k(x)
where f_0(x) is the mean response function obtained by principal component analysis, h_k(x) are the basis functions, and c_k are the camera response function parameters to be estimated.
The optical vignetting function model is:
V(x) = 1 + Σ_l v_l·R(x)^(2l)
where R(x) is the normalized distance of the pixel position from the image center, and v_l are the optical vignetting function parameters to be estimated.
The error function model constructed based on the above function is:
E = Σ_{p∈P} Σ_{i∈F_p} w_p^i · || O_p^i - f(e_i·V(x_p^i)·L_p) ||_h
where p is an element of the set P formed by the spatial points corresponding to the selected feature points, F_p is the set of image frames in which spatial point p is observed, w_p^i is the weight of spatial point p in image frame i, O_p^i is the pixel intensity value of spatial point p in image frame i, e_i is the exposure time of image frame i, x_p^i is the position of spatial point p in image frame i, L_p is the radiance of spatial point p, and ||·||_h is a robust kernel function.
Based on the same inventive concept, the invention provides a DSO photometric parameter estimation device based on feature matching, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when loaded into the processor, the computer program implements the above DSO photometric parameter estimation method based on feature matching.
Beneficial effects: the invention discloses a DSO photometric parameter estimation method based on feature matching. Feature points are first extracted from the current image frame as candidate points for the photometric parameter estimation of that frame; they are then matched against the feature points determined in the previous frame, mismatches are eliminated with the designed feature-matching screening strategy, and qualified candidate points are selected as feature points of the current frame with the designed feature-point activation strategy, thereby establishing associations between feature points. Finally, based on the corresponding function models, the camera response function parameters and the optical vignetting function parameters are estimated with a nonlinear optimization method. With the proposed method, photometric parameters can be estimated from an arbitrary video sequence, and performing photometric calibration with the estimated parameters improves the positioning accuracy and robustness of the DSO.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
FIG. 2 is a graph of image frame SURF feature point extraction results;
FIG. 3 is a graph of the results of an unscreened feature match;
FIG. 4 is a graph of feature matching results after screening;
FIG. 5 is a schematic diagram of a feature point activation strategy;
FIG. 6 shows two adjacent frames in the V1_03_difficult data set;
FIG. 7 is a graph of the positioning results of the photometric parameters estimated by the method of the present invention applied to a DSO;
FIG. 8 is a graph of positioning results for a DSO without photometric calibration;
FIG. 9 is a graph of the results of the application of photometric parameters estimated by the method proposed by Bergman to the positioning of DSO;
FIG. 10 is a plot of the absolute pose estimation error of the DSO in the three cases;
FIG. 11 is a box plot of the pose estimation errors of the DSO in the three cases.
Detailed Description
The invention will be further described with reference to the following drawings and specific embodiments.
As shown in fig. 1, the embodiment of the present invention discloses a method for estimating DSO photometric parameters based on feature matching, which mainly comprises the following steps:
Step 1) extracts feature points from the current image frame and computes the corresponding descriptors, which serve as candidate points of the current frame. The extracted feature points are SURF (Speeded-Up Robust Features) feature points; SURF adopts Haar features and the integral image, is stable across multiple images, and is fast to compute, which improves the real-time performance of the program. The result of extracting SURF feature points from an image frame is shown in fig. 2. To ensure that the feature matching process has enough candidate points, the number of feature points extracted from the image frame is checked; if it is smaller than a preset threshold, the feature-point detection threshold is lowered and feature points are extracted from the current image frame again. If the current image frame is the first frame, the feature points extracted from this frame are directly taken as its feature points and are then matched, via their descriptors, against the feature points extracted from the following second frame.
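As an illustration of step 1, a minimal Python sketch is given below, assuming OpenCV with the contrib modules (cv2.xfeatures2d) is available; the threshold values, the minimum point count, and the halving schedule are assumptions of the sketch, not values from the patent.

```python
# Minimal sketch of step 1 (assumed values, not the patented implementation):
# extract SURF feature points and descriptors, and lower the detection
# threshold and re-extract if too few points are found.
import cv2

def extract_candidates(gray, min_points=300, hessian_threshold=400.0, floor=50.0):
    """Return SURF keypoints and descriptors for a grayscale image."""
    while True:
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
        keypoints, descriptors = surf.detectAndCompute(gray, None)
        if len(keypoints) >= min_points or hessian_threshold <= floor:
            return keypoints, descriptors
        hessian_threshold *= 0.5  # lower the detection threshold and try again
```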
Step 2) matches the feature points of the current image frame against those of the previous frame. Mismatches inevitably occur in the feature matching process, as shown in fig. 3, so designing a reasonable feature-matching screening strategy is crucial. The designed feature-matching screening strategy is:
a) first, cross filtering is applied to the matching results: the feature points of the two images are matched in both directions, and a match is considered correct only when the two directions agree;
b) then, the fundamental matrix is computed with the random sample consensus (RANSAC) algorithm, and mismatches are eliminated;
c) finally, the pixel-coordinate distance of each matched feature-point pair is computed, and matches whose distance exceeds a certain threshold are eliminated.
The matches after applying the feature matching screening strategy screening are shown in fig. 4.
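A minimal sketch of the three-stage screening strategy follows, again using OpenCV; the RANSAC reprojection threshold and the pixel-distance gate are illustrative assumptions.

```python
# Sketch of the match screening: a) cross-checked matching, b) RANSAC
# fundamental-matrix filtering, c) pixel-distance gating. Thresholds are assumed.
import cv2
import numpy as np

def screen_matches(kp_prev, des_prev, kp_cur, des_cur,
                   ransac_thresh=1.0, max_pixel_dist=80.0):
    # a) cross filtering: keep only matches that agree in both directions
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_prev, des_cur)

    pts_prev = np.float32([kp_prev[m.queryIdx].pt for m in matches])
    pts_cur = np.float32([kp_cur[m.trainIdx].pt for m in matches])

    # b) estimate the fundamental matrix with RANSAC and reject outliers
    _, mask = cv2.findFundamentalMat(pts_prev, pts_cur,
                                     cv2.FM_RANSAC, ransac_thresh, 0.99)
    if mask is None:
        return []
    inliers = mask.ravel().astype(bool)

    # c) reject matches whose pixel displacement exceeds the preset threshold
    dist = np.linalg.norm(pts_prev - pts_cur, axis=1)
    keep = inliers & (dist < max_pixel_dist)
    return [m for m, k in zip(matches, keep) if k]
```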
Step 3) selects qualified candidate points as the feature points of the current frame. To improve the accuracy of photometric parameter estimation, the feature points of an image frame should be distributed as uniformly as possible. The feature point activation strategy is therefore designed as follows:
a) taking the feature points matched with the previous frame as feature points of the current frame;
b) taking candidate points whose distance from the matched feature points exceeds a certain threshold as feature points of the current frame.
A schematic diagram of the feature point activation strategy is shown in fig. 5.
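The activation strategy of fig. 5 can be sketched as follows; the distance threshold and the reading of strategy b) (distance measured to the matched points) are assumptions of the sketch.

```python
# Sketch of the feature-point activation strategy: matched points become
# feature points, and an unmatched candidate is activated only if it lies
# farther than a threshold from every matched point (assumed interpretation).
import numpy as np

def activate_features(candidate_pts, matched_idx, min_dist=20.0):
    """candidate_pts: (N, 2) pixel coordinates; matched_idx: indices matched to the previous frame."""
    matched_idx = set(matched_idx)
    matched_pts = [candidate_pts[i] for i in matched_idx]   # a) matched points
    active = list(matched_pts)
    for j, pt in enumerate(candidate_pts):
        if j in matched_idx:
            continue
        # b) activate candidates that are well separated from the matched points
        if all(np.linalg.norm(pt - m) > min_dist for m in matched_pts):
            active.append(pt)
    return np.array(active)
```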
Step 4) uses the feature-point associations established in the previous three steps to form a point set from the spatial points corresponding to the feature points, and estimates the camera response function parameters and the optical vignetting function parameters with a nonlinear optimization method based on the designed error function model. The camera imaging model adopted in this step is:
O=f(eV(x)L)
where O is the image output intensity, L is the radiance of the point in the environment, V(x) is the optical vignetting function, e is the exposure time, and f(·) is the camera response function.
The empirical model of the camera response function is:
f(x) = f_0(x) + Σ_k c_k·h_k(x)
where f_0(x) is the mean response function obtained by principal component analysis, h_k(x) are the basis functions, and c_k are the camera response function parameters to be estimated.
The optical vignetting function model is:
V(x) = 1 + Σ_l v_l·R(x)^(2l)
where R(x) is the normalized distance of the pixel position from the image center, and v_l are the optical vignetting function parameters to be estimated.
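For concreteness, a small Python sketch of the forward imaging model O = f(e·V(x)·L) follows; the mean response f_0 and the basis functions h_k are assumed to be supplied as callables (e.g. from a principal component analysis of measured response curves), which is an assumption of the sketch rather than part of the patent text.

```python
# Sketch of the camera imaging model: polynomial vignetting and PCA-based
# response applied to the product of exposure time, vignetting and radiance.
def vignetting(r_norm, v):
    """V(x) = 1 + sum_l v_l * R(x)^(2l); r_norm is the normalized distance to the image center."""
    return 1.0 + sum(v_l * r_norm ** (2 * (l + 1)) for l, v_l in enumerate(v))

def response(x, f0, h_basis, c):
    """f(x) = f0(x) + sum_k c_k * h_k(x)."""
    return f0(x) + sum(c_k * h_k(x) for c_k, h_k in zip(c, h_basis))

def predicted_intensity(L, exposure, r_norm, v, f0, h_basis, c):
    """Predicted pixel output O for a point of radiance L seen at normalized radius r_norm."""
    return response(exposure * vignetting(r_norm, v) * L, f0, h_basis, c)
```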
Based on the function model, the error function model of this embodiment is:
E = Σ_{p∈P} Σ_{i∈F_p} w_p^i · || O_p^i - f(e_i·V(x_p^i)·L_p) ||_h
where p is an element of the set P formed by the spatial points corresponding to the selected feature points, F_p is the set of image frames in which spatial point p is observed, w_p^i is the weight of spatial point p in image frame i, O_p^i is the pixel intensity value of spatial point p in image frame i, e_i is the exposure time of image frame i, x_p^i is the position of spatial point p in image frame i, L_p is the radiance of spatial point p, and ||·||_h is a robust kernel function; a Huber kernel is used in this embodiment.
The weight w_p^i of a spatial point is determined by the gradient at the projection of that point: it is computed from ∇I_p^i, the image gradient at the projection of spatial point p in image frame i (the weighting formula itself is given as an image in the original publication).
The pseudo code for the designed photometric parameter estimation and optimization algorithm is as follows:
(The pseudocode is given as an image in the original publication.) Its idea is as follows. First, the radiance L_p of the spatial points is estimated independently of the other three kinds of parameters: its value is fixed to the mean gray value of all the pixels obtained by projecting the point, and the Gauss-Newton method is then used to solve for the optimal values of the three kinds of parameters e_i, c_k and v_l. In each iteration, the Jacobian matrix J and the error function value Sumerror are updated according to the increments e_d, c_d, v_d; when the iteration diverges, small random increments e_d, c_d, v_d are generated. The three parameters estimated in the first step are then fixed, and the Gauss-Newton method is used to solve for the optimal radiances L_p of the spatial points. In each iteration, the Jacobian matrix K and the error function value Sumerror are updated according to the corresponding increments of L_p; when the iteration diverges, small random increments are generated.
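A compact sketch of this alternating scheme is given below, with scipy.optimize.least_squares standing in for the hand-written Gauss-Newton solver with random restarts; the observation layout (point id, frame id, normalized radius, measured intensity), the Huber scale, and the reuse of predicted_intensity from the earlier sketch are assumptions, not the patented implementation.

```python
# Alternating estimation sketch: (i) fix the radiances L_p and refine exposure
# times e_i, response coefficients c_k and vignetting coefficients v_l;
# (ii) fix (e, c, v) and refine L_p. Robustified with a Huber loss.
import numpy as np
from scipy.optimize import least_squares

def estimate_photometric_params(obs, n_frames, n_points, f0, h_basis,
                                n_c=4, n_v=3, outer_iters=5):
    """obs: iterable of (point_id, frame_id, r_norm, intensity) observations."""
    pid = np.array([o[0] for o in obs])
    fid = np.array([o[1] for o in obs])
    rad = np.array([o[2] for o in obs])
    O = np.array([o[3] for o in obs], dtype=float)

    # Initialise L_p to the mean observed gray value of each point, exposures to 1,
    # response and vignetting coefficients to 0.
    L = np.array([O[pid == p].mean() for p in range(n_points)])
    e = np.ones(n_frames)
    c = np.zeros(n_c)
    v = np.zeros(n_v)

    def predict(L_, e_, c_, v_):
        return np.array([predicted_intensity(L_[p], e_[f], r, v_, f0, h_basis, c_)
                         for p, f, r in zip(pid, fid, rad)])

    def res_ecv(theta, L_fixed):
        e_, c_, v_ = np.split(theta, [n_frames, n_frames + n_c])
        return predict(L_fixed, e_, c_, v_) - O

    def res_L(L_, ecv):
        e_, c_, v_ = ecv
        return predict(L_, e_, c_, v_) - O

    for _ in range(outer_iters):
        # Step 1: fix L_p, solve for exposure, response and vignetting parameters.
        sol = least_squares(res_ecv, np.concatenate([e, c, v]), args=(L,),
                            loss='huber', f_scale=5.0)
        e, c, v = np.split(sol.x, [n_frames, n_frames + n_c])
        # Step 2: fix (e, c, v), solve for the radiances L_p.
        sol = least_squares(res_L, L, args=((e, c, v),), loss='huber', f_scale=5.0)
        L = sol.x
    # Note: the overall scale between exposures and radiances is ambiguous;
    # in practice one exposure is usually fixed to remove this ambiguity.
    return e, c, v, L
```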
To demonstrate the effectiveness and advantages of the method of the present invention, experimental verification is performed below using the open-source direct-method visual odometry DSO. The V1_03_difficult (abbreviated V103) and MH_04_difficult (abbreviated MH04) sequences of the EuRoC data set are used; the data set provides the ground-truth camera trajectory but no photometric calibration files. In the V103 sequence the camera moves fast, shakes noticeably, and the illumination changes significantly, as shown in FIG. 6; in the MH04 sequence the camera does not shake noticeably during motion. The experimental scheme is as follows: the photometric parameters estimated by the method of the present invention and by the method proposed by Bergman are each applied to the DSO, the positioning accuracy of the DSO in the two cases is compared, and both are also compared with the positioning accuracy of the DSO without photometric calibration.
Figs. 7, 8 and 9 compare the camera trajectories estimated by the DSO running the V103 sequence in the three cases with the ground-truth camera trajectory. Fig. 7 shows the positioning result of the DSO using the photometric parameters estimated by the method of the present invention (match denotes the proposed algorithm), fig. 8 shows the positioning result of the DSO without photometric calibration (no_pc denotes no photometric calibration), and fig. 9 shows the positioning result of the DSO using the photometric parameters estimated by the method proposed by Bergman (klt denotes Bergman's algorithm). Comparing figs. 7, 8 and 9, it is clear that the camera trajectory estimated by the DSO in fig. 7 is closest to the ground-truth trajectory, which indicates that the photometric parameters estimated by the method of the present invention are more accurate and can improve the positioning accuracy of the DSO.
The pose estimation accuracy and robustness of the DSO in the three cases are analyzed further; the test results are shown in fig. 10 and fig. 11.
Fig. 10 plots the absolute error between the camera poses estimated by the DSO and the ground-truth poses in the three cases. As is apparent from fig. 10, the pose estimation error of the DSO photometrically calibrated with the parameters estimated by the method of the present invention is significantly smaller than in the other two cases, mostly fluctuating around 0.25 m, whereas the error of the DSO calibrated with the parameters estimated by Bergman's method fluctuates around 0.75 m, and the error of the DSO without photometric calibration fluctuates around 1 m. This further shows that the photometric parameters estimated by the method of the present invention are accurate and can significantly improve the positioning accuracy of the DSO.
Fig. 11 shows a box plot of the pose estimation errors of the DSO in the three cases; a box plot mainly serves to analyze the dispersion and distribution density of the data. As can be seen from fig. 11, the median and quartiles of the pose estimation errors of the DSO photometrically calibrated with the parameters estimated by the method of the present invention are clearly better than in the other two cases, and the errors are more concentrated, mainly around 0.25 m. Compared with the DSO without photometric calibration, the median and quartiles of the pose estimation errors of the DSO calibrated with the parameters estimated by Bergman's method are reduced to some extent, but the error distribution is not concentrated enough and contains more outliers, indicating that calibration with the parameters estimated by Bergman's method cannot effectively improve the positioning robustness of the DSO. The above analysis shows that the photometric parameter estimation method proposed by the present invention is accurate and can significantly improve the positioning accuracy and robustness of direct-method visual odometry.
Finally, the pose estimation root mean square error (RMSE) of the DSO running the V103 and MH04 sequences is given for the three cases. To exclude chance effects, the DSO was run ten times in each case and the results were averaged; they are summarized in Table 1.
TABLE 1 comparison of positioning results
(Table 1 is reproduced as an image in the original publication.)
As can be seen from Table 1, the pose estimation error of the DSO photometrically calibrated with the parameters estimated by the method of the present invention is clearly better than in the other two cases, and the positioning accuracy is improved by about 30% on average.
Based on the same inventive concept, an embodiment of the present invention provides a DSO photometric parameter estimation device based on feature matching, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when loaded into the processor, the computer program implements the above DSO photometric parameter estimation method based on feature matching.

Claims (8)

1. A DSO photometric parameter estimation method based on feature matching is characterized by comprising the following steps:
(1) extracting feature points from the current image frame and computing the corresponding descriptors, to serve as candidate feature points of the current frame;
(2) matching the feature points of the current image frame against those of the previous frame using the descriptors, and eliminating mismatches with a designed feature-matching screening strategy;
(3) according to the feature matching result, selecting candidate points that satisfy the conditions as feature points of the current frame with a designed feature-point activation strategy;
(4) repeating steps (1) to (3) to extract the feature points of the image frames in the image sequence, establishing associations between the feature points, forming a point set from the spatial points corresponding to the feature points, constructing an error function based on a camera imaging model, and estimating the photometric parameters with a nonlinear optimization method.
2. The feature matching-based DSO photometric parameter estimation method according to claim 1, wherein in step (1) the feature points extracted from the current image frame are SURF feature points.
3. The feature matching based DSO photometric parameter estimation method according to claim 1 wherein the feature matching screening strategy designed in step (2) is:
a) firstly, cross filtering is carried out on the matching result of the feature points;
b) then, computing the fundamental matrix with a random sample consensus algorithm, and eliminating mismatches;
c) and finally, calculating the distance of the pixel coordinates of the matched characteristic points, and eliminating the matching of which the distance is greater than a preset threshold value.
4. The feature matching-based DSO photometric parameter estimation method according to claim 1 wherein the feature point activation strategy designed in step (3) is:
a) taking the feature points matched with the previous frame as the feature points of the current frame;
b) taking candidate points whose distance from the matched feature points exceeds a preset threshold as feature points of the current frame.
5. The feature matching based DSO photometric parameter estimation method according to claim 1 wherein the camera imaging model in step (4) is:
O=f(eV(x)L)
where O is the image output intensity, L is the radiance of the spatial point, V(x) is the optical vignetting function, e is the exposure time, and f(·) is the camera response function;
the error function model designed based on the camera imaging model is:
E = Σ_{p∈P} Σ_{i∈F_p} w_p^i · || O_p^i - f(e_i·V(x_p^i)·L_p) ||_h
where p is an element of the set P formed by the spatial points corresponding to the selected feature points, F_p is the set of image frames in which spatial point p is observed, w_p^i is the weight of spatial point p in image frame i, O_p^i is the pixel intensity value of spatial point p in image frame i, e_i is the exposure time of image frame i, x_p^i is the position of spatial point p in image frame i, L_p is the radiance of spatial point p, and ||·||_h is a robust kernel function.
6. The feature matching based DSO photometric parameter estimation method according to claim 5 wherein the empirical model of the camera response function in step (4) is:
f(x) = f_0(x) + Σ_k c_k·h_k(x)
where f_0(x) is the mean response function obtained by principal component analysis, h_k(x) are the basis functions, and c_k are the camera response function parameters to be estimated.
7. The feature matching based DSO photometric parameter estimation method according to claim 5 wherein in step (4) the optical vignetting function model is:
V(x) = 1 + Σ_l v_l·R(x)^(2l)
where R(x) is the normalized distance of the pixel position from the image center, and v_l are the optical vignetting function parameters to be estimated.
8. A feature matching based DSO photometric parameter estimation device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the computer program when loaded into the processor implements the feature matching based DSO photometric parameter estimation method according to any one of claims 1-7.
CN201911217648.3A 2019-12-03 2019-12-03 DSO photometric parameter estimation method and device based on feature matching Active CN111144441B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911217648.3A CN111144441B (en) 2019-12-03 2019-12-03 DSO photometric parameter estimation method and device based on feature matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911217648.3A CN111144441B (en) 2019-12-03 2019-12-03 DSO photometric parameter estimation method and device based on feature matching

Publications (2)

Publication Number Publication Date
CN111144441A true CN111144441A (en) 2020-05-12
CN111144441B CN111144441B (en) 2023-08-08

Family

ID=70517412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911217648.3A Active CN111144441B (en) 2019-12-03 2019-12-03 DSO photometric parameter estimation method and device based on feature matching

Country Status (1)

Country Link
CN (1) CN111144441B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111915532A (en) * 2020-08-07 2020-11-10 北京字节跳动网络技术有限公司 Image tracking method and device, electronic equipment and computer readable medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573500A (en) * 2018-04-24 2018-09-25 西安交通大学 A kind of method of direct estimation in-vehicle camera kinematic parameter
CN109459778A (en) * 2018-10-31 2019-03-12 东南大学 Code pseudorange based on robust variance component estimation/Doppler combines speed-measuring method and its application
CN109974743A (en) * 2019-03-14 2019-07-05 中山大学 A kind of RGB-D visual odometry optimized based on GMS characteristic matching and sliding window pose figure
CN110108258A (en) * 2019-04-09 2019-08-09 南京航空航天大学 A kind of monocular vision odometer localization method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573500A (en) * 2018-04-24 2018-09-25 西安交通大学 A kind of method of direct estimation in-vehicle camera kinematic parameter
CN109459778A (en) * 2018-10-31 2019-03-12 东南大学 Code pseudorange based on robust variance component estimation/Doppler combines speed-measuring method and its application
CN109974743A (en) * 2019-03-14 2019-07-05 中山大学 A kind of RGB-D visual odometry optimized based on GMS characteristic matching and sliding window pose figure
CN110108258A (en) * 2019-04-09 2019-08-09 南京航空航天大学 A kind of monocular vision odometer localization method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
谭涌 et al.: "Monocular direct-method visual odometry integrating photometric parameter estimation" (in Chinese) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111915532A (en) * 2020-08-07 2020-11-10 北京字节跳动网络技术有限公司 Image tracking method and device, electronic equipment and computer readable medium
CN111915532B (en) * 2020-08-07 2022-02-11 北京字节跳动网络技术有限公司 Image tracking method and device, electronic equipment and computer readable medium

Also Published As

Publication number Publication date
CN111144441B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
JP7106665B2 (en) MONOCULAR DEPTH ESTIMATION METHOD AND DEVICE, DEVICE AND STORAGE MEDIUM THEREOF
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN108780508B (en) System and method for normalizing images
US7756296B2 (en) Method for tracking objects in videos using forward and backward tracking
CN110070580B (en) Local key frame matching-based SLAM quick relocation method and image processing device
CN111383252B (en) Multi-camera target tracking method, system, device and storage medium
EP3798975B1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
WO2011048302A1 (en) Method, computer program, and device for hybrid tracking of real-time representations of objects in image sequence
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
WO2021108626A1 (en) System and method for correspondence map determination
Zhu et al. Photometric transfer for direct visual odometry
CN113379789B (en) Moving target tracking method in complex environment
CN113361329B (en) Robust single-target tracking method based on example feature perception
CN111144441A (en) DSO luminosity parameter estimation method and device based on feature matching
Istenic et al. Mission-time 3D reconstruction with quality estimation
CN116597246A (en) Model training method, target detection method, electronic device and storage medium
CN110751163A (en) Target positioning method and device, computer readable storage medium and electronic equipment
Halperin et al. Clear Skies Ahead: Towards Real‐Time Automatic Sky Replacement in Video
CN115170621A (en) Target tracking method and system under dynamic background based on relevant filtering framework
CN114757984A (en) Scene depth estimation method and device of light field camera
CN111524161B (en) Method and device for extracting track
CN114693986A (en) Training method of active learning model, image processing method and device
CN114677444B (en) Optimized visual SLAM method
CN109886985B (en) Image accurate segmentation method fusing deep learning network and watershed algorithm
Klenk et al. Deep Event Visual Odometry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant