CN115930988A - Visual odometer method, device, equipment and storage medium - Google Patents
- Publication number
- CN115930988A (application number CN202211541198.5A)
- Authority
- CN
- China
- Prior art keywords
- line segment
- gradient
- screening
- model
- sampling points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Y02T10/40 — Engine management systems (hierarchy: Y — general tagging of new technological developments and cross-sectional technologies; Y02 — technologies or applications for mitigation or adaptation against climate change; Y02T — climate change mitigation technologies related to transportation; Y02T10/00 — road transport of goods or passengers; Y02T10/10 — internal combustion engine [ICE] based vehicles)
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a visual odometry method, apparatus, device and storage medium. The method comprises: extracting a plurality of line segments from an image to be processed using the LSD algorithm; screening those line segments through a quadtree homogenization screening method, based on a gradient-strength scoring principle, to obtain a plurality of strong-gradient line segments; and uniformly sampling each strong-gradient line segment to obtain a plurality of sampling points, then back-projecting the sampling points to obtain a set of spatial points whose covariance is computed. The method applies quadtree homogenization to the line segments detected by the LSD algorithm, which resolves the surge in computation caused by excessive line-segment constraints in structured scenes. During homogenization, segments providing strong linear constraints are selected for the point-line model constructed by the algorithm by computing the mean gradient strength of the pixels each segment contains, improving the robustness gained from introducing the linear constraints.
Description
Technical Field
The invention relates to the technical field of robot localization and navigation, and in particular to a visual odometry method, apparatus, device and storage medium.
Background
Simultaneous Localization and Mapping (SLAM) is one of the core technologies behind mobile robots, autonomous driving, virtual/augmented reality, and other active research directions. SLAM systems can generally be divided into visual SLAM, where a camera is the primary sensor, and laser SLAM, where a lidar is the primary sensor. A lidar can directly acquire high-precision spatial point-cloud information, but it lacks environmental texture, and high-precision lidars are expensive. In contrast, visual cameras are low-cost, low-power, easy to integrate, and capture rich image texture.
A visual SLAM system generally comprises the following modules: a front-end visual odometer, back-end optimization, loop closure detection, and mapping. Visual odometry can solve for the incremental motion of the camera directly from adjacent images, and is the most critical link in visual SLAM.
Visual odometry typically takes one of two forms, the indirect method and the direct method. The indirect method extracts image feature points, computes feature descriptors, and performs feature matching between images to complete data association, thereby constructing a geometric reprojection error model and solving for the incremental visual motion. The direct method compares the pixel intensity difference between two images directly, constructing the visual projection and photometric residual model under the photometric-invariance assumption and solving for the incremental visual motion.
The indirect method generally consumes more computing resources, and it struggles in weak-texture environments where image features are difficult to extract. In contrast, the direct method skips feature computation, can still solve for the camera's incremental motion in weak-texture environments, and the photometric information it uses can be fused with edge-detection techniques, so that the visual odometry solution is completed more robustly.
DSO (Direct Sparse Odometry) is a visual odometry scheme based on the sparse direct method. It achieves precision equal to or even higher than traditional indirect methods at roughly five times their processing speed, and is currently one of the mainstream visual SLAM schemes. DSO comprises a front-end tracking part and a back-end optimization part: the front end completes system initialization and tracking based on the direct method, while the back end performs depth filtering on image points based on the front-end tracking result and constructs a window of constraints to optimize the system state variables. Compared with traditional direct-method schemes, DSO introduces a photometric calibration model, which largely removes the photometric effects caused by illumination changes and lens attenuation, and thereby markedly improves the robustness of the photometric error model constructed by the direct method.
As a sparse direct-method visual odometer, DSO solves its model very efficiently. However, unlike the feature points used by indirect methods, the direct method uses no environmental structure information at all: the photometric error model is solved purely under the strong photometric-invariance assumption, so error accumulation is unavoidable even with the assistance of the photometric calibration model.
Therefore, how to overcome the insufficient robustness of simultaneous localization and mapping (SLAM) systems that solve for incremental visual motion with the direct method is a technical problem urgently awaiting a solution from those skilled in the art.
Disclosure of Invention
In view of the above, the present invention provides a visual odometer method, apparatus, device and storage medium to overcome, or at least partially solve, the above problems. The method aims to perform real-time localization for a robot equipped with a monocular camera, and to improve the robustness of monocular visual odometry in structured scenes.
The invention provides the following scheme:
a visual odometry method comprising:
extracting a plurality of line segments contained in the image to be processed using the LSD algorithm;
screening the plurality of line segments through a quadtree homogenization screening method, based on a gradient-strength scoring principle, to obtain a plurality of strong-gradient line segments;
uniformly sampling each strong-gradient line segment to obtain a plurality of sampling points, and back-projecting the plurality of sampling points to obtain a spatial point set, so as to compute the covariance of the spatial point set;
judging whether the covariance elements of the spatial point set corresponding to the sampling points contained in each strong-gradient line segment satisfy a preset condition, and fitting the spatial point sets corresponding to the sampling points of all strong-gradient line segments satisfying the preset condition through a linear fitting technique to obtain a spatial line segment model;
constructing a photometric error based on a photometric-invariance model, constructing a point-line error based on a linear constraint model, and performing the odometry solution on a target error model, wherein the target error model comprises a photometric error constraint, a first collinearity constraint and a second collinearity constraint.
Preferably: removing image photometric distortion from the image to be processed to obtain a target image; and extracting a plurality of line segments contained in the target image using the LSD algorithm.
Preferably: the gradient-strength scoring principle comprises that the average gradient strength of a line segment satisfies a preset strength threshold.
Preferably: the quadtree homogenization screening method comprises the following steps:
let the current picture I i The detected line segments are collectedRetaining image gradient information in->The gradient value-taking operation is defined as T i (. The) the slave line segment l is fixed at each screening i Selecting n points, and completing line segment screening through the following screening model:
preferably: the determining whether the covariance elements of the space point set corresponding to the sampling points, which are included in each strong gradient line segment, satisfy a preset condition includes:
acquiring the three covariance elements $\lambda_1, \lambda_2, \lambda_3$ of the spatial point set corresponding to the sampling points contained in each strong-gradient line segment, and computing the ratio of one covariance element (the dominant one, $\lambda_1$) to the sum of the three;
and judging whether the ratio is greater than a preset coefficient, and determining that the preset condition is satisfied when the ratio is greater than the coefficient.
Preferably: eliminating the strong-gradient line segments whose ratio is not greater than the preset coefficient.
Preferably: the odometer solution of the target error model includes:
performing a least-squares solution of the target error model through the LM algorithm to obtain the pose transformation and the inverse depths corresponding to the pixel points.
A visual odometry device, the device comprising:
the line segment information extraction unit is used for extracting a plurality of line segment information contained in the image to be processed by using an LSD algorithm;
the strong gradient line segment screening unit is used for screening a plurality of strong gradient line segments from the line segment information by a quadtree homogenization screening method based on a gradient intensity scoring principle;
the sampling point acquisition unit is used for uniformly sampling each strong gradient line segment to obtain a plurality of sampling points, and carrying out back projection on the plurality of sampling points to obtain a space point set so as to calculate the covariance of the space point set;
the spatial line segment model fitting unit is used for judging whether covariance elements of spatial point sets corresponding to the sampling points, which are contained in each strong gradient line segment, meet preset conditions or not, and fitting the spatial point sets corresponding to the sampling points of all the strong gradient line segments meeting the preset conditions through a linear fitting technology to obtain a spatial line segment model;
and the calculating unit is used for constructing a luminosity error based on the luminosity invariant model, constructing a point line error based on the linear constraint model, and performing odometer calculation on a target error model, wherein the target error model comprises a luminosity error constraint, a first collinear constraint and a second collinear constraint.
A visual odometry apparatus, the apparatus comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the visual odometry method described above according to instructions in the program code.
A computer-readable storage medium for storing program code for performing the visual odometry method described above.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the method, the device, the equipment and the storage medium for the visual odometer carry out quadtree homogenization on the line segments detected by the LSD algorithm, and solve the problem of computation surge caused by too much line segment constraint in a structured scene. In the homogenization process, the line segment with strong linear constraint is screened out for the point-line model constructed by the algorithm by calculating the gradient intensity mean value of the pixel points contained in the line segment, and the algorithm robustness brought by the introduction of the linear constraint is improved.
Of course, it is not necessary for any product in which the invention is practiced to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the embodiments will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a flow chart of a visual odometry method provided by an embodiment of the present invention;
FIG. 2 is a flowchart of an overall algorithm provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of an imaging process of lens roll-off provided by an embodiment of the present invention;
FIG. 4 is a diagram of the LSD line segment detection effect provided by the embodiment of the present invention;
FIG. 5 is a schematic diagram of a linear projection model provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a visual odometer device provided by an embodiment of the invention;
fig. 7 is a schematic diagram of a visual odometry apparatus provided by an embodiment of the invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
Referring to fig. 1, a visual odometry method provided for an embodiment of the present invention, as shown in fig. 1, may include:
S101: extracting a plurality of line segments contained in the image to be processed using the LSD algorithm;
S102: screening the plurality of line segments through a quadtree homogenization screening method, based on a gradient-strength scoring principle, to obtain a plurality of strong-gradient line segments; specifically, the gradient-strength scoring principle comprises that the average gradient strength of a line segment satisfies a preset strength threshold.
The quadtree homogenization screening method comprises the following steps:
let the current picture I i The detected line segments are collectedRetaining image gradient information in->The gradient value operation is defined as T i (. The) the slave line segment l is fixed at each screening i Selecting n points, and completing line segment screening through the following screening model:
S103: uniformly sampling each strong-gradient line segment to obtain a plurality of sampling points, and back-projecting the plurality of sampling points to obtain a spatial point set, so as to compute the covariance of the spatial point set;
S104: judging whether the covariance elements of the spatial point set corresponding to the sampling points contained in each strong-gradient line segment satisfy a preset condition, and fitting the spatial points corresponding to the sampling points of all strong-gradient line segments satisfying the preset condition through a linear fitting technique to obtain a spatial line segment model. Specifically, the three covariance elements of the spatial point set corresponding to the sampling points contained in each strong-gradient line segment are obtained, and the ratio of the dominant covariance element to the sum of the three is computed;
it is then judged whether the ratio is greater than a preset coefficient; the preset condition is satisfied when the ratio is greater than the coefficient, and the strong-gradient line segments whose ratio is not greater than the preset coefficient are eliminated.
S105: constructing a photometric error based on the photometric-invariance model, constructing a point-line error based on the linear constraint model, and performing the odometry solution on a target error model, wherein the target error model comprises a photometric error constraint, a first collinearity constraint and a second collinearity constraint. Specifically, a least-squares solution of the target error model is performed through the LM algorithm to obtain the pose transformation and the inverse depths corresponding to the pixel points.
Given the real-time requirements of applying visual odometry on embedded mobile-robot platforms, the method provided by the embodiments of the present application adopts the direct method to complete the localization task, after weighing the advantages and disadvantages of the direct and indirect methods. To address the insufficient robustness of the direct method caused by its not using environmental structure information, the method aims to introduce uniform, robust linear edge constraints into the direct-method solving model by exploiting the linear edge information present in the environment, thereby enhancing the robustness of visual odometry in structured scenes.
The visual odometry method provided by the embodiments of the present application considers the linear edge information found in common structured scenes, and obtains the visual tracking target under linear constraints by linearly fitting the homogenized edge constraints. Finally, a photometric error is constructed based on the direct method and photometric calibration, and a point-line error is constructed based on the linear model, so that a fast visual odometry solution can be performed for the tracked target.
When the light reflected from a real object passes through the camera lens, its irradiance is attenuated to some extent, so the imaging quality degrades correspondingly after the camera's photosensitive element receives the attenuated illumination. To address this, the embodiments of the present application may first remove the image photometric distortion from the image to be processed to obtain a target image, and then extract a plurality of line segments contained in the target image using the LSD algorithm.
By combining the gradient-strength scoring principle with quadtree homogenization screening, the method retains high-quality line segments and removes poor-quality ones without affecting the localization result, while reducing the number of line segments processed during the solution, thereby saving computing resources.
The following describes the visual odometer method provided in the embodiments of the present application in detail with reference to the drawings.
The visual odometry method provided by the embodiment of the application can comprise the following procedures in actual use:
(1) Removing photometric distortion of an input image;
(2) Acquiring image line segment information through an LSD technology;
(3) Based on a gradient intensity scoring principle, uniformly screening high-quality strong gradient line segments through a quadtree;
(4) Uniformly sampling each image line segment, back-projecting the sampling points into three-dimensional space to obtain spatial points, and then computing a spatial line segment model for the spatial points through a linear fitting technique;
(5) Constructing a photometric error based on the photometric-invariance model, constructing a point-line error based on the linear constraint model, and performing the odometry solution on the overall error model.
In the visual odometry method provided by the embodiment of the application, the algorithm flow is shown in fig. 2, and the sub-modules in the dashed box are explained in detail below.
1. Removing photometric distortion of an image:
As shown in fig. 3, when the light reflected from a real object passes through the camera lens, its irradiance is attenuated to some extent; therefore, after the camera's photosensitive element receives the attenuated illumination, the imaging quality degrades correspondingly.
Let the imaging pixel coordinate in the current image $I_i$ be $x_i$, let the irradiance emitted by the real object be $B(x_i)$, let the lens attenuation model for the pixel value at $x_i$ be $V(x_i): \Omega \to [0,1]$, let the exposure time be $t_i$, and let $G(\cdot)$ denote the camera response function. Following DSO's photometric calibration model, the full imaging model is

$$I_i(x_i) = G\!\left(t_i\, V(x_i)\, B(x_i)\right),$$

where $I_i(x_i)$ is the attenuated pixel value. Therefore, the model for removing the photometric distortion of the image in the embodiments of the present application can be expressed as

$$I_i'(x_i) = t_i\, B(x_i) = \frac{G^{-1}\!\left(I_i(x_i)\right)}{V(x_i)}.$$
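The distortion-removal step above can be sketched in a few lines. The following is an illustrative example, not the patent's implementation: `inv_response` stands in for $G^{-1}$ as a 256-entry lookup table, `vignette` stands in for the attenuation map $V(x)$, and the toy values are assumptions.

```python
import numpy as np

def remove_photometric_distortion(image, inv_response, vignette):
    """Undo response-function and vignetting attenuation.

    image        : HxW uint8 raw image I_i
    inv_response : length-256 lookup table approximating G^{-1}
    vignette     : HxW map V(x) in (0, 1], where 1.0 means no attenuation
    Returns an HxW float image proportional to t_i * B(x).
    """
    irradiance = inv_response[image.astype(np.int64)]  # G^{-1}(I_i(x))
    return irradiance / vignette                       # divide out V(x)

# Toy example: identity response curve, uniform 50% vignetting.
h, w = 4, 4
raw = np.full((h, w), 128, dtype=np.uint8)
inv_g = np.arange(256, dtype=np.float64)   # identity lookup table for G^{-1}
vig = np.full((h, w), 0.5)                 # every pixel attenuated to half
corrected = remove_photometric_distortion(raw, inv_g, vig)
print(corrected[0, 0])  # 128 / 0.5 = 256.0
```

In practice the lookup table and vignette map would come from an offline photometric calibration, as in DSO.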
2. LSD line segment extraction:
In order to incorporate the linear edge constraints present in the environment, the embodiments of the present application perform line segment extraction on the image after photometric distortion removal, using the LSD (Line Segment Detector) algorithm. LSD is a line segment detection method based on gradient information; it is fast, parameter-adaptive, and accurate to sub-pixel level. Its main idea is to merge pixels with similar gradient directions within a local region to achieve line segment detection. The detection effect of the LSD algorithm in the embodiments of the present application is shown in fig. 4.
3. Screening line segments by using a quadtree:
After LSD edge detection, good image line information is obtained. To further improve the robustness of the linear constraints, the invention performs quadtree homogenization on the line information and screens line segments at the quadtree nodes according to the original image gradient strength along each segment. Specifically, let the set of line segments detected in the current image $I_i$ be $\mathcal{L}_i$, let the retained image gradient information be $\nabla I_i$, and define the gradient value-taking operation as $T_i(\cdot)$. At each screening, $n$ points $\{x_k\}_{k=1}^{n}$ are sampled at fixed positions from the line segment $l_i$, and line-segment screening is completed through the following screening model:

$$\frac{1}{n}\sum_{k=1}^{n} T_i\!\left(x_k\right) \ge \tau$$
namely, the screening task of the line segment is completed by judging whether the average gradient strength of the line segment meets the threshold value.
4. Fitting three-dimensional space line segments:
After the screened image line segments are obtained, the sampling points are projected into three-dimensional space through the camera back-projection model:

$$P = \pi^{-1}(x, d_x) = \frac{1}{d_x} K^{-1} \tilde{x} \qquad (3)$$

where $d_x$ is the inverse depth corresponding to sampling point $x$, $K$ is the camera intrinsic matrix, and $\tilde{x}$ is the homogeneous pixel coordinate. The spatial points obtained after projection are collected into the set $\mathcal{P}_l$; principal component analysis is then applied to the sample points in $\mathcal{P}_l$ to obtain their covariance elements $\lambda_1, \lambda_2, \lambda_3$. If $\lambda_1 / (\lambda_1 + \lambda_2 + \lambda_3)$ exceeds a user-defined parameter, line segment fitting is carried out on the points contained in $\mathcal{P}_l$; otherwise the projection is abandoned and the line segment eliminated. The fitted three-dimensional line segment is denoted $L_l$.
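The PCA-based acceptance test and line fit can be sketched as follows. This is an illustrative reconstruction: `sigma` stands in for the user-defined ratio threshold, and the returned (centroid, direction) pair is an assumed representation of the fitted segment $L_l$.

```python
import numpy as np

def fit_space_segment(points, sigma=0.9):
    """PCA over back-projected 3-D points; accept them as a spatial line
    segment only if the dominant covariance eigenvalue ratio exceeds sigma."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)              # 3x3 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    lam = eigvals[::-1]                           # lambda_1 >= lambda_2 >= lambda_3
    if lam[0] / lam.sum() <= sigma:
        return None                               # not line-like: discard segment
    direction = eigvecs[:, -1]                    # principal axis of the point set
    return centroid, direction / np.linalg.norm(direction)

# Nearly collinear points along the x-axis pass the ratio test.
pts = [(0, 0, 0), (1, 0.01, 0), (2, -0.01, 0), (3, 0, 0.01)]
seg = fit_space_segment(pts, sigma=0.9)
print(seg is not None)  # True: the points are accepted as a 3-D line
```

Points that fail the ratio test are spread across more than one principal axis, which matches the patent's criterion for abandoning the projection.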
5. Constructing and optimizing a luminosity residual error and a dotted line residual error:
As shown in fig. 5, for adjacent images $I_i$ and $I_j$, the constraints involved in the present invention include three types:
(1) The photometric error constraint on the pixel points of the line segments $l_i$, $l_j$ corresponding to images $I_i$, $I_j$:

$$E_{ij} = \sum_{x \in l_i} \omega_x \left\| \left(I_j[x'] - b_j\right) - \frac{t_j\, e^{a_j}}{t_i\, e^{a_i}} \left(I_i[x] - b_i\right) \right\|_{\gamma}$$

The photometric error constraint $E_{ij}$ follows DSO, where $\omega_x$ is a weight factor; $a_i$, $a_j$, $b_i$, $b_j$ are the affine photometric transformation parameters obtained by photometric calibration; $t_i$, $t_j$ are the exposure times; $\|\cdot\|_{\gamma}$ is the Huber norm; $x'$ is the projection of $x$ into $I_j$; and $T_{ij}$ is the pose transformation between images $I_i$ and $I_j$, i.e. the odometry result to be solved by the invention.
(2) The collinearity constraint between the spatial line segment $L_l$ and the back-projected point $P_i$ obtained by back-projecting into space a pixel point $x_i$ on the line segment $l_i$ corresponding to image $I_i$:

$$E_{\text{line},i} = \sum_{x_i \in l_i} e\!\left(L_l, P_i\right)$$

where $P_i$ is computed from formula (3) using the sampling points on $l_i$, and $e(L_l, P_i)$ denotes the geometric distance between the spatial point $P_i$ and the line segment $L_l$.
(3) The collinearity constraint $E_{\text{line},j}$ between the spatial line segment $L_l$ and the back-projected point $P_j$ obtained by back-projecting into space a pixel point $x_j$ on the line segment $l_j$ corresponding to image $I_j$, defined analogously to constraint (2).
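The geometric distance $e(L_l, P_i)$ used by both collinearity constraints can be computed as the norm of a point's offset after removing its component along the line direction. A minimal sketch, with the function name and the point-plus-direction line representation assumed:

```python
import numpy as np

def point_line_distance(p, line_point, line_dir):
    """Geometric distance e(L_l, P_i) from a 3-D point p to an infinite
    line given by a point on the line and a direction vector."""
    d = np.asarray(line_dir, dtype=float)
    d = d / np.linalg.norm(d)                       # unit direction
    v = np.asarray(p, dtype=float) - np.asarray(line_point, dtype=float)
    return np.linalg.norm(v - np.dot(v, d) * d)     # reject component along d

# Point (0, 3, 0) versus the x-axis: the perpendicular distance is 3.
dist = point_line_distance((0.0, 3.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
print(dist)  # 3.0
```

Summing this distance over the back-projected sampling points of a segment gives one collinearity residual term per segment pair.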
the photometric error and the dotted line error model constructed in the embodiment of the present application is represented as:
error model is passed through LM algorithm (LM: levenberg-Marquardt, levender-MarquarSpecially) least square solving is carried out, so that the pose transformation T can be solved ij Inverse depth d corresponding to pixel point x Changing posture T ij Used for transforming world coordinate system to realize positioning, and the pixel point corresponds to the inverse depth d x And the pixel projection is used, so that the whole process of solving the visual odometer in the embodiment of the application is completed.
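A Levenberg-Marquardt least-squares solve of a joint error model can be sketched with SciPy. The residual below is a deliberately toy stand-in, not the patent's photometric and collinearity residuals; it only shows the `method='lm'` solving pattern for a two-parameter state (a "pose" scalar and an inverse depth).

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params):
    """Toy joint residual: one photometric-style term and one
    collinearity-style term, both linear stand-ins."""
    s, d = params                  # s: toy pose parameter, d: toy inverse depth
    photometric = s - 2.0          # zero when the pose term reaches 2.0
    collinear = 3.0 * d - 1.5      # zero when the inverse depth reaches 0.5
    return [photometric, collinear]

# Levenberg-Marquardt (MINPACK) requires at least as many residuals as params.
sol = least_squares(residuals, x0=[0.0, 0.0], method='lm')
print(sol.x)  # converges to s = 2.0, d = 0.5
```

In the real system the parameter vector would hold the pose $T_{ij}$ (on a manifold) and the inverse depths, and the residual vector would stack the photometric and point-line terms.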
In short, the visual odometry method provided by the present application applies quadtree homogenization to the line segments detected by the LSD algorithm, resolving the surge in computation caused by excessive line-segment constraints in structured scenes. During homogenization, segments providing strong linear constraints are selected for the point-line model constructed by the algorithm by computing the mean gradient strength of the pixels each segment contains, improving the robustness gained from introducing the linear constraints.
Referring to fig. 6, the present application may further provide a visual odometer apparatus, as shown in fig. 6, which may include:
a line segment information extraction unit 601, configured to extract a plurality of pieces of line segment information included in the image to be processed by using an LSD algorithm;
a line segment screening unit 602, configured to screen, based on a gradient intensity scoring principle, a plurality of strong gradient line segments from the information of the plurality of line segments by using a quadtree homogenization screening method;
the sampling point acquisition unit 603 is configured to uniformly sample each strong gradient line segment to obtain a plurality of sampling points, and perform back projection on the plurality of sampling points to obtain a spatial point set to calculate a spatial point set covariance;
a spatial line segment model fitting unit 604, configured to determine whether a covariance element of a spatial point set corresponding to the sampling point included in each strong gradient line segment satisfies a preset condition, and fit spatial points corresponding to the sampling points of all the strong gradient line segments satisfying the preset condition by using a linear fitting technique to obtain a spatial line segment model;
the calculating unit 605 is configured to construct a photometric error based on a photometric invariant model, construct a point-line error based on a linear constraint model, and perform odometry calculation on a target error model, where the target error model includes a photometric error constraint, a first collinear constraint, and a second collinear constraint.
Embodiments of the present application may also provide a visual odometry device, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the steps of the visual odometry method described above according to instructions in the program code.
As shown in fig. 7, an embodiment of the present application provides a visual odometer device, which may include: a processor 10, a memory 11, a communication interface 12 and a communication bus 13. The processor 10, the memory 11 and the communication interface 12 all communicate with each other through a communication bus 13.
In the embodiment of the present application, the processor 10 may be a Central Processing Unit (CPU), an application specific integrated circuit, a digital signal processor, a field programmable gate array or other programmable logic device, etc.
The processor 10 may invoke a program stored in the memory 11, and in particular, the processor 10 may perform operations in embodiments of the visual odometry method.
The memory 11 is used for storing one or more programs, the program may include program codes, the program codes include computer operation instructions, in this embodiment, the memory 11 stores at least the program for implementing the following functions:
extracting a plurality of line segment information contained in the image to be processed by using an LSD algorithm;
based on a gradient intensity scoring principle, screening the information of the line segments by a quadtree homogenization screening method to obtain a plurality of strong gradient line segments;
uniformly sampling each strong gradient line segment to obtain a plurality of sampling points, and performing back projection on the plurality of sampling points to obtain a spatial point set so as to calculate the covariance of the spatial point set;
judging whether covariance elements of the space point sets corresponding to the sampling points, which are contained in each strong gradient line segment, meet preset conditions or not, and fitting the space point sets corresponding to the sampling points of all the strong gradient line segments meeting the preset conditions through a linear fitting technology to obtain space line segment models;
the method comprises the steps of constructing photometric errors based on a photometric invariant model, constructing point line errors based on a linear constraint model, and performing odometry calculation on a target error model, wherein the target error model comprises photometric error constraints, first collinear constraints and second collinear constraints.
In one possible implementation, the memory 11 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a file creation function, a data read/write function), and the like; the storage data area may store data created during use, such as initialization data.
Further, the memory 11 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or other non-volatile solid-state storage device.
The communication interface 12 may be an interface of a communication module for connecting with other devices or systems.
Of course, it should be noted that the structure shown in fig. 7 does not constitute a limitation of the visual odometry apparatus in the embodiment of the present application; in practical applications, the visual odometry apparatus may include more or fewer components than those shown in fig. 7, or some components may be combined.
Embodiments of the present application may also provide a computer-readable storage medium for storing program code for performing the steps of the visual odometry method described above.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner; the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, the system and apparatus embodiments are substantially similar to the method embodiments and are therefore described relatively simply; for related points, reference may be made to the descriptions of the method embodiments. The above-described system and apparatus embodiments are only illustrative: the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A visual odometry method, comprising:
extracting a plurality of pieces of line segment information contained in the image to be processed by using the LSD algorithm;
based on a gradient intensity scoring principle, screening the line segment information by a quadtree homogenization screening method to obtain a plurality of strong gradient line segments;
uniformly sampling each strong gradient line segment to obtain a plurality of sampling points, and back-projecting the sampling points to obtain a spatial point set, so as to calculate the covariance of the spatial point set;
judging whether the covariance elements of the spatial point set corresponding to the sampling points contained in each strong gradient line segment satisfy a preset condition, and fitting the spatial point sets corresponding to the sampling points of all strong gradient line segments that satisfy the preset condition by a line fitting technique to obtain a spatial line segment model;
constructing a photometric error based on a photometric invariance model, constructing a point-line error based on a linear constraint model, and performing odometry calculation on a target error model, wherein the target error model comprises a photometric error constraint, a first collinearity constraint and a second collinearity constraint.
2. The visual odometry method of claim 1, wherein the image to be processed is subjected to photometric distortion removal to obtain a target image; and a plurality of pieces of line segment information contained in the target image are extracted by using the LSD algorithm.
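The photometric distortion removal of claim 2 is not detailed in this excerpt. A common model in direct visual odometry (e.g., DSO-style photometric calibration) inverts the camera response curve and divides out per-pixel vignetting; the sketch below assumes that model, and `inv_response` and `vignette` are hypothetical calibration inputs, not quantities named by the patent:

```python
import numpy as np

def remove_photometric_distortion(image, inv_response, vignette):
    """Undo photometric distortion: I' = G^{-1}(I) / V(x).
    inv_response: 256-entry lookup table inverting the camera response;
    vignette: per-pixel attenuation gain in (0, 1]."""
    corrected = inv_response[image.astype(int)]   # undo nonlinear camera response
    return corrected / vignette                   # undo vignetting falloff
```

With the response and vignetting removed, pixel intensities become (approximately) proportional to scene irradiance, which is what makes a photometric-invariance error model well posed.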
3. The visual odometry method of claim 1, wherein the gradient intensity scoring principle comprises requiring that the average gradient intensity of a line segment meet a preset intensity threshold.
4. The visual odometry method of claim 3 wherein the quadtree homogenization screening method comprises:
let the line segments detected in the current image I_i form a set L_i, retain the image gradient information G_i, and define the gradient value operation as T_i(·); at each screening, n points are selected from the line segment l_i, and line segment screening is completed through the following screening model:
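The screening model itself appears only as a formula image in the original and is not reproduced here. The sketch below is one plausible reading of claims 3 and 4, under stated assumptions: T_i(·) returns gradient magnitude, each segment is scored by the mean gradient intensity at n uniformly sampled pixels, and the quadtree homogenization is approximated by a fixed grid that keeps the best-scoring segment per cell (a simplification, not the patent's exact quadtree):

```python
import numpy as np

def mean_gradient_score(grad_mag, seg, n=5):
    """Average gradient magnitude T_i(.) at n points sampled uniformly
    along segment seg = ((x0, y0), (x1, y1))."""
    (x0, y0), (x1, y1) = seg
    ts = np.linspace(0.0, 1.0, n)
    xs = np.round((1 - ts) * x0 + ts * x1).astype(int)
    ys = np.round((1 - ts) * y0 + ts * y1).astype(int)
    return float(grad_mag[ys, xs].mean())

def homogenized_screen(grad_mag, segments, cell=32, thresh=10.0, n=5):
    """Keep at most the best-scoring segment per spatial cell whose mean
    gradient intensity exceeds thresh (grid stand-in for the quadtree)."""
    best = {}
    for seg in segments:
        score = mean_gradient_score(grad_mag, seg, n)
        if score < thresh:
            continue                      # weak-gradient segment: reject
        mid = ((seg[0][0] + seg[1][0]) // (2 * cell),
               (seg[0][1] + seg[1][1]) // (2 * cell))   # midpoint cell key
        if mid not in best or score > best[mid][0]:
            best[mid] = (score, seg)      # keep strongest segment per cell
    return [seg for _, seg in best.values()]
```

Capping segments per cell is what tames the "computation surge" in structured scenes mentioned in the abstract, while the score threshold retains only strongly constrained lines.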
5. The visual odometry method of claim 1, wherein judging whether the covariance elements of the spatial point set corresponding to the sampling points contained in each strong gradient line segment satisfy the preset condition comprises:
acquiring the three covariance elements of the spatial point set corresponding to the three sampling points contained in each strong gradient line segment, and calculating the ratio of one covariance element to the sum of the three covariance elements;
and judging whether the ratio is greater than a preset coefficient, and determining that the preset condition is satisfied when the ratio is greater than the coefficient.
6. The visual odometry method of claim 5, wherein strong gradient line segments whose ratio is not greater than the preset coefficient are rejected.
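One way to read claims 5 and 6 is that the three "covariance elements" are the principal variances of the back-projected point set, and a segment is accepted when the largest one dominates their sum, i.e. the points concentrate along a single direction. The sketch below assumes that interpretation, with the spatial line fit of claim 1 done by centroid plus principal direction via SVD:

```python
import numpy as np

def is_line_like(points, coeff=0.9):
    """Accept when the largest covariance eigenvalue dominates the sum of
    all three: the points spread along essentially one direction."""
    centered = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(centered.T @ centered / len(points))
    return eigvals[-1] / eigvals.sum() > coeff

def fit_space_segment(points):
    """Least-squares 3D line fit: centroid plus principal direction (SVD)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]   # vt[0] is the dominant direction (up to sign)
```

Rejecting segments that fail the ratio test (claim 6) prevents fitting a line to point sets whose back-projections scatter, which would inject a bad collinearity constraint into the error model.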
7. The visual odometry method of claim 1, wherein performing odometry calculation on the target error model comprises:
performing a least squares solution of the target error model by the LM algorithm to obtain the pose transformation and the corresponding inverse depths of the pixel points.
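A full photometric/point-line bundle adjustment is beyond a short sketch, but the LM least-squares step of claim 7 can be illustrated on a generic residual. Here `residual` and `jac` stand in for the target error model and its Jacobian; they are hypothetical placeholders, not the patent's actual error terms:

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, iters=50, lam=1e-3):
    """Minimal LM loop: solve (J^T J + lam*I) dx = -J^T r, adapting lam."""
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * np.sum(residual(x) ** 2)
    for _ in range(iters):
        r, J = residual(x), jac(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        x_new = x + dx
        new_cost = 0.5 * np.sum(residual(x_new) ** 2)
        if new_cost < cost:     # accept step: move toward Gauss-Newton
            x, cost, lam = x_new, new_cost, lam * 0.5
        else:                   # reject step: increase damping
            lam *= 10.0
    return x
```

In the method itself, the state vector would stack the pose increment and the per-pixel inverse depths, and the stacked residual would contain the photometric and the two collinearity terms.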
8. A visual odometry device, the device comprising:
the line segment information extraction unit is used for extracting a plurality of line segment information contained in the image to be processed by using an LSD algorithm;
the strong gradient line segment screening unit is used for screening a plurality of strong gradient line segments from the line segment information by a quadtree homogenization screening method based on a gradient intensity scoring principle;
the sampling point acquisition unit is used for uniformly sampling each strong gradient line segment to obtain a plurality of sampling points, and back-projecting the sampling points to obtain a spatial point set so as to calculate the covariance of the spatial point set;
the spatial line segment model fitting unit is used for judging whether the covariance elements of the spatial point set corresponding to the sampling points contained in each strong gradient line segment satisfy a preset condition, and fitting the spatial point sets corresponding to the sampling points of all strong gradient line segments that satisfy the preset condition by a line fitting technique to obtain a spatial line segment model;
and the calculation unit is used for constructing a photometric error based on the photometric invariance model, constructing a point-line error based on the linear constraint model, and performing odometry calculation on a target error model, wherein the target error model comprises a photometric error constraint, a first collinearity constraint and a second collinearity constraint.
9. A visual odometry device, characterized in that the device comprises a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the visual odometry method of any one of claims 1-7 according to instructions in the program code.
10. A computer-readable storage medium for storing program code for performing the visual odometry method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211541198.5A CN115930988A (en) | 2022-12-02 | 2022-12-02 | Visual odometer method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115930988A true CN115930988A (en) | 2023-04-07 |
Family
ID=86551624
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211541198.5A Pending CN115930988A (en) | 2022-12-02 | 2022-12-02 | Visual odometer method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115930988A (en) |
- 2022-12-02: CN application CN202211541198.5A filed (publication CN115930988A, status: active, Pending)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112444242B (en) | Pose optimization method and device | |
CN111274974B (en) | Positioning element detection method, device, equipment and medium | |
CN109816708B (en) | Building texture extraction method based on oblique aerial image | |
WO2021004416A1 (en) | Method and apparatus for establishing beacon map on basis of visual beacons | |
CN108648194B (en) | Three-dimensional target identification segmentation and pose measurement method and device based on CAD model | |
CN111222395A (en) | Target detection method and device and electronic equipment | |
CN102507592A (en) | Fly-simulation visual online detection device and method for surface defects | |
CN105513083B (en) | A kind of PTAM video camera tracking method and device | |
CN114067197B (en) | Pipeline defect identification and positioning method based on target detection and binocular vision | |
CN110998671B (en) | Three-dimensional reconstruction method, device, system and storage medium | |
CN113689578B (en) | Human body data set generation method and device | |
CN112348775B (en) | Vehicle-mounted looking-around-based pavement pit detection system and method | |
CN111383204A (en) | Video image fusion method, fusion device, panoramic monitoring system and storage medium | |
WO2021017211A1 (en) | Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal | |
CN115049821A (en) | Three-dimensional environment target detection method based on multi-sensor fusion | |
CN117036300A (en) | Road surface crack identification method based on point cloud-RGB heterogeneous image multistage registration mapping | |
CN114332134A (en) | Building facade extraction method and device based on dense point cloud | |
Zhang et al. | Improved feature point extraction method of ORB-SLAM2 dense map | |
CN115930988A (en) | Visual odometer method, device, equipment and storage medium | |
CN115953460A (en) | Visual odometer method based on self-supervision deep learning | |
CN115631108A (en) | RGBD-based image defogging method and related equipment | |
CN114399532A (en) | Camera position and posture determining method and device | |
CN113034601A (en) | Scene map point and image frame matching method in environment modeling | |
Li et al. | Overall well-focused catadioptric image acquisition with multifocal images: a model-based method | |
CN111127474A (en) | Airborne LiDAR point cloud assisted orthophoto mosaic line automatic selection method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||