CN111879306B - Visual inertial positioning method, device and system and computer equipment - Google Patents
- Publication number
- CN111879306B (application number CN202010553571.3A)
- Authority
- CN
- China
- Prior art keywords
- acquiring
- image sequence
- frequency
- module
- visual inertial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Automation & Control Theory (AREA)
- Image Analysis (AREA)
Abstract
The application relates to a visual inertial positioning method, device, system and computer equipment, wherein the visual inertial positioning method comprises the following steps: acquiring a first image sequence of an AR device according to a first frequency; acquiring, according to the first image sequence, the spatial position of the AR device in a visual-inertial odometry (VIO) algorithm by utilizing a filtering technique; acquiring a second image sequence according to a second frequency, and acquiring a sparse map from the second image sequence by utilizing an optimization technique, wherein the second frequency is lower than the first frequency; and acquiring a positioning result of visual inertial positioning according to the spatial position and the sparse map. The application thereby solves the problem that high precision and high efficiency of visual inertial positioning cannot be achieved at the same time.
Description
Technical Field
The present application relates to the field of visual positioning technology, and in particular, to a method, an apparatus, a system, and a computer device for visual inertial positioning.
Background
As positioning systems have developed, visual inertial positioning systems that combine image information with inertial measurement information have gradually appeared, since inertial measurement units can provide accurate motion information. In the related art, an Augmented Reality (AR) system on a mobile device generally employs visual inertial positioning technology to obtain the spatial position of the device. However, the visual inertial positioning technology applied to AR systems in the related art is generally either relatively low in accuracy and difficult to recover after tracking failure, or, when it pursues high precision and relocalization capability after tracking failure, its computational overhead is large, the device heats up easily and continuous use is affected; hence the technical requirements of high precision and high efficiency cannot be met simultaneously.
At present, no effective solution has been proposed for the problem that visual inertial positioning in the related art cannot achieve high precision and high efficiency at the same time.
Disclosure of Invention
The embodiments of the present application provide a visual inertial positioning method, device, system and computer equipment, to at least solve the problem that visual inertial positioning in the related art cannot achieve high precision and high efficiency at the same time.
In a first aspect, an embodiment of the present application provides a method for visual inertial positioning, where the method includes:
acquiring a first image sequence of the AR equipment according to the first frequency;
acquiring the spatial position of the AR equipment in a visual-inertial odometry (VIO) algorithm by utilizing a filtering technology according to the first image sequence;
acquiring a second image sequence according to a second frequency, and acquiring a sparse map by utilizing an optimization technology according to the second image sequence; wherein the second frequency is lower than the first frequency;
and acquiring a positioning result of visual inertial positioning according to the spatial position and the sparse map.
In some embodiments, after obtaining the sparse map by using the optimization technique and before obtaining the positioning result of visual inertial positioning according to the spatial position and the sparse map, the method further comprises:
acquiring a pose difference between the spatial position and the sparse map; and correcting the spatial position when the pose difference is greater than or equal to a preset error.
In some embodiments, the obtaining a sparse map using an optimization technique according to the second image sequence includes:
extracting and matching the characteristic points of the second image sequence to obtain the corresponding relation of the characteristic points;
acquiring the three-dimensional position of the characteristic point by utilizing a triangulation method according to the spatial position and the corresponding relation;
acquiring optimized position information by utilizing the bundle adjustment (Bundle Adjustment) technique according to the three-dimensional position; and acquiring the sparse map according to the optimized position information.
In some of these embodiments, the VIO algorithm is replaced with a VIO algorithm based on the filtering technique.
In a second aspect, an embodiment of the present application provides an apparatus for visual inertial positioning, the apparatus including: the map searching system comprises an image module, a VIO module, an optimization module and a map searching module;
the image module is used for acquiring a first image sequence of the AR equipment according to the first frequency;
the VIO module acquires the spatial position of the AR equipment in the VIO algorithm by using a filtering technology according to the first image sequence;
the optimization module is used for acquiring a second image sequence according to a second frequency and acquiring a sparse map by utilizing an optimization technology according to the second image sequence; wherein the second frequency is lower than the first frequency;
and the map query module is used for acquiring a positioning result of visual inertial positioning according to the spatial position and the sparse map.
In some of these embodiments, the apparatus further comprises a correction module;
the correction module is used for acquiring a pose difference between the spatial position and the sparse map; and correcting the spatial position when the pose difference is greater than or equal to a preset error.
In some of these embodiments, the VIO module is replaced with a VIO module based on the filtering technique.
In a third aspect, a system for visual inertial positioning is provided, the system comprising a master device and an AR device;
the main control equipment acquires a first image sequence of the AR equipment according to a first frequency;
the main control equipment acquires the spatial position of the AR equipment in the VIO algorithm by using a filtering technology according to the first image sequence;
the main control equipment acquires a second image sequence according to a second frequency, and acquires a sparse map by utilizing an optimization technology according to the second image sequence; wherein the second frequency is lower than the first frequency;
and the main control equipment acquires a positioning result of visual inertial positioning according to the spatial position and the sparse map.
In a fourth aspect, embodiments of the present application provide a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method for visual inertial positioning as described in the first aspect above when executing the computer program.
In a fifth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the method for visual inertial positioning as described in the first aspect above.
Compared with the related art, the visual inertial positioning method, device, system and computer equipment provided by the embodiments of the present application acquire a first image sequence of the AR device according to a first frequency; acquire, according to the first image sequence, the spatial position of the AR device in a visual-inertial odometry (VIO) algorithm by utilizing a filtering technique; acquire a second image sequence according to a second frequency and a sparse map from the second image sequence by utilizing an optimization technique, wherein the second frequency is lower than the first frequency; and acquire the positioning result of visual inertial positioning according to the spatial position and the sparse map, thereby solving the problem that high precision and high efficiency of visual inertial positioning cannot be achieved at the same time.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a first flowchart of a visual inertial positioning method according to an embodiment of the present application;
FIG. 2 is a second flowchart of a visual inertial positioning method according to an embodiment of the present application;
FIG. 3 is a flow chart III of a visual inertial positioning method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a geometric model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a visual inertial positioning method according to an embodiment of the present application;
FIG. 6 is a first block diagram of a visual inertial positioning unit according to an embodiment of the present application;
FIG. 7 is a block diagram of a second embodiment of a visual inertial positioning device according to the present application;
FIG. 8 is a block diagram of a visual inertial positioning system according to an embodiment of the present application;
fig. 9 is a hardware configuration diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the application, and that it is also possible for a person skilled in the art to apply the application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless otherwise defined, technical or scientific terms referred to herein should have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The use of the terms "including," "comprising," "having," and any variations thereof herein, is meant to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as referred to herein means two or more. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
In the present embodiment, a method of visual inertial positioning is provided. Fig. 1 is a first flowchart of a visual inertial positioning method according to an embodiment of the present application, as shown in fig. 1, the first flowchart includes the following steps:
step S102, acquiring a first image sequence of the AR equipment according to a first frequency; the AR equipment is provided with a camera, and the main control device acquires images acquired by the AR equipment through the camera at a first frequency so as to generate a first image sequence.
Step S104, acquiring the spatial position of the AR device in the VIO algorithm by using a filtering technique according to the first image sequence; for example, in a filter-based visual positioning system, the spatial position may be obtained by estimating the Gaussian distribution of the state with an Extended Kalman Filter (EKF).
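As a minimal sketch of this filtering step (assumptions: a generic state vector, caller-supplied motion and measurement models f and h with Jacobians F and H, and noise matrices Q and R — none of these are specified by the patent), one EKF propagate/update cycle could look like:

```python
import numpy as np

def ekf_propagate(x, P, f, F, Q):
    """Propagate the Gaussian state estimate with the IMU motion model."""
    x_pred = f(x)                  # nonlinear state transition
    P_pred = F @ P @ F.T + Q       # first-order covariance propagation
    return x_pred, P_pred

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with a visual observation z (e.g. tracked pixels)."""
    y = z - h(x)                         # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```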
In some embodiments, any VIO algorithm based on a filtering method may be adopted as a sub-module of the scheme, which is technically feasible; for example, the VIO module may be replaced with another VIO module based on the filtering technique, such as a ROVIO module: ROVIO is a tightly coupled, filter-based VIO that operates directly on image patches, so its computational load is small, which helps improve the efficiency of visual inertial positioning while maintaining high precision.
Step S106, directly obtaining a second image sequence of the AR device according to the second frequency, or selectively obtaining the second image sequence from the first image sequence according to the second frequency; wherein the second frequency is lower than the first frequency; and then acquiring a sparse map by utilizing an optimization technology according to the second image sequence.
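As a simple illustration of the dual-frequency design of step S106 (the frequencies, the thinning rule and the function name below are assumptions for this sketch, not values from the patent):

```python
def select_second_sequence(first_sequence, first_hz, second_hz):
    """Thin the first image sequence to the lower second frequency by
    keeping roughly every (first_hz / second_hz)-th frame."""
    assert second_hz < first_hz, "the second frequency must be lower"
    step = max(1, round(first_hz / second_hz))
    return first_sequence[::step]

# e.g. a 30 Hz tracking stream thinned to ~3 Hz for the map builder
frames = list(range(300))   # stand-in for the first image sequence
second_sequence = select_second_sequence(frames, first_hz=30, second_hz=3)
```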
It should be noted that the map is constructed with the optimization method because it contains a large number of three-dimensional points: if the map were modeled with the filtering method, the state space of the mathematical model would be too large and the computation would be intractable. Step S106 is referred to as the map builder; since the map builder runs at the lower second frequency, it introduces no large extra computational overhead and avoids heating the device.
And step S108, acquiring the positioning result of visual inertial positioning according to the spatial position and the sparse map, thereby realizing acquisition of the spatial position of the AR device with the visual inertial positioning technology.
In the related art, when an AR device is used in a space for too long, its position tracking accuracy keeps decreasing; in addition, because the optimization method is computationally demanding, prolonged use of the AR device can cause serious heating. In the embodiment of the present application, through the above steps S102 to S108, the spatial position of the AR device is obtained by the filtering method and the sparse map is obtained by the optimization method, so that the positioning result of visual inertial positioning is obtained. This solves the problem that high precision and high efficiency of visual inertial positioning cannot be achieved at the same time, and realizes efficient, stable and accurate spatial position tracking of the AR device; moreover, all calculations are performed locally in real time, without relying on computing power outside the system.
In some of these embodiments, a method of visual inertial positioning is provided. Fig. 2 is a second flowchart of a visual inertial positioning method according to an embodiment of the present application, and as shown in fig. 2, the second flowchart includes the following steps:
step S202, monitoring tracking state information provided by a VIO module, inquiring a constructed environment map, and further acquiring a pose difference between the space position and the sparse map; and then judging whether the error of the VIO module needs to be corrected. The judgment basis of whether the error needs to be corrected is to compare the difference between the position information provided by the VIO module and the position information provided by the map module, and if the difference is larger than a certain threshold value, the error needs to be corrected. Therefore, the spatial position is corrected in the case where the pose difference is greater than or equal to a preset error; the pose difference comprises a translation error absolute value and an angle error absolute value of the pose, and the preset error can be preset by a user; and then determining a positioning result according to the sparse map and the corrected spatial position.
Different from the closed-loop detection of SLAM algorithms in the related art, the embodiment of the present application performs implicit pose correction on the VIO through the observation information provided by the map points in step S202, so that the real-time pose of the AR device can be acquired more accurately and a globally consistent trajectory and map are established, effectively improving the accuracy of the visual inertial positioning method. Meanwhile, in the related art, when the user moves the device violently or occludes the camera of the AR device, position tracking fails and the device cannot be relocalized when the user resumes normal use; in the embodiment of the present application, step S202 provides relocalization of the AR device, realizing spatial position tracking after relocalization, so the positioning system has better robustness.
In some of these embodiments, a method of visual inertial positioning is provided. Fig. 3 is a third flowchart of a visual inertial positioning method according to an embodiment of the present application, and as shown in fig. 3, the flowchart includes the following steps:
step S302, extracting feature points from the second image sequence, matching the feature points in different images, and obtaining the corresponding relation of the feature points.
Step S304, acquiring the three-dimensional position of each feature point by triangulation according to the spatial position output by the VIO algorithm for each image in the second image sequence and the correspondence of the feature points; wherein the spatial position comprises the pose information of each image from the VIO module. Triangulation is a method in visual localization that, given the projections of one spatial point observed from several known AR device poses, recovers the 3D position of that point; it is the inverse process of pose estimation (Pose Estimation). Once the pose of the AR device is known, the three-dimensional positions of the other feature points in the image can be recovered one by one with this method.
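A minimal sketch of linear (DLT) triangulation of one feature point from two views, assuming 3x4 projection matrices built from the camera intrinsics and the VIO poses; this is one standard formulation of the triangulation step, not necessarily the patent's exact one:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Triangulate one 3D point from two views.

    P1, P2: 3x4 projection matrices (intrinsics times the VIO poses)
    x1, x2: matched pixel coordinates (u, v) in each image
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenise to a 3D point
```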
Fig. 4 is a schematic diagram of a geometric model according to an embodiment of the present application. As shown in fig. 4, the geometric model is a geometric model with noise: an error caused by the pixel residual exists between the solved three-dimensional point and the actual three-dimensional point. The geometric constraints between the two frames of images are used to match the corresponding pixel points, and solving yields both the three-dimensional positions corresponding to those pixel points and the pose of the second image relative to the first image.
Step S306, acquiring the optimized position information and the optimized image poses simultaneously with the bundle adjustment (Bundle Adjustment) technique according to the three-dimensional positions: given several images with a common field of view, the pose of one frame is taken as the reference coordinate system, landmark features are extracted from all the images, and the three-dimensional positions of the features in the reference coordinate system and the three-dimensional poses of the images in the reference coordinate system are jointly optimized; the sparse map is then acquired according to the optimized position information and the optimized image poses.
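For illustration, a small bundle adjustment can be sketched with SciPy by stacking the reprojection residuals of all camera-point observations and minimising them jointly; the parameter packing and the observation format below are assumptions made for this example:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, K, observations):
    """Pixel residuals over all observations.

    params packs 6 values per camera (axis-angle rotation + translation)
    followed by 3 coordinates per landmark; observations is a list of
    (camera_index, point_index, measured_uv) tuples.
    """
    cams = params[:n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for ci, pi, uv in observations:
        R = Rotation.from_rotvec(cams[ci, :3]).as_matrix()
        p_cam = R @ pts[pi] + cams[ci, 3:]       # landmark in camera frame
        proj = K @ p_cam
        res.extend(proj[:2] / proj[2] - uv)      # reprojection error (pixels)
    return np.asarray(res)

# least_squares refines all poses and landmarks simultaneously; a full
# implementation would additionally hold one camera fixed as the
# reference coordinate system to remove the gauge freedom.
# result = least_squares(reprojection_residuals, x0,
#                        args=(n_cams, n_pts, K, observations))
```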
Through the above steps S302 to S306, based on the optimization method, the feature points are extracted and matched, the three-dimensional positions are obtained by solving, and finally the sparse map is acquired with the bundle adjustment technique, thereby realizing the optimization-based map builder.
It should be understood that, although the steps in the flowcharts of fig. 1 to 3 are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least some of the steps in fig. 1-3 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternately with other steps or at least some of the sub-steps or stages of other steps.
An embodiment of the present application is described in detail below with reference to an actual application scenario. Fig. 5 is a schematic diagram of a visual inertial positioning method according to an embodiment of the present application. As shown in fig. 5, a filtering method is used to perform rough tracking of the spatial position of the AR device, referred to as the VIO; an optimization method obtains image information from the VIO module at a very low working frequency and constructs a sparse map of the environment, a process referred to as the map builder; the tracking state information provided by the VIO module is monitored and the constructed environment map is queried to judge whether the error of the VIO needs to be corrected; if position tracking error correction is needed, the queried map information is used to correct the error of the VIO module. The module that queries the map is called the map querier, and the module that performs error correction on the VIO module is called the error corrector. Through this embodiment, an accurate visual inertial positioning result is obtained.
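Putting the pieces of fig. 5 together, one plausible shape for the overall loop is sketched below; every module interface here (vio.track, map_builder.insert, map_query.lookup, corrector.correct) is a hypothetical name invented for this sketch, not the patent's API:

```python
def positioning_loop(camera, imu, vio, map_builder, map_query, corrector,
                     map_every_n=10):
    """High-rate filter-based tracking plus a low-rate optimization-based
    map builder, map querier and error corrector (all interfaces assumed)."""
    for i, frame in enumerate(camera.frames()):
        # rough spatial position from the filter-based VIO (first frequency)
        pose = vio.track(frame, imu.readings_since_last())
        # feed the map builder at the lower second frequency
        if i % map_every_n == 0:
            map_builder.insert(frame, pose)
        # query the constructed map and correct the VIO error if needed
        map_pose = map_query.lookup(frame)
        if map_pose is not None and corrector.needs_correction(pose, map_pose):
            pose = corrector.correct(pose, map_pose)
        yield pose
```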
In this embodiment, a device for visual inertial positioning is provided, and the device is used to implement the above embodiments and preferred embodiments, which have already been described and will not be described again. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware for a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 6 is a first block diagram of a visual inertial positioning device according to an embodiment of the present application. As shown in fig. 6, the device includes: an image module 62, a VIO module 64, an optimization module 66, and a map query module 68. The image module 62 is configured to obtain a first image sequence of the AR device according to the first frequency; the VIO module 64 obtains, according to the first image sequence, the spatial position of the AR device in the VIO algorithm using a filtering technique; the optimization module 66 is configured to obtain a second image sequence according to a second frequency and the first image sequence, and to obtain a sparse map from the second image sequence using an optimization technique; the map query module 68 is configured to obtain the positioning result of visual inertial positioning according to the spatial position and the sparse map.
Through the embodiment, the VIO module 64 obtains the spatial position of the AR device based on the filtering method, and obtains the sparse map based on the optimization method through the optimization module 66, so that the map query module 68 obtains the positioning result of the visual inertial positioning, the problem that the high precision and the high efficiency of the visual inertial positioning cannot be considered at the same time is solved, and the high-efficiency, high-stability and high-precision spatial position tracking of the AR device is realized; and all calculations are performed locally in real time without relying on computing power outside the system.
In some embodiments, a device for visual inertial positioning is provided, and fig. 7 is a block diagram of a second structure of a device for visual inertial positioning according to an embodiment of the present application, as shown in fig. 7, the device includes all the modules shown in fig. 6, and further includes a correction module 72; the correcting module 72 is configured to obtain a pose difference between the spatial location and the sparse map; the correction module 72 corrects the spatial position if the pose difference is greater than or equal to a preset error.
In some embodiments, the optimization module 66 is further configured to extract and match feature points of the second image sequence, and obtain a corresponding relationship of the feature points; the optimizing module 66 obtains the three-dimensional position of the feature point by using a triangulation method according to the spatial position and the corresponding relationship; the optimization module 66 obtains optimized position information by using the bundleadjust technology according to the three-dimensional position; the optimization module 66 obtains the sparse map according to the optimized location information.
In some of these embodiments, the VIO module 64 is replaced with a VIO module based on the filtering technique. That is, any filtering method-based VIO algorithm is adopted as the sub-module in the scheme, and the technology is feasible.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
In the present embodiment, a system for visual inertial positioning is provided, and fig. 8 is a block diagram illustrating a structure of a system for visual inertial positioning according to an embodiment of the present application, and as shown in fig. 8, the system includes a main control device 82 and an AR device 84; the master device 82 obtains a first sequence of images of the AR device 84 according to a first frequency; the main control device 82 obtains the spatial position of the AR device 84 in the VIO algorithm by using a filtering technique according to the first image sequence; the main control device 82 obtains a second image sequence according to the second frequency and the first image sequence; the main control device 82 obtains a sparse map by using an optimization technology according to the second image sequence; the main control device 82 obtains the positioning result of the visual inertial positioning according to the spatial position and the sparse map.
Through the above embodiment, the main control device 82 obtains the spatial position of the AR device 84 based on the filtering method, and obtains the sparse map through the optimization method, thereby obtaining the positioning result of the visual inertial positioning, solving the problem that the high precision and the high efficiency of the visual inertial positioning cannot be considered at the same time, and realizing the high-efficiency, high-stability and high-precision spatial position tracking of the AR device 84; and all calculations are performed locally in real time without relying on computing power outside the system.
In some of these embodiments, the master device 82 is further configured to obtain a pose difference between the spatial location and the sparse map; and correcting the spatial position under the condition that the pose difference is greater than or equal to a preset error.
In some embodiments, the main control device 82 is further configured to extract and match feature points of the second image sequence, and obtain a corresponding relationship of the feature points; the main control device 82 obtains the three-dimensional position of the feature point by using a triangulation method according to the spatial position and the corresponding relationship; the main control device 82 obtains optimized position information by using a BundleAdjustment technology according to the three-dimensional position; the master control device 82 obtains the sparse map according to the optimized location information.
In some embodiments, the master device 82 is further configured to obtain a spatial location of the AR device 84 in the VIO algorithm based on the filtering technique according to the first image sequence.
In addition, the visual inertial positioning method described in conjunction with fig. 1 in the embodiment of the present application may be implemented by a computer device. Fig. 9 is a hardware configuration diagram of a computer device according to an embodiment of the present application.
The computer device may include a processor 92 and a memory 94 storing computer program instructions.
Specifically, the processor 92 may include a Central Processing Unit (CPU) or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
The processor 92 reads and executes computer program instructions stored in the memory 94 to implement any of the visual inertial positioning methods described in the above embodiments.
In some of these embodiments, the computer device may also include a communication interface 96 and a bus 98. As shown in fig. 9, the processor 92, the memory 94, and the communication interface 96 are connected via a bus 98 to complete communication therebetween.
The communication interface 96 is used for realizing communication among the modules, devices, units and/or equipment in the embodiments of the present application. The communication interface 96 may also carry out data communication with external components, such as external devices, image/data acquisition devices, databases, external storage and image/data processing workstations.
The computer device may execute the visual inertial positioning method in the embodiment of the present application based on the acquired image sequence, thereby implementing the visual inertial positioning method described with reference to fig. 1.
In addition, in combination with the visual inertial positioning method in the foregoing embodiments, the embodiments of the present application may be implemented by providing a computer-readable storage medium. The computer readable storage medium having stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the above-described embodiments of the visual inertial positioning method.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (8)
1. A method of visual inertial positioning, the method comprising:
acquiring a first image sequence of the AR equipment according to the first frequency;
according to the first image sequence, acquiring the spatial position of the AR equipment in a visual-inertial odometry (VIO) algorithm by utilizing a filtering technology;
acquiring a second image sequence according to a second frequency, and acquiring a sparse map by utilizing an optimization technology according to the second image sequence; wherein the second frequency is lower than the first frequency;
acquiring a pose difference between the spatial position and the sparse map; correcting the spatial position when the pose difference is greater than or equal to a preset error;
and acquiring a positioning result of visual inertial positioning according to the spatial position and the sparse map.
2. The method of claim 1, wherein obtaining a sparse map using an optimization technique according to the second image sequence comprises:
extracting and matching the characteristic points of the second image sequence to obtain the corresponding relation of the characteristic points;
acquiring the three-dimensional position of the characteristic point by using a triangulation method according to the spatial position and the corresponding relation;
according to the three-dimensional position, acquiring optimized position information by using a bundle adjustment (Bundle Adjustment) technique; and acquiring the sparse map according to the optimized position information.
3. The method according to any of claims 1-2, wherein the VIO algorithm is replaced with a VIO algorithm based on the filtering technique.
4. An apparatus for visual inertial positioning, the apparatus comprising: the map searching system comprises an image module, a VIO module, an optimization module and a map searching module;
the image module is used for acquiring a first image sequence of the AR equipment according to the first frequency;
the VIO module acquires the spatial position of the AR equipment in the VIO algorithm by using a filtering technology according to the first image sequence;
the optimization module is used for acquiring a second image sequence according to a second frequency and acquiring a sparse map by utilizing an optimization technology according to the second image sequence; wherein the second frequency is lower than the first frequency;
the correction module is used for acquiring a pose difference between the spatial position and the sparse map; correcting the spatial position when the pose difference is greater than or equal to a preset error;
and the map query module is used for acquiring a positioning result of visual inertial positioning according to the spatial position and the sparse map.
5. The apparatus of claim 4, wherein the VIO module is replaced with a VIO module based on the filtering technique.
6. A system for visual inertial positioning, the system comprising a master device and an AR device;
the main control equipment acquires a first image sequence of the AR equipment according to a first frequency;
the main control equipment acquires the spatial position of the AR equipment in the VIO algorithm by using a filtering technology according to the first image sequence;
the master control equipment acquires a second image sequence according to a second frequency and acquires a sparse map by utilizing an optimization technology according to the second image sequence; wherein the second frequency is lower than the first frequency;
the master control device acquires a pose difference between the spatial position and the sparse map; correcting the spatial position when the pose difference is greater than or equal to a preset error;
and the main control equipment acquires a positioning result of visual inertial positioning according to the spatial position and the sparse map.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 3 when executing the computer program.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010553571.3A CN111879306B (en) | 2020-06-17 | 2020-06-17 | Visual inertial positioning method, device and system and computer equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010553571.3A CN111879306B (en) | 2020-06-17 | 2020-06-17 | Visual inertial positioning method, device and system and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111879306A CN111879306A (en) | 2020-11-03 |
CN111879306B true CN111879306B (en) | 2022-09-27 |
Family
ID=73157875
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010553571.3A Active CN111879306B (en) | 2020-06-17 | 2020-06-17 | Visual inertial positioning method, device and system and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111879306B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109345015A (en) * | 2018-09-30 | 2019-02-15 | 百度在线网络技术(北京)有限公司 | Method and apparatus for choosing route |
- CN110553648A * | 2018-06-01 | 2019-12-10 | 北京嘀嘀无限科技发展有限公司 | Method and system for indoor navigation |
CN111258313A (en) * | 2020-01-20 | 2020-06-09 | 深圳市普渡科技有限公司 | Multi-sensor fusion SLAM system and robot |
CN111275763A (en) * | 2020-01-20 | 2020-06-12 | 深圳市普渡科技有限公司 | Closed loop detection system, multi-sensor fusion SLAM system and robot |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105953796A (en) * | 2016-05-23 | 2016-09-21 | 北京暴风魔镜科技有限公司 | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone |
CN106446815B (en) * | 2016-09-14 | 2019-08-09 | 浙江大学 | A kind of simultaneous localization and mapping method |
US10267924B2 (en) * | 2017-01-04 | 2019-04-23 | Qualcomm Incorporated | Systems and methods for using a sliding window of global positioning epochs in visual-inertial odometry |
EP3451288A1 (en) * | 2017-09-04 | 2019-03-06 | Universität Zürich | Visual-inertial odometry with an event camera |
CN108489482B (en) * | 2018-02-13 | 2019-02-26 | 视辰信息科技(上海)有限公司 | The realization method and system of vision inertia odometer |
CN110118554B (en) * | 2019-05-16 | 2021-07-16 | 达闼机器人有限公司 | SLAM method, apparatus, storage medium and device based on visual inertia |
CN110375738B (en) * | 2019-06-21 | 2023-03-14 | 西安电子科技大学 | Monocular synchronous positioning and mapping attitude calculation method fused with inertial measurement unit |
- CN110472585B * | 2019-08-16 | 2020-08-04 | 中南大学 | VI-SLAM closed-loop detection method based on inertial navigation attitude track information assistance |
- 2020-06-17: application CN202010553571.3A filed in CN; granted as patent CN111879306B (en), status Active
Also Published As
Publication number | Publication date |
---|---|
CN111879306A (en) | 2020-11-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||