CN113469495A - Automatic evaluation method and system for visual positioning system - Google Patents

Automatic evaluation method and system for visual positioning system

Info

Publication number
CN113469495A
Authority
CN
China
Prior art keywords
pose
positioning
visual
continuous image
automated evaluation
Prior art date
Legal status
Pending
Application number
CN202110590939.8A
Other languages
Chinese (zh)
Inventor
顾升宇
王强
张小军
Current Assignee
Visionstar Information Technology Shanghai Co ltd
Original Assignee
Visionstar Information Technology Shanghai Co ltd
Priority date
Filing date
Publication date
Application filed by Visionstar Information Technology Shanghai Co ltd filed Critical Visionstar Information Technology Shanghai Co ltd
Priority to CN202110590939.8A
Publication of CN113469495A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 - Operations research, analysis or management
    • G06Q 10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 10/06395 - Quality analysis or management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Game Theory and Decision Science (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • General Business, Economics & Management (AREA)
  • Operations Research (AREA)
  • Marketing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide an automated evaluation method and system for a visual positioning system. One method comprises: acquiring continuous image frames as visual positioning data; acquiring a pose truth value A of the continuous image frames; acquiring a positioning pose B of the continuous image frames; and computing the position error and angle error of each image frame from the pose truth value A and the corresponding positioning pose B, judging the positioning quality qualified when the position error is smaller than a preset position error threshold and the angle error is smaller than a preset angle error threshold, and unqualified otherwise. The method evaluates the quality of positioning results automatically and reliably, places low demands on the application scenario, operating procedure, and equipment, and saves manpower and material resources.

Description

Automatic evaluation method and system for visual positioning system
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to an automated evaluation method and system for a vision positioning system.
Background
Visual localization recovers the position and orientation of a camera by matching stable features in its image against a map. As camera technology develops, the world-sensing capability of cameras grows ever stronger, combining high-precision 3D mapping, spatial computing, strong environment understanding, and virtual-real fusion, and ever more techniques build on it. In augmented reality, virtual reality, navigation, mobile robotics, unmanned aerial vehicles, autonomous driving, and similar fields, obtaining the position and angle of a camera in space is critically important. The conventional approach is to model a space or building in three dimensions and produce a 3D map that stores the shapes, positions, angles, features, and semantic information of objects in the real three-dimensional scene; the position and angle of the current device within the map are then computed by matching the image the device captures at the current moment against the features in the 3D map.
Once visual positioning completes, the error and success rate of the positioning need to be known, yet in many scenarios obtaining the true pose of the device in the map consumes enormous manpower and material resources. Indoors, high-precision truth values can be obtained with Vicon tracking, marker tracking, wireless positioning, and the like; outdoors, with high-precision RTK or GPS. These methods, however, depend on dedicated equipment and consume manpower, and in some commercial environments complex operations cannot be carried out, so truth values cannot be obtained and efficient evaluation of the positioning effect cannot be achieved.
A prior-art approach (patent application No. 201810750858.8) concerns an automated evaluation method and system for the relative accuracy of an autonomous-driving positioning system: the positioning track is aligned with GPS, a deviation value is computed, and the positioning quality is judged qualified or unqualified according to that deviation. That technology, however, relies on external equipment to acquire the positioning truth value and cannot obtain it directly from the positioning device itself, which makes automated evaluation of positioning quality inconvenient.
Disclosure of Invention
The embodiments of the invention provide an automated evaluation method and system for a visual positioning system, aiming to solve the prior-art problems that acquiring the positioning truth value depends on external equipment, the truth value cannot be acquired directly by the positioning device, and automated evaluation of positioning quality is therefore inconvenient.
In a first aspect of the present invention, there is provided an automated evaluation method of a visual positioning system, comprising:
step S1, collecting visual positioning data, wherein the visual positioning data at least comprises continuous image frames;
step S2, acquiring a pose truth value A of the continuous image frames;
step S3, acquiring a positioning pose B of the continuous image frames;
and step S4, computing the position error and angle error of each image frame from the pose truth value A and the corresponding positioning pose B, and judging the positioning quality qualified when the position error is smaller than a preset position error threshold and the angle error is smaller than a preset angle error threshold.
Further, the visual positioning data is acquired by a visual sensor alone or by a visual sensor and an inertial sensor synchronously.
Further, the vision sensor comprises at least one of a monocular camera, a binocular camera, or a multi-view camera; the inertial sensor comprises at least one of an accelerometer, a gyroscope, a magnetometer and a GPS.
Further, a relative pose relationship T1 of the continuous image frames is obtained; the continuous image frames are located in a high-precision map and the successfully located image poses T2 are obtained; the relative pose relationship T1 and the successfully located image poses T2 are computed together to obtain the poses T3 of all continuous image frames; and the continuous image frame poses T3 are optimized.
Further, the calculation includes aligning the relative pose relationship T1 onto an image pose T2 coordinate system using the Sim3 transform.
Further, the three-dimensional coordinate system of the high-precision map is kept fixed, and the continuous image frames and the high-precision map are fused to generate a three-dimensional point cloud map.
Further, the continuous image frame pose T3 is optimized using the three-dimensional point cloud in the three-dimensional point cloud map.
Further, the relative pose relationship T1 is obtained using a visual odometry technique or a visual-inertial odometry technique.
Further, a visual positioning method is adopted for positioning the continuous image frames to a high-precision map.
Further, the positioning pose B is acquired by the visual positioning system.
In a second aspect of the present invention, there is provided an automated evaluation system of a visual positioning system, comprising:
the data acquisition module is used for acquiring visual positioning data at least comprising continuous image frames;
the pose truth value acquisition module is used for calculating and acquiring a pose truth value A of the continuous image frames;
a positioning pose acquisition module for calculating and acquiring a positioning pose B of the continuous image frames;
and the error judgment module is used for calculating the pose true value A and the corresponding positioning pose B to obtain a position error and an angle error of each frame of image, and when the position error is smaller than a preset position error threshold and the angle error is smaller than a preset angle error threshold, judging that the positioning quality is qualified.
Further, the data acquisition module includes a visual sensor and/or an inertial sensor.
Further, the vision sensor comprises at least one of a monocular camera, a binocular camera, or a multi-view camera; the inertial sensor comprises at least one of an accelerometer, a gyroscope, a magnetometer and a GPS.
Further, the pose truth value acquisition module performs the following process: acquiring a relative pose relationship T1 of the continuous image frames, locating the continuous image frames in a high-precision map, obtaining the successfully located image poses T2, computing the relative pose relationship T1 together with the poses T2 to obtain the poses T3 of all continuous image frames, and optimizing the continuous image frame poses T3.
Further, the calculation includes aligning the relative pose relationship T1 onto an image pose T2 coordinate system using the Sim3 transform.
Further, the three-dimensional coordinate system of the high-precision map is kept fixed, and the continuous image frames and the high-precision map are fused to generate a three-dimensional point cloud map.
Further, the continuous image frame pose T3 is optimized using the three-dimensional point cloud in the three-dimensional point cloud map.
Further, the relative pose relationship T1 is obtained using a visual odometry technique or a visual-inertial odometry technique.
Further, a visual positioning method is adopted for positioning the continuous image frames to a high-precision map.
Further, the positioning pose B is acquired by the visual positioning system.
The method comprises the steps of collecting continuous image frames as visual positioning data; acquiring the pose truth value A of the continuous image frames; acquiring the positioning pose B of the continuous image frames; and computing the position error and angle error of each image frame from the pose truth value A and the corresponding positioning pose B, judging the positioning quality qualified when the position error is smaller than a preset position error threshold and the angle error is smaller than a preset angle error threshold, and unqualified otherwise.
Drawings
FIG. 1 is a flow chart of an automated evaluation method of a visual positioning system provided herein;
fig. 2 is a schematic structural diagram of an automated evaluation system of a visual positioning system according to the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
For the sake of understanding, the following description will explain specific embodiments of the present invention with reference to the drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
Example one
The first embodiment of the present invention will be described in detail below with reference to the accompanying drawings. Fig. 1 shows a flowchart of an automated evaluation method of a vision positioning system. The method specifically comprises the following steps:
Step S1, acquiring the visual positioning data. Specifically, the visual positioning system carries a visual sensor such as a camera, which may be a monocular, binocular, or multi-view camera; it captures continuous image frames of the object being localized, and the captured frames serve as the visual positioning data. In some embodiments, a visual positioning system refers to one used in fields such as robotics, unmanned aerial vehicles, AR/VR, and autonomous driving;
and S2, acquiring pose truth values A of the continuous image frames, specifically, acquiring a relative pose relation T1 of the continuous image frames, positioning the continuous image frames into a high-precision map, acquiring an image pose T2 with successful positioning, calculating the relative pose relation T1 and the image pose T2 with successful positioning to obtain all continuous image frame poses T3, and optimizing the pose T3 of the continuous image frames to obtain pose truth values. The high-precision map is an electronic map with higher precision and more data dimensions. The accuracy is higher, and the data dimension is more embodied by the fact that the data dimension comprises surrounding static information which is related to traffic besides road information.
In some embodiments, the above calculation may align the relative pose relationship T1 onto the coordinate system of the image poses T2 using the Sim3 transform. The Sim3 transform solves a similarity transformation from at least 3 pairs of matching points, recovering the rotation matrix, translation vector, and scale between the two coordinate systems.
Specifically, the continuous image frames comprise a plurality of images (f0, f1, f2, ...), and the relative pose relationship T1 comprises the pose of each frame ([R10|t10], [R11|t11], [R12|t12], ...), where R1n is the 3×3 rotation matrix and t1n the 3×1 translation vector of frame n. The successfully located images (f0, f1, f2, ...) are all or part of the continuous image frames, and their poses T2 comprise the pose of each such image ([R20|t20], [R21|t21], [R22|t22], ...), where R2n is the 3×3 rotation matrix and t2n the 3×1 translation vector of frame n. Denote the camera positions of the continuous image frames p1 = (t10, t11, t12, ...) and those of the successfully located images p2 = (t20, t21, t22, ...); the positions of the same image in the two sets, p1 and p2, form a three-dimensional point pair, so solving the Sim3 transform can be understood as solving the similarity transformation between two coordinate systems. After outliers are removed with RANSAC, as long as more than three images have such point pairs, the similarity transformation p2 = s·R·p1 + t can be solved for the scale parameter s, the 3×3 rotation matrix R, and the 3×1 translation vector t. The solved poses T3 of the continuous image frames comprise the pose of each frame ([R30|t30], [R31|t31], [R32|t32], ...), where R3n is the 3×3 rotation matrix and t3n the 3×1 translation vector of frame n. The positions p3 of all continuous image frames F1 are computed from the positions p1 and the similarity transformation (s, R, t) as p3 = s·R·p1 + t, and the camera angles q3 = (R30, R31, R32, ...) of the poses T3 are obtained from the camera angles q1 = (R10, R11, R12, ...) of the continuous image frames as q3 = R·q1. The rotations q3 and positions p3 so solved constitute the poses T3 of all continuous image frames F1.
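To make the above concrete, the following sketch solves p2 = s·R·p1 + t in closed form with the Umeyama method, one standard way to realize this Sim3 alignment. This is an assumed implementation choice rather than one fixed by the disclosure, and the RANSAC outlier removal described above is omitted for brevity.

```python
import numpy as np

def umeyama_sim3(p1, p2):
    """Estimate s, R, t such that p2 ≈ s * R @ p1 + t, from two (N, 3) arrays
    of corresponding camera positions (N >= 3, non-degenerate)."""
    mu1, mu2 = p1.mean(axis=0), p2.mean(axis=0)
    x1, x2 = p1 - mu1, p2 - mu2                 # centre both point sets
    cov = x2.T @ x1 / len(p1)                   # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                          # guard against a reflection
    R = U @ S @ Vt                              # 3x3 rotation matrix
    var1 = (x1 ** 2).sum() / len(p1)            # variance of the source points
    s = np.trace(np.diag(D) @ S) / var1         # scale parameter
    t = mu2 - s * R @ mu1                       # translation vector
    return s, R, t
```

Applying (s, R, t) to every frame of T1 then yields the truth poses T3 exactly as described: p3 = s·R·p1 + t and R3n = R·R1n.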
In some embodiments, the three-dimensional coordinate system of the high-precision map is kept fixed, and the continuous image frames are fused with the high-precision map to generate the three-dimensional point cloud map.
In some embodiments, the continuous image frame pose T3 is optimized using a three-dimensional point cloud in a three-dimensional point cloud map.
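The disclosure does not fix a particular optimization algorithm. One plausible realization, sketched here under that assumption, refines each frame of T3 against the fused point-cloud map by minimizing reprojection error with OpenCV's RANSAC PnP solver:

```python
import cv2
import numpy as np

def refine_frame_pose(points3d, points2d, K, rvec0, tvec0):
    """Refine one frame of pose T3. points3d: (N, 3) map points from the fused
    point cloud; points2d: (N, 2) observations in the frame; K: 3x3 camera
    intrinsics; (rvec0, tvec0): initial pose from the Sim3 alignment."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points3d.astype(np.float32), points2d.astype(np.float32), K, None,
        rvec=rvec0, tvec=tvec0, useExtrinsicGuess=True, reprojectionError=3.0)
    if not ok:
        return rvec0, tvec0   # keep the initial pose if refinement fails
    return rvec, tvec
```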
In some embodiments, the relative pose relationship T1 is acquired using a visual odometry technique or a visual-inertial odometry technique.
Visual odometry recovers the six-degree-of-freedom motion of a body, three degrees of rotation and three of translation, from image information acquired by an on-board camera. Because its dead reckoning resembles that of a wheel odometer, this kind of ego-motion estimation from image information is called visual odometry. Its basic steps are feature extraction, feature matching, coordinate transformation, and motion estimation, and most current visual odometry systems still follow this framework. Two research areas closely related to visual odometry are structure from motion (SfM) and simultaneous localization and mapping (SLAM). In the visual SLAM problem, the position of the camera itself must be estimated in real time together with the spatial positions of detected landmarks and their associations, so as to map the surrounding environment. Early SLAM algorithms had to rely on sensors that provide depth information, such as lidar and sonar; in recent years, V-SLAM relying solely on machine vision, such as the monocular visual SLAM algorithm, has attracted wide attention.
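As a concrete illustration of the feature extraction, feature matching, and motion estimation steps just listed, the sketch below estimates the relative pose between two consecutive frames with OpenCV. ORB features and essential-matrix decomposition are assumed choices, and monocular visual odometry recovers the translation only up to an unknown scale:

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """One visual-odometry step: relative rotation R and unit-scale
    translation t between two consecutive grayscale frames."""
    orb = cv2.ORB_create(2000)                        # feature extraction
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)               # feature matching
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Motion estimation: essential matrix with RANSAC outlier rejection,
    # then decomposition into R and t (t has unit norm in the monocular case).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```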
Visual-inertial odometry fuses data from a camera and an inertial measurement unit (IMU) to realize SLAM; the two sensors complement each other well. First, the pose sequence estimated by the IMU is aligned with the pose sequence estimated by the camera to estimate the true scale of the camera trajectory. The IMU can also predict well the pose of the next image frame and the positions, in the next frame, of feature points observed at the previous moment, which speeds up feature-tracking matching and improves robustness to rapid rotation. Finally, the gravity vector provided by the accelerometer in the IMU converts the estimated positions into the world coordinate system required for actual navigation.
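The scale-alignment step can be illustrated with a deliberately simplified sketch: a least-squares scale between matched, mean-centred camera and IMU position sequences. Real VIO systems estimate scale jointly with gravity direction and IMU biases, so this is only a stand-in:

```python
import numpy as np

def align_scale(cam_positions, imu_positions):
    """Least-squares s minimizing sum ||imu_i - s * cam_i||^2 over matched,
    mean-centred (N, 3) position sequences."""
    c = np.asarray(cam_positions) - np.mean(cam_positions, axis=0)
    i = np.asarray(imu_positions) - np.mean(imu_positions, axis=0)
    return float((c * i).sum() / (c * c).sum())
```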
With the rapid development of MEMS devices, mobile terminals such as smartphones and AR/VR glasses can conveniently acquire IMU data together with camera data. A VINS algorithm fusing IMU and visual information can greatly improve the performance of monocular SLAM; it is a low-cost, high-performance navigation scheme that has attracted great attention in fields such as robotics, unmanned aerial vehicles, AR/VR, and autonomous driving.
In some embodiments, the manner in which successive image frames are localized to a high-precision map employs a visual localization method, such as SLAM techniques.
Step S3, acquiring the positioning pose B of the continuous image frames. In some embodiments, the positioning pose B is acquired by the visual positioning system itself, specifically a visual positioning system in fields such as robotics, unmanned aerial vehicles, AR/VR, and autonomous driving. Taking a robot as an example: starting from an unknown place in an unknown environment, the robot localizes its own position and pose through environmental features observed repeatedly during motion. A monocular camera, binocular camera, or RGB-D camera is mostly used. In some embodiments, the pose of the robot may be described compactly by a homogeneous transformation matrix (4×4).
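For instance, a pose [R|t] can be packed into the 4×4 homogeneous matrix mentioned here as follows, so that consecutive poses compose by plain matrix multiplication (a minimal sketch):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation R and a 3-vector translation t into a 4x4 pose."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T
```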
Step S4, computing the position error and angle error of each image frame from the pose truth value A and the corresponding positioning pose B; when the position error is smaller than the preset position error threshold and the angle error is smaller than the preset angle error threshold, the positioning quality is judged qualified, and otherwise unqualified.
Specifically, the visual positioning system acquires the pose of each image frame ([R50|t50], [R51|t51], [R52|t52], ...), where R5n is the 3×3 rotation matrix and t5n the 3×1 translation vector of frame n. The corresponding pose truth values comprise the pose of each frame ([R40|t40], [R41|t41], [R42|t42], ...), where R4n is the 3×3 rotation matrix and t4n the 3×1 translation vector of frame n. In one embodiment, the distance error threshold Th is 20 cm and the angle error threshold Rh is 5°; an image whose positioning error is within 20 cm and whose angle error is within 5° is judged qualified. The thresholds are set according to the requirements of the actual application or the subjective perception of the customer: if the customer accepts a distance error within 1 m and an angle error within 10°, the parameters are set to Th = 1 m and Rh = 10°.
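Under this notation, the per-frame check of step S4 reduces to a few lines. The angle error below is taken as the geodesic angle of the relative rotation R4nᵀR5n, one common choice that the disclosure leaves open:

```python
import numpy as np

def frame_qualified(R4, t4, R5, t5, th=0.20, rh=5.0):
    """Compare one frame's truth pose (R4, t4) against its positioning pose
    (R5, t5) using the thresholds Th = 0.20 m and Rh = 5 degrees."""
    pos_err = np.linalg.norm(np.asarray(t4).ravel() - np.asarray(t5).ravel())
    cos_a = (np.trace(R4.T @ R5) - 1.0) / 2.0     # angle of relative rotation
    ang_err = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return pos_err < th and ang_err < rh
```

Averaging the per-frame flags over all frames also yields the success rate mentioned in the background.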
The method evaluates the quality of the positioning result in a fully automated way; the result is reliable, the demands on application scenario, operating procedure, and equipment are low, and manpower and material resources are saved.
Example two
The second embodiment of the present invention will be described in detail below with reference to the accompanying drawings. Fig. 2 shows a schematic structural diagram of an automated evaluation system of a visual positioning system. The system specifically comprises the following modules:
the data acquisition module may be a visual sensor mounted in a visual positioning system, for example, a camera, which may be a monocular, binocular, or multi-view camera, and takes continuous frame images of a positioning object, and uses the captured continuous frame images as visual positioning data. In some embodiments, a visual positioning system refers to a visual positioning system used in the technical fields of robots, drones, AR/VR, unmanned, and the like.
The pose truth value acquisition module is used for calculating and acquiring the pose truth values A of the continuous image frames. Specifically, the module can acquire a relative pose relationship T1 of the continuous image frames, locate the continuous image frames in a high-precision map, obtain the successfully located image poses T2, and compute the relative pose relationship T1 together with the poses T2 to obtain the poses T3 of all continuous image frames. In some embodiments, this calculation may align the relative pose relationship T1 onto the coordinate system of the image poses T2 using the Sim3 transform, which solves a similarity transformation from at least 3 pairs of matching points and recovers the rotation matrix, translation vector, and scale between the two coordinate systems.
In some embodiments, the three-dimensional coordinate system of the high-precision map is kept constant in the module, and the continuous image frames are fused with the high-precision map to generate the three-dimensional point cloud map.
In some embodiments, the module optimizes the continuous image frame pose T3 using a three-dimensional point cloud in a three-dimensional point cloud map.
In some embodiments, the relative pose relationship T1 is obtained using visual odometry or visual inertial odometry techniques in the module.
A positioning pose acquisition module for calculating and acquiring a positioning pose B of the continuous image frames;
The error judgment module is used for computing the position error and angle error of each image frame from the pose truth value A and the corresponding positioning pose B, judging the positioning quality qualified when the position error is smaller than the preset position error threshold and the angle error is smaller than the preset angle error threshold, and unqualified otherwise.
The automated evaluation system evaluates the quality of the positioning result in a fully automated way; the result is reliable, the demands on application scenario, operating procedure, and equipment are low, and manpower and material resources are saved.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention; for those skilled in the art, various modifications and refinements can be made without departing from the principle of the present invention, and these modifications and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (20)

1. An automated assessment method for a visual positioning system, comprising:
step S1, collecting visual positioning data, wherein the visual positioning data at least comprises continuous image frames;
step S2, acquiring a pose truth value A of the continuous image frames;
step S3, acquiring a positioning pose B of the continuous image frames;
and step S4, computing the position error and angle error of each image frame from the pose truth value A and the corresponding positioning pose B, and judging the positioning quality qualified when the position error is smaller than a preset position error threshold and the angle error is smaller than a preset angle error threshold.
2. The automated assessment method according to claim 1, wherein said visual positioning data is acquired by a visual sensor alone or by a visual sensor and an inertial sensor in synchronization.
3. The automated assessment method according to claim 2, wherein said vision sensor comprises at least one of a monocular camera, a binocular camera or a multi-view camera; the inertial sensor comprises at least one of an accelerometer, a gyroscope, a magnetometer and a GPS.
4. The automated evaluation method of claim 1, wherein the relative pose relationship T1 of the successive image frames is obtained, the successive image frames are positioned into a high-precision map, the successfully positioned image pose T2 is obtained, the relative pose relationship T1 is calculated with the successfully positioned image pose T2, all successive image frame poses T3 are obtained, and the successive image frame poses T3 are optimized.
5. The automated evaluation method according to claim 4, wherein the calculating includes aligning the relative pose relationship T1 onto an image pose T2 coordinate system using a Sim3 transformation.
6. The automated evaluation method according to claim 5, wherein the three-dimensional coordinate system of the high-precision map is kept constant, and the continuous image frames are fused with the high-precision map to generate a three-dimensional point cloud map.
7. The automated evaluation method of claim 6, wherein the continuous image frame pose T3 is optimized using a three-dimensional point cloud in the three-dimensional point cloud map.
8. The automated evaluation method according to claim 4, wherein the relative pose relationship T1 is obtained by using a visual odometry technique or a visual inertial odometry technique.
9. The automated evaluation method of claim 4, wherein the means for positioning successive image frames onto a high-precision map employs a visual positioning method.
10. The automated evaluation method according to claim 1, wherein the positioning pose B is acquired by the visual positioning system.
11. An automated evaluation system for a visual positioning system, comprising:
the data acquisition module is used for acquiring visual positioning data at least comprising continuous image frames;
the pose truth value acquisition module is used for calculating and acquiring a pose truth value A of the continuous image frames;
a positioning pose acquisition module for calculating and acquiring a positioning pose B of the continuous image frames;
and the error judgment module is used for calculating the pose true value A and the corresponding positioning pose B to obtain a position error and an angle error of each frame of image, and when the position error is smaller than a preset position error threshold and the angle error is smaller than a preset angle error threshold, judging that the positioning quality is qualified.
12. The automated evaluation system of claim 11, wherein the data acquisition module comprises a visual sensor and/or an inertial sensor.
13. The automated evaluation system of claim 12, wherein the vision sensor comprises at least one of a monocular camera, a binocular camera, or a multi-view camera; the inertial sensor comprises at least one of an accelerometer, a gyroscope, a magnetometer and a GPS.
14. The automated evaluation system according to claim 11, wherein the pose truth value acquisition module performs the following process: acquiring a relative pose relationship T1 of the continuous image frames, positioning the continuous image frames into a high-precision map, acquiring the successfully positioned image poses T2, calculating the relative pose relationship T1 with the successfully positioned image poses T2 to obtain all continuous image frame poses T3, and optimizing the continuous image frame poses T3.
15. The automated evaluation system of claim 14, wherein the calculating comprises aligning the relative pose relationship T1 onto an image pose T2 coordinate system using a Sim3 transformation.
16. The automated evaluation system of claim 15, wherein the three-dimensional coordinate system of the high-precision map is kept constant, and the successive image frames are fused with the high-precision map to generate a three-dimensional point cloud map.
17. The automated evaluation system of claim 16, wherein the continuous image frame pose T3 is optimized using a three-dimensional point cloud in the three-dimensional point cloud map.
18. The automated evaluation system of claim 14, wherein the relative pose relationship T1 is obtained using a visual odometry technique or a visual inertial odometry technique.
19. The automated evaluation system of claim 14, wherein the means for positioning successive image frames onto a high-precision map employs a visual positioning method.
20. The automated evaluation system of claim 11, wherein the positioning pose B is acquired by the visual positioning system.
CN202110590939.8A 2021-05-28 2021-05-28 Automatic evaluation method and system for visual positioning system Pending CN113469495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110590939.8A CN113469495A (en) 2021-05-28 2021-05-28 Automatic evaluation method and system for visual positioning system


Publications (1)

Publication Number Publication Date
CN113469495A 2021-10-01

Family

ID=77871660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110590939.8A Pending CN113469495A (en) 2021-05-28 2021-05-28 Automatic evaluation method and system for visual positioning system

Country Status (1)

Country Link
CN (1) CN113469495A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1890263A2 (en) * 2000-03-07 2008-02-20 Sarnoff Corporation Method of pose estimation adn model refinement for video representation of a three dimensional scene
US20170116751A1 (en) * 2015-10-23 2017-04-27 Wisconsin Alumni Research Foundation System and Method For Dynamic Device Tracking Using Medical Imaging Systems
CN108765563A (en) * 2018-05-31 2018-11-06 北京百度网讯科技有限公司 Processing method, device and the equipment of SLAM algorithms based on AR
CN109781068A (en) * 2018-12-11 2019-05-21 北京空间飞行器总体设计部 The vision measurement system ground simulation assessment system and method for space-oriented application
CN110411476A (en) * 2019-07-29 2019-11-05 视辰信息科技(上海)有限公司 Vision inertia odometer calibration adaptation and evaluation method and system
CN111325794A (en) * 2020-02-23 2020-06-23 哈尔滨工业大学 Visual simultaneous localization and map construction method based on depth convolution self-encoder
CN111461981A (en) * 2020-03-30 2020-07-28 北京百度网讯科技有限公司 Error estimation method and device for point cloud splicing algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAN Jianying et al., "Visual SLAM Algorithm with a Minimized Photometric Error Prior", Journal of Chinese Computer Systems (《小型微型计算机系统》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114543807A (en) * 2022-01-14 2022-05-27 安徽海博智能科技有限责任公司 High-precision evaluation method for SLAM algorithm in extreme scene
CN114543807B (en) * 2022-01-14 2023-10-20 安徽海博智能科技有限责任公司 High-precision evaluation method of SLAM algorithm in extreme scene


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211001