CN111595342A - Indoor positioning method and system capable of being deployed in large scale - Google Patents

Info

Publication number
CN111595342A
Authority
CN
China
Prior art keywords
matrix
attitude
mobile terminal
image
terminal equipment
Prior art date
Legal status
Granted
Application number
CN202010340248.8A
Other languages
Chinese (zh)
Other versions
CN111595342B (en)
Inventor
Wang Yuntao
Pan Zewen
Yi Xin
Shi Yuanchun
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Publication of CN111595342A
Application granted
Publication of CN111595342B
Status: Active (granted)

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 Instruments for performing navigational calculations
    • G01C 21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 1/00 Measuring angles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An embodiment of the invention discloses an indoor positioning method and system that can be deployed at large scale. By combining machine vision with a visual-inertial odometer, the method achieves a positioning accuracy of 0.54 m while placing low demands on equipment and keeping test costs low, so it can be deployed and applied at large scale.

Description

Indoor positioning method and system capable of being deployed in large scale
Technical Field
Embodiments of the invention relate to the technical field of indoor positioning, and in particular to an indoor positioning method and system that can be deployed at large scale.
Background
Indoor positioning means determining position within an indoor environment. An indoor positioning system typically integrates multiple technologies, such as wireless communication, base-station positioning and inertial navigation, to monitor the position of people and objects in an indoor space. Existing indoor positioning methods are mainly based either on wireless signals (Bluetooth/Wi-Fi) or on SLAM using a lidar or depth-image collector. SLAM (Simultaneous Localization and Mapping), also called CML (Concurrent Mapping and Localization), refers to localizing and building a map at the same time; the problem can be stated as: place a robot at an unknown position in an unknown environment, and have it draw a complete map of that environment while moving, where a complete map means one covering every accessible corner of the space. Wireless-signal-based methods achieve an industrial positioning accuracy of only about 1 m, and require long calibration times for large-scale deployment; SLAM-based methods generally require specialized equipment, are costly to deploy, and place high demands on the environment.
Disclosure of Invention
Accordingly, embodiments of the invention provide an indoor positioning method and system that can be deployed at large scale, to address the high cost and poor accuracy of existing indoor positioning methods under large-scale deployment.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
According to a first aspect of the embodiments of the invention, an indoor positioning method deployable at large scale is provided, the method comprising:
selecting a plurality of feature positions in a positioning scene, deploying a coded calibration point at each, and recording the real three-dimensional coordinates of the coded calibration points in the positioning scene;
acquiring images of the real positioning scene during movement, through an image collector arranged in the mobile terminal device;
identifying a coded calibration point in an acquired image, and obtaining through attitude recognition the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point, to obtain a first attitude matrix;
in a visual-inertial odometer, calculating from the acquired images and the data measured by the IMU the relative position and relative rotation of the mobile terminal device's current position and attitude with respect to its initial position and attitude during movement, to obtain a second attitude matrix;
obtaining the real three-dimensional coordinates of the coded calibration point identified in the initial-position image, and calculating, in combination with the first attitude matrix, the initial absolute position and attitude of the mobile terminal device in the real positioning scene;
and calculating the current absolute position and attitude of the mobile terminal device in the real positioning scene from its initial absolute position and attitude and the second attitude matrix.
Further, the method further comprises:
during movement, automatically performing position calibration whenever a coded calibration point appears in the field of view of the image collector, which specifically comprises:
identifying the coded calibration point, obtaining its real three-dimensional coordinates from the current-position image, and estimating, in combination with the previously calculated current absolute position and attitude of the mobile terminal device, the relative position and relative rotation of the device with respect to the coded calibration point, to obtain an estimated attitude matrix;
performing attitude recognition on the coded calibration point identified in the current-position image, to obtain the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point, giving a third attitude matrix;
performing calibration according to the estimated attitude matrix and the third attitude matrix, to obtain a fourth attitude matrix;
calculating a corrected initial absolute position and attitude from the real three-dimensional coordinates of the coded calibration point in the current-position image and the fourth attitude matrix;
and calculating the current absolute position and attitude of the mobile terminal device from the corrected initial absolute position and attitude and the second attitude matrix.
Further, performing calibration according to the estimated attitude matrix and the third attitude matrix to obtain a fourth attitude matrix specifically comprises:
splitting the estimated attitude matrix and the third attitude matrix each into a position matrix and a rotation matrix through rigid-body transformation, and calibrating according to the following formulas:
calibration of the rotation: R = R1*α + R2*(1-α), with 0.8 < α < 1;
calibration of the position: P = P1*β + P2*(1-β), with 0 < β ≤ 0.2;
where R is the rotation matrix of the fourth attitude matrix, P is the position matrix of the fourth attitude matrix, R1 and P1 are the rotation and position matrices of the estimated attitude matrix, and R2 and P2 are the rotation and position matrices of the third attitude matrix.
Further, identifying a coded calibration point in the acquired image and obtaining through attitude recognition the relative position and relative rotation of the mobile terminal device with respect to it, to obtain a first attitude matrix, specifically comprises:
applying low-pass filtering and a 1-second sliding-window mean filter during recognition, to improve the accuracy of attitude recognition.
Further, the method further comprises:
fusing the acquired images of the real positioning scene with pre-created 3D virtual objects corresponding to the coded calibration points to construct an AR scene, each 3D virtual object being assigned the ID and real three-dimensional coordinates of its coded calibration point;
and, when a coded calibration point is identified in an image, decoding it to obtain its ID, and querying the AR scene with the identified ID and its corresponding 3D virtual object to obtain the real three-dimensional coordinates of the coded calibration point in the initial-position image.
Further, selecting a plurality of feature positions in the positioning scene and deploying coded calibration points specifically comprises:
obtaining a floor plan of the positioning scene, and selecting a plurality of feature positions on the floor plan at which coded calibration points are deployed in the real positioning scene.
Further, the feature positions include pillars, wall corners and door frames.
Further, the coded calibration points are encoded using the ArUco DICT_6X6_250 scheme.
According to a second aspect of the embodiments of the invention, an indoor positioning system deployable at large scale is provided, the system comprising:
a plurality of coded calibration points, deployed at a plurality of selected feature positions in the positioning scene, whose real three-dimensional coordinates in the positioning scene are recorded;
an image collector, arranged in the mobile terminal device and used to acquire images of the real positioning scene during movement;
an image processor, used to identify a coded calibration point in an acquired image and to obtain through attitude recognition the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point, giving a first attitude matrix;
a visual-inertial odometer, used to calculate from the acquired images and the data measured by the IMU the relative position and relative rotation of the mobile terminal device's current position and attitude with respect to its initial position and attitude during movement, giving a second attitude matrix;
and a positioning processor, used to obtain the real three-dimensional coordinates of the coded calibration point identified in the initial-position image, to calculate, in combination with the first attitude matrix, the initial absolute position and attitude of the mobile terminal device in the real positioning scene,
and to calculate the current absolute position and attitude of the mobile terminal device in the real positioning scene from its initial absolute position and attitude and the second attitude matrix.
Further, the mobile terminal device comprises a mobile phone.
The embodiments of the invention have the following advantages:
In the indoor positioning method and system deployable at large scale provided by the embodiments of the invention, coded calibration points are deployed at a plurality of feature positions in the positioning scene; an image collector in the mobile terminal device acquires images of the positioning scene; the coded calibration points in the images are identified and a first attitude matrix is obtained through attitude recognition; a second attitude matrix is obtained through a visual-inertial odometer; the initial absolute position and attitude of the mobile terminal device are then calculated, and finally its current absolute position and attitude are calculated. By combining machine vision with a visual-inertial odometer, the method achieves a positioning accuracy of 0.54 m while placing low demands on equipment and keeping test costs low, so it can be deployed and applied at large scale.
Drawings
To illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed for describing them are briefly introduced below. The drawings in the following description are merely exemplary, and a person of ordinary skill in the art can derive other embodiments from them without inventive effort.
Fig. 1 is a schematic flowchart of an indoor positioning method that can be deployed in a large scale according to embodiment 1 of the present invention;
fig. 2 is a schematic structural diagram of an indoor positioning system that can be deployed in a large scale according to embodiment 1 of the present invention.
Detailed Description
The invention is described below in terms of particular embodiments; other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. The described embodiments are merely exemplary and are not intended to limit the invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the invention.
Example 1
An embodiment of the invention provides an indoor positioning method deployable at large scale. As shown in Fig. 1, the method comprises the following steps:
and 110, selecting a plurality of characteristic positions in the positioning scene, deploying the coding calibration points respectively, and recording the real three-dimensional coordinates of the coding calibration points in the positioning scene.
Specifically, a planar building map of a positioning scene is obtained, a plurality of characteristic positions in the planar building map are selected to perform coding and calibration point deployment in the real positioning scene, and the characteristic positions comprise a plurality of characteristic points such as pillars, wall corners and door frames. The coded index points are coded in an Aruco DICT-6 multiplied by 6-250 coding mode, each coded index point has a specific ID, a printed coded index point image is attached to a characteristic position, and the size of the index point image is 10 x 10 cm.
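As an illustration, the following minimal sketch generates such printable marker images, assuming OpenCV's ArUco module (opencv-contrib-python; the generateImageMarker call requires OpenCV >= 4.7, where older versions expose the same operation as cv2.aruco.drawMarker). The twelve IDs and the 600-pixel side are illustrative choices, not values fixed by the patent:

```python
import cv2

# DICT_6X6_250: 6 x 6-bit markers, 250 distinct IDs, as named in the patent
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)

for marker_id in range(12):  # e.g. twelve calibration points per floor
    # 600 px square bitmap; print at 10 x 10 cm for deployment
    image = cv2.aruco.generateImageMarker(dictionary, marker_id, 600)
    cv2.imwrite(f"marker_{marker_id:02d}.png", image)
```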
Step 120: acquire images of the real positioning scene during movement, through the image collector arranged in the mobile terminal device.
Specifically, the user can film the positioning scene while moving, using the image collector of a mobile terminal device such as a handheld mobile phone. The image collector is the phone's camera, and it does not need to be calibrated.
Specifically, the AR scene can be constructed by installing an AR application on the mobile terminal device; in this embodiment the AR application is an ARCore application. Based on the Unity3D development platform, the ARCore SDK for Unity is imported; 3D virtual objects corresponding to the coded calibration points are created in the development platform's virtual scene; the ID and real three-dimensional coordinates of each coded calibration point are assigned to its 3D virtual object; an ARCore application package is then generated, exported and installed on a mobile terminal device that supports the ARCore service. The acquired images of the real positioning scene are fused with the pre-created 3D virtual objects corresponding to the coded calibration points to construct the AR scene; since each 3D virtual object carries the ID and real three-dimensional coordinates of its calibration point, those coordinates can be queried in the running AR scene during calculation.
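Conceptually, this query step reduces to a lookup from decoded marker ID to surveyed coordinate. A minimal stand-in sketch follows; the coordinates below are illustrative placeholders, not values from the patent:

```python
# Stand-in for the AR-scene lookup: each 3D virtual object carries a marker ID
# and the surveyed real-world coordinate of its calibration point.
MARKER_COORDS: dict[int, tuple[float, float, float]] = {
    0: (12.40, 3.75, 1.50),   # pillar, metres in the building frame (placeholder)
    1: (25.10, 3.75, 1.50),   # wall corner (placeholder)
    2: (25.10, 9.20, 1.50),   # door frame (placeholder)
}

def real_coordinate(marker_id: int) -> tuple[float, float, float]:
    """Return the recorded real 3D coordinate for a decoded marker ID."""
    return MARKER_COORDS[marker_id]
```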
Step 130: identify the coded calibration point in the acquired image, and obtain through attitude recognition the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point, giving a first attitude matrix.
First, the coded calibration point is recognized by machine vision to obtain the initial absolute position of the mobile terminal device in the indoor scene. Specifically, the image is read from the CPU and segmented, rotated and undistorted using an OpenCV-based image processor and an edge-detection algorithm; the image processor then identifies the coded calibration point in the image and decodes it to obtain its ID; the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point are obtained through attitude recognition, with low-pass filtering and a 1-second sliding-window mean filter applied during recognition to improve accuracy; this yields the first attitude matrix T0.
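A minimal sketch of this step, assuming OpenCV >= 4.7 with the ArUco module, known camera intrinsics, and a camera running at roughly 30 fps (so a 30-frame window spans about 1 s); the per-frame pose is solved with cv2.solvePnP and smoothed with a sliding-window mean:

```python
import collections

import cv2
import numpy as np

MARKER_SIZE = 0.10                     # printed edge length in metres (10 x 10 cm)
window = collections.deque(maxlen=30)  # ~1 s of poses at an assumed 30 fps

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Marker corners in the marker's own frame, in detectMarkers' corner order
s = MARKER_SIZE / 2
OBJ_PTS = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]], np.float32)

def first_attitude_matrix(frame, camera_matrix, dist_coeffs):
    """Detect a marker and return (decoded ID, smoothed 4x4 pose T0), or None."""
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, corners[0].reshape(-1, 2),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # axis-angle -> rotation matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    window.append(T)
    # The sliding-window mean acts as a simple low-pass filter; the averaged
    # rotation block is only approximately orthonormal and may be re-projected
    # onto SO(3) if a strict rotation is needed.
    return int(ids[0][0]), np.mean(window, axis=0)
```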
Step 140: in the visual-inertial odometer, calculate from the acquired images and the data measured by the IMU the relative position and relative rotation of the mobile terminal device's current position and attitude with respect to its initial position and attitude during movement, giving a second attitude matrix.
Specifically, based on the VIO SLAM method carried by the ARCore application, the images acquired by the image collector are preprocessed and feature points are extracted; attitude estimation is performed by combining the attitude angles and acceleration data collected by the inertial measurement unit (IMU); and finally, using an extended Kalman filter, the relative position and relative rotation of the mobile terminal device's current position with respect to its initial position are obtained, giving the second attitude matrix B.
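The odometry itself runs inside ARCore; as a sketch of what B represents, under the assumption that the odometer reports each device pose as a 4x4 homogeneous matrix in its own tracking frame:

```python
import numpy as np

def second_attitude_matrix(T_init: np.ndarray, T_now: np.ndarray) -> np.ndarray:
    """Relative pose of the current frame w.r.t. the initial frame: B = T_init^-1 * T_now."""
    return np.linalg.inv(T_init) @ T_now
```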
Step 150: obtain the real three-dimensional coordinates of the coded calibration point identified in the initial-position image, and calculate, in combination with the first attitude matrix, the initial absolute position and attitude of the mobile terminal device in the real positioning scene.
Specifically, the coded calibration point in the initial-position image is identified by the image processor and decoded to obtain its ID; with this ID, the real three-dimensional coordinates of the corresponding calibration point are looked up in the AR scene while the app is running and converted into an attitude matrix A; the initial absolute position-and-attitude matrix of the mobile terminal device is then I = T0^-1 * A.
Step 160: calculate the current absolute position and attitude of the mobile terminal device in the real positioning scene from its initial absolute position and attitude and the second attitude matrix.
Specifically, the current absolute position-and-attitude matrix of the mobile terminal device is C = I * B.
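A minimal numpy sketch of these two chaining steps, assuming all poses are 4x4 homogeneous matrices (A places the marker in the world frame, T0 is the device pose relative to the marker, B is the VIO displacement since the start):

```python
import numpy as np

def initial_absolute_pose(T0: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Step 150: I = T0^-1 * A."""
    return np.linalg.inv(T0) @ A

def current_absolute_pose(I: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Step 160: C = I * B, chaining the VIO displacement onto the initial pose."""
    return I @ B
```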
As the mobile terminal device moves, gyroscope drift causes the error to accumulate with increasing distance from the initial position. During movement, whenever a coded calibration point appears in the field of view of the image collector, position calibration is therefore performed automatically, as follows:
the coded calibration point is identified and its real three-dimensional coordinates (absolute position) in the current-position image are obtained; combining these with the current absolute position and attitude of the mobile terminal device already calculated by the system, the relative position and relative rotation of the device with respect to the coded calibration point are estimated, giving an estimated attitude matrix T1;
then, attitude recognition is performed on the coded calibration point identified in the current-position image by the OpenCV-based image processor, obtaining the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point, giving a third attitude matrix T2.
Calibration according to the estimated attitude matrix T1 and the third attitude matrix T2, giving a fourth attitude matrix T, specifically comprises:
splitting the estimated attitude matrix T1 and the third attitude matrix T2 each into a position matrix and a rotation matrix through rigid-body transformation; because T1 estimates the device's rotation angle more accurately than T2, calibration is performed according to the following formulas:
calibration of the rotation: R = R1*α + R2*(1-α), with 0.8 < α < 1;
calibration of the position: P = P1*β + P2*(1-β), with 0 < β ≤ 0.2;
The values of α and β can be tuned to the actual environment; typically α = 0.9 and β = 0.2. Here R is the rotation matrix of the fourth attitude matrix, P is the position matrix of the fourth attitude matrix, R1 and P1 are the rotation and position matrices of the estimated attitude matrix, and R2 and P2 are the rotation and position matrices of the third attitude matrix;
the calibrated position and attitude are then obtained by the positioning calculation described above, specifically: the corrected initial absolute position and attitude are calculated from the real three-dimensional coordinates of the coded calibration point in the current-position image and the fourth attitude matrix;
and the current absolute position and attitude of the mobile terminal device are calculated from the corrected initial absolute position and attitude and the second attitude matrix.
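A minimal sketch of the blending step, assuming 4x4 homogeneous pose matrices. The linear rotation blend follows the patent's formula, with defaults matching the test settings (α = 0.9, β = 0.1); the SVD re-orthonormalisation afterwards is an added assumption so the result stays a valid rotation, not something the patent specifies:

```python
import numpy as np

def fourth_attitude_matrix(T1: np.ndarray, T2: np.ndarray,
                           alpha: float = 0.9, beta: float = 0.1) -> np.ndarray:
    """Blend the estimated pose T1 with the marker-measured pose T2."""
    R = alpha * T1[:3, :3] + (1 - alpha) * T2[:3, :3]  # rotation: trust T1 (0.8 < alpha < 1)
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt                                         # project back onto SO(3) (assumption)
    P = beta * T1[:3, 3] + (1 - beta) * T2[:3, 3]      # position: trust T2 (0 < beta <= 0.2)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, P
    return T
```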
Finally, the currently acquired image is read from the CPU, and the acquired current-position image is named and stored using the current absolute position and attitude of the mobile terminal device, with the attitude expressed as a quaternion; the image is saved under the application's root directory.
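A sketch of this logging step, assuming SciPy for the matrix-to-quaternion conversion; the filename pattern is an illustrative assumption:

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

def save_named_frame(frame: np.ndarray, C: np.ndarray, root: str = ".") -> str:
    """Store the frame under a name built from the current absolute pose C."""
    x, y, z = C[:3, 3]
    qx, qy, qz, qw = Rotation.from_matrix(C[:3, :3]).as_quat()  # (x, y, z, w)
    path = (f"{root}/{x:.2f}_{y:.2f}_{z:.2f}"
            f"__{qx:.3f}_{qy:.3f}_{qz:.3f}_{qw:.3f}.png")
    cv2.imwrite(path, frame)
    return path
```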
To evaluate the method, teaching building G was chosen as the test site. Twelve calibration points were deployed on each of the second and third floors, and 40 test points were selected across the two floors, with their exact coordinates obtained by manual measurement. Fifteen subjects carried the device and collected data along two routes passing through the 40 points, varying direction, walking speed, illumination and calibration frequency. With α = 0.9 and β = 0.1, the experimental data were analysed to obtain the error values under the different conditions; the average positioning accuracy of the method was measured to be 0.54 m.
In the indoor positioning method deployable at large scale provided by the embodiment of the invention, coded calibration points are deployed at a plurality of feature positions in the positioning scene; an image collector in the mobile terminal device acquires images of the positioning scene; the coded calibration points in the images are identified and a first attitude matrix is obtained through attitude recognition; a second attitude matrix is obtained through a visual-inertial odometer; the initial absolute position and attitude of the mobile terminal device are calculated, and finally its current absolute position and attitude are calculated. By combining machine vision with a visual-inertial odometer, the method achieves a positioning accuracy of 0.54 m while placing low demands on equipment and keeping test costs low, so it can be deployed and applied at large scale.
Example 2
Embodiment 2 of the invention provides an indoor positioning system deployable at large scale. As shown in Fig. 2, the system comprises:
a plurality of coded calibration points, deployed at a plurality of selected feature positions in the positioning scene, whose real three-dimensional coordinates in the positioning scene are recorded;
an image collector, arranged in the mobile terminal device and used to acquire images of the real positioning scene during movement;
an image processor, used to identify a coded calibration point in an acquired image and to obtain through attitude recognition the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point, giving a first attitude matrix;
a visual-inertial odometer, used to calculate from the acquired images and the data measured by the IMU the relative position and relative rotation of the mobile terminal device's current position and attitude with respect to its initial position and attitude during movement, giving a second attitude matrix;
and a positioning processor, used to obtain the real three-dimensional coordinates of the coded calibration point identified in the initial-position image, to calculate, in combination with the first attitude matrix, the initial absolute position and attitude of the mobile terminal device in the real positioning scene,
and to calculate the current absolute position and attitude of the mobile terminal device in the real positioning scene from its initial absolute position and attitude and the second attitude matrix.
Further, the mobile terminal device comprises a mobile phone.
The functions performed by each component of the indoor positioning system deployable at large scale have already been described in detail for the corresponding method in Embodiment 1 and are not repeated here.
In the indoor positioning system deployable at large scale provided by the embodiment of the invention, coded calibration points are deployed at a plurality of feature positions in the positioning scene; an image collector in the mobile terminal device acquires images of the positioning scene; the coded calibration points in the images are identified and a first attitude matrix is obtained through attitude recognition; a second attitude matrix is obtained through a visual-inertial odometer; the initial absolute position and attitude of the mobile terminal device are calculated, and finally its current absolute position and attitude are calculated. By combining machine vision with a visual-inertial odometer, the system achieves a positioning accuracy of 0.54 m while placing low demands on equipment and keeping test costs low, so it can be deployed and applied at large scale.
Although the invention has been described in detail above with reference to a general description and specific examples, it will be apparent to one skilled in the art that modifications or improvements may be made thereto based on the invention. Accordingly, such modifications and improvements are intended to be within the scope of the invention as claimed.

Claims (10)

1. An indoor positioning method deployable at large scale, the method comprising:
selecting a plurality of feature positions in a positioning scene, deploying a coded calibration point at each, and recording the real three-dimensional coordinates of the coded calibration points in the positioning scene;
acquiring images of the real positioning scene during movement, through an image collector arranged in the mobile terminal device;
identifying a coded calibration point in an acquired image, and obtaining through attitude recognition the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point, to obtain a first attitude matrix;
in a visual-inertial odometer, calculating from the acquired images and the data measured by the IMU the relative position and relative rotation of the mobile terminal device's current position and attitude with respect to its initial position and attitude during movement, to obtain a second attitude matrix;
obtaining the real three-dimensional coordinates of the coded calibration point identified in the initial-position image, and calculating, in combination with the first attitude matrix, the initial absolute position and attitude of the mobile terminal device in the real positioning scene;
and calculating the current absolute position and attitude of the mobile terminal device in the real positioning scene from its initial absolute position and attitude and the second attitude matrix.
2. The indoor positioning method deployable at large scale according to claim 1, further comprising:
during movement, automatically performing position calibration whenever a coded calibration point appears in the field of view of the image collector, which specifically comprises:
identifying the coded calibration point, obtaining its real three-dimensional coordinates from the current-position image, and estimating, in combination with the calculated current absolute position and attitude of the mobile terminal device, the relative position and relative rotation of the device with respect to the coded calibration point, to obtain an estimated attitude matrix;
performing attitude recognition on the coded calibration point identified in the current-position image, to obtain the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point, giving a third attitude matrix;
performing calibration according to the estimated attitude matrix and the third attitude matrix, to obtain a fourth attitude matrix;
calculating a corrected initial absolute position and attitude from the real three-dimensional coordinates of the coded calibration point in the current-position image and the fourth attitude matrix;
and calculating the current absolute position and attitude of the mobile terminal device from the corrected initial absolute position and attitude and the second attitude matrix.
3. The indoor positioning method deployable at large scale according to claim 2, wherein performing calibration according to the estimated attitude matrix and the third attitude matrix to obtain a fourth attitude matrix specifically comprises:
splitting the estimated attitude matrix and the third attitude matrix each into a position matrix and a rotation matrix through rigid-body transformation, and calibrating according to the following formulas:
calibration of the rotation: R = R1*α + R2*(1-α), with 0.8 < α < 1;
calibration of the position: P = P1*β + P2*(1-β), with 0 < β ≤ 0.2;
where R is the rotation matrix of the fourth attitude matrix, P is the position matrix of the fourth attitude matrix, R1 and P1 are the rotation and position matrices of the estimated attitude matrix, and R2 and P2 are the rotation and position matrices of the third attitude matrix.
4. The indoor positioning method deployable at large scale according to claim 1, wherein identifying a coded calibration point in the acquired image and obtaining through attitude recognition the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point, to obtain a first attitude matrix, specifically comprises:
applying low-pass filtering and a 1-second sliding-window mean filter during recognition, to improve the accuracy of attitude recognition.
5. The indoor positioning method deployable at large scale according to claim 1, further comprising:
fusing the acquired images of the real positioning scene with pre-created 3D virtual objects corresponding to the coded calibration points to construct an AR scene, each 3D virtual object being assigned the ID and real three-dimensional coordinates of its coded calibration point;
and, when a coded calibration point is identified in an image, decoding it to obtain its ID, and querying the AR scene with the identified ID and its corresponding 3D virtual object to obtain the real three-dimensional coordinates of the coded calibration point in the initial-position image.
6. The indoor positioning method deployable at large scale according to claim 1, wherein selecting a plurality of feature positions in the positioning scene and deploying coded calibration points specifically comprises:
obtaining a floor plan of the positioning scene, and selecting a plurality of feature positions on the floor plan at which coded calibration points are deployed in the real positioning scene.
7. The indoor positioning method deployable at large scale according to claim 6, wherein the feature positions include pillars, wall corners and door frames.
8. The indoor positioning method deployable at large scale according to claim 1, wherein the coded calibration points are encoded using the ArUco DICT_6X6_250 scheme.
9. An indoor positioning system deployable at large scale, the system comprising:
a plurality of coded calibration points, deployed at a plurality of selected feature positions in the positioning scene, whose real three-dimensional coordinates in the positioning scene are recorded;
an image collector, arranged in the mobile terminal device and used to acquire images of the real positioning scene during movement;
an image processor, used to identify a coded calibration point in an acquired image and to obtain through attitude recognition the relative position and relative rotation of the mobile terminal device with respect to the coded calibration point, giving a first attitude matrix;
a visual-inertial odometer, used to calculate from the acquired images and the data measured by the IMU the relative position and relative rotation of the mobile terminal device's current position and attitude with respect to its initial position and attitude during movement, giving a second attitude matrix;
and a positioning processor, used to obtain the real three-dimensional coordinates of the coded calibration point identified in the initial-position image, to calculate, in combination with the first attitude matrix, the initial absolute position and attitude of the mobile terminal device in the real positioning scene,
and to calculate the current absolute position and attitude of the mobile terminal device in the real positioning scene from its initial absolute position and attitude and the second attitude matrix.
10. The system of claim 9, wherein the mobile terminal device comprises a mobile phone.
CN202010340248.8A 2020-04-02 2020-04-26 Indoor positioning method and system capable of being deployed in large scale Active CN111595342B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010255753 2020-04-02
CN2020102557532 2020-04-02

Publications (2)

Publication Number Publication Date
CN111595342A (en) 2020-08-28
CN111595342B (en) 2022-03-18

Family

ID=72187667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010340248.8A Active CN111595342B (en) 2020-04-02 2020-04-26 Indoor positioning method and system capable of being deployed in large scale

Country Status (1)

Country Link
CN (1) CN111595342B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108225375A (en) * 2018-01-08 2018-06-29 哈尔滨工程大学 A kind of optimization coarse alignment method of the anti-outer speed outlier based on medium filtering
CN110794955A (en) * 2018-08-02 2020-02-14 广东虚拟现实科技有限公司 Positioning tracking method, device, terminal equipment and computer readable storage medium
CN110411476A (en) * 2019-07-29 2019-11-05 视辰信息科技(上海)有限公司 Vision inertia odometer calibration adaptation and evaluation method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MD ABID HASAN: "MEMS IMU Based Pedestrian Indoor Navigation", Wireless Personal Communications (2018) *
TAE GYUN KIM: "Concurrent Estimation of Robot Pose and Landmark Locations in Underwater Robot", 2013 13th International Conference on Control, Automation and Systems (ICCAS 2013) *
LU Danhui et al.: "Decoupled motion estimation of a mobile robot by fusing vision and IMU", Journal of Zhejiang University (Engineering Science) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112118436A (en) * 2020-09-18 2020-12-22 联想(北京)有限公司 Image presentation method, device and system based on augmented reality device
CN112304305A (en) * 2020-10-31 2021-02-02 中环曼普科技(南京)有限公司 Vehicle initial positioning method and system combined with marker image
CN112785530A (en) * 2021-02-05 2021-05-11 广东九联科技股份有限公司 Image rendering method, device and equipment for virtual reality and VR equipment
CN112785530B (en) * 2021-02-05 2024-05-24 广东九联科技股份有限公司 Image rendering method, device and equipment for virtual reality and VR equipment
CN114693754A (en) * 2022-05-30 2022-07-01 湖南大学 Unmanned aerial vehicle autonomous positioning method and system based on monocular vision inertial navigation fusion

Also Published As

Publication number Publication date
CN111595342B (en) 2022-03-18

Similar Documents

Publication Publication Date Title
CN111595342B (en) Indoor positioning method and system capable of being deployed in large scale
US11704833B2 (en) Monocular vision tracking method, apparatus and non-transitory computer-readable storage medium
CN110009681B (en) IMU (inertial measurement unit) assistance-based monocular vision odometer pose processing method
CN107888828B (en) Space positioning method and device, electronic device, and storage medium
WO2018142900A1 (en) Information processing device, data management device, data management system, method, and program
JP5832341B2 (en) Movie processing apparatus, movie processing method, and movie processing program
US20150235367A1 (en) Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image
CN108871311B (en) Pose determination method and device
CN111197984A (en) Vision-inertial motion estimation method based on environmental constraint
JP6797607B2 (en) Image processing device, image processing method and program for image processing
KR20140009737A (en) Hybrid map based localization method of robot
JP4132068B2 (en) Image processing apparatus, three-dimensional measuring apparatus, and program for image processing apparatus
CN111665512A (en) Range finding and mapping based on fusion of 3D lidar and inertial measurement unit
WO2018142533A1 (en) Position/orientation estimating device and position/orientation estimating method
CN115371665A (en) Mobile robot positioning method based on depth camera and inertia fusion
CN113701750A (en) Fusion positioning system of underground multi-sensor
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
JP2018173882A (en) Information processing device, method, and program
EP4332631A1 (en) Global optimization methods for mobile coordinate scanners
CN111862146A (en) Target object positioning method and device
JP2010145219A (en) Movement estimation device and program
CN107883979B (en) Method and system for unifying inertial sensor coordinate system and reference coordinate system
CN113628284A (en) Pose calibration data set generation method, device and system, electronic equipment and medium
CN115932879B (en) Mine robot gesture rapid measurement system based on laser point cloud
JP7346342B2 (en) Measuring device, measuring method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant