CN110779527A - Indoor positioning method based on multi-source data fusion and visual deep learning - Google Patents

Indoor positioning method based on multi-source data fusion and visual deep learning

Info

Publication number
CN110779527A
Authority
CN
China
Prior art keywords
unity
area
tango
calibration
coordinate
Prior art date
Legal status
Granted
Application number
CN201911034554.2A
Other languages
Chinese (zh)
Other versions
CN110779527B (en)
Inventor
Wang Zhuwei (王朱伟)
Current Assignee
Wuxi Han Yong Polytron Technologies Inc
Original Assignee
Wuxi Han Yong Polytron Technologies Inc
Priority date: 2019-10-29
Filing date: 2019-10-29
Publication date: 2020-02-11
Application filed by Wuxi Han Yong Polytron Technologies Inc
Priority to CN201911034554.2A
Publication of CN110779527A
Application granted
Publication of CN110779527B
Legal status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an indoor positioning method based on multi-source data fusion and visual deep learning. A new area description is created in the Unity interface, and a certain area of the room is scanned through the Tango SDK module to obtain the coordinate data of all points in the area's three-dimensional space, forming a point cloud array that is saved as an .fbx scene file. Calibration points are placed on the scanned area scene, and their positions are saved to an XML file containing the calibration points' coordinate information. Three cubes are created as the calibration of the Unity space coordinates: the coordinate information of the calibration points is read, and the calibration coordinates of each Cube are set according to the xyz coordinates of the calibration points in the area-learning XML file. A plan top view of the scanned area is drawn at the same scale in Revit and imported into the Unity space coordinates to form an image-space map. Finally, a Sphere is created through the Tango Camera module as a marker of the current position on the map, and the current position is displayed in real time through the Tango SDK module.

Description

Indoor positioning method based on multi-source data fusion and visual deep learning
Technical field:
The invention belongs to the technical field of data analysis, and particularly relates to an indoor positioning method based on multi-source data fusion and visual deep learning.
Background art:
In people's daily travel, navigation has become an indispensable tool that greatly facilitates travel needs. At present, navigation software at home and abroad determines position based on satellite positioning; however, when the device receiving the satellite signals is located in an enclosed indoor space, its ability to receive the signals is greatly reduced, and the signals may not be received at all. As a result, in places such as large shopping malls or underground parking lots, people often get lost and sometimes have to take long detours to reach their destinations.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Summary of the invention:
The invention aims to provide an indoor positioning method based on multi-source data fusion and visual deep learning, so as to overcome the defects in the prior art.
To achieve this purpose, the invention provides an indoor positioning system based on multi-source data fusion and visual deep learning, which comprises an intelligent terminal; a laser radar is arranged on the intelligent terminal, Unity software and Revit software are installed on the intelligent terminal, and a Tango SDK module and a Tango Camera module are loaded in Unity.
An indoor positioning method based on multi-source data fusion and visual deep learning comprises the following steps:
(1) Unity installed on the intelligent terminal is started, a new area description is created in the Unity interface, and a certain indoor area is scanned through the Tango SDK module;
(2) the Tango SDK module calls the laser radar to scan the three-dimensional space of the area, the three coordinates of each point in the indoor three-dimensional space are obtained in Unity, and the coordinate data of all points in the area's three-dimensional space form a point cloud array;
(3) after scanning is finished, the point cloud array obtained in step (2) is saved into an .fbx scene file in Unity;
(4) Unity opens the .fbx scene file from step (3), calibration points are placed on the scanned area scene, and their positions are then saved to an XML file containing the calibration points' coordinate information;
(5) three Cube objects are created in Unity as the calibration of the Unity space coordinates; the XML file from step (4) is opened through the Tango SDK module, the coordinate information of the calibration points is read, and the calibration coordinates of the Cubes are set according to the xyz coordinates of the calibration points in the area-learning XML file;
(6) Revit is opened, and a plan top view of the scanning area in step (1) is drawn in Revit at the same scale;
(7) the plan top view from step (6) is imported into the Unity space coordinates and is scaled and rotated until it corresponds to the positions of the three Cubes, forming an image-space map;
(8) in the image-space map, a Sphere is created through the Tango Camera module as a marker of the current position on the map; during movement, steps (2)-(3) are repeated through the Tango SDK module to continuously scan the surrounding indoor area, the resulting .fbx scene file is compared with the image-space map to obtain the Sphere's moving track on the map, and the current position is displayed in real time.
Compared with the prior art, the invention has the following beneficial effects:
A scene is formed by recording point cloud data through laser scanning with the Tango SDK module in the terminal; the image-space map is modeled by importing the plan top view drawn in Revit into Unity and combining it with the scene file; and the motion track is recorded on the image-space map through real-time scanning by the Tango SDK module, thereby realizing indoor navigation in an enclosed environment.
Description of the drawings:
FIG. 1 is a flow chart of an indoor positioning method based on multi-source data fusion and visual deep learning according to the present invention;
FIG. 2 shows the source code by which the Tango SDK module saves the point cloud array into a scene file according to the present invention;
FIG. 3 is an interface diagram of the Unity newly created area description of the present invention;
FIG. 4 is an interface diagram of placing a calibration point on an area scene in Unity according to the present invention.
Detailed description of the embodiments:
the following detailed description of specific embodiments of the invention is provided, but it should be understood that the scope of the invention is not limited to the specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element or component but not the exclusion of any other element or component.
Example 1
An indoor positioning system based on multi-source data fusion and visual deep learning comprises an intelligent terminal; a laser radar is arranged on the intelligent terminal, Unity software and Revit software are installed on the intelligent terminal, and a Tango SDK module and a Tango Camera module are loaded in Unity.
As shown in fig. 1, an indoor positioning method based on multi-source data fusion and visual deep learning includes the following steps:
(1) Unity installed on the intelligent terminal is started; as shown in fig. 3, a new area description is created in the Unity interface, and a certain indoor area is scanned through the Tango SDK module.
(2) The Tango SDK module calls the laser radar to scan the three-dimensional space of the area; the three coordinates of each point in the indoor three-dimensional space are obtained in Unity, and the coordinate data of all points in the area's three-dimensional space form a point cloud array, as sketched below.
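By way of illustration only, the following C# sketch shows how the coordinates delivered per depth frame could be accumulated into a single point cloud array in Unity. The callback name OnPointCloudFrame and its signature are assumptions made for this sketch; the actual Tango SDK depth interface is not reproduced in the patent.

    using System.Collections.Generic;
    using UnityEngine;

    // Sketch only: accumulates the depth points delivered frame by frame
    // into one point cloud array. OnPointCloudFrame stands in for the
    // Tango SDK depth callback; its exact signature is an assumption.
    public class AreaPointCloudRecorder : MonoBehaviour
    {
        private readonly List<Vector3> points = new List<Vector3>();

        // Called once per depth frame with points already expressed in Unity
        // world space, i.e. the three coordinates (x, y, z) of each point.
        public void OnPointCloudFrame(Vector3[] framePoints)
        {
            points.AddRange(framePoints);
        }

        // The point cloud array formed from all scanned points of the area.
        public Vector3[] GetPointCloudArray()
        {
            return points.ToArray();
        }
    }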
(3) After scanning is finished, the Tango SDK module saves the point cloud array obtained in step (2) into an .fbx scene file in Unity through the source code shown in fig. 2; a stand-in serialization sketch follows.
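The actual export code is the FIG. 2 source, which is not reproduced in this text. As a hedged stand-in, the sketch below merely serializes the point array to disk in a trivial text format; Unity has no built-in runtime .fbx exporter, so a real implementation is assumed to rely on the Tango SDK's own scene writer.

    using System.IO;
    using System.Text;
    using UnityEngine;

    // Stand-in for the FIG. 2 export code (not reproduced here): persists
    // the scanned point cloud so it can later be reloaded as a scene.
    // A genuine .fbx writer is assumed to come from the Tango SDK tooling.
    public static class PointCloudExporter
    {
        public static void Save(Vector3[] points, string path)
        {
            var sb = new StringBuilder();
            foreach (Vector3 p in points)
                sb.AppendLine($"{p.x} {p.y} {p.z}");
            File.WriteAllText(path, sb.ToString());
        }
    }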
(4) Unity opens the .fbx scene file from step (3); as shown in fig. 4, calibration points are placed on the scanned area scene, and their positions are then saved to an XML file containing the calibration points' coordinate information (see the writer sketch below).
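The patent does not specify the XML schema, so the element and attribute names in the following writer sketch (AreaDescription, CalibrationPoint, x/y/z) are assumptions for illustration.

    using System.Xml.Linq;
    using UnityEngine;

    // Sketch only: saves the placed calibration points to an area-learning
    // XML file. The schema names are assumed, not taken from the patent.
    public static class CalibrationXml
    {
        public static void Save(Vector3[] calibrationPoints, string path)
        {
            var root = new XElement("AreaDescription");
            foreach (Vector3 p in calibrationPoints)
            {
                root.Add(new XElement("CalibrationPoint",
                    new XAttribute("x", p.x),
                    new XAttribute("y", p.y),
                    new XAttribute("z", p.z)));
            }
            new XDocument(root).Save(path);
        }
    }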
(5) Three Cube objects are created in Unity as the calibration of the Unity space coordinates; the XML file from step (4) is opened through the Tango SDK module, the coordinate information of the calibration points is read, and the calibration coordinates of the Cubes are set according to the xyz coordinates of the calibration points in the area-learning XML file, as sketched below.
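A matching reader sketch, again under the assumed schema above: the first three calibration points are read back and one Cube is placed at each xyz coordinate, so that the Cubes anchor the Unity space coordinates.

    using System.Linq;
    using System.Xml.Linq;
    using UnityEngine;

    // Sketch only: reads the calibration points back from the XML file of
    // step (4) and places one Cube at each coordinate.
    public static class CubeCalibration
    {
        public static GameObject[] PlaceCubes(string xmlPath)
        {
            return XDocument.Load(xmlPath).Root
                .Elements("CalibrationPoint")
                .Take(3) // the three calibration Cubes
                .Select(e =>
                {
                    var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
                    cube.transform.position = new Vector3(
                        (float)e.Attribute("x"),
                        (float)e.Attribute("y"),
                        (float)e.Attribute("z"));
                    return cube;
                })
                .ToArray();
        }
    }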
(6) Revit is opened, and a plan top view of the scanning area in step (1) is drawn in Revit at the same scale.
(7) The plan top view from step (6) is imported into the Unity space coordinates and is scaled and rotated until it corresponds to the positions of the three Cubes, forming the image-space map; the alignment can also be computed from the Cube positions, as sketched below.
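The patent describes adjusting the imported plan by hand until it matches the Cubes. Purely as an illustrative alternative, the sketch below computes the uniform scale and yaw on the horizontal (x, z) plane from two point correspondences; planA and planB are assumed to be known plan-local positions that should coincide with the first two Cubes, and the plan is assumed to lie flat in its local x/z plane.

    using UnityEngine;

    // Sketch only: aligns the imported Revit plan to two calibration Cubes
    // by solving a 2D similarity transform (uniform scale + yaw) on the
    // x/z ground plane. The third Cube can then serve as a consistency check.
    public static class PlanAligner
    {
        public static void Align(Transform plan,
                                 Vector2 planA, Vector2 planB,   // plan-local anchors (assumed known)
                                 Vector3 cubeA, Vector3 cubeB)   // matching Cube positions
        {
            Vector2 worldA = new Vector2(cubeA.x, cubeA.z);
            Vector2 worldB = new Vector2(cubeB.x, cubeB.z);

            float scale = (worldB - worldA).magnitude / (planB - planA).magnitude;
            // Angle (CCW positive) rotating the plan segment onto the world segment.
            float angle = Vector2.SignedAngle(planB - planA, worldB - worldA);

            plan.localScale = Vector3.one * scale;
            // A positive Unity yaw about +Y is clockwise in the x/z plane, hence the sign flip.
            plan.rotation = Quaternion.Euler(0f, -angle, 0f);

            // Translate horizontally so the first anchor lands exactly on its Cube.
            Vector3 anchorWorld = plan.TransformPoint(new Vector3(planA.x, 0f, planA.y));
            plan.position += new Vector3(cubeA.x - anchorWorld.x, 0f, cubeA.z - anchorWorld.z);
        }
    }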
(8) In the image-space map, a Sphere is created through the Tango Camera module as the marker of the current position on the map; during movement, steps (2)-(3) are repeated through the Tango SDK module to continuously scan the surrounding indoor area, the resulting .fbx scene file is compared with the image-space map to obtain the Sphere's moving track on the image-space map, and the current position is displayed in real time (a marker sketch follows).
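A minimal sketch of the marker logic, assuming the Tango Camera's transform is driven by the device pose (as the Tango AR Camera prefab does): each frame, the Sphere mirrors the camera's horizontal position onto the plane of the image-space map. The relocalization against the freshly scanned scene file described in step (8) is reduced here to reading that pose-driven transform.

    using UnityEngine;

    // Sketch only: a Sphere that tracks the pose-driven Tango Camera on the
    // image-space map. tangoCamera is assumed to be the transform the Tango
    // SDK updates from device motion; mapHeight is the y level of the plan map.
    public class PositionMarker : MonoBehaviour
    {
        public Transform tangoCamera;
        public float mapHeight = 0f;
        private Transform marker;

        void Start()
        {
            marker = GameObject.CreatePrimitive(PrimitiveType.Sphere).transform;
            marker.localScale = Vector3.one * 0.3f; // small but visible marker
        }

        void Update()
        {
            // Project the current device position onto the map plane; the
            // successive positions trace the moving track on the map.
            marker.position = new Vector3(tangoCamera.position.x,
                                          mapHeight,
                                          tangoCamera.position.z);
        }
    }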
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.

Claims (5)

1. An indoor positioning method based on multi-source data fusion and visual deep learning, comprising the following steps:
(1) Unity installed on an intelligent terminal is started, a new area description is created in the Unity interface, and a certain indoor area is scanned through a Tango SDK module;
(2) the Tango SDK module calls a laser radar to scan the three-dimensional space of the area to obtain a three-dimensional point cloud array of the area;
(3) after scanning is finished, the point cloud array obtained in step (2) is saved into an .fbx scene file in Unity;
(4) Unity opens the .fbx scene file from step (3), calibration points are placed on the scanned area scene, and their positions are then saved to an XML file containing the calibration points' coordinate information;
(5) three Cubes are created in Unity as the calibration of the Unity space coordinates, and the coordinate information of the calibration points in step (4) is obtained through area learning;
(6) Revit is opened, and a plan top view of the scanning area in step (1) is drawn in Revit at the same scale;
(7) the plan top view from step (6) is imported into the Unity space coordinates and is scaled and rotated until it corresponds to the positions of the three Cubes, forming an image-space map;
(8) in the image-space map, a Sphere is created through the Tango Camera module as a marker of the current position on the map, and the position of the Sphere is displayed in real time during movement.
2. The indoor positioning method based on multi-source data fusion and visual deep learning of claim 1, wherein Unity loads the Tango SDK module and the Tango Camera module.
3. The indoor positioning method based on multi-source data fusion and visual deep learning of claim 1, wherein in step (2), the three coordinates of each point in the indoor three-dimensional space are obtained in Unity, and the coordinate data of all points in the area's three-dimensional space form the point cloud array.
4. The indoor positioning method based on multi-source data fusion and visual deep learning of claim 1, wherein in step (5), the area-learning process is as follows: the XML file from step (4) is opened through the Tango SDK module, the coordinate information of the calibration points is read, and the calibration coordinates of the Cubes are set according to the xyz coordinates of the calibration points in the XML file.
5. The indoor positioning method based on multi-source data fusion and visual deep learning of claim 1, wherein in step (8), the indoor area is continuously scanned through the Tango SDK module by repeating steps (2)-(3), the obtained .fbx scene file is compared with the image-space map to obtain the moving track of the Sphere on the image-space map, and the current position is displayed in real time.
CN201911034554.2A 2019-10-29 2019-10-29 Indoor positioning method based on multi-source data fusion and visual deep learning, granted as CN110779527B (Expired - Fee Related)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911034554.2A 2019-10-29 2019-10-29 Indoor positioning method based on multi-source data fusion and visual deep learning (granted as CN110779527B)

Publications (2)

Publication Number Publication Date
CN110779527A 2020-02-11
CN110779527B 2021-04-06

Family

ID=69387147

Family Applications (1)

Application Number Priority Date Filing Date Title
CN201911034554.2A 2019-10-29 2019-10-29 Indoor positioning method based on multi-source data fusion and visual deep learning (granted as CN110779527B, Expired - Fee Related)

Country Status (1)

Country Link
CN (1) CN110779527B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170300973A1 (en) * 2016-04-19 2017-10-19 Wal-Mart Stores, Inc. Systems, apparatuses, and method for mapping a space
CN106199626A (en) * 2016-06-30 2016-12-07 上海交通大学 Based on the indoor three-dimensional point cloud map generation system and the method that swing laser radar
CN106485785A (en) * 2016-09-30 2017-03-08 李娜 A kind of scene generating method based on indoor three-dimensional modeling and positioning and system
CN106780735A (en) * 2016-12-29 2017-05-31 深圳先进技术研究院 A kind of semantic map constructing method, device and a kind of robot
CN107393013A (en) * 2017-06-30 2017-11-24 网易(杭州)网络有限公司 Virtual roaming file generated, display methods, device, medium, equipment and system
CN107356256A (en) * 2017-07-05 2017-11-17 中国矿业大学 A kind of indoor high-accuracy position system and method for multi-source data mixing
CN108710739A (en) * 2018-05-11 2018-10-26 北京建筑大学 A kind of Building Information Model lightweight and three-dimensional scenic visualization method and system
CN108959707A (en) * 2018-05-31 2018-12-07 武汉虹信技术服务有限责任公司 A kind of BIM model texture and material method for visualizing based on Unity
CN109087393A (en) * 2018-07-23 2018-12-25 汕头大学 A method of building three-dimensional map
CN109410327A (en) * 2018-10-09 2019-03-01 鼎宸建设管理有限公司 A kind of three-dimension tidal current method based on BIM and GIS
CN109657318A (en) * 2018-12-11 2019-04-19 北京市政路桥股份有限公司 A method of fusion BIM and AR technology carry out complicated structure and technique anatomy
CN110189412A (en) * 2019-05-13 2019-08-30 武汉大学 More floor doors structure three-dimensional modeling methods and system based on laser point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU Sheng et al., "Indoor Navigation System Based on Virtual Reality Technology", Microcomputer & Its Applications *
LU Xiaohu, "Current Status and Analysis of Indoor Scene 3D Modeling and Navigation Technology", Heilongjiang Science and Technology Information *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112055216A (en) * 2020-10-30 2020-12-08 成都四方伟业软件股份有限公司 Method and device for rapidly loading mass of oblique photography based on Unity
CN112055216B (en) * 2020-10-30 2021-01-22 成都四方伟业软件股份有限公司 Method and device for rapidly loading mass of oblique photography based on Unity

Also Published As

Publication number Publication date
CN110779527B (en) 2021-04-06

Similar Documents

Publication Publication Date Title
WO2021073656A1 (en) Method for automatically labeling image data and device
US10297074B2 (en) Three-dimensional modeling from optical capture
US20190026400A1 (en) Three-dimensional modeling from point cloud data migration
CN103366631B (en) A kind of method making indoor map and the device making indoor map
CN109446973B (en) Vehicle positioning method based on deep neural network image recognition
CN109815300A (en) A kind of vehicle positioning method
CN109163715B (en) Electric power station selection surveying method based on unmanned aerial vehicle RTK technology
CN113640822B (en) High-precision map construction method based on non-map element filtering
CN111811502B (en) Motion carrier multi-source information fusion navigation method and system
CN112800516A (en) Building design system with real-scene three-dimensional space model
CN103632044A (en) Camera topology building method and device based on geographic information system
US20200209389A1 (en) Locating Method and Device, Storage Medium, and Electronic Device
CN110779527B (en) Indoor positioning method based on multi-source data fusion and visual deep learning
CN116106870A (en) Calibration method and device for external parameters of vehicle laser radar
CN105444773A (en) Navigation method and system based on real scene recognition and augmented reality
CN109636897B (en) Octmap optimization method based on improved RGB-D SLAM
CN112509133A (en) Three-dimensional reservoir high-definition live-action display method based on GIS
CN112348941A (en) Real-time fusion method and device based on point cloud and image data
CN111323026B (en) Ground filtering method based on high-precision point cloud map
CN116817891A (en) Real-time multi-mode sensing high-precision map construction method
CN114459483B (en) Landmark navigation map construction and application method and system based on robot navigation
CN114415661B (en) Planar laser SLAM and navigation method based on compressed three-dimensional space point cloud
CN113947665B (en) Method for constructing map of spherical hedge trimmer based on multi-line laser radar and monocular vision
CN113139661B (en) Ground feature depth prediction method based on deep learning and multi-view remote sensing images
CN205301998U (en) Vision and indoor positioning system of food delivery robot who finds range and fuse

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210406