CN110779527B - Indoor positioning method based on multi-source data fusion and visual deep learning - Google Patents

Indoor positioning method based on multi-source data fusion and visual deep learning

Info

Publication number
CN110779527B
CN110779527B
Authority
CN
China
Prior art keywords
unity
area
tango
calibration
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911034554.2A
Other languages
Chinese (zh)
Other versions
CN110779527A (en)
Inventor
Wang Zhuwei (王朱伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Heyoo Technology Co., Ltd.
Original Assignee
Wuxi Heyoo Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Heyoo Technology Co., Ltd.
Priority to CN201911034554.2A
Publication of CN110779527A
Application granted
Publication of CN110779527B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an indoor positioning method based on multi-source data fusion and visual deep learning. A new area description is created in the Unity interface, and a selected indoor area is scanned through the Tango SDK module to obtain the coordinates of all points in the area's three-dimensional space; these form a point cloud array that is saved as an .fbx scene file. Calibration points are placed on the scanned area scene and then saved to an xml file containing their coordinate information. Three Cubes are created to calibrate the Unity space coordinates: the coordinate information of the calibration points is read, and each Cube's calibration coordinate is set according to the xyz coordinate of the calibration point in the area-learning xml file. A plan top view of the scanning area is drawn in Revit at the same scale and imported into the Unity space coordinates to form an image space map. A Sphere is then created through the Tango Camera module as a marker of the current position on the map, and the current position is displayed in real time through the Tango SDK module.

Description

Indoor positioning method based on multi-source data fusion and visual deep learning
Technical field:
The invention belongs to the technical field of data analysis, and particularly relates to an indoor positioning method based on multi-source data fusion and visual deep learning.
Background art:
In daily travel, navigation has become an indispensable tool that greatly facilitates people's travel needs. At present, navigation software at home and abroad determines position based on satellite positioning, but when the receiving device is located in an enclosed indoor space, its ability to receive satellite signals is greatly reduced or lost entirely. As a result, people often get lost in places such as large shopping malls or underground parking lots and waste considerable time reaching their destinations.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Summary of the invention:
The invention aims to provide an indoor positioning method based on multi-source data fusion and visual deep learning that overcomes the defects in the prior art.
To achieve this purpose, the invention provides an indoor positioning system based on multi-source data fusion and visual deep learning, which comprises an intelligent terminal equipped with a laser radar; Unity software and Revit software are installed on the intelligent terminal, and a Tango SDK module and a Tango Camera module are loaded in Unity.
An indoor positioning method based on multi-source data fusion and visual deep learning comprises the following steps:
(1) starting Unity installed on the intelligent terminal, clicking "new area description" in the Unity interface, and scanning a selected indoor area through the Tango SDK module;
(2) the Tango SDK module calls the laser radar to scan the three-dimensional space of the area; the three-dimensional coordinates of each point in the indoor space are obtained in Unity, and the coordinate data of all points in the area form a point cloud array;
(3) after scanning is finished, the point cloud array obtained in step (2) is saved in Unity as an .fbx scene file;
(4) Unity opens the .fbx scene file from step (3); calibration points are placed on the scanned area scene, and their positions are then saved to an xml file containing the calibration point coordinate information;
(5) three Cube objects are created in Unity to calibrate the Unity space coordinates; the xml file from step (4) is opened through the Tango SDK module, the coordinate information of the calibration points is read, and the calibration coordinate of each Cube is set according to the xyz coordinate of the calibration point in the area-learning xml file;
(6) Revit is opened, and a plan top view of the scanning area from step (1) is drawn in Revit at the same scale;
(7) the plan top view from step (6) is imported into the Unity space coordinates and is scaled and rotated until it corresponds to the positions of the three Cubes, forming an image space map;
(8) in the image space map, a Sphere is created through the Tango Camera module as a marker of the current position on the map; during movement, steps (2)-(3) are repeated through the Tango SDK module to continuously scan the surrounding indoor area, the resulting .fbx scene file is compared with the image space map to obtain the Sphere's moving track on the map, and the current position is displayed in real time.
Compared with the prior art, the invention has the following beneficial effects:
point cloud data recorded by laser scanning through the Tango SDK module on the terminal form a scene; a plan top view drawn in Revit is imported into Unity and combined with the scene file to model an image space map; and the motion track is recorded on the image space map through real-time scanning by the Tango SDK module, thereby realizing indoor navigation in an enclosed environment.
Description of the drawings:
FIG. 1 is a flow chart of an indoor positioning method based on multi-source data fusion and visual deep learning according to the present invention;
FIG. 2 shows the source code by which the Tango SDK module of the present invention saves the point cloud array into a scene file;
FIG. 3 is an interface diagram of creating a new area description in Unity according to the present invention;
FIG. 4 is an interface diagram of placing a calibration point on an area scene in Unity according to the present invention.
Detailed description of the embodiments:
The following is a detailed description of specific embodiments of the invention, but it should be understood that the scope of the invention is not limited to these specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element or component but not the exclusion of any other element or component.
Example 1
An indoor positioning system based on multi-source data fusion and visual deep learning comprises an intelligent terminal equipped with a laser radar; Unity software and Revit software are installed on the intelligent terminal, and a Tango SDK module and a Tango Camera module are loaded in Unity.
As shown in FIG. 1, an indoor positioning method based on multi-source data fusion and visual deep learning includes:
(1) starting Unity installed on the intelligent terminal and, as shown in FIG. 3, clicking "new area description" in the Unity interface, then scanning a selected indoor area through the Tango SDK module;
(2) the Tango SDK module calls the laser radar to scan the three-dimensional space of the area; the three-dimensional coordinates of each point in the indoor space are obtained in Unity, and the coordinate data of all points in the area form a point cloud array (a code sketch of this step follows this embodiment);
(3) after scanning is finished, the Tango SDK module saves the point cloud array obtained in step (2) in Unity as an .fbx scene file through the source code shown in FIG. 2;
(4) Unity opens the .fbx scene file from step (3); as shown in FIG. 4, calibration points are placed on the scanned area scene, and their positions are then saved to an xml file containing the calibration point coordinate information (see the calibration sketch after this embodiment);
(5) three Cube objects are created in Unity to calibrate the Unity space coordinates; the xml file from step (4) is opened through the Tango SDK module, the coordinate information of the calibration points is read, and the calibration coordinate of each Cube is set according to the xyz coordinate of the calibration point in the area-learning xml file;
(6) Revit is opened, and a plan top view of the scanning area from step (1) is drawn in Revit at the same scale;
(7) the plan top view from step (6) is imported into the Unity space coordinates and is scaled and rotated until it corresponds to the positions of the three Cubes, forming an image space map (see the alignment sketch after this embodiment);
(8) in the image space map, a Sphere is created through the Tango Camera module as a marker of the current position on the map; during movement, steps (2)-(3) are repeated through the Tango SDK module to continuously scan the surrounding indoor area, the resulting .fbx scene file is compared with the image space map to obtain the Sphere's moving track on the map, and the current position is displayed in real time (see the marker sketch after this embodiment).
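As a concrete illustration of step (2), the sketch below accumulates scanned points into a point cloud array inside Unity. It assumes the depth interface of the historical Tango Unity SDK (ITangoDepth, TangoApplication.Register, and a TangoUnityDepth object carrying m_pointCount and packed xyz triplets in m_points); these names follow that SDK's examples, not the patent text, and may differ between SDK versions.

```csharp
using System.Collections.Generic;
using UnityEngine;
using Tango; // Tango Unity SDK namespace (assumed present in the project)

public class AreaPointCloudRecorder : MonoBehaviour, ITangoDepth
{
    private TangoApplication m_tangoApplication;
    private readonly List<Vector3> m_pointCloud = new List<Vector3>();

    void Start()
    {
        // Find the TangoApplication in the scene and subscribe to depth events.
        m_tangoApplication = FindObjectOfType<TangoApplication>();
        m_tangoApplication.Register(this);
    }

    // Called once per depth frame; m_points holds packed xyz triplets.
    public void OnTangoDepthAvailable(TangoUnityDepth tangoDepth)
    {
        for (int i = 0; i < tangoDepth.m_pointCount; i++)
        {
            m_pointCloud.Add(new Vector3(
                tangoDepth.m_points[3 * i + 0],
                tangoDepth.m_points[3 * i + 1],
                tangoDepth.m_points[3 * i + 2]));
        }
    }
}
```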
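Steps (4)-(5) persist the calibration points to an xml file and read them back to place the Cube markers. The patent does not specify the xml layout, so the <points>/<point x y z> schema below is an assumption; only the save-coordinates, read-coordinates, position-a-Cube flow follows the text.

```csharp
using System.Xml.Linq;
using UnityEngine;

public static class CalibrationPoints
{
    // Save each calibration point as <point x=".." y=".." z=".."/> (assumed layout).
    public static void Save(Vector3[] points, string path)
    {
        var doc = new XDocument(new XElement("points"));
        foreach (Vector3 p in points)
        {
            doc.Root.Add(new XElement("point",
                new XAttribute("x", p.x),
                new XAttribute("y", p.y),
                new XAttribute("z", p.z)));
        }
        doc.Save(path);
    }

    // Read the xyz coordinates back and place one Cube per calibration point.
    public static void LoadAsCubes(string path)
    {
        var doc = XDocument.Load(path);
        foreach (XElement e in doc.Root.Elements("point"))
        {
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            cube.transform.position = new Vector3(
                (float)e.Attribute("x"),
                (float)e.Attribute("y"),
                (float)e.Attribute("z"));
        }
    }
}
```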
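In step (7) the imported plan top view is scaled and rotated by hand until it matches the three Cubes. The sketch below is an assumed automation of that manual adjustment, not something the patent describes: two corresponding point pairs (Cube positions versus the matching points on the plan) suffice to recover a uniform scale and a yaw rotation in the horizontal plane.

```csharp
using UnityEngine;

public static class PlanAlignment
{
    // a0, a1: world positions of two calibration Cubes;
    // b0, b1: current world positions of the matching points on the plan.
    public static void Align(Transform plan, Vector3 a0, Vector3 a1,
                             Vector3 b0, Vector3 b1)
    {
        Vector3 da = a1 - a0; da.y = 0f;
        Vector3 db = b1 - b0; db.y = 0f;

        float scale = da.magnitude / db.magnitude;          // uniform scale factor
        Quaternion yaw = Quaternion.FromToRotation(db, da); // rotation about the up axis

        plan.localScale *= scale;
        plan.rotation = yaw * plan.rotation;

        // Scale and rotation act about the plan's pivot; shift the plan so
        // the mapped first point coincides with the first Cube.
        Vector3 pivot = plan.position;
        Vector3 b0Mapped = pivot + yaw * ((b0 - pivot) * scale);
        plan.position += a0 - b0Mapped;
    }
}
```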
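Finally, for step (8), the sketch below keeps the Sphere marker on the image space map in sync with the device pose. It assumes the Tango Camera module drives the main camera's transform from the device pose (as the Tango Unity prefabs do), so the marker only has to mirror that transform onto the map plane; the comparison of freshly scanned .fbx scenes against the map described in the patent is not reproduced here.

```csharp
using UnityEngine;

public class CurrentPositionMarker : MonoBehaviour
{
    private Transform m_trackedCamera;
    private GameObject m_sphere;

    void Start()
    {
        // Assumes the pose-driven Tango Camera is tagged as the main camera.
        m_trackedCamera = Camera.main.transform;
        m_sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
    }

    void Update()
    {
        // Project the camera position onto the floor-plan map (x-z plane),
        // keeping the marker at a fixed height above the map.
        Vector3 p = m_trackedCamera.position;
        m_sphere.transform.position = new Vector3(p.x, 0.1f, p.z);
    }
}
```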
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.

Claims (5)

1. An indoor positioning method based on multi-source data fusion and visual deep learning, comprising the following steps:
(1) starting Unity installed on an intelligent terminal, clicking "new area description" in the Unity interface, and scanning a selected indoor area through a Tango SDK module;
(2) the Tango SDK module calls a laser radar to scan the three-dimensional space of the area to obtain a three-dimensional point cloud array of the area;
(3) after scanning is finished, the point cloud array obtained in step (2) is saved in Unity as an .fbx scene file;
(4) Unity opens the .fbx scene file from step (3); calibration points are placed on the scanned area scene, and their positions are then saved to an xml file containing the calibration point coordinate information;
(5) three Cube objects are created in Unity to calibrate the Unity space coordinates, and the coordinate information of the calibration points from step (4) is obtained through area learning;
(6) Revit is opened, and a plan top view of the scanning area from step (1) is drawn in Revit at the same scale;
(7) the plan top view from step (6) is imported into the Unity space coordinates and is scaled and rotated until it corresponds to the positions of the three Cubes, forming an image space map;
(8) in the image space map, a Sphere is created through the Tango Camera module as a marker of the current position on the map, and the position of the Sphere is displayed in real time during movement.
2. The indoor positioning method based on multi-source data fusion and visual deep learning of claim 1, wherein: Unity is loaded with the Tango SDK module and the Tango Camera module.
3. The indoor positioning method based on multi-source data fusion and visual deep learning of claim 1, wherein: in step (2), the three-dimensional coordinates of each point in the indoor space are obtained in Unity, and the coordinate data of all points in the three-dimensional space of the area form a point cloud array.
4. The indoor positioning method based on multi-source data fusion and visual deep learning of claim 1, wherein: in step (5), the area learning process is as follows: the xml file from step (4) is opened through the Tango SDK module, the coordinate information of the calibration points is read, and the calibration coordinate of each Cube is set according to the xyz coordinate of the calibration point in the xml file.
5. The indoor positioning method based on multi-source data fusion and visual deep learning of claim 1, wherein: in step (8), the surrounding indoor area is continuously scanned through the Tango SDK module by repeating steps (2)-(3), the obtained .fbx scene file is compared with the image space map to obtain the moving track of the Sphere on the image space map, and the current position is displayed in real time.
CN201911034554.2A 2019-10-29 2019-10-29 Indoor positioning method based on multi-source data fusion and visual deep learning Active CN110779527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911034554.2A CN110779527B (en) 2019-10-29 2019-10-29 Indoor positioning method based on multi-source data fusion and visual deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911034554.2A CN110779527B (en) 2019-10-29 2019-10-29 Indoor positioning method based on multi-source data fusion and visual deep learning

Publications (2)

Publication Number Publication Date
CN110779527A CN110779527A (en) 2020-02-11
CN110779527B (en) 2021-04-06

Family

ID=69387147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911034554.2A Active CN110779527B (en) 2019-10-29 2019-10-29 Indoor positioning method based on multi-source data fusion and visual deep learning

Country Status (1)

Country Link
CN (1) CN110779527B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112055216B (en) * 2020-10-30 2021-01-22 成都四方伟业软件股份有限公司 Method and device for rapidly loading mass of oblique photography based on Unity

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017184555A1 (en) * 2016-04-19 2017-10-26 Wal-Mart Stores, Inc. Systems, apparatuses, and method for mapping a space
CN106199626B (en) * 2016-06-30 2019-08-09 上海交通大学 Based on the indoor three-dimensional point cloud map generation system and method for swinging laser radar
CN106485785B (en) * 2016-09-30 2023-09-26 李娜 Scene generation method and system based on indoor three-dimensional modeling and positioning
CN106780735B (en) * 2016-12-29 2020-01-24 深圳先进技术研究院 Semantic map construction method and device and robot
CN107393013B (en) * 2017-06-30 2021-03-16 网易(杭州)网络有限公司 Virtual roaming file generation and display method, device, medium, equipment and system
CN107356256A (en) * 2017-07-05 2017-11-17 中国矿业大学 A kind of indoor high-accuracy position system and method for multi-source data mixing
CN108710739B (en) * 2018-05-11 2022-04-22 北京建筑大学 Method and system for building information model lightweight and three-dimensional scene visualization
CN108959707B (en) * 2018-05-31 2022-09-20 武汉虹信技术服务有限责任公司 Unity-based BIM model texture material visualization method
CN109087393A (en) * 2018-07-23 2018-12-25 汕头大学 A method of building three-dimensional map
CN109410327B (en) * 2018-10-09 2022-05-17 广东博智林机器人有限公司 BIM and GIS-based three-dimensional city modeling method
CN109657318A (en) * 2018-12-11 2019-04-19 北京市政路桥股份有限公司 A method of fusion BIM and AR technology carry out complicated structure and technique anatomy
CN110189412B (en) * 2019-05-13 2023-01-03 武汉大学 Multi-floor indoor structured three-dimensional modeling method and system based on laser point cloud

Also Published As

Publication number Publication date
CN110779527A (en) 2020-02-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant