CN112581530A - Indoor positioning method, storage medium, equipment and system - Google Patents


Info

Publication number
CN112581530A
Authority
CN
China
Prior art keywords
user
indoor space
image information
indoor
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011389423.9A
Other languages
Chinese (zh)
Inventor
黄孝斌
魏剑平
司博章
果泽宇
黄飞
王国金
樊勇
张纬静
陈海雁
李颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shidai Lingyu Information Technology Co ltd
Original Assignee
Beijing Shidai Lingyu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shidai Lingyu Information Technology Co ltd filed Critical Beijing Shidai Lingyu Information Technology Co ltd
Priority to CN202011389423.9A priority Critical patent/CN112581530A/en
Publication of CN112581530A publication Critical patent/CN112581530A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2004 Aligning objects, relative positioning of parts

Abstract

The application relates to an indoor positioning method, a storage medium, equipment and a system. A positioning request sent by a user terminal is received; because the positioning request contains the personal information of the user, the appearance characteristic data of the user can be matched from a database according to the request. Omnidirectional image information of the indoor space where the user terminal is currently located is then collected, three-dimensional models of all people in the indoor space are established according to the omnidirectional image information, and appearance characteristic recognition is performed on all people in the indoor space based on the appearance characteristic data of the user and the omnidirectional image information, so that the three-dimensional model corresponding to the appearance characteristic data of the user is determined. Because the three-dimensional model of the user is established within the current indoor space, the relative coordinate data of the three-dimensional model of the user in the indoor space can be calculated to locate the user's position in the current indoor space, and the relative coordinate data is sent to the user terminal. By combining three-dimensional space modeling with face recognition to locate the user's indoor position, the application suffers less interference and achieves higher precision than prior-art positioning through radio-frequency signals.

Description

Indoor positioning method, storage medium, equipment and system
Technical Field
The present application relates to the field of indoor positioning technologies, and in particular, to a method, a storage medium, a device, and a system for indoor positioning.
Background
When satellite positioning cannot be used in an indoor environment because satellite signals are weak at ground level and cannot penetrate buildings, other indoor positioning technologies are generally used to supplement it. In the prior art, beacons are typically deployed to emit radio-frequency signals; a receiving terminal receives a beacon signal and calculates its RSSI (received signal strength indicator) value to determine the terminal's position in the space.
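For context, prior-art RSSI positioning typically converts signal strength to distance with the log-distance path-loss model. The sketch below is illustrative only; the 1 m reference power and path-loss exponent are assumed calibration values, not figures from this application:

```python
def rssi_to_distance(rssi_dbm: float, ref_power_dbm: float = -59.0,
                     path_loss_exponent: float = 2.0) -> float:
    """Estimate beacon distance in metres from an RSSI reading.

    ref_power_dbm is the RSSI measured 1 m from the beacon (an assumed
    calibration constant); a path-loss exponent of ~2 models free space,
    while higher values model cluttered indoor environments.
    """
    return 10.0 ** ((ref_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```

With these assumed constants, a reading of -79 dBm maps to roughly 10 m. Multipath and attenuation make such estimates noisy indoors, which is the interference the present application seeks to avoid.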
Disclosure of Invention
To overcome, at least to some extent, the problems in the related art, the present application provides a method, a storage medium, an apparatus, and a system for indoor positioning.
The scheme of the application is as follows:
according to an aspect of the embodiments of the present application, there is provided an indoor positioning method, including:
receiving a positioning request sent by a user terminal, wherein the positioning request contains personal information of the user;
matching the appearance characteristic data of the user from a database according to the positioning request;
acquiring omnidirectional image information of the indoor space where the user terminal is currently located;
establishing three-dimensional models of all people in the indoor space according to the omnidirectional image information;
performing appearance characteristic recognition on all people in the indoor space based on the appearance characteristic data of the user and the omnidirectional image information, and determining the three-dimensional model corresponding to the appearance characteristic data of the user;
and calculating relative coordinate data of the three-dimensional model of the user in the indoor space, and sending the relative coordinate data to the user terminal.
Preferably, in an implementation manner of the present application, the method further includes:
receiving a registration request sent by a user terminal, wherein the registration request comprises personal information and appearance characteristic data of the user;
and registering information of the user, associating the personal information of the user with the appearance characteristic data, and storing the appearance characteristic data of the user in the database.
Preferably, in an implementation manner of the present application, the acquiring the omnidirectional image information of the indoor space where the user terminal is currently located specifically includes:
acquiring the omnidirectional image information of the indoor space where the user terminal is currently located through a plurality of binocular cameras arranged in the indoor space, wherein the combined acquisition range of the plurality of binocular cameras covers the indoor space.
Preferably, in an implementation manner of the present application, the building a three-dimensional model of all people in the indoor space according to the omnidirectional image information specifically includes:
and analyzing the omnidirectional image information, and establishing a three-dimensional model of all people in the indoor space according to the analyzed omnidirectional image information.
Preferably, in an implementation manner of the present application, the establishing a three-dimensional model of all people in the indoor space according to the analyzed omnidirectional image information specifically includes:
and establishing a three-dimensional model of all people in the indoor space according to the analyzed omnidirectional image information based on the binocular stereoscopic vision principle.
Preferably, in an implementation manner of the present application, the identifying, based on the appearance feature data of the user, the appearance features of the object on the three-dimensional models of all the people in the indoor space specifically includes:
and carrying out object appearance characteristic recognition on the three-dimensional models of all the people in the indoor space by an AI recognition technology based on the appearance characteristic data of the user.
Preferably, in an implementation manner of the present application, the calculating the relative coordinate data of the three-dimensional model of the user in the indoor space specifically includes:
establishing a space model of the indoor space according to the analyzed omnidirectional image information;
and calculating the relative coordinate data of the three-dimensional model of the user in the indoor space according to the distance between the three-dimensional model of the user and each binocular camera in the space model of the indoor space.
According to a second aspect of embodiments of the present application, there is provided a storage medium storing a computer program which, when executed by a processor, implements a method of indoor positioning as described in any one of the above.
According to a third aspect of embodiments of the present application, there is provided an indoor positioning apparatus, including: a processor and a memory;
the processor and the memory are connected through a communication bus;
the processor is used for calling and executing the program stored in the memory;
the memory is used for storing a program for at least performing the indoor positioning method described in any one of the above.
According to a fourth aspect of embodiments of the present application, there is provided a system for indoor positioning, comprising: the system comprises a server and a plurality of binocular cameras arranged indoors;
the binocular cameras are used for acquiring omnidirectional image information of the indoor space where the user terminal is currently located and sending the omnidirectional image information to the server;
the server is used for receiving a positioning request sent by a user terminal, wherein the positioning request contains personal information of the user; matching the appearance characteristic data of the user from a database according to the positioning request; receiving the omnidirectional image information, sent by the binocular cameras, of the indoor space where the user terminal is currently located; establishing three-dimensional models of all people in the indoor space according to the omnidirectional image information; performing appearance characteristic recognition on all people in the indoor space based on the appearance characteristic data of the user and the omnidirectional image information, and determining the three-dimensional model corresponding to the appearance characteristic data of the user; and calculating relative coordinate data of the three-dimensional model of the user in the indoor space and sending the relative coordinate data to the user terminal.
The technical scheme provided by the application may include the following beneficial effects. A positioning request sent by the user terminal is received; because the positioning request contains the personal information of the user, the appearance characteristic data of the user can be matched from the database according to the request. Omnidirectional image information of the indoor space where the user terminal is currently located is then collected, three-dimensional models of all people in the indoor space are established according to the omnidirectional image information, and appearance characteristic recognition is performed on all people in the indoor space based on the appearance characteristic data of the user and the omnidirectional image information, so that the three-dimensional model corresponding to the appearance characteristic data of the user is determined. Because the three-dimensional model of the user is established within the current indoor space, the relative coordinate data of the three-dimensional model of the user in the indoor space can be calculated to locate the user's position in the current indoor space, and the relative coordinate data is sent to the user terminal. By combining three-dimensional space modeling with face recognition to locate the user's indoor position, the application suffers less interference and achieves higher precision than prior-art positioning through radio-frequency signals.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of a method for indoor positioning according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an indoor positioning apparatus according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an indoor positioning system according to an embodiment of the present application.
Reference numerals: a processor-21; a memory-22; a server-31; a binocular camera-32.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Example one
A method of indoor positioning, referring to fig. 1, includes:
s11: receiving a positioning request sent by a user terminal, wherein the positioning request contains personal information of a user;
the user terminal can be a mobile device such as a mobile phone terminal and a tablet computer terminal of a user.
Before the user locates, it needs to register on the server established in this embodiment. Therefore, the method for indoor positioning in this embodiment further includes:
receiving a registration request sent by a user terminal, wherein the registration request comprises personal information and appearance characteristic data of a user;
the personal information of the user includes, for example, the name, gender, mobile phone number, login account number, password, etc. The appearance characteristic data of the user can enable the user to perform head nodding, head shaking and other actions according to the instruction, and the facial characteristic data of the user is collected through a camera of the user terminal and serves as the appearance characteristic data of the user.
And registering information of the user, associating the personal information of the user with the appearance characteristic data, and storing the appearance characteristic data of the user in a database.
Registering the user means creating a personal account for the user on the server and associating the user's personal information, such as name, gender and mobile phone number, with the user's appearance characteristic data. The appearance characteristic data is stored in the database so that it can be conveniently retrieved when the user later requests positioning.
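The registration step above amounts to associating personal information with appearance characteristic data under one account, and retrieving that data later by account. A minimal in-memory sketch; the class and method names are illustrative, not from the patent:

```python
class UserRegistry:
    """Minimal stand-in for the server-side database that associates a
    user's personal information with appearance characteristic data."""

    def __init__(self):
        self._records = {}

    def register(self, account: str, personal_info: dict, appearance_data: list) -> None:
        # Associate personal information and appearance characteristic
        # data under a single account, as in the registration step.
        self._records[account] = {
            "personal_info": personal_info,
            "appearance_data": appearance_data,
        }

    def match_appearance_data(self, account: str):
        # A positioning request carrying the personal information (here
        # just the account) retrieves the stored appearance data.
        record = self._records.get(account)
        return record["appearance_data"] if record else None
```

A real deployment would back this with a persistent database and hold face embeddings rather than raw lists.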
S12: matching the appearance characteristic data of the user from the database according to the positioning request;
since the positioning request includes the personal information of the user, and the personal information of the user is associated with the appearance feature data of the user, the appearance feature data of the user can be matched from the database according to the personal information of the user.
S13: acquiring all-dimensional image information of an indoor space where a user terminal is currently located;
the method specifically comprises the following steps:
acquiring the omnidirectional image information of the indoor space where the user terminal is currently located through a plurality of binocular cameras arranged in the indoor space, wherein the combined acquisition range of the plurality of binocular cameras covers the indoor space.
A binocular (stereo) camera is a mature prior-art camera capable of locating targets within its acquisition range.
In this embodiment, a plurality of binocular cameras arranged in the indoor space collect the omnidirectional image information of the indoor space where the user terminal is currently located, and their combined acquisition range must cover the entire indoor space. Deployment of the binocular cameras is planned according to the actual situation; any layout whose acquisition range covers the whole application scene is acceptable. Overlapping (cross) acquisition also facilitates the later matching of images against one another. For example, in a sufficiently small room the cameras may be installed at the four corners; the information collected at the four positions overlaps, so the overlapping regions can be selected and fused when the space model is generated, yielding a more accurate indoor model, similar to the surround-view ("holographic") image of a car.
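The coverage requirement above can be checked numerically when planning a deployment. A rough sketch, modeling each camera's acquisition range as a circle of assumed radius on a 2D floor plan (a simplification of real fields of view):

```python
import math

def covers_floor(cameras, width, height, cam_range, step=0.25):
    """Return True if every sampled point of a width x height floor plan
    lies within cam_range of at least one camera position (x, y)."""
    steps_x = int(width / step) + 1
    steps_y = int(height / step) + 1
    for i in range(steps_x + 1):
        for j in range(steps_y + 1):
            px, py = min(i * step, width), min(j * step, height)
            # A point is uncovered if it is out of range of every camera.
            if all(math.hypot(px - cx, py - cy) > cam_range
                   for cx, cy in cameras):
                return False
    return True
```

For the four-corner example, cameras at the corners of a 4 m x 4 m room with a 3 m range cover the whole floor (the farthest point, the centre, is about 2.83 m from each corner), while a 2 m range leaves the centre uncovered.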
S14: establishing a three-dimensional model of all people in the indoor space according to the omnibearing image information;
and analyzing the omnidirectional image information, and establishing a three-dimensional model of all people in the indoor space according to the analyzed omnidirectional image information based on the binocular stereoscopic vision principle.
The binocular stereo vision is based on parallax, and three-dimensional information is acquired by a trigonometry principle, namely a triangle is formed between the image planes of two cameras and a north object. The three-dimensional size of the object in the common field of view of the two cameras and the three-dimensional coordinates of the characteristic points of the space object can be obtained by keeping the position relationship between the two cameras. Therefore, binocular vision systems are generally composed of two cameras, i.e., binocular cameras.
Any point on the image surface of the left camera of the binocular camera can completely determine the three-dimensional coordinates of the point as long as the corresponding matching point can be found on the image surface of the right camera. The method is point-to-point operation, and all points on a plane can participate in the operation as long as corresponding matching points exist, so that corresponding three-dimensional coordinates are obtained.
And modeling the images in the range through the binocular camera. The multi-point deployment of the binocular camera can realize the modeling of the space and further realize the acquisition of the indoor coordinates,
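For a rectified binocular pair, the triangulation described above reduces to the classic relation Z = f·B/d (focal length times baseline over disparity), after which the full 3D coordinates of each matched point follow by back-projection. A minimal sketch; the focal length, baseline and pixel values in the usage note are assumed example parameters:

```python
def disparity_to_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a matched point from binocular stereo: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the
    left and right cameras; disparity_px: horizontal shift of the point
    between the left and right image planes.
    """
    if disparity_px <= 0:
        raise ValueError("matched points must have positive disparity")
    return focal_px * baseline_m / disparity_px


def back_project(u: float, v: float, cx: float, cy: float,
                 focal_px: float, depth: float):
    """Recover the 3D camera-frame coordinates of pixel (u, v) once its
    depth is known, completing the point-by-point operation."""
    x = (u - cx) * depth / focal_px
    y = (v - cy) * depth / focal_px
    return (x, y, depth)
```

For instance, with an assumed 700 px focal length and a 0.12 m baseline, a 14 px disparity corresponds to a depth of 6 m.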
S15: performing appearance characteristic recognition on all people in the indoor space based on the appearance characteristic data of the user and the omnidirectional image information, and determining the three-dimensional model corresponding to the appearance characteristic data of the user;
The omnidirectional image information of the indoor space where the user terminal is currently located includes the face images of everyone currently indoors, so appearance characteristic recognition can be performed on all people in the indoor space based on the user's appearance characteristic data and the omnidirectional image information. Since three-dimensional models of all people in the indoor space were established from the omnidirectional image information in the preceding step, the three-dimensional model corresponding to the user's appearance characteristic data can then be determined. The appearance characteristic recognition may use AI recognition technology; face recognition in particular is a mature AI recognition technology.
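The recognition step amounts to comparing the user's stored characteristic data against the features extracted for each modeled person and keeping the best match above a threshold. A sketch using cosine similarity over feature vectors; the vectors and the 0.8 threshold are illustrative assumptions, since the patent does not specify a particular face-recognition algorithm:

```python
def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def identify_user(user_features, person_features, threshold=0.8):
    """person_features maps each three-dimensional model's id to the
    feature vector extracted for that person. Returns the id of the
    model that best matches the user's stored appearance characteristic
    data, or None if no match clears the threshold."""
    best_id, best_score = None, threshold
    for person_id, features in person_features.items():
        score = cosine_similarity(user_features, features)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

In practice the feature vectors would be embeddings produced by a trained face-recognition network rather than hand-written lists.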
S16: and calculating relative coordinate data of the three-dimensional model of the user in the indoor space, and sending the relative coordinate data to the user terminal.
Specifically, a space model of an indoor space is established according to the analyzed omnidirectional image information;
and calculating the relative coordinate data of the three-dimensional model of the user in the indoor space according to the distance between the three-dimensional model of the user and each binocular camera in the space model of the indoor space.
In the above steps, a plurality of binocular cameras are deployed as acquisition points to model the current indoor space, so the indoor space is a known model. It is then only necessary to know the user's spatial distance from the binocular cameras: placing the user's three-dimensional model into the space model of the indoor space yields the user's spatial position information, i.e., the relative coordinate data of the user's three-dimensional model in the indoor space.
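Knowing the user's distance to each camera in the known space model is enough to fix the user's coordinates. A sketch of 2D trilateration from three camera positions using the standard linearization (subtracting the first range equation from the others); three dimensions work the same way with an additional range:

```python
def trilaterate_2d(cameras, distances):
    """Locate a point from its distances to three known camera positions.

    cameras: [(x1, y1), (x2, y2), (x3, y3)]; distances: the three
    measured ranges. Subtracting the first circle equation from the
    other two yields a linear system A @ [x, y] = b, solved here by
    Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = cameras
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1 * r1 - r2 * r2 + x2 * x2 - x1 * x1 + y2 * y2 - y1 * y1
    b2 = r1 * r1 - r3 * r3 + x3 * x3 - x1 * x1 + y3 * y3 - y1 * y1
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("camera positions must not be collinear")
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With noisy real-world ranges from more than three cameras, a least-squares solve over all range equations would replace this exact three-camera solution.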
In this embodiment, a positioning request sent by the user terminal is received; because the positioning request contains the personal information of the user, the appearance characteristic data of the user can be matched from the database according to the request. Omnidirectional image information of the indoor space where the user terminal is currently located is then collected, three-dimensional models of all people in the indoor space are established according to the omnidirectional image information, and appearance characteristic recognition is performed on all people in the indoor space based on the appearance characteristic data of the user and the omnidirectional image information, so that the three-dimensional model corresponding to the appearance characteristic data of the user is determined. Because the three-dimensional model of the user is established within the current indoor space, the relative coordinate data of the three-dimensional model of the user in the indoor space can be calculated to locate the user's position in the current indoor space, and the relative coordinate data is sent to the user terminal. By combining three-dimensional space modeling with face recognition to locate the user's indoor position, this embodiment suffers less interference and achieves higher precision than prior-art positioning through radio-frequency signals.
The application scenarios of the indoor positioning method in this embodiment may be as follows:
the user needs to position the current position of the user in a shopping mall, and the position of the user can be positioned by the binocular camera arranged in the shopping mall through the indoor positioning method.
Example two
A storage medium storing a computer program which, when executed by a processor, implements a method of indoor positioning as in the above embodiments.
EXAMPLE III
An apparatus for indoor positioning, referring to fig. 2, comprising: a processor 21 and a memory 22;
the processor 21 is connected to the memory 22 by a communication bus:
the processor 21 is used for calling and executing the program stored in the memory;
a memory 22 for storing a program for at least performing the method of indoor positioning as in the above embodiments.
Example four
A system for indoor positioning, referring to fig. 3, comprising: a server 31 and a plurality of binocular cameras 32 provided indoors;
the binocular camera 32 is used for acquiring the omnidirectional image information of the indoor space where the user terminal is currently located and sending the omnidirectional image information to the server 31;
the server 31 is configured to receive a positioning request sent by a user terminal, where the positioning request includes personal information of a user; matching the appearance characteristic data of the user from the database according to the positioning request; receiving the omnidirectional image information of the indoor space where the user terminal is currently located, which is sent by the binocular camera 32; establishing a three-dimensional model of all people in the indoor space according to the omnibearing image information; based on the appearance characteristic data and the omnibearing image information of the user, carrying out object appearance characteristic identification on all people in the indoor space, and determining a three-dimensional model corresponding to the appearance characteristic data of the user; and calculating relative coordinate data of the three-dimensional model of the user in the indoor space, and sending the relative coordinate data to the user terminal.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A method of indoor positioning, comprising:
receiving a positioning request sent by a user terminal, wherein the positioning request contains personal information of the user;
matching the appearance characteristic data of the user from a database according to the positioning request;
acquiring omnidirectional image information of the indoor space where the user terminal is currently located;
establishing three-dimensional models of all people in the indoor space according to the omnidirectional image information;
performing appearance characteristic recognition on all people in the indoor space based on the appearance characteristic data of the user and the omnidirectional image information, and determining the three-dimensional model corresponding to the appearance characteristic data of the user;
and calculating relative coordinate data of the three-dimensional model of the user in the indoor space, and sending the relative coordinate data to the user terminal.
2. The method of claim 1, further comprising:
receiving a registration request sent by a user terminal, wherein the registration request comprises personal information and appearance characteristic data of the user;
and registering information of the user, associating the personal information of the user with the appearance characteristic data, and storing the appearance characteristic data of the user in the database.
3. The method according to claim 1, wherein acquiring the omnidirectional image information of the indoor space where the user terminal is currently located specifically comprises:
acquiring the omnidirectional image information of the indoor space where the user terminal is currently located through a plurality of binocular cameras arranged in the indoor space, wherein the acquisition ranges of the plurality of binocular cameras cover the indoor space.
4. The method according to claim 3, wherein establishing the three-dimensional models of all persons in the indoor space according to the omnidirectional image information specifically comprises:
parsing the omnidirectional image information, and establishing the three-dimensional models of all persons in the indoor space according to the parsed omnidirectional image information.
5. The method according to claim 4, wherein establishing the three-dimensional models of all persons in the indoor space according to the parsed omnidirectional image information specifically comprises:
establishing the three-dimensional models of all persons in the indoor space from the parsed omnidirectional image information based on the principle of binocular stereoscopic vision.
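The binocular stereoscopic vision principle invoked in claim 5 recovers depth from the disparity between the two views of a rectified camera pair, via Z = f·B/d. A minimal sketch; the focal length, baseline, and disparity values below are illustrative, not taken from the patent:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for a rectified binocular camera pair.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two camera centers, in metres
    disparity_px -- horizontal shift of the same point between the images
    """
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity_px

# e.g. f = 700 px, baseline 0.12 m, disparity 21 px -> depth 4.0 m
print(depth_from_disparity(700, 0.12, 21))
```

In practice the disparity would come from stereo matching over the rectified image pair; this sketch shows only the geometric back-projection step.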
6. The method according to claim 4, wherein performing appearance feature recognition on the three-dimensional models of all persons in the indoor space based on the appearance feature data of the user specifically comprises:
performing appearance feature recognition on the three-dimensional models of all persons in the indoor space through an AI recognition technique based on the appearance feature data of the user.
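The claim does not specify the AI recognition technique. One common, hypothetical realization compares the user's stored appearance feature vector against a feature vector extracted for each detected person, accepting the best match only above a similarity threshold:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def recognize(user_features, candidates, threshold=0.9):
    """Return the index of the candidate whose extracted features best
    match the user's appearance feature data, or None if no candidate
    clears the similarity threshold."""
    best_i, best_s = None, threshold
    for i, c in enumerate(candidates):
        s = cosine_similarity(user_features, c)
        if s > best_s:
            best_i, best_s = i, s
    return best_i

# Illustrative 3-dimensional feature vectors; real systems use
# high-dimensional embeddings from a trained recognition model.
print(recognize([1.0, 0.0, 0.0], [[0.0, 1.0, 0.0], [0.99, 0.05, 0.0]]))
```

The threshold guards against confidently "recognizing" the user when no detected person actually matches, which matters when the requesting user is occluded or outside camera coverage.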
7. The method according to claim 4, wherein calculating the relative coordinate data of the three-dimensional model of the user in the indoor space specifically comprises:
establishing a space model of the indoor space according to the parsed omnidirectional image information;
and calculating the relative coordinate data of the three-dimensional model of the user in the indoor space according to the distances between the three-dimensional model of the user and each binocular camera in the space model of the indoor space.
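Claim 7 derives the user's coordinates from distances to each binocular camera. With three or more cameras at known positions in the space model, this reduces to trilateration; a planar sketch (the camera layout and distances are illustrative, and a real system would solve in three dimensions with a least-squares fit over all cameras):

```python
def trilaterate_2d(cams, dists):
    """Solve for the (x, y) position given three camera positions and
    the measured distances from the user's model to each camera.

    cams  -- [(x1, y1), (x2, y2), (x3, y3)], camera positions
    dists -- [r1, r2, r3], corresponding distances
    """
    (x1, y1), (x2, y2), (x3, y3) = cams
    r1, r2, r3 = dists
    # Linearize by subtracting the first circle equation from the others.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    # Solve the 2x2 linear system by Cramer's rule.
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Cameras at three corners; the true position (1, 1) is recovered.
cams = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [2 ** 0.5, 10 ** 0.5, 10 ** 0.5]
print(trilaterate_2d(cams, dists))
```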
8. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the indoor positioning method according to any one of claims 1-7.
9. An apparatus for indoor positioning, comprising: a processor and a memory;
the processor and the memory are connected through a communication bus;
the processor is configured to call and execute a program stored in the memory;
the memory is configured to store a program for performing at least the indoor positioning method according to any one of claims 1-7.
10. A system for indoor positioning, comprising: a server and a plurality of binocular cameras arranged indoors;
the binocular cameras are used for acquiring omnidirectional image information of an indoor space where a user terminal is currently located and sending the omnidirectional image information to the server;
the server is used for: receiving a positioning request sent by the user terminal, wherein the positioning request contains personal information of a user; matching appearance feature data of the user from a database according to the positioning request; receiving the omnidirectional image information of the indoor space where the user terminal is currently located, sent by the binocular cameras; establishing three-dimensional models of all persons in the indoor space according to the omnidirectional image information; performing appearance feature recognition on all persons in the indoor space based on the appearance feature data of the user and the omnidirectional image information, and determining the three-dimensional model corresponding to the appearance feature data of the user; and calculating relative coordinate data of the three-dimensional model of the user in the indoor space, and sending the relative coordinate data to the user terminal.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011389423.9A CN112581530A (en) 2020-12-01 2020-12-01 Indoor positioning method, storage medium, equipment and system

Publications (1)

Publication Number Publication Date
CN112581530A true CN112581530A (en) 2021-03-30

Family

ID=75128105



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination