CN115631310A - Positioning system, method and device of three-dimensional space map and computing equipment - Google Patents

Positioning system, method and device of three-dimensional space map and computing equipment

Info

Publication number
CN115631310A
CN115631310A (application CN202211152773.2A)
Authority
CN
China
Prior art keywords
information
dimensional space
data
space map
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211152773.2A
Other languages
Chinese (zh)
Inventor
吕万洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202211152773.2A priority Critical patent/CN115631310A/en
Publication of CN115631310A publication Critical patent/CN115631310A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Data Mining & Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of this specification provide a positioning system, method, and device for a three-dimensional space map, and a computing device. The positioning system comprises: an acquisition device, user equipment, and a server. The acquisition device is configured to acquire environment basic data information of a target environment and send it to the server, where the environment basic data information comprises multidimensional sensing data information. The server is configured to receive the environment basic data information and generate a three-dimensional space map of the target environment from it. The user equipment is configured to acquire multimedia data of the target environment and send the multimedia data to the server based on the augmented reality framework. The server is further configured to determine the user's position information based on the three-dimensional space map and the multimedia data, and to send the position information to the user equipment.

Description

Positioning system, method and device of three-dimensional space map and computing equipment
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a positioning system of a three-dimensional space map.
Background
In practice, people often cannot find their way or locate a destination in unfamiliar environments. With advances in science and technology, AR real-time navigation has been widely adopted in daily life and is well liked. However, most existing AR navigation relies on Wi-Fi-based, Bluetooth-based, or infrared-based positioning; these technologies are easily interfered with by other signals and have poor stability, which degrades their accuracy.
Among indoor positioning technologies, Bluetooth-based positioning is mainly applied to small-range positioning; because its transmission distance is short, it incurs extremely high deployment and maintenance costs in large indoor environments. Infrared-based positioning achieves relatively high indoor accuracy, but light cannot pass through obstacles: infrared only propagates within line of sight, is easily disturbed by other light sources, and has a short range, so its indoor positioning performance is poor. A device placed in a pocket or blocked by a wall cannot work normally, and an antenna must be installed in every room, making deployment in a large space very costly.
A method is therefore needed to address the low accuracy and high cost of the positioning technologies above.
Disclosure of Invention
In view of this, this specification provides a positioning system for a three-dimensional space map. One or more embodiments of this specification further relate to a positioning method and device for a three-dimensional space map, a computing device, an augmented reality (AR) or virtual reality (VR) device, a computer-readable storage medium, and a computer program, so as to address the technical deficiencies of the prior art.
According to a first aspect of the embodiments herein, there is provided a positioning system for a three-dimensional space map, comprising: an acquisition device, user equipment, and a server;
the acquisition equipment is configured to acquire environment basic data information of a target environment and send the environment basic data information to the server based on an augmented reality framework, wherein the environment basic data information comprises multidimensional sensing data information;
the server is configured to receive the environment basic data information and generate a three-dimensional space map of the target environment according to the environment basic data information;
the user equipment is configured to acquire multimedia data of the target environment and send the multimedia data to the server based on the augmented reality framework;
the server is further configured to determine position information of a user based on the three-dimensional space map, the multimedia data and a visual positioning service, and send the position information to the user equipment.
According to a second aspect of the embodiments of the present specification, there is provided a positioning method for a three-dimensional space map, applied to a server, including:
receiving environment basic data information uploaded by an acquisition device based on an augmented reality framework, and generating a three-dimensional space map of a target environment according to the environment basic data information;
receiving multimedia data of the target environment uploaded by user equipment based on the augmented reality framework;
and determining the position information of the user based on the three-dimensional space map, the multimedia data and the visual positioning service, and sending the position information to the user equipment.
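As a rough illustration of the server-side method above (not the patent's actual implementation; all class and function names here are hypothetical), the flow can be sketched as: build the map from acquisition-device records, then match user-uploaded media against it to return position information:

```python
from dataclasses import dataclass, field


@dataclass
class SpatialMap:
    """Toy three-dimensional space map: visual fingerprint -> (x, y, z)."""
    landmarks: dict = field(default_factory=dict)


class PositioningServer:
    """Hypothetical sketch of the claimed server-side method."""

    def __init__(self):
        self.spatial_map = None

    def build_map(self, env_records):
        # step 1: receive environment basic data information and
        # generate the three-dimensional space map from it
        self.spatial_map = SpatialMap(dict(env_records))

    def localize(self, media_fingerprint):
        # steps 2-3: match uploaded multimedia data against the map
        # (a stand-in for the visual positioning service) and return
        # the user's position information
        if self.spatial_map is None:
            raise RuntimeError("three-dimensional space map not built yet")
        return self.spatial_map.landmarks.get(media_fingerprint)


server = PositioningServer()
server.build_map([("store1_facade", (3.0, 1.5, 0.0)), ("atrium", (0.0, 0.0, 0.0))])
position = server.localize("store1_facade")  # (3.0, 1.5, 0.0)
```

In a real deployment the fingerprint lookup would be replaced by feature extraction and 3D visual matching, but the receive/build/localize division mirrors the three steps of the claimed method.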
According to a third aspect of the embodiments of the present specification, there is provided a positioning apparatus for a three-dimensional space map, applied to a server, including:
the generating module is configured to receive environment basic data information uploaded by an acquisition device based on an augmented reality framework and generate a three-dimensional space map of a target environment according to the environment basic data information;
a first receiving module configured to receive multimedia data of the target environment uploaded by a user equipment based on the augmented reality framework;
a positioning module configured to determine location information of a user based on the three-dimensional spatial map, the multimedia data, and a visual positioning service, and to transmit the location information to the user device.
According to a fourth aspect of embodiments herein, there is provided a computing device comprising:
a memory and a processor;
the memory is used for storing computer executable instructions, and the processor is used for executing the computer executable instructions, and the computer executable instructions are executed by the processor to realize the steps of the positioning method of the three-dimensional space map.
According to a fifth aspect of embodiments herein, there is provided an augmented reality AR device or a virtual reality VR device, comprising:
a memory, a processor, and a display;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, which when executed by the processor implement the steps of:
acquiring multimedia data of a target environment, and sending the multimedia data to a server based on an augmented reality framework;
receiving the position information sent by the server, and determining the current position of the user based on the position information;
generating an AR environment image based on the current location of the user; or
Acquiring corresponding reference information based on the current position of the user, wherein the reference information comprises preset reference information and/or real-time reference information, the reference information is information for user reference, the preset reference information is reference information which is input in the user equipment in advance, and the real-time reference information is reference information which is displayed in real time according to the real-time position of the user;
rendering the AR environment image or the reference information to a display of the augmented reality AR device or the virtual reality VR device for display.
According to a sixth aspect of embodiments herein, there is provided a computer-readable storage medium storing computer-executable instructions, which when executed by a processor, implement the steps of the positioning method for a three-dimensional space map.
According to a seventh aspect of embodiments herein, there is provided a computer program, wherein when the computer program is executed in a computer, the computer is caused to execute the steps of the positioning method of the three-dimensional space map.
The positioning system for a three-dimensional space map provided by this specification comprises an acquisition device, user equipment, and a server. The acquisition device is configured to acquire environment basic data information of a target environment and send it to the server, the environment basic data information comprising multidimensional sensing data information; the server is configured to receive the environment basic data information and generate a three-dimensional space map of the target environment from it; the user equipment is configured to acquire multimedia data of the target environment and send the multimedia data to the server based on an augmented reality framework; and the server is further configured to determine the user's position information based on the three-dimensional space map and the multimedia data and send the position information to the user equipment.
In the positioning system provided by this specification, the acquisition device collects multidimensional environment basic data information of the target environment, and the server constructs the three-dimensional space map from data of different dimensions, which improves the map's accuracy. The interactions between the acquisition device and the server, and between the user equipment and the server, are carried over the extended reality framework, giving the positioning system broad compatibility and resolving the problems of low terminal compatibility and difficult adaptation. The visual positioning service deployed on the server enables positioning within the three-dimensional space map; coupling the visual positioning service with the extended reality framework improves three-dimensional positioning accuracy, and because nothing else needs to be deployed in the target environment during positioning, cost is greatly reduced.
Drawings
Fig. 1 is a block diagram illustrating a positioning system for a three-dimensional map according to an embodiment of the present disclosure;
FIG. 2 is an architectural diagram of a visual location services technique provided by one embodiment of the present specification;
FIG. 3 is a schematic diagram of an end cloud collaboration architecture in which a visual positioning service is coupled to an augmented reality framework provided by an embodiment of the present description;
fig. 4 is a flowchart of a method for positioning a three-dimensional space map according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of a positioning method applied to a three-dimensional space map of a speed skating training scene according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a positioning apparatus for a three-dimensional space map according to an embodiment of the present disclosure;
fig. 7 is a block diagram of a computing device according to an embodiment of the present disclosure.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present description. This description may be implemented in many ways other than those specifically set forth herein, and those skilled in the art will appreciate that the present description is susceptible to similar generalizations without departing from the scope of the description, and thus is not limited to the specific implementations disclosed below.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms "first", "second", etc. may be used herein to describe various information, the information should not be limited by these terms, which serve only to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of this specification, "first" may also be referred to as "second", and vice versa. Depending on context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
First, the noun terms to which one or more embodiments of the present specification relate are explained.
AR (Augmented Reality) navigation: a navigation mode that deeply combines a map and the camera of a phone or of AR glasses with AR and spatial mapping technology. The camera presents the real world on the screen, while virtual models such as cartoon characters and direction arrows are superimposed on the live image to guide pedestrians along the route.
Three-dimensional spatial mapping is achieved with a three-dimensional reconstruction device and a cloud algorithm engine. By recognizing the real environment through a smartphone or AR-glasses camera with an IMU (Inertial Measurement Unit), a user can obtain high-precision three-dimensional positioning in real time for accurate visual navigation; virtual navigation prompts superimposed on reality help the user quickly find a target parking space, track, meeting room, competition seat, shop, service desk, elevator, toilet, scenic spot, and so on, and can be combined with navigation, entertainment, or marketing. Application scenarios include training venues, exhibitions, launch events, malls, scenic spots, parks, exhibition halls, and museums.
Extended reality (XR): a general term for technologies such as AR (augmented reality), VR (virtual reality), and MR (mixed reality), in which a computer combines the real and the virtual to create an interactive environment. By fusing these visual interaction technologies, the experience gains a sense of immersion, with seamless transitions between the virtual world and the real world.
IMU (Inertial Measurement Unit): a unit that measures an object's three-axis attitude angles (or angular rates) and accelerations. An IMU contains three single-axis accelerometers and three single-axis gyroscopes: the accelerometers sense acceleration along the three independent axes of the carrier coordinate system, while the gyroscopes sense the carrier's angular rate relative to the navigation coordinate system. From the angular rates and accelerations measured in three-dimensional space, the object's attitude can be computed. IMUs have important applications in navigation.
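To make the accelerometer and gyroscope roles concrete, here is a standard textbook sketch (not from the patent): tilt angles recovered from a static accelerometer reading, fused with gyro integration by a complementary filter.

```python
import math


def tilt_from_accel(ax, ay, az):
    """Roll and pitch (radians) from a static accelerometer reading (m/s^2)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch


def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse short-term gyro integration with the long-term accelerometer angle."""
    return alpha * (prev_angle + gyro_rate * dt) + (1 - alpha) * accel_angle


# a device lying flat: gravity entirely along the z axis gives zero roll/pitch
roll, pitch = tilt_from_accel(0.0, 0.0, 9.81)
```

The accelerometer alone cannot observe yaw (rotation about gravity), which is why the gyroscopes and, in practice, magnetometers or visual cues are needed for a full attitude.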
GPS (Global Positioning System): a high-precision radio navigation and positioning system based on artificial earth satellites, capable of providing accurate geographic position, velocity, and precise time information anywhere on earth and in near-earth space.
VPS (Visual Positioning Service): a system or service that uses image information for positioning. A VPS deployed in the cloud can provide large-space three-dimensional maps, map POI (Point of Interest) recognition, map semantic services, and a global positioning service.
iBeacon: the iBeacon technology means that by using the bluetooth low energy technology, the iBeacon base station can automatically create a signal area, and when the device enters the area, the corresponding application program will prompt the user whether to access the signal network. The iBeacon is a low energy consumption bluetooth technology, the working principle is similar to the previous bluetooth technology, and the iBeacon transmits signals, and IOS (internet Operating System-Cisco, abbreviated IOS, also can be written as IOS, cisco network configuration System) equipment positions, receives and feeds back signals. Many corresponding indoor location technology applications can be made based on this simple location technology.
Simultaneous localization and mapping (SLAM): building a map of an unknown environment while simultaneously estimating the device's own position within it. The sensors currently used in SLAM fall mainly into two categories: lidar-based SLAM and visual SLAM (VSLAM).
Six degrees of freedom (6DoF): the ability not only to rotate about the X, Y, and Z axes but also to translate along them. A 6DoF XR device can reproduce all head movements. In addition, displacement data can be used to calibrate the user's height; wearing AR glasses that support 6DoF lets the user visually perceive the actual height of a target object, making the scene more realistic.
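A 6DoF pose is commonly represented as a 4x4 homogeneous transform combining three translations and three rotations. A simplified illustrative sketch (yaw-only rotation, not from the patent):

```python
import math


def pose_matrix(x, y, z, yaw):
    """4x4 homogeneous transform: translation (x, y, z) plus rotation about Z."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [
        [c, -s, 0.0, x],
        [s,  c, 0.0, y],
        [0.0, 0.0, 1.0, z],
        [0.0, 0.0, 0.0, 1.0],
    ]


def apply_pose(T, point):
    """Transform a 3D point by the pose matrix."""
    px, py, pz = point
    return tuple(T[i][0] * px + T[i][1] * py + T[i][2] * pz + T[i][3]
                 for i in range(3))


# move to (1, 2, 0) and turn 90 degrees: the point (1, 0, 0) maps to (1, 3, 0)
T = pose_matrix(1.0, 2.0, 0.0, math.pi / 2)
moved = apply_pose(T, (1.0, 0.0, 0.0))
```

A full 6DoF pose would use a general 3x3 rotation (e.g. from a quaternion) in the upper-left block instead of the yaw-only rotation shown here.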
AR tracking technology: to fuse the physical environment with virtual elements, AR glasses must be able to track the environment. Visual tracking techniques include image-based tracking, object-based tracking, and simultaneous localization and mapping (SLAM). Image-based tracking first preprocesses the tracked image to obtain a set of feature points, then matches frames in the video stream in real time. Object-based tracking is similar: the tracked object is preprocessed and its features stored, then frames in the video stream are analyzed in real time to compute the relative position between the tracking system and the tracked object. SLAM requires no preprocessing; positioning and mapping proceed together from startup, but unlike the first two methods it does not easily recover the relative position between the glasses and a specified object in the physical environment.
AR interaction technology: for AR glasses, the most convenient interaction modes are voice, gestures, and a remote handle. Gesture interaction lets a user input commands simply by making gestures, without extra interaction hardware, which has pushed AR-glasses interaction technology forward to some extent. The remote handle, as a traditional input means, is simple but guarantees accurate and efficient input, so AR-glasses products are still commonly shipped with a remote controller or touchpad as an interaction device.
OpenXR: openXR is an XR-specific application interface that provides high-performance access to Augmented Reality (AR) and Virtual Reality (VR) (collectively XR) platforms and devices.
Software Development Kit (SDK): broadly, a collection of related documents, examples, and tools that assist in developing a certain class of software; typically a set of development tools used by software engineers to build application software for a particular software package, software framework, hardware platform, or operating system.
In the present specification, a positioning system of a three-dimensional space map is provided, and the present specification relates to a positioning method and apparatus of a three-dimensional space map, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments one by one.
The embodiment of the positioning system for the three-dimensional space map provided by the specification is as follows:
fig. 1 is a block diagram illustrating a positioning system for a three-dimensional space map according to an embodiment of the present disclosure.
The positioning system of the three-dimensional space map comprises:
acquisition device 102, server 104, and user device 106;
the acquisition device 102 is configured to acquire environment basic data information of a target environment and send the environment basic data information to the server 104, wherein the environment basic data information includes multidimensional sensing data information;
the server 104 is configured to receive the environment basic data information, and generate a three-dimensional space map of the target environment according to the environment basic data information;
the user equipment 106 is configured to acquire multimedia data of the target environment and send the multimedia data to the server 104 based on an augmented reality framework;
the server 104 is further configured to determine location information of a user based on the three-dimensional space map and the multimedia data, and send the location information to the user device 106.
The acquisition device is a device that collects environment basic data information of the target environment and includes a sensor suite, such as a panoramic camera. The target environment is the environment where the user is located; for example, if the user is in store 1 of mall A and wants to reach store 2 but does not know the way, then mall A is the target environment.
The environment basic data information is data about the target environment collected by the acquisition device. It is multidimensional sensing data information, i.e., data of different dimensions, which may specifically include image information, distance information, position information, moving speed, and so on for the target environment.
The extended reality (XR) framework is an open, cross-terminal platform that combines the real and the virtual through computing and is compatible with most devices on the market. For example, the augmented reality framework may be an XR platform running on an XR chip installed in a smart device.
In practice, when the positioning system of the three-dimensional space map needs to be updated, the updated system only needs to be published to the XR platform. Because the XR platform runs on the XR chip, every device fitted with an XR chip receives the update, and devices of different models no longer need to be adapted one by one.
In this positioning system, the user equipment uploads the collected data to the server through the extended reality framework, so the system is not limited by device type: as long as a terminal runs the extended reality framework, it can use the positioning system provided in the embodiments of this specification. The system can therefore serve different terminals with broad cross-platform compatibility and support, resolving the current problems of limited terminal compatibility and low adaptability.
The server is used to generate the three-dimensional space map of the target environment from the environment basic data information and to position the user based on the visual positioning service. The three-dimensional space map is generated from the environment basic data information; following the example above, if the target environment is mall A, the three-dimensional space map is the three-dimensional space map of mall A.
Specifically, after the acquisition equipment acquires the environment basic data information of the target environment, the environment basic data information is sent to the server, and the server generates the three-dimensional space map of the target environment according to the environment basic data information.
The user equipment is a smart device that collects multimedia data of the target environment: a smart car, smartphone, smart glasses, smart watch, or other smart wearable with an image-capture function. The multimedia data can be understood as photos or videos of the target environment taken by the user with any smart device that has a shooting function; it is used to position the user within the target environment. Following the example above, when the target environment is mall A, the multimedia data is a photo or video of store 1 of mall A. The position information is the user's location within the target environment.
Specifically, in an embodiment of this specification, the server's three-dimensional mapping and three-dimensional positioning services are encapsulated as a visual positioning service. The user equipment collects multimedia data of the target environment and sends it to the server based on the augmented reality framework; the server then determines the user's position information from the generated three-dimensional space map, the multimedia data, and the visual positioning service, and sends it to the user equipment so that the user knows where they are.
In practice, users often do not know where a destination is or how to reach it, and typically turn to navigation and positioning. Conventional flat-map navigation leaves users with a weak sense of direction struggling to read and use the map; three-dimensional map navigation has therefore been gaining ground, but existing three-dimensional navigation is not accurate enough, so users still fail to find their destinations as expected.
Therefore, the present specification provides a positioning system for a three-dimensional space map, which acquires multi-dimensional environment basic data information of a target environment through an acquisition device, and uploads the environment basic data information to a server, and the server generates the three-dimensional space map of the target environment according to the environment basic data information after receiving the environment basic data information.
Compared with a traditional flat map, the three-dimensional space map presents the target environment to the user far more intuitively, solving the problem that some users cannot easily read a map. Because the map is generated from multidimensional environment basic data information, its accuracy improves, errors are reduced, and user experience is better.
The positioning system of the three-dimensional space map can also acquire multimedia data of a target environment through user equipment, upload the multimedia data to the server based on the augmented reality framework, and determine the position information of the user according to the generated three-dimensional space map, the multimedia data of the target environment and the visual positioning service and send the position information of the user to the user equipment.
In practical application, a user can obtain multimedia data of a target environment through user equipment, the user equipment uploads the multimedia data to a server based on an augmented reality framework, the server determines position information of the user according to a three-dimensional space map, the multimedia data of the target environment and a visual positioning service, and sends the position information to the user equipment, and at the moment, the user can determine the position of the user in the target environment based on the position information.
For example, user Y is in shop No. 1 of shopping mall A. User Y takes a picture of shop No. 1 with a mobile phone, and the mobile phone uploads the picture of shop No. 1 to the server through the augmented reality framework. After receiving the picture of shop No. 1, the server determines the position information of user Y in mall A based on the AR map of mall A, the picture of shop No. 1 and the visual positioning service, and sends that position information to the mobile phone, which then displays an AR map marked with user Y's position in shop No. 1.
With reference to fig. 2, fig. 2 is a schematic diagram of the architecture of the visual positioning service technology provided in this specification. The server receives the environment basic data information of the target environment, performs data preprocessing on it, and constructs a three-dimensional space map based on a large-scene mapping algorithm and service. Multiple positioning services based on vision, semantics, Bluetooth and GPS can be provided, so the accuracy of the generated three-dimensional space map is greatly improved. On the basis of the generated three-dimensional space map, a positioning algorithm, such as a 3D visual positioning algorithm, calculates the position of the user in real time to determine the user's position information, which is then delivered to the user equipment in the form of a Software Development Kit (SDK).
The process of determining the position information of the user is explained below, taking one positioning algorithm as an example. The server preprocesses the generated three-dimensional space map and divides it into a plurality of small maps, each containing a panoramic view of the target environment and views from different angles. Image retrieval is then performed over the small maps using the multimedia data of the target environment, and coarse positioning is performed according to the similarity between the multimedia data and each small map. Finally, local image features are extracted from each multimedia image frame and matched against the small maps with the highest similarity according to the similarity results, completing accurate positioning and thereby determining the position information of the user.
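The coarse-then-fine flow above — shortlist small maps by global similarity, then match local features against the shortlist — can be sketched as follows. The descriptors, feature sets, similarity measure and match threshold are illustrative assumptions, not the algorithm of this specification:

```python
# Sketch of coarse positioning (image retrieval over small maps) followed by
# accurate positioning (local feature matching). All data is hypothetical.

def similarity(a, b):
    """Cosine similarity between two global image descriptors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def coarse_localize(query_descriptor, sub_maps, top_k=2):
    """Rank the pre-divided small maps by global similarity to the query image."""
    scored = sorted(sub_maps, key=lambda m: -similarity(query_descriptor, m["descriptor"]))
    return scored[:top_k]

def fine_localize(query_features, candidate_maps, min_matches=3):
    """Match local features against the shortlisted small maps; return the best hit."""
    best = None
    for sub_map in candidate_maps:
        matches = len(query_features & sub_map["features"])  # set overlap as a stand-in
        if matches >= min_matches and (best is None or matches > best[1]):
            best = (sub_map["name"], matches)
    return best

sub_maps = [
    {"name": "atrium", "descriptor": [1.0, 0.1], "features": {"f1", "f2", "f3", "f9"}},
    {"name": "corridor", "descriptor": [0.2, 1.0], "features": {"f4", "f5"}},
    {"name": "shop_1", "descriptor": [0.9, 0.3], "features": {"f1", "f2", "f3", "f6"}},
]
candidates = coarse_localize([1.0, 0.2], sub_maps)       # coarse positioning
result = fine_localize({"f1", "f2", "f3"}, candidates)   # accurate positioning
print(result)  # → ('atrium', 3)
```

A production system would use learned global descriptors and geometric verification instead of set overlap, but the two-stage structure is the same.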
The positioning system of the three-dimensional space map provided by the specification comprises acquisition equipment, user equipment and a server, wherein the acquisition equipment is configured to acquire environment basic data information of a target environment and send the environment basic data information to the server, and the environment basic data information comprises multidimensional sensing data information; the server is configured to receive the environment basic data information and generate a three-dimensional space map of the target environment according to the environment basic data information; the user equipment is configured to acquire multimedia data of the target environment and send the multimedia data to the server based on an augmented reality framework; the server is further configured to determine position information of a user based on the three-dimensional space map, the multimedia data and a visual positioning service, and send the position information to the user equipment.
In the positioning system of the three-dimensional space map provided by this specification, the acquisition equipment acquires multi-dimensional environment basic data information of the target environment, and the server constructs the three-dimensional space map of the target environment from the environment basic data information of the different dimensions, which improves the accuracy of the three-dimensional space map. The interaction between the acquisition equipment and the server, and between the user equipment and the server, is realized through the augmented reality framework, giving the positioning system wide compatibility and solving the problems of low terminal compatibility and difficult adaptation. The visual positioning service deployed in the server enables positioning within the three-dimensional space map; coupling the visual positioning service with the augmented reality framework improves the accuracy of three-dimensional space positioning, and no additional equipment needs to be deployed in the target environment during positioning, greatly reducing cost.
In order to improve the accuracy of the three-dimensional space map, in an optional implementation manner provided by the embodiments of the present specification, the acquisition device 102 is further configured to acquire the environment basic data information through a sensor set platform running in the acquisition device 102, where the sensor set platform includes at least two of a vision sensor, a laser sensor, an inertial measurement unit, and a global positioning system.
The acquisition equipment includes a sensor set platform, which is a collective name for a set of multiple sensors; it includes at least two of a visual sensor, a laser sensor, an inertial measurement unit and a global positioning system. The visual sensor acquires image information of the target environment; the laser sensor acquires distance information in the target environment; the inertial measurement unit (IMU) measures the three-axis attitude angles and accelerations of buildings, objects and the like in the target environment; and the global positioning system provides accurate geographic location, movement speed and time information.
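As a minimal illustration of bundling one capture from such a sensor set platform, the following sketch uses assumed field names and units; they are not the data format of this specification:

```python
# Sketch of one multi-dimensional capture from the sensor set platform.
# Field names and units are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    timestamp: float                        # seconds, e.g. from the GPS clock
    image: bytes = b""                      # visual sensor: raw image data
    lidar_ranges: list = field(default_factory=list)  # laser sensor: distances (m)
    imu_attitude: tuple = (0.0, 0.0, 0.0)   # IMU: roll, pitch, yaw (rad)
    imu_accel: tuple = (0.0, 0.0, 0.0)      # IMU: acceleration (m/s^2)
    gps_fix: tuple = (0.0, 0.0)             # GPS: latitude, longitude

def dimensions_present(frame: SensorFrame) -> int:
    """Count sensor dimensions carrying data — the system requires at least two."""
    return sum([
        bool(frame.image),
        bool(frame.lidar_ranges),
        frame.imu_attitude != (0.0, 0.0, 0.0) or frame.imu_accel != (0.0, 0.0, 0.0),
        frame.gps_fix != (0.0, 0.0),
    ])

frame = SensorFrame(timestamp=1.5, lidar_ranges=[2.4, 2.5], gps_fix=(30.27, 120.16))
print(dimensions_present(frame))  # → 2
```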
The acquisition device provided in this specification acquires, through the sensor set platform, environment basic data information of a target environment, where the environment basic data information includes information of each dimension such as an image, a distance, an acceleration, a speed, and a time, and generates a three-dimensional space map by providing environment basic data information of different dimensions, so that accuracy of the three-dimensional space map can be improved.
Not all of the collected environment basic data information is used to generate the three-dimensional space map of the target environment; repeated or erroneous information inevitably exists in it. If the three-dimensional space map were generated directly from the raw collected environment basic data information, this repeated and erroneous information would have to be processed as well, prolonging the time needed to generate the map and reducing its accuracy. Therefore, to ensure the accuracy of the three-dimensional space map and improve the efficiency of generating it, the acquired environment basic data information needs to be preprocessed before the corresponding map is generated from it.
Based on this, in an optional implementation manner provided by the embodiments of the present specification, the server 104 is further configured to perform data preprocessing on the environment basic data information, obtain an initial three-dimensional space map of the target environment, and determine the three-dimensional space map based on the initial three-dimensional space map, where the accuracy of the three-dimensional space map is higher than that of the initial three-dimensional space map.
The initial three-dimensional space map is obtained by preprocessing the environment basic data information, and its accuracy is relatively low. Data preprocessing refers to operations performed on the environment basic data information such as data registration, coordinate correction and data simplification.
After receiving the environment basic data information, the server performs preprocessing such as data registration, coordinate correction and data simplification on the environment basic data information, and generates an initial three-dimensional space map of the target environment based on the preprocessed data information. After obtaining an initial three-dimensional space map of the target environment, determining the three-dimensional space map based on the initial three-dimensional space map.
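The preprocessing step described above (discarding erroneous and repeated records, coordinate correction, data simplification) might be sketched as follows; the record format, correction offset and simplification grid size are illustrative assumptions:

```python
# Sketch of data preprocessing on collected environment basic data:
# drop invalid records, correct coordinates, then simplify by grid cell.

def preprocess(records, offset=(0.0, 0.0, 0.0), grid=0.5):
    """Clean a list of (x, y, z) points before map construction."""
    seen, cleaned = set(), []
    for x, y, z in records:
        if any(v is None for v in (x, y, z)):
            continue                                        # discard erroneous record
        p = (x + offset[0], y + offset[1], z + offset[2])   # coordinate correction
        cell = tuple(round(v / grid) for v in p)            # simplification: one point per cell
        if cell not in seen:                                # drop repeated information
            seen.add(cell)
            cleaned.append(p)
    return cleaned

raw = [(1.0, 2.0, 0.0), (1.1, 2.1, 0.0), (None, 0.0, 0.0), (5.0, 5.0, 0.0)]
print(preprocess(raw, offset=(0.5, 0.0, 0.0)))  # → [(1.5, 2.0, 0.0), (5.5, 5.0, 0.0)]
```

Real pipelines would perform registration against reference scans rather than a fixed offset, but the clean-correct-simplify order is the same.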
Specifically, the process of obtaining the three-dimensional space map from the initial three-dimensional space map is as follows:
in an optional implementation manner provided by the embodiment of this specification, the server 104 is further configured to eliminate an accumulated error in the initial three-dimensional space map based on a loop detection method, and obtain the three-dimensional space map.
The loop detection method detects whether the trajectory in the map can be successfully closed, thereby reducing accumulated errors in the initial three-dimensional space map; a map that passes loop detection therefore has higher accuracy.
Specifically, the loop detection method can be implemented as follows:
determining a target key frame of the initial three-dimensional space map, determining a bag-of-words vector of the target key frame based on a bag-of-words model, determining a candidate image sequence of the target key frame, calculating the similarity between the target key frame and each candidate image frame, obtaining a loop pair according to the calculation result, and determining the three-dimensional space map under the condition that the verification of the loop pair is successful.
The target key frame is the key frame selected for loop detection from a key frame queue composed of a plurality of key frames. The bag-of-words model is a simplified representation model used in natural language processing and information retrieval; it converts the feature-point descriptors of one image frame into a bag-of-words description vector, which can be compared directly with that of another frame, or compared against a bag-of-words database to return the stored images most similar to it. The loop pair consists of the target key frame and the candidate image frame with the highest similarity to it.
Specifically, the target key frame is input into the bag-of-word model, a bag-of-word vector of the target key frame is obtained, a candidate image frame sequence of the target key frame is determined according to the bag-of-word vector, the similarity between the target key frame and each candidate image frame is calculated, the candidate image frame with the highest similarity and the target key frame form a loop pair, loop verification is performed, and the three-dimensional space map is determined under the condition that the loop pair verification is successful.
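The loop-detection flow above can be sketched with a toy bag-of-words model; the visual-word vocabulary, similarity measure and verification threshold are illustrative assumptions, not the implementation of this specification:

```python
# Sketch of bag-of-words loop detection: build a word vector for the target
# key frame, score candidate frames, and verify the best loop pair.
from collections import Counter

def bow_vector(words):
    """Bag-of-words vector: visual-word counts for one frame."""
    return Counter(words)

def bow_similarity(a, b):
    """Normalised overlap between two bag-of-words vectors."""
    inter = sum((a & b).values())          # Counter & Counter = min counts
    total = max(sum(a.values()), sum(b.values()))
    return inter / total if total else 0.0

def detect_loop(target_words, candidates, threshold=0.6):
    """Return the (frame_id, score) loop pair if the best candidate passes verification."""
    target = bow_vector(target_words)
    best_id, best_score = None, 0.0
    for frame_id, words in candidates.items():
        score = bow_similarity(target, bow_vector(words))
        if score > best_score:
            best_id, best_score = frame_id, score
    if best_score >= threshold:            # loop-pair verification (threshold check)
        return best_id, best_score
    return None

candidates = {
    "kf_10": ["door", "sign", "pillar"],
    "kf_42": ["door", "sign", "pillar", "plant"],
    "kf_77": ["window", "stairs"],
}
print(detect_loop(["door", "sign", "pillar", "plant"], candidates))  # → ('kf_42', 1.0)
```

In a real SLAM system the verification step would also check geometric consistency before the loop closure is used to eliminate accumulated error.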
By carrying out loop detection on the initial three-dimensional space map, the accumulated error of the initial three-dimensional space map can be eliminated, and the accuracy of the initial three-dimensional space map is improved.
In practical applications, a three-dimensional space map with location information labels may be needed in some cases, and one without location information labels in others. For example, when a user needs to find an exact place, a three-dimensional space map labeled with location information, such as seats, floors and doors, is required; when the user is in a navigation scene and only needs to know the direction of travel, a three-dimensional space map without location information labels can meet the requirement.
Therefore, in an optional implementation manner provided by the embodiments of the present specification, the server 104 is further configured to receive location labeling data of the target environment, and label the three-dimensional space map based on the location labeling data.
The position marking data refers to position data information used to label the three-dimensional space map. It can be entered directly and manually by relevant technicians, or acquired from a planar map; this specification places no limitation on the type or acquisition mode of the position marking data. The two modes, using the position marking data directly and acquiring it from a planar map, are described below as examples.
If the position marking data can be directly used, the position marking data can be directly marked on the three-dimensional space map. The position marking data can be obtained by the following steps:
in an optional implementation manner provided by the embodiment of the present specification, the server is further configured to receive an entry instruction for the location annotation data of the target environment, open an entry interface in response to the entry instruction, and receive the location annotation data for the target environment based on the entry interface.
For example, if the position of store No. 1 on the generated three-dimensional space map needs to be labeled, store No. 1 can be input at the position corresponding to store No. 1, and the display interface for inputting store No. 1 is the input interface of store No. 1.
Specifically, in order to perform position data annotation on each position in the target environment, the three-dimensional space map with the position annotation data can be generated by triggering an entry instruction of the position annotation data for the target environment, inputting the position annotation data of the corresponding position in the target environment in an opened entry interface, and completing the entry.
If the position marking data is acquired from the planar map, in an optional implementation manner provided in the embodiment of the present specification, the system further includes a planar map acquisition device;
the planar map acquisition device is configured to acquire a planar map corresponding to the target environment and send the planar map to the server 104, where the planar map carries position data;
the server 104 is further configured to receive the planar map, obtain position data in the planar map, and label the three-dimensional space map based on the position data of the planar map.
The planar map acquisition device is a device for acquiring a planar map corresponding to a target environment, and the planar map acquisition device may be the acquisition device, or may be another device with an acquisition function besides the acquisition device, which is not limited herein.
Specifically, the planar map acquisition device acquires a planar map corresponding to a target environment and sends the acquired planar map to the server, and the server receives the planar map corresponding to the target environment, acquires position data carried in the planar map, and marks the acquired position data on the three-dimensional spatial map.
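The transfer of position data from the planar map onto the three-dimensional space map might look like the following sketch, where the label names, coordinates and floor height are hypothetical:

```python
# Sketch of labeling the 3D map with position data taken from a planar map:
# each planar (name, x, y) label is lifted into the 3D frame at a floor height.

def label_3d_map(planar_labels, floor_height=0.0):
    """Lift each (name, x, y) planar label into the 3D map frame."""
    return {name: (x, y, floor_height) for name, x, y in planar_labels}

planar = [("shop_no_1", 12.0, 4.5), ("entrance", 0.0, 0.0)]
labels_3d = label_3d_map(planar, floor_height=3.0)
print(labels_3d["shop_no_1"])  # → (12.0, 4.5, 3.0)
```

In practice the planar and 3D maps would first be aligned by a registration transform rather than sharing axes directly; the dictionary of lifted labels then annotates the three-dimensional space map.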
The positioning system of the three-dimensional space map provided by this specification can generate a three-dimensional space map either with or without location information labels, meeting different user requirements. Because the position marking data can be acquired in different ways, the method of labeling the three-dimensional space map is flexible.
In an optional implementation manner provided by the embodiment of the present specification, the user equipment 106 is further configured to invoke an image capturing unit to capture image data of the target environment, where the image capturing unit is configured in the user equipment.
The image acquisition unit is a device with an image acquisition function, such as a camera. The image data refers to data displayed in the form of an image, such as a photograph taken by a camera, or a video frame included in a taken video. Specifically, the user equipment acquires image data of the target environment by calling the image acquisition unit. Taking user equipment as an example of a smart phone, a user can shoot a target environment through a camera in the smart phone to obtain a picture or a video of the target environment.
Based on the image data of the target environment acquired by the user equipment, the server can further determine the position information of the user on the basis of the three-dimensional space map and the visual positioning service, so that the positioning function of the positioning system of the three-dimensional space map is accurately realized.
Further, the positioning system of the three-dimensional space map provided in this specification may be applied in a navigation scene, and therefore, in an optional implementation manner provided in this specification, the user device 106 is further configured to determine a current location of the user based on the location information, receive an entry instruction, determine a target location in response to the entry instruction, determine a navigation path according to the target location and the current location, and generate and display an AR navigation image based on the navigation path.
The target position refers to the end position of a navigation, that is, the position entered by the user into the user equipment. The entry instruction is an instruction generated by the user triggering the target position entry control, or an instruction carrying the target position obtained from voice information. The navigation path is the path from the current position to the target position; for example, if the current position is P and the target position is P1, the corresponding navigation path runs from P to P1.
Specifically, the user equipment determines the current position of the user according to the position information, determines the target position in response to an input instruction of the target position, generates a navigation path from the current position to the target position based on the current position and the target position, and displays the navigation path in the user equipment in the form of an AR navigation image.
Taking user equipment as an intelligent mobile phone as an example, the intelligent mobile phone receives position information sent by a server and determines that the current position of a user is P, determines that the target position is P1 in response to an input instruction of the target position P1, generates a navigation path from the current position P to the target position P1 according to the current position P and the target position P1, and displays the navigation path in the intelligent mobile phone in the form of an AR navigation image.
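The path computation from the current position P to the target position P1 could be sketched as below; the grid map of walkable cells and the breadth-first routing are illustrative assumptions, not the navigation algorithm of this specification:

```python
# Sketch of computing a navigation path from the current position to the
# entered target position over a walkable grid derived from the map.
from collections import deque

def navigation_path(grid, start, goal):
    """Breadth-first search over walkable cells; returns the cell path start→goal."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:      # backtrack from goal to start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and nxt not in came_from:
                came_from[nxt] = cur
                queue.append(nxt)
    return None  # no route between the two positions

walkable = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)}          # corridor cells
path = navigation_path(walkable, start=(0, 0), goal=(2, 2))  # P → P1
print(path)  # → [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```

The resulting cell path is what would then be rendered over the camera feed as the AR navigation image.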
In an actual AR navigation scenario, a user may put down the smart device or turn off the camera; in that case the running navigation is forcibly interrupted, and resuming positioning or navigation requires the user to manually refresh, relocate and continue navigating. Relocation can take several seconds or even minutes, making use of the system time-consuming and degrading the user experience.
The positioning system of the three-dimensional space map provided in this specification may set point locations on the planar map of the target environment using the iBeacon technique and assign each point location an identifier. The user equipment acquires its position in the planar map through the point location identifier, determines the corresponding position in the three-dimensional space map from it, completes the relocation, and again displays the position in the user equipment in the form of an AR navigation image. By combining the iBeacon technique with AR navigation, the positioning system provided by this specification can improve the efficiency of determining the user's position information and improve the user experience.
When the positioning system of the three-dimensional space map provided by this specification is applied to a navigation scene, a realistic AR navigation image can be provided to the user in real time. Because the three-dimensional space map generated by this positioning system has higher accuracy, the user experience is improved.
In order to facilitate use by the user and reduce network delay, the positioning system of the three-dimensional space map may include edge servers. After the server generates the three-dimensional space map, the map is distributed down to the edge servers; in practical application, the corresponding edge server can be determined according to the network location information of the user equipment, and that edge server performs the positioning of the user's location.
In an optional implementation manner provided by the embodiments of the present specification, the server 104 includes a cloud server and a plurality of edge servers;
the cloud server is configured to acquire network location information of the user equipment 106, determine a target edge server from the plurality of edge servers based on the network location information, and send the three-dimensional space map to the target edge server;
the target edge server is configured to receive and store the three-dimensional space map;
the user equipment 106 is further configured to obtain multimedia data of the target environment and send the multimedia data to the target edge server based on the augmented reality framework;
the target edge server is further configured to determine location information of a user based on the three-dimensional space map and the multimedia data, and send the location information to the user device 106.
Here, an edge server is a computer located at the logical "edge" of the network; the network location information may be understood as information for determining the network location of the user equipment, such as an IP address or a MAC address.
In practical applications, a plurality of edge servers are distributed in a network environment, and in order to reduce network delay and improve response rate, an edge server closest to a user equipment needs to be selected as a target edge server.
Specifically, the cloud server acquires the network location information of the user equipment and determines the edge server closest to the user equipment's network location (that is, its logical location in the network) as the target edge server. The cloud server sends the generated three-dimensional space map to the target edge server, and the target edge server receives and stores the three-dimensional space map.
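The selection of the target edge server "closest" to the user equipment might be sketched as follows; measuring logical closeness by shared IPv4-prefix length, and all names and addresses shown, are illustrative assumptions:

```python
# Sketch of picking the target edge server nearest to the user equipment's
# network location, using shared IP-prefix length as a closeness proxy.

def shared_prefix_len(ip_a, ip_b):
    """Number of leading IPv4 octets two addresses share."""
    n = 0
    for x, y in zip(ip_a.split("."), ip_b.split(".")):
        if x != y:
            break
        n += 1
    return n

def pick_target_edge(user_ip, edge_servers):
    """Return the (name, ip) edge server sharing the longest prefix with the user."""
    return max(edge_servers, key=lambda name_ip: shared_prefix_len(user_ip, name_ip[1]))

edges = [("edge_M", "10.1.2.7"), ("edge_N", "10.9.0.3"), ("edge_K", "172.16.0.1")]
target = pick_target_edge("10.1.2.200", edges)
print(target[0])  # → edge_M
```

A deployed system would more likely rank edge servers by measured round-trip latency or an anycast/DNS scheme; the prefix heuristic only illustrates the selection step.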
At this time, the user equipment does not send the multimedia data of the target environment to the server any more after acquiring the multimedia data of the target environment, but sends the multimedia data of the target environment to the target edge server, and the target edge server determines the position information of the user based on the stored three-dimensional space map, the multimedia data of the target environment and the visual positioning service and sends the position information of the user to the user equipment.
For example, after acquiring network location information of the user equipment S, the cloud server determines that the target edge server is M according to the network location information, and sends the three-dimensional space map to the target edge server M, the target edge server receives multimedia data of a target environment uploaded by the user equipment S, determines location information of the user according to the multimedia data, the three-dimensional space map and the visual positioning service, and sends the location information to the user equipment S, so that the user can know the location of the user in the target environment.
In the above embodiment provided by this specification, by introducing edge servers, the three-dimensional space map can be called from an edge server after it is generated, which reduces network delay compared with calling it from the central server.
After three-dimensional space positioning is achieved, the obtained position information has further value and uses, and can deliver different application value in different application scenarios.
In an optional implementation manner provided by the embodiment of the present specification, the user equipment 106 is further configured to receive the location information, determine a current location of the user based on the location information, and acquire and display corresponding reference information according to the current location of the user, where the reference information includes preset reference information and/or real-time reference information, the reference information is information for reference of the user, the preset reference information is reference information entered in the user equipment in advance, and the real-time reference information is reference information displayed in real time according to the real-time location of the user.
The reference information refers to information which is used for a user to refer to and make corresponding changes according to the reference information, and the reference information can be preset reference information or real-time reference information; correspondingly, the preset reference information is reference information which is input in the user equipment in advance, and the real-time reference information is reference information which is displayed in real time according to the real-time position of the user.
For example, in an application scenario of speed skating training, the posture and action information of a virtual coach entered into the AR glasses worn by an athlete is preset reference information; it can be displayed in real time according to the position where the athlete is gliding, prompting the athlete to correct an incorrect gliding manner or action in time. The athlete's own posture and action information, displayed in the AR glasses in real time as the athlete glides, is real-time reference information.
For another example, in an automobile driving scenario, pre-entered information such as the different driving speeds corresponding to different road sections is preset reference information, used to prompt the driver to slow down; the actual real-time driving speed of the automobile is real-time reference information.
Specifically, the user equipment determines the current position of the user based on the position information, and acquires and displays the corresponding reference information in real time according to the current position of the user.
Taking the speed skating training scene as an example: the AR glasses worn by the athlete during training determine, from the position information, that the athlete is gliding through a curve, and obtain and display the virtual coach's training posture for gliding at that curve; the athlete can then make corresponding adjustments according to the virtual coach's training posture.
Alternatively, the training position of the virtual trainer and the real-time gliding position of the athlete may be displayed simultaneously on the AR glasses.
Taking a driving scene of the vehicle as an example for explanation, the instrument panel acquires and displays the standard driving speed of the current position and the actual real-time speed of the driving of the vehicle according to the driving position of the vehicle, and the driver can appropriately adjust the actual real-time speed according to the standard driving speed and the actual real-time speed.
The positioning system of the three-dimensional space map provided by this specification is not limited to positioning or navigation; it can also prompt the user to adjust the current operation state accordingly, improving the user experience.
In practical applications, such as fitness scenes, speed skating scenes, etc., there is also a need for analyzing and summarizing real-time data, so as to make a better use plan for subsequent use.
In an optional implementation manner provided by the embodiment of the present specification, the positioning system of the three-dimensional space map further includes a data analysis device;
the user equipment 106 is further configured to obtain data information to be analyzed of the user and send the data information to be analyzed to the data analysis equipment;
the data analysis equipment is configured to perform data analysis on the data information to be analyzed and obtain a data analysis result.
The data analysis device may be a smart device with a data analysis function, for example, a smart phone, a tablet computer, a notebook computer, and the like. The data information to be analyzed refers to data information that needs to be analyzed, for example, training data of athletes, fitness data of fitness enthusiasts, and the like.
For example, when user Zhang San runs outdoors, the running data generated in real time during the run can be the data information to be analyzed, as can Zhang San's historical running data. Taking the outdoor running scene as a specific example: while Zhang San runs outdoors, the real-time running data generated can be collected by the smart bracelet Zhang San wears while running, and the bracelet sends the collected running data to Zhang San's mobile phone; after receiving the running data, the phone performs data analysis on it and outputs the data analysis result. After the run, Zhang San can view the data information generated while running, as well as the data analysis result, on the phone. The user can use the generated data analysis result as reference information for making a subsequent running plan, so that the user can run healthily and reasonably.
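The phone-side analysis of the bracelet's running data might be sketched as below; the per-minute sample format and the summary metrics are illustrative assumptions:

```python
# Sketch of analyzing running data on the phone: per-minute
# (distance_m, heart_rate) samples are summarised into a report.

def analyze_run(samples):
    """Summarise per-minute (distance_m, heart_rate) samples."""
    if not samples:
        return {"distance_m": 0, "minutes": 0, "avg_pace_min_per_km": None, "avg_hr": None}
    distance = sum(d for d, _ in samples)
    minutes = len(samples)
    return {
        "distance_m": distance,
        "minutes": minutes,
        "avg_pace_min_per_km": round(minutes / (distance / 1000), 2),
        "avg_hr": round(sum(hr for _, hr in samples) / minutes, 1),
    }

run = [(200, 140), (210, 150), (190, 155)]   # three one-minute samples
report = analyze_run(run)
print(report)  # → {'distance_m': 600, 'minutes': 3, 'avg_pace_min_per_km': 5.0, 'avg_hr': 148.3}
```

The same summarisation shape applies to the gliding (skating) data analysis described next, with different metrics.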
Further, taking an application scenario as an example of speed skating, in an alternative implementation provided by the embodiment of the present specification,
the user device 106 is further configured to obtain sliding data information of the user and send the sliding data information to the data analysis device, wherein the sliding data information is training data generated in the sliding training process;
the data analysis device is configured to perform data analysis on the sliding data information, and obtain and display a sliding data analysis result of the user.
The sliding data information refers to the sliding training data generated by a user while skating at a speed skating venue, and the sliding data analysis result refers to the data analysis result generated after the user's sliding data information is analyzed.
Based on the same principle as the above, after obtaining the sliding data information of the user, the user device sends the sliding data information of the user to the data analysis device, and the data analysis device performs data analysis on the sliding data information of the user and generates a data analysis result corresponding to the sliding data information.
The positioning system of the three-dimensional space map provided by this specification can perform data analysis on the user's data information to be analyzed and output a data analysis result for subsequent use, providing convenience to the user.
The positioning system of the three-dimensional space map provided by the specification comprises acquisition equipment, user equipment and a server, wherein the acquisition equipment is configured to acquire environment basic data information of a target environment and send the environment basic data information to the server, and the environment basic data information comprises multidimensional sensing data information; the server is configured to receive the environment basic data information and generate a three-dimensional space map of the target environment according to the environment basic data information; the user equipment is configured to acquire multimedia data of the target environment and send the multimedia data to the server based on an augmented reality framework; the server is further configured to determine position information of a user based on the three-dimensional space map, the multimedia data and a visual positioning service, and send the position information to the user equipment.
In the positioning system of the three-dimensional space map provided by this specification, the acquisition equipment collects multi-dimensional environment basic data information of the target environment, and the server constructs the three-dimensional space map of the target environment from environment basic data information of different dimensions, which can improve the accuracy of the three-dimensional space map. The interaction between the acquisition equipment and the server, and between the user equipment and the server, is realized through the augmented reality framework, giving the positioning system of the three-dimensional space map wide compatibility and solving the problems of low terminal compatibility and difficult adaptation. The visual positioning service deployed in the server enables positioning within the three-dimensional space map. Coupling the visual positioning service with the augmented reality framework can improve the accuracy of three-dimensional space positioning, and during positioning no additional facilities need to be deployed in the target environment, which greatly reduces cost.
An edge server can be deployed in the positioning system of the three-dimensional space map provided by this specification; deploying an edge server reduces network delay and improves response rate. A data analysis device can also be deployed in the positioning system to analyze the data information that needs analysis during positioning and output data analysis results for users, improving the user experience.
The following description will further describe the positioning system of the three-dimensional space map by taking an application scenario of the positioning system of the three-dimensional space map provided in the present specification in speed skating training as an example, with reference to fig. 3. Fig. 3 illustrates a schematic diagram of an end cloud collaboration architecture in which a visual positioning service and an augmented reality framework are coupled according to an embodiment of the present disclosure.
The positioning system of the three-dimensional space map comprises:
panoramic camera 302, server 304, and AR glasses 306;
the panoramic camera 302 is configured to collect environment basic data information of a speed skating training ground through a sensor set platform running in the panoramic camera 302 and send the environment basic data information to the server 304, wherein the environment basic data information comprises multidimensional sensing data information;
the server 304 is configured to receive the environment basic data information, perform data preprocessing on the environment basic data information, obtain an initial three-dimensional space map of the speed skating training field, determine a three-dimensional space map of the speed skating training field based on the initial three-dimensional space map, receive position labeling data of the speed skating training field, and label the three-dimensional space map based on the position labeling data;
the AR glasses 306 are configured to invoke an AR tracking camera to collect image data of the speed skating training field, and send the image data to the server 304 based on the augmented reality framework;
the server 304, further configured to determine location information of an athlete based on the three-dimensional spatial map, the image data, and a visual positioning service, and send the location information to the AR glasses 306;
the AR glasses 306 are further configured to receive the position information, determine a current position of the athlete based on the position information, and acquire and display corresponding reference information according to the current position of the athlete, wherein the reference information includes preset reference information and/or real-time reference information.
Specifically, the panoramic camera collects the environmental basic data information of the speed skating training field through a sensor set platform running in the panoramic camera and sends the environmental basic data information to the server; the server carries out data preprocessing on the environment basic data information, obtains an initial three-dimensional space map of the speed skating training field according to the preprocessed data information, and determines the three-dimensional space map based on the initial three-dimensional space map.
An athlete can wear the AR glasses during sliding training. The AR glasses collect image data of the speed skating training field by calling the AR tracking camera and upload the image data to the server through the augmented reality framework. The server determines the athlete's position information, for example, that the athlete is sliding at a curve, based on the generated three-dimensional space map, the image data, and the visual positioning service deployed in the server, and sends the position information to the AR glasses worn by the athlete; corresponding reference information is then obtained based on the athlete's position information and displayed on the lenses of the AR glasses.
For example, when the athlete is in a curve position, real-time sliding information of the athlete is displayed on the lens of the AR glasses, which may include information of the sliding posture, the sliding speed, the heart rate, and the like of the athlete, and the sliding posture of the virtual trainer in the curve position may also be displayed on the lens of the AR glasses.
Accordingly, the athlete's own information and the virtual coach's information displayed on the AR glasses lenses allow the athlete to know his or her sliding information in real time and to make corresponding adjustments according to the virtual coach's sliding posture. Training while wearing AR glasses allows an athlete's training to be completed with reduced manpower and material resources.
Further, in order to reduce the influence of network delay, and to facilitate a coach's analysis of the athlete's training data and formulation of a subsequent training plan, an edge server and a data analysis device may be deployed in the positioning system of the three-dimensional space map. In an optional implementation manner provided by an embodiment of the present specification, the server 304 includes a cloud server and a plurality of edge servers, and the positioning system of the three-dimensional space map further includes a data analysis device;
the cloud server is further configured to acquire network location information of the AR glasses 306, determine a target edge server from the plurality of edge servers based on the network location information, and send the three-dimensional space map to the target edge server;
the target edge server is configured to receive and store the three-dimensional space map;
the AR glasses 306 are further configured to acquire image data of the speed skating training field and send the image data to the target edge server based on the augmented reality framework;
the target edge server further configured to determine location information of an athlete based on the three-dimensional spatial map, the image data, and a visual positioning service, and send the location information to the AR glasses 306;
the AR glasses 306 further configured to receive the position information, determine a current position of the athlete based on the position information, and acquire and display corresponding reference information according to the current position of the athlete, wherein the reference information includes preset reference information and/or real-time reference information;
the AR glasses 306 are further configured to acquire data information to be analyzed of the athlete and send the data information to be analyzed to the data analysis device;
the data analysis equipment is configured to perform data analysis on the data information to be analyzed and obtain a data analysis result.
Specifically, the cloud server determines the target edge server according to the network location information of the AR glasses and sends the generated three-dimensional space map of the speed skating training field to the target edge server. After the AR glasses acquire image data of the speed skating training field, the image data is sent to the target edge server through the augmented reality framework, and the target edge server determines the athlete's sliding position information according to the three-dimensional space map, the image data, and the visual positioning service. After receiving the sliding position information, the AR glasses determine the athlete's sliding position in the training field, acquire the athlete's real-time sliding information and the virtual coach's sliding posture according to that position, and display them on the lenses of the AR glasses, so that the athlete can adjust his or her own sliding posture in time according to the virtual coach's sliding posture.
In the process, the AR glasses can also send training data of the athlete, which need to be subjected to data analysis, to the data analysis equipment, the data analysis equipment performs data analysis on the training data and obtains a data analysis result, and the data analysis result can be used for a coach to give corresponding training guidance to the athlete in the subsequent training process or make a more appropriate training plan for the athlete.
In an alternative implementation provided by the embodiments of the present disclosure, the athlete can also communicate with a coach, team members, and the like through the voice interaction function of the AR glasses.
The positioning system of the three-dimensional space map provided by this specification can be applied to a speed skating training scenario. Using the positioning system of the three-dimensional space map for speed skating training can replace the role of the trainer currently required, reducing manpower and material resources while improving the efficiency of collecting athletes' training data and the training effect.
Referring to fig. 4, fig. 4 is a flowchart illustrating a positioning method of a three-dimensional space map according to an embodiment of the present disclosure, which specifically includes the following steps.
Step 402: and receiving the environment basic data information uploaded by the acquisition equipment, and generating a three-dimensional space map of the target environment according to the environment basic data information.
In an optional implementation manner provided by the embodiment of the present specification, the positioning method of the three-dimensional space map is applied to a server. Specifically, the server receives the environment basic data information of the target environment uploaded by the acquisition device, and generates a three-dimensional space map of the target environment according to the environment basic data information.
In practical applications, not all of the acquired environment basic data information can be used to generate the three-dimensional space map of the target environment; repeated or erroneous information inevitably exists in the environment basic data information. If the three-dimensional space map were generated directly from the acquired environment basic data information, this repeated or erroneous information would also be processed, which would not only prolong the time needed to generate the three-dimensional space map but also reduce its accuracy. Therefore, to ensure the accuracy of the three-dimensional space map and to improve the efficiency of generating it, the acquired environment basic data information needs to be preprocessed before the corresponding three-dimensional space map is generated.
Based on this, in an optional implementation manner provided by the embodiments of this specification, the generating a three-dimensional space map of a target environment according to the environment basic data information includes:
carrying out data preprocessing on the environment basic data information to obtain an initial three-dimensional space map of the target environment;
determining the three-dimensional space map based on the initial three-dimensional space map, wherein the three-dimensional space map has a higher accuracy than the initial three-dimensional space map.
Specifically, after receiving the environment basic data information, the server performs preprocessing such as data registration, coordinate correction, data simplification and the like on the environment basic data information, and generates an initial three-dimensional space map of the target environment based on the preprocessed data information. After obtaining an initial three-dimensional space map of the target environment, determining the three-dimensional space map based on the initial three-dimensional space map.
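The "data simplification" part of this preprocessing can be illustrated with a voxel-grid thinning pass, which collapses near-duplicate points before map building. This is a minimal sketch under assumed data shapes; real preprocessing would also include the registration and coordinate-correction steps mentioned above.

```python
# Hedged sketch of the "data simplification" preprocessing step: a voxel-grid
# pass that keeps one representative point per grid cell, so duplicated
# measurements of the same spot do not slow down map generation. All names
# and the cell size are illustrative assumptions.

def voxel_downsample(points, voxel=0.5):
    """Keep one representative point per voxel cell (points: (x, y, z) tuples)."""
    cells = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        cells.setdefault(key, (x, y, z))  # first point wins per cell
    return list(cells.values())

raw = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.0), (1.7, 0.1, 0.0), (1.8, 0.2, 0.0)]
print(len(voxel_downsample(raw)))  # near-duplicate points collapse to 2
```

Production pipelines typically use a dedicated point-cloud library for this step rather than hand-rolled code.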
In an optional implementation manner provided by the embodiment of this specification, determining the three-dimensional space map based on the initial three-dimensional space map includes:
and eliminating accumulated errors in the initial three-dimensional space map based on a loop detection method to obtain the three-dimensional space map.
Loop detection is a method for checking whether the map can be successfully closed, thereby reducing accumulated errors in the initial three-dimensional space map; a map that passes loop detection has higher accuracy. The loop detection method has already been described in detail for the positioning system of the three-dimensional space map and is not repeated here.
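A toy sketch of the idea: when the estimated trajectory revisits its starting point, the residual at the loop closure measures the accumulated drift, which can then be distributed back over the poses. A real SLAM backend would run a pose-graph optimization; this linear redistribution is only meant to show the effect of closing the loop, and all values are illustrative.

```python
# Toy sketch of loop-closure error elimination: the gap between the first
# and last pose of a closed loop is the accumulated drift, spread back
# linearly along the trajectory. Illustrative only, not a real backend.

def close_loop(positions):
    """positions: estimated (x, y) poses where the last should equal the first."""
    n = len(positions) - 1
    dx = positions[0][0] - positions[-1][0]
    dy = positions[0][1] - positions[-1][1]
    corrected = []
    for i, (x, y) in enumerate(positions):
        t = i / n  # distribute the closure residual linearly along the path
        corrected.append((x + t * dx, y + t * dy))
    return corrected

path = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2), (1.0, 0.3), (0.2, 0.4)]
fixed = close_loop(path)
print(fixed[-1])  # the corrected trajectory ends back at the start
```

The same principle, applied over a full pose graph, is what removes accumulated error from the initial three-dimensional space map.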
Step 404: and receiving the multimedia data of the target environment uploaded by the user equipment based on the augmented reality framework.
Step 406: and determining the position information of the user based on the three-dimensional space map and the multimedia data, and sending the position information to the user equipment.
Specifically, in an embodiment provided in this specification, a three-dimensional space mapping and a three-dimensional space positioning service in a server are encapsulated into a visual positioning service, and the server can determine location information of a user according to a generated three-dimensional space map, multimedia data, and the visual positioning service, and send the location information to a user device, so that the user knows the location information.
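A minimal sketch of such a visual positioning lookup, assuming the map stores keyframes as (descriptor, pose) pairs: the uploaded image's descriptor is matched against the stored keyframes and the pose of the best match is returned. A production service would refine this with 2D-3D feature matching and pose solving; the descriptor vectors and poses here are stand-ins.

```python
# Hedged sketch of the visual positioning idea: retrieve the stored keyframe
# whose descriptor best matches the uploaded image, and return its pose.
# Descriptors are stand-in vectors; a real VPS would refine with PnP.

def locate(query_desc, keyframes):
    """keyframes: list of (descriptor, (x, y, z) pose) stored with the map."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    best = min(keyframes, key=lambda kf: dist(kf[0], query_desc))
    return best[1]

map_keyframes = [
    ((0.9, 0.1, 0.0), (0.0, 0.0, 0.0)),   # e.g. near the start line
    ((0.1, 0.8, 0.2), (25.0, 4.0, 0.0)),  # e.g. at the curve
]
print(locate((0.2, 0.7, 0.1), map_keyframes))
```

The returned pose is what the server would send back to the user equipment as position information.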
In practical application, a three-dimensional space map with location labels may be needed in some cases, while a three-dimensional space map without location labels is sufficient in others. For example, when a user needs to find a specific place accurately, a three-dimensional space map labeled with location information such as seats, floors, and doors is needed; when the user is in a navigation scenario and only needs to know the direction of travel, a three-dimensional space map without location labels can meet the user's needs.
Therefore, in an optional implementation manner provided by the embodiments of this specification, the method further includes:
receiving location annotation data of the target environment;
and marking the three-dimensional space map based on the position marking data.
Specifically, after generating the three-dimensional space map of the target environment, the server receives the position marking data of the target environment, and marks the three-dimensional space map according to the position marking data.
Labeling the three-dimensional space map piece by piece for each item of position information involves a relatively large workload. When a plan map of the target environment is available, labeling can be performed directly from the plan map, which improves the labeling efficiency of the three-dimensional space map.
In an optional implementation manner provided by the embodiment of the present specification, the receiving the position annotation data of the target environment includes:
receiving a plan map of the target environment;
correspondingly, the three-dimensional space map is marked based on the position marking data, and the method comprises the following steps:
and acquiring position data in the plane map, and labeling the three-dimensional space map based on the position data of the plane map.
Specifically, position data in a planar map of the target environment is extracted, and the three-dimensional space map is labeled according to the position data of the planar map.
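The plan-map labeling step can be sketched as a coordinate transform from plan coordinates to map coordinates. The scale, origin, and place names below are illustrative assumptions; an actual system would calibrate the transform against the generated map.

```python
# Hedged sketch of labeling the 3D map from a 2D plan: plan coordinates are
# mapped into map coordinates by an assumed scale and offset, then attached
# as annotations. The transform parameters and names are illustrative.

def plan_to_map(plan_xy, scale=0.05, origin=(10.0, 20.0)):
    """Convert plan pixel coordinates to map metres (z fixed at floor level)."""
    px, py = plan_xy
    return (origin[0] + px * scale, origin[1] + py * scale, 0.0)

annotations = {}
for name, pixel in {"gate A": (100, 40), "seat 12": (300, 80)}.items():
    annotations[name] = plan_to_map(pixel)
print(annotations["gate A"])
```

Each annotated position can then be attached to the three-dimensional space map as a location label.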
The positioning system of the three-dimensional space map provided by this specification can generate a three-dimensional space map without location labels as well as one with location labels, meeting different user needs. Since the position annotation data can be acquired in different ways, the method of labeling the three-dimensional space map is flexible.
In order to facilitate use by the user and reduce network delay, the server can also send the generated three-dimensional space map to a corresponding edge server; in practical applications, the edge server can carry out the subsequent use of the three-dimensional space map.
In an optional implementation manner provided by the embodiments of this specification, the method further includes:
acquiring network position information of the user equipment;
and determining a target edge server based on the network position information, and sending the three-dimensional space map to the target edge server.
Specifically, the server determines a target edge server according to the network location information of the user equipment, and sends the generated three-dimensional space map to the target edge server, and the target edge server may store the three-dimensional space map for subsequent use.
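Target-edge-server selection from network location information might look like the following sketch, where "network location" is reduced to a region string plus a measured round-trip time. All server names and the selection rule are assumptions for illustration.

```python
# Sketch of target-edge-server selection from a device's network location.
# "Network location" is reduced here to a region string and a measured RTT;
# a real deployment would use richer signals. Names are assumptions.

def pick_edge_server(device_region, servers):
    """servers: list of (name, region, rtt_ms); prefer same region, then lowest RTT."""
    same = [s for s in servers if s[1] == device_region]
    pool = same if same else servers
    return min(pool, key=lambda s: s[2])[0]

edges = [("edge-bj", "north", 18.0), ("edge-sh", "east", 9.0), ("edge-gz", "south", 30.0)]
print(pick_edge_server("north", edges))  # same-region server wins despite higher RTT
```

Once selected, the target edge server receives and stores the three-dimensional space map for subsequent positioning requests.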
According to the positioning method of the three-dimensional space map, receiving the multi-dimensional environment basic data information uploaded by the acquisition equipment and constructing the three-dimensional space map of the target environment from environment basic data information of different dimensions can improve the accuracy of the three-dimensional space map. The interaction between the user equipment and the server is realized through the augmented reality framework, giving the positioning system of the three-dimensional space map wide compatibility and solving the problems of low terminal compatibility and difficult adaptation. Positioning within the three-dimensional space map is realized through the visual positioning service deployed in the server. Coupling the visual positioning service with the augmented reality framework can improve the accuracy of three-dimensional space positioning, and during positioning no additional facilities need to be deployed in the target environment, which greatly reduces cost. The server can also determine a target edge server according to the network location information of the user equipment; after the three-dimensional space map is generated, the target edge server interacts with the user equipment, which reduces network delay and improves response rate.
The following will further describe the positioning method of the three-dimensional space map by taking an application of the positioning method of the three-dimensional space map provided in the present specification in a speed skating scene as an example with reference to fig. 5. Fig. 5 is a flowchart illustrating a processing procedure of a positioning method applied to a three-dimensional space map of a speed skating training scene according to an embodiment of the present specification, where the positioning method of the three-dimensional space map is applied to a server, and specifically includes the following steps.
Step 502: and receiving the environment basic data information of the speed skating training field uploaded by the three-dimensional space reconstruction acquisition equipment.
Specifically, image information, distance information, acceleration information, moving speed, time information and the like of a speed skating training field uploaded by the panoramic camera are received.
Step 504: and carrying out data preprocessing on the environment basic data information to obtain an initial three-dimensional space map of the speed skating training field.
Specifically, information such as image information, distance information, acceleration information, moving speed and time information is subjected to data registration, coordinate correction, data simplification and the like, and an initial three-dimensional space map of the speed skating training field is obtained according to the processed data.
Step 506: and eliminating accumulated errors in the initial three-dimensional space map based on a loop detection method to obtain the three-dimensional space map.
Specifically, an accumulative error in the initial three-dimensional space map is eliminated by using a loop method, and the three-dimensional space map of the speed skating training field is obtained.
Step 508: and receiving the multimedia data of the speed skating training field uploaded by the AR glasses based on the augmented reality frame.
In the training process, the athlete can wear the AR glasses to train, and specifically, receives image data or video data of a speed skating training field uploaded by the AR glasses based on the augmented reality frame.
Step 510: and determining the position information of the athlete based on the three-dimensional space map, the multimedia data and the visual positioning service, and sending the position information to the AR glasses.
Specifically, sliding position information of the athlete is determined based on a three-dimensional space map, image data or video data and a visual positioning service, and the sliding position information is sent to the AR glasses.
When the positioning method of the three-dimensional space map provided by this specification is applied to a speed skating training scenario, receiving the environment basic data information of the speed skating training field and generating a three-dimensional space map of the field improves the accuracy of the three-dimensional space map. By receiving the multimedia data of the speed skating training field uploaded by the AR glasses, the athlete's sliding position can be determined through the three-dimensional space map, the multimedia data, and the visual positioning service, which brings convenience to speed skating training.
Corresponding to the above method embodiment, the present specification further provides an embodiment of a positioning apparatus for a three-dimensional space map, and fig. 6 shows a schematic structural diagram of a positioning apparatus for a three-dimensional space map provided in an embodiment of the present specification. As shown in fig. 6, the apparatus is applied to a server, and includes:
the generating module 602 is configured to receive the environment basic data information uploaded by the acquisition device, and generate a three-dimensional space map of the target environment according to the environment basic data information;
a first receiving module 604 configured to receive multimedia data of the target environment uploaded by a user equipment based on an augmented reality framework;
a determining module 606 configured to determine location information of a user based on the three-dimensional spatial map and the multimedia data, and send the location information to the user equipment.
Optionally, the generating module 602 is further configured to:
carrying out data preprocessing on the environment basic data information to obtain an initial three-dimensional space map of the target environment;
determining the three-dimensional space map based on the initial three-dimensional space map, wherein the three-dimensional space map has a higher accuracy than the initial three-dimensional space map.
Optionally, the generating module 602 is further configured to:
and eliminating accumulated errors in the initial three-dimensional space map based on a loop detection method to obtain the three-dimensional space map.
Optionally, the apparatus further comprises:
a second receiving module configured to receive location annotation data for the target environment;
an annotation module configured to annotate the three-dimensional spatial map based on the location annotation data.
Optionally, the second receiving module is further configured to:
receiving a plan map of the target environment;
accordingly, the annotation module is further configured to:
and acquiring position data in the plane map, and marking the three-dimensional space map based on the position data of the plane map.
Optionally, the apparatus further comprises:
an obtaining module configured to obtain network location information of the user equipment;
a sending module configured to determine a target edge server based on the network location information and send the three-dimensional space map to the target edge server.
The positioning device for a three-dimensional space map provided by the present specification, applied to a server, includes: the generating module is configured to receive the environment basic data information uploaded by the acquisition equipment and generate a three-dimensional space map of the target environment according to the environment basic data information; a first receiving module configured to receive multimedia data of the target environment uploaded by a user equipment based on an augmented reality framework; a determination module configured to determine location information of a user based on the three-dimensional spatial map and the multimedia data, and to transmit the location information to the user device.
The accuracy of the three-dimensional space map can be improved by receiving the multi-dimensional environment basic data information uploaded by the acquisition equipment and constructing the three-dimensional space map of the target environment from environment basic data information of different dimensions. The interaction between the user equipment and the server is realized through the augmented reality framework, giving the positioning system of the three-dimensional space map wide compatibility and solving the problems of low terminal compatibility and difficult adaptation. Positioning within the three-dimensional space map is realized through the visual positioning service deployed in the server. Coupling the visual positioning service with the augmented reality framework can improve the accuracy of three-dimensional space positioning, and during positioning no additional facilities need to be deployed in the target environment, which greatly reduces cost. The server can also determine a target edge server according to the network location information of the user equipment; after the three-dimensional space map is generated, the target edge server interacts with the user equipment, which reduces network delay and improves response rate.
The above is a schematic solution of the positioning apparatus for a three-dimensional space map according to the embodiment. It should be noted that the technical solution of the positioning apparatus for a three-dimensional space map and the technical solution of the positioning method for a three-dimensional space map belong to the same concept, and details of the technical solution of the positioning apparatus for a three-dimensional space map, which are not described in detail, can be referred to the description of the technical solution of the positioning method for a three-dimensional space map.
FIG. 7 illustrates a block diagram of a computing device 700 provided in accordance with one embodiment of the present description. The components of the computing device 700 include, but are not limited to, memory 710 and a processor 720. Processor 720 is coupled to memory 710 via bus 730, and database 750 is used to store data.
Computing device 700 also includes an access device 740 that enables computing device 700 to communicate via one or more networks 760. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 740 may include one or more of any type of wired or wireless network interface (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 700, as well as other components not shown in FIG. 7, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 7 is for purposes of example only and is not limiting as to the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 700 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 700 may also be a mobile or stationary server.
Wherein the processor 720 is configured to execute computer-executable instructions that, when executed by the processor, implement the steps of the positioning method of the three-dimensional space map described above.
The foregoing is a schematic diagram of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the positioning method of the three-dimensional space map belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the positioning method of the three-dimensional space map.
An embodiment of this specification also provides an augmented reality AR device or a virtual reality VR device, including:
a memory, a processor, and a display;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, which when executed by the processor implement the steps of:
acquiring multimedia data of a target environment, and sending the multimedia data to a server based on an augmented reality framework;
receiving the position information sent by the server, and determining the current position of the user based on the position information;
generating an AR environment image based on the current location of the user; or
acquiring corresponding reference information based on the current position of the user, wherein the reference information comprises preset reference information and/or real-time reference information, the reference information is information for user reference, the preset reference information is reference information which is input in the user equipment in advance, and the real-time reference information is reference information which is displayed in real time according to the real-time position of the user;
rendering the AR environment image or the reference information to a display of the augmented reality AR device or the virtual reality VR device for display.
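The device-side steps above (capture, upload, receive position, render one of two content types) can be sketched as below. Every class and method name here is illustrative; the publication names no concrete API, so the server and device are stubbed out:

```python
# Hypothetical sketch of the AR/VR device flow: send multimedia data,
# receive a position from the server-side visual positioning service,
# then render either an AR environment image or reference information.

class StubServer:
    """Stands in for the server-side visual positioning service."""
    def visual_positioning(self, frame):
        return {"position": (3.0, 1.5, 0.0)}  # pretend VPS result

class StubDevice:
    """Stands in for the AR/VR device's rendering side."""
    def __init__(self):
        self.shown = None
    def generate_ar_image(self, pos):
        return f"ar-image@{pos}"
    def reference_info(self, pos):
        return f"reference-info@{pos}"   # preset or real-time info
    def display(self, content):
        self.shown = content

def locate_and_render(frame, server, device, mode="ar_image"):
    location = server.visual_positioning(frame)  # steps 1-2: upload frame, get position
    current_pos = location["position"]
    if mode == "ar_image":                       # step 3: one of the two branches
        content = device.generate_ar_image(current_pos)
    else:
        content = device.reference_info(current_pos)
    device.display(content)                      # step 4: render to the display
    return current_pos

device = StubDevice()
pos = locate_and_render(b"frame-bytes", StubServer(), device)
```

Note the "or" branch in the steps is modeled as a `mode` flag: the same position drives either AR image generation or reference-information lookup, matching the two alternatives listed above.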
The above is a schematic scheme of an augmented reality AR device or a virtual reality VR device of this embodiment. It should be noted that the technical solution of the augmented reality AR device or the virtual reality VR device and the technical solution of the positioning method of the three-dimensional space map belong to the same concept, and details of the technical solution of the augmented reality AR device or the virtual reality VR device, which are not described in detail, can be referred to the description of the technical solution of the positioning method of the three-dimensional space map.
An embodiment of the present specification further provides a computer-readable storage medium, which stores computer-executable instructions, and when the computer-executable instructions are executed by a processor, the steps of the positioning method of the three-dimensional space map are implemented.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the positioning method of the three-dimensional space map belong to the same concept, and for details that are not described in detail in the technical solution of the storage medium, reference may be made to the description of the technical solution of the positioning method of the three-dimensional space map.
An embodiment of the present specification further provides a computer program, where the computer program is executed in a computer, and causes the computer to execute the steps of the positioning method for a three-dimensional space map.
The above is a schematic scheme of a computer program of the present embodiment. It should be noted that the technical solution of the computer program is the same as the technical solution of the positioning method of the three-dimensional space map, and details of the technical solution of the computer program, which are not described in detail, can be referred to the description of the technical solution of the positioning method of the three-dimensional space map.
The foregoing description of specific embodiments has been presented for purposes of illustration and description. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code which may be in the form of source code, object code, an executable file or some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic diskette, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier wave signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in jurisdictions; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the foregoing method embodiments are described as a series of acts, but those skilled in the art should understand that the present embodiment is not limited by the described acts, because some steps may be performed in other sequences or simultaneously according to the present embodiment. Further, those skilled in the art should also appreciate that the embodiments described in this specification are preferred embodiments and that acts and modules referred to are not necessarily required for an embodiment of the specification.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are intended only to aid in the description of the specification. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the embodiments. The specification is limited only by the claims and their full scope and equivalents.

Claims (16)

1. A system for locating a three-dimensional spatial map, comprising: collecting equipment, user equipment and a server;
the acquisition equipment is configured to acquire environment basic data information of a target environment and send the environment basic data information to the server, wherein the environment basic data information comprises multidimensional sensing data information;
the server is configured to receive the environment basic data information and generate a three-dimensional space map of the target environment according to the environment basic data information;
the user equipment is configured to acquire multimedia data of the target environment and send the multimedia data to the server based on an augmented reality framework;
the server is further configured to determine position information of a user based on the three-dimensional space map and the multimedia data, and send the position information to the user equipment.
2. The system of claim 1, the acquisition device further configured to acquire the environmental base data information via a sensor assembly platform operating in the acquisition device, wherein the sensor assembly platform includes at least two of a vision sensor, a laser sensor, an inertial measurement unit, and a global positioning system.
3. The system of claim 1, the server further configured to perform data pre-processing on the environment base data information, obtain an initial three-dimensional space map of the target environment, determine the three-dimensional space map based on the initial three-dimensional space map, wherein the three-dimensional space map has a higher accuracy than the initial three-dimensional space map.
4. The system of claim 3, the server further configured to obtain the three-dimensional space map based on a loop detection method to eliminate accumulated errors in the initial three-dimensional space map.
5. The system of claim 1, the server further configured to receive location annotation data for the target environment, the three-dimensional spatial map annotated based on the location annotation data.
6. The system of claim 5, the server further configured to receive an entry instruction for location annotation data for the target environment, to open an entry interface in response to the entry instruction, to receive location annotation data for the target environment based on the entry interface.
7. The system of claim 5, further comprising a planar map capture device;
the plane map acquisition equipment is configured to acquire a plane map corresponding to the target environment and send the plane map to the server, wherein the plane map carries position data;
the server is further configured to receive the plane map, acquire position data in the plane map, and label the three-dimensional space map based on the position data of the plane map.
8. The system of claim 1, the user device further configured to invoke an image acquisition unit to acquire image data of the target environment, wherein the image acquisition unit is configured in the user device.
9. The system of claim 1, the user device further configured to determine a current location of the user based on the location information, receive an entry instruction, determine a target location in response to the entry instruction, determine a navigation path from the target location and the current location, generate and display an AR navigation image based on the navigation path.
10. The system of claim 1, the server comprising a cloud server and a plurality of edge servers;
the cloud server is configured to acquire network location information of the user equipment, determine a target edge server from the plurality of edge servers based on the network location information, and send the three-dimensional space map to the target edge server;
the target edge server is configured to receive and store the three-dimensional space map;
the user equipment is further configured to acquire multimedia data of the target environment and send the multimedia data to the target edge server based on the augmented reality framework;
the target edge server is further configured to determine position information of a user based on the three-dimensional space map and the multimedia data, and send the position information to the user equipment.
11. The system according to claim 1, wherein the user equipment is further configured to receive the location information, determine a current location of a user based on the location information, and acquire and display corresponding reference information according to the current location of the user, wherein the reference information includes preset reference information and/or real-time reference information, the reference information is information for user reference, the preset reference information is reference information previously entered in the user equipment, and the real-time reference information is reference information displayed in real time according to the real-time location of the user.
12. The system of any one of claims 1-11, further comprising a data analysis device;
the user equipment is further configured to acquire data information to be analyzed of the user and send the data information to be analyzed to the data analysis equipment;
the data analysis equipment is configured to perform data analysis on the data information to be analyzed and obtain a data analysis result.
13. The system of claim 12, further comprising:
the user equipment is further configured to acquire sliding data information of the user and send the sliding data information to the data analysis equipment, wherein the sliding data information is training data generated in a sliding training process;
the data analysis device is configured to perform data analysis on the sliding data information, and obtain and display a sliding data analysis result of the user.
14. A positioning method of a three-dimensional space map is applied to a server and comprises the following steps:
receiving environment basic data information uploaded by acquisition equipment, and generating a three-dimensional space map of a target environment according to the environment basic data information;
receiving multimedia data of the target environment uploaded by user equipment based on an augmented reality framework;
and determining the position information of the user based on the three-dimensional space map and the multimedia data, and sending the position information to the user equipment.
15. A computing device, comprising:
a memory and a processor;
the memory is used for storing computer-executable instructions, and the processor is used for executing the computer-executable instructions, and the computer-executable instructions when executed by the processor realize the steps of the positioning method of the three-dimensional space map of claim 14.
16. An Augmented Reality (AR) device or a Virtual Reality (VR) device comprising:
a memory, a processor, and a display;
the memory is configured to store computer-executable instructions and the processor is configured to execute the computer-executable instructions, which when executed by the processor implement the steps of:
acquiring multimedia data of a target environment, and sending the multimedia data to a server based on an augmented reality framework;
receiving the position information sent by the server, and determining the current position of the user based on the position information;
generating an AR environment image based on the current location of the user; or
acquiring corresponding reference information based on the current position of the user, wherein the reference information comprises preset reference information and/or real-time reference information, the reference information is information for user reference, the preset reference information is reference information which is input in the user equipment in advance, and the real-time reference information is reference information which is displayed in real time according to the real-time position of the user;
rendering the AR environment image or the reference information to a display of the augmented reality AR device or the virtual reality VR device for display.
CN202211152773.2A 2022-09-21 2022-09-21 Positioning system, method and device of three-dimensional space map and computing equipment Pending CN115631310A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211152773.2A CN115631310A (en) 2022-09-21 2022-09-21 Positioning system, method and device of three-dimensional space map and computing equipment

Publications (1)

Publication Number Publication Date
CN115631310A (en) 2023-01-20

Family

ID=84903404



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination