WO2023088127A1 - Indoor navigation method, server, device and terminal - Google Patents

Indoor navigation method, server, device and terminal

Info

Publication number: WO2023088127A1
Authority: WO (WIPO PCT)
Prior art keywords: information, navigation, virtual, real, map
Application number: PCT/CN2022/130486
Other languages: English (en), French (fr)
Inventors: 施文哲, 朱方, 欧阳新志, 周琴芬, 夏宏飞
Original Assignee: 中兴通讯股份有限公司 (ZTE Corporation)
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2023088127A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Definitions

  • The present application relates to the field of augmented reality (AR) technology, and in particular to an indoor navigation method, server, device and terminal.
  • the present application provides an indoor navigation method, server, device and terminal.
  • An embodiment of the present application provides an indoor navigation method, including: obtaining, in real time, the actual indoor location information of an AR device, where the actual location information represents the location of the AR device in the world coordinate system; matching the actual location information with a preset navigation map to determine real-time positioning information; determining virtual guidance information according to the real-time positioning information and an obtained guidance route, where the guidance route is determined based on the acquired initial position information and target position information of the AR device; and sending the virtual guidance information to the AR device, so that the AR device generates and dynamically displays a virtual navigation image according to the virtual guidance information.
  • An embodiment of the present application also provides an indoor navigation method, including: sending the actual indoor location information of the AR device to the server, so that the server matches the actual location information with the preset navigation map, determines real-time positioning information, and, according to the guidance route and the real-time positioning information, generates and sends virtual guidance information to the AR device, where the guidance route is determined based on the acquired initial position information and target position information of the AR device and the actual location information represents the location of the AR device in the world coordinate system; generating a virtual navigation image in response to the virtual guidance information sent by the server; and dynamically displaying the virtual navigation image.
  • An embodiment of the present application further provides a server, including: an acquisition module configured to obtain, in real time, the actual indoor location information of the AR device, where the actual location information represents the location of the AR device in the world coordinate system; a matching module configured to match the actual location information with the preset navigation map to determine real-time positioning information; a determining module configured to determine virtual guidance information according to the real-time positioning information and the obtained guidance route, where the guidance route is determined based on the acquired initial position information and target position information of the AR device; and a first sending module configured to send the virtual guidance information to the AR device, so that the AR device generates and dynamically displays a virtual navigation image according to the virtual guidance information.
  • An embodiment of the present application also provides an AR device, including: a second sending module configured to send the actual indoor location information of the AR device to the server, so that the server matches the actual location information with the preset navigation map, determines real-time positioning information, and generates and sends virtual guidance information to the AR device according to the guidance route and the real-time positioning information, where the guidance route is determined based on the acquired initial position information and target position information of the AR device and the actual location information represents the location of the AR device in the world coordinate system; a generation module configured to generate a virtual navigation image in response to the virtual guidance information sent by the server; and a display module configured to dynamically display the virtual navigation image.
  • An embodiment of the present application further provides a terminal, including: at least one AR device, where the AR device is configured to implement any indoor navigation method according to the present application.
  • An embodiment of the present application also provides an electronic device, including: one or more processors; and a memory storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement any indoor navigation method according to the present application.
  • Another embodiment of the present application provides a readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement any indoor navigation method according to the present application.
  • Fig. 1 shows a schematic flowchart of the indoor navigation method provided by the present application.
  • Fig. 2 shows another schematic flowchart of the indoor navigation method provided by the present application.
  • Fig. 3 shows another schematic flowchart of the indoor navigation method provided by the present application.
  • FIG. 4 shows a block diagram of the server provided by the present application.
  • FIG. 5 shows a block diagram of the composition of the AR device provided by the present application.
  • FIG. 6 shows a block diagram of a terminal provided by the present application.
  • Fig. 7 shows a block diagram of the composition of the indoor navigation system provided by the present application.
  • Fig. 8 shows a schematic flow chart of the navigation method of the indoor navigation system provided by the present application.
  • FIG. 9 shows a block diagram of an exemplary hardware architecture of a computing device capable of implementing the indoor navigation method and apparatus according to the present application.
  • Fig. 1 shows a schematic flowchart of the indoor navigation method provided by the present application.
  • the indoor navigation method can be applied to a server.
  • the indoor navigation method according to the present application includes at least, but is not limited to, the following steps S101 to S104.
  • In step S101, the actual indoor location information of the AR device is obtained in real time.
  • the actual location information is used to represent the location information of the AR device in the world coordinate system.
  • the world coordinate system can be defined as follows: taking the center of a small circle as the origin O, the x-axis points horizontally to the right, the y-axis points vertically downward, and the direction of the z-axis is determined by the right-hand rule.
  • the world coordinate system can be used as the starting coordinate space.
  • the indoor location of the AR device can thus be updated in time, and the actual location information can then be processed to improve the positioning accuracy of the AR device.
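As a minimal illustration of the world-coordinate-system premise above, the sketch below maps a point from the AR device's camera frame into the world frame using a rotation and a translation (the device pose). The function name and example values are illustrative, not taken from the patent.

```python
import numpy as np

def camera_to_world(p_cam: np.ndarray, R_wc: np.ndarray, t_wc: np.ndarray) -> np.ndarray:
    """Map a 3D point from the camera frame to the world frame.

    R_wc: 3x3 rotation of the camera in the world frame.
    t_wc: 3-vector position of the camera in the world frame.
    """
    return R_wc @ p_cam + t_wc

# Example: a point 2 m in front of the camera, with the camera at (1, 0, 3)
# in the world frame and no rotation (identity).
p_world = camera_to_world(np.array([0.0, 0.0, 2.0]), np.eye(3), np.array([1.0, 0.0, 3.0]))
```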
  • In step S102, the actual position information is matched with the preset navigation map to determine real-time positioning information.
  • the preset navigation map may include a plane map in the building to be navigated, and the preset navigation map may be a two-dimensional plane map.
  • the actual position information represents the position information of the AR device in the three-dimensional space.
  • through this matching, the real-time positioning information of the AR device in the two-dimensional plane map can be determined, and the location of the AR device can be tracked in real time to ensure the accuracy of its real-time information.
  • the acquired real-time positioning information facilitates subsequent auxiliary positioning and updating of the navigation information.
  • In step S103, virtual guidance information is determined according to the real-time positioning information and the obtained guidance route.
  • the guidance route is a route determined based on the acquired initial position information and target position information of the AR device.
  • the initial location information may include the location of the AR device when it initially enters the navigation system, and the target location information represents the destination location information that the AR device needs to reach.
  • the AR device needs to provide real-time positioning information so that the server can match the real-time positioning information corresponding to the AR device with the guidance route, dynamically adjust the guidance route in real time, and generate virtual guidance information.
  • the guidance information can avoid the influence of factors such as obstructions and changes in observation angle, so that accurate navigation information is sent to the AR device; the virtual guidance information prompts the AR device in time, preventing it from being guided to a wrong position, improving navigation accuracy, and meeting users' high-precision navigation needs.
  • In step S104, the virtual guidance information is sent to the AR device.
  • After the AR device obtains the virtual guidance information, it generates and dynamically displays the virtual navigation image according to that information. Since an AR device can display image or video information dynamically and in three dimensions, it can present navigation information in real time through virtual navigation images, which makes it convenient for users to intuitively view the navigation information and the position of the AR device in the actual environment, improving navigation accuracy.
  • In summary, by acquiring the actual indoor position information of the AR device in real time, the real position of the AR device in the world coordinate system can be determined; by matching this actual position information with the preset navigation map, the actual position can be mapped into the two-dimensional preset navigation map, determining the real-time positioning information corresponding to the AR device and facilitating subsequent processing; virtual guidance information is then determined according to the real-time positioning information and the obtained guidance route, where the guidance route is determined based on the acquired initial position information and target position information of the AR device, and matching the real-time positioning information with the guidance route determines the virtual guidance information that needs to be provided to the AR device, improving navigation accuracy; finally, the virtual guidance information is sent to the AR device, so that the AR device can generate and dynamically display high-precision virtual navigation images, intuitively guiding users to the target location quickly.
  • Fig. 2 shows another schematic flowchart of the indoor navigation method provided by the present application.
  • the indoor navigation method can be applied to a server.
  • in this embodiment, the actual location information includes the real-scene view corresponding to the actual indoor position of the AR device; this view is processed by a deep learning neural network, which refines and optimizes the positioning features in the view and thereby improves positioning accuracy.
  • the indoor navigation method in the embodiment of the present application includes at least, but is not limited to, the following steps S201 to S206.
  • In step S201, the actual indoor location information of the AR device is obtained in real time.
  • the actual location information includes: a real scene view corresponding to the actual indoor location of the AR device.
  • the real scene view corresponding to the actual position of the AR device in the room may include views of multiple levels.
  • the real scene view may be a panoramic view captured by a panoramic camera and/or panoramic video device, or a partial area view, visible to an observer, captured by an ordinary camera device.
  • the real-scene views above are only examples, and the viewing range of a view can be set according to actual needs; other real-scene views not described here also fall within the protection scope of the present application and will not be enumerated.
  • the actual location information of the AR device in the room can be displayed from multiple angles, avoiding the omission of orientation information, making the obtained actual location information more comprehensive and convenient for subsequent processing.
  • In step S202, the real-scene view corresponding to the actual indoor location of the AR device is processed based on a deep learning neural network to obtain location information to be matched.
  • the real scene view is a view collected by an AR device.
  • real-scene views may include partial area views or panoramic views.
  • the location information to be matched is used to represent the location information of the AR device in the building to be navigated.
  • the deep learning neural network can extract the image information in the partial area view and/or the panoramic view and then extract features from that image information, so that the processed image information more accurately reflects the relative position of the AR device; this relative position then serves as the location information to be matched.
  • for example, a 360-degree image can be evenly divided into 12 projection surfaces; using a NetVLAD-based encoding method, image retrieval and scene recognition are performed on the 12 projection surfaces to obtain the location information to be matched and improve its accuracy.
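A rough sketch of the 12-projection idea under simplifying assumptions: the panorama is treated as an equirectangular image and sliced into 12 yaw sectors rather than truly reprojected, and `encode` is a placeholder for a NetVLAD-style descriptor network, which the patent does not specify.

```python
import numpy as np

def split_panorama(pano: np.ndarray, n_faces: int = 12) -> list:
    """Slice an equirectangular panorama (H x W x 3) into n_faces yaw sectors."""
    w = pano.shape[1]
    step = w // n_faces
    return [pano[:, i * step:(i + 1) * step] for i in range(n_faces)]

def encode(face: np.ndarray) -> np.ndarray:
    """Placeholder for a NetVLAD-style global descriptor (hypothetical)."""
    # Stand-in: mean color per channel; a real system would run a CNN + VLAD pooling.
    return face.reshape(-1, 3).mean(axis=0)

# One descriptor per projection surface, ready for image retrieval.
descriptors = [encode(f) for f in split_panorama(np.zeros((512, 2048, 3)))]
```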
  • In step S203, the preset navigation map is searched according to the location information to be matched, and it is determined whether matching location information exists in the preset navigation map.
  • the preset navigation map includes multiple pieces of location information, and the location information to be matched is compared with each piece of location information in the map to determine whether a match exists.
  • In step S204, when it is determined that matching location information exists in the preset navigation map, the location information in the map that matches the location information to be matched is used as the real-time positioning information.
  • the real-time positioning information can indicate that the current traveling direction of the AR device and the relative position of the AR device are correct.
  • the real-time positioning information can reflect the position of the AR device in the preset navigation map, which facilitates subsequent processing.
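A minimal sketch of how the retrieval in steps S203 and S204 might look, assuming the preset navigation map stores one global descriptor and one plane-map position per reference view; the names and similarity threshold are illustrative:

```python
import numpy as np

def match_location(query: np.ndarray, map_descs: np.ndarray, map_positions: np.ndarray,
                   min_sim: float = 0.8):
    """Return the map position whose descriptor best matches the query, or None.

    map_descs: N x D global descriptors stored with the preset navigation map.
    map_positions: N x 2 plane-map coordinates for each stored descriptor.
    """
    sims = map_descs @ query / (
        np.linalg.norm(map_descs, axis=1) * np.linalg.norm(query) + 1e-9)
    best = int(np.argmax(sims))
    return map_positions[best] if sims[best] >= min_sim else None
```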
  • In step S205, virtual guidance information is determined according to the real-time positioning information and the obtained guidance route.
  • the guidance route includes a plurality of pieces of guidance position information.
  • each piece of guide position information represents the direction in which the AR device needs to travel and the relative position of the AR device; through this information, the travel path of the AR device can be corrected in time and the device guided to the target position as soon as possible.
  • determining the virtual guidance information based on the real-time positioning information and the obtained guidance route includes: matching the real-time positioning information with the plurality of pieces of guidance position information to determine the direction information to be moved and the distance information to be moved corresponding to the AR device; updating the guidance route according to that direction and distance information and the plurality of pieces of guidance position information; and determining the virtual guidance information according to the updated guidance route.
  • the direction information to be moved can be matched with the direction in which the AR device needs to travel in the guide position information to obtain a direction matching result, and the distance information to be moved can be matched with the relative position information of the AR device in the guide position information to obtain a position matching result.
  • the direction and position matching results characterize whether the current traveling state of the AR device matches the guidance route. If the error between the current traveling state and the guidance route exceeds a preset threshold, the guidance route is updated and the virtual guidance information is determined according to the updated route; the virtual guidance information thus corresponds to the updated guidance route, calibrating the AR device in time and improving navigation accuracy.
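A small sketch of the deviation check described above; the tolerances are illustrative, since the patent only states that a preset threshold is used:

```python
import math

def check_deviation(pose_xy, heading_deg, waypoint_xy, dist_tol=1.5, ang_tol=25.0):
    """Compare the device's live pose against the next guide point.

    Returns 'reached', 'replan', or 'on_route' depending on the error.
    """
    dx = waypoint_xy[0] - pose_xy[0]
    dy = waypoint_xy[1] - pose_xy[1]
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    ang_err = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    if dist <= dist_tol:
        return "reached"    # advance to the next guide point
    if ang_err > ang_tol:
        return "replan"     # deviation exceeds threshold: update the guidance route
    return "on_route"
```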
  • calibrating the AR device may include: adjusting relevant shooting parameters (eg, shooting angle, image resolution, etc.) of the AR device, so as to improve navigation accuracy.
  • the virtual guide information includes: at least one of camera pose estimation information, environment perception information, and light source perception information.
  • the environment perception information is used to represent the position information of the AR device in the building to be navigated
  • the camera pose estimation information is used to represent the direction information corresponding to the AR device
  • the light source perception information is used to represent the light source information obtained by the AR device.
  • Using different dimensions of information to describe the information of the AR device during the navigation process can improve the navigation accuracy of the AR device and avoid the influence of factors such as obstacles and observation angle changes.
  • In step S206, the virtual guidance information is sent to the AR device.
  • step S206 in this embodiment is the same as step S104 in the previous embodiment and will not be repeated here.
  • In summary, the location information to be matched is obtained with improved accuracy; the preset navigation map is searched according to the location information to be matched to determine whether matching location information exists, thereby judging whether the traveling direction and relative position of the AR device are correct; the location information in the navigation map that matches the location information to be matched is set as the real-time positioning information, so that the real-time positioning information reflects the position of the AR device in the preset navigation map and facilitates subsequent processing; and the real-time positioning information is matched with the obtained guidance route to determine virtual guidance information, which is sent to the AR device to calibrate its travel route in time and improve navigation accuracy.
  • processing the real scene view corresponding to the actual indoor position of the AR device to obtain the position information to be matched (i.e., step S202) includes: extracting local features from the real scene view corresponding to the actual indoor position of the AR device; inputting the local features into the deep learning neural network to obtain the global features corresponding to the actual indoor location of the AR device; and determining the location information to be matched based on those global features.
  • the global features represent the position of the AR device in the building to be navigated, while the local features represent relative location information; together, they allow the environment to serve as a reference for positioning the AR device, making the positioning more accurate.
  • specifically, the local features corresponding to the AR device are matched against particular local positions within the building to be navigated, and the location information to be matched is determined from the actual position of the matched local area within the building.
  • the location information to be matched can reflect the actual location information of the AR device in the building to be navigated, improving the positioning accuracy.
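The patent does not name the feature network, so the sketch below uses ORB local features (OpenCV) with naive mean pooling as a stand-in for the learned local-to-global aggregation:

```python
import cv2
import numpy as np

def global_descriptor(image_bgr: np.ndarray) -> np.ndarray:
    """Extract local features, then aggregate them into one global vector.

    ORB + mean pooling stand in for the patent's (unnamed) deep network,
    which would learn the local-to-global aggregation instead.
    """
    orb = cv2.ORB_create(nfeatures=500)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = orb.detectAndCompute(gray, None)      # local features
    if desc is None:
        return np.zeros(32, dtype=np.float32)
    return desc.astype(np.float32).mean(axis=0)     # crude global aggregation
```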
  • Before acquiring the actual indoor position information of the AR device in real time (i.e., before step S101 or step S201), the method also includes: acquiring panoramic data of the building to be navigated; performing point cloud mapping based on the panoramic data and a preset algorithm to generate a dense map; and determining the preset navigation map based on the dense map and the plane view corresponding to the building to be navigated.
  • the panorama data may include: view data corresponding to all scenes in the entire building to be navigated, or panorama data corresponding to areas in the building to be navigated that need to be navigated.
  • the panoramic data can include multi-frame point cloud data collected by a panoramic camera.
  • the coordinate systems of consecutive frames of point cloud data are the same; by continuously superimposing multiple frames of point cloud data in this way, point cloud mapping can be performed (for example, aligning the orthographic projection view of the multi-frame point cloud with the plane view corresponding to the building to be navigated) to generate a dense map.
  • the dense map can fully reflect the location and direction characteristics of the building to be navigated.
  • to obtain the preset navigation map, the plane view corresponding to the building to be navigated is matched with the dense map to obtain a two-dimensional plane view, i.e., the preset navigation map; the preset navigation map thus inherits the location and direction features of the dense map, ensuring the comprehensiveness and completeness of the map and facilitating subsequent positioning and navigation.
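A minimal sketch of the frame-superposition step, assuming each point cloud frame comes with a pose that places it in the shared map frame:

```python
import numpy as np

def superimpose(frames):
    """Fuse per-frame point clouds into one dense cloud.

    frames: iterable of (points_Nx3, pose_4x4) pairs, where each pose maps the
    frame's local coordinates into the shared map frame (the patent's premise
    that consecutive frames share a coordinate system).
    """
    fused = []
    for pts, T in frames:
        homo = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
        fused.append((homo @ T.T)[:, :3])                 # transform into map frame
    return np.vstack(fused)
```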
  • performing point cloud mapping based on the panoramic data and a preset algorithm to generate a dense map includes: processing the panoramic data according to the principles of photogrammetry to generate point cloud data, which includes three-dimensional coordinate information and color information; and processing the point cloud data according to a preset 3D reconstruction algorithm to generate the dense map.
  • the principle of photogrammetry is to collect images with optical cameras and process the collected images to obtain the shape, size, position, characteristics, and mutual relationships of the photographed object: for example, acquiring multiple images of the subject, measuring and analyzing each image, and outputting the analysis results in the form of diagrams or digital data.
  • the position of the subject can be characterized by three-dimensional coordinate information.
  • in this way, the position information of the subject in three-dimensional space can be accurately obtained; the color of the photographed object is collected multiple times and analyzed, so that its color characteristics can be accurately known (for example, color features based on the red-green-blue (RGB) color space, or on the YUV color space, etc.).
  • in the YUV color space, Y represents luminance (luma), i.e., the grayscale value, while U and V represent chrominance (chroma) and describe the color and saturation of the image.
  • the 3D coordinate information and color information in the point cloud data are processed so that the generated dense map reflects the position and color of every object in 3D space, improving the stereoscopic rendering of each subject in the map and making the dense map more accurate and more convenient for navigation and positioning.
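For reference, the standard BT.601 relations behind the Y/U/V description above, as a small helper (these formulas are general, not specific to the patent):

```python
def rgb_to_yuv(r: float, g: float, b: float):
    """Convert normalized RGB (0..1) to YUV using the BT.601 weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted grayscale value
    u = 0.492 * (b - y)                      # blue-difference chroma
    v = 0.877 * (r - y)                      # red-difference chroma
    return y, u, v
```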
  • determining the preset navigation map according to the dense map and the plane view corresponding to the building to be navigated includes: mapping the dense map to a plane view to be processed using an orthographic projection according to a preset scale factor; and matching the plane view to be processed with the plane view corresponding to the building to be navigated to determine the preset navigation map.
  • matching the plane view to be processed with the plane view corresponding to the building to be navigated can be achieved by comparing the two views, or by aligning the plane view to be processed with the plane view corresponding to the building to be navigated, to determine the preset navigation map.
  • the preset scale factor is a coefficient factor that can be calibrated. Through the preset scale factor, the dense map can be reasonably scaled, so that the scaled dense map can adapt to display devices of different sizes.
  • the dense map is a three-dimensional map
  • the plane view corresponding to the building to be navigated is a two-dimensional view
  • by matching the two, a preset navigation map can be obtained that reflects both the accuracy of the dense map and the characteristics of the plane view corresponding to the building to be navigated; using this preset navigation map for navigating and positioning the AR device further improves accuracy.
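A compact sketch of the orthographic projection with a preset scale factor: drop the z coordinate, scale, and rasterize into a 2D plan image that can then be aligned with the CAD plan view. The grid cell size is an assumption for illustration.

```python
import numpy as np

def ortho_plan_view(points_xyz: np.ndarray, scale: float, cell: float = 0.05) -> np.ndarray:
    """Orthographically project a dense 3D cloud to a scaled 2D plan image."""
    xy = points_xyz[:, :2] * scale          # drop z, apply the preset scale factor
    ij = ((xy - xy.min(axis=0)) / cell).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.zeros((h, w), dtype=np.uint8)
    img[ij[:, 1], ij[:, 0]] = 255           # mark occupied cells
    return img                              # align/compare this with the CAD plan view
```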
  • the plane view corresponding to the building to be navigated includes: a computer-aided design (Computer Aided Design, CAD) view, where the CAD view is a vector plane view determined based on a preset scale factor.
  • since the preset scale factor can be calibrated, when it remains unchanged, displaying the CAD view on different devices (for example, mobile phones or tablet computers of different sizes) ensures that the clarity of the view meets expectations; that is, the CAD view is scale-invariant, and it is a directional vector plane view.
  • by using CAD views for both the plane view to be processed and the plane view corresponding to the building to be navigated, both views are guaranteed to be scale-invariant, directional vector plane views, which improves the display effect on the terminal and gives users a better experience.
  • After sending the virtual guidance information to the AR device (i.e., after step S104 or step S206), the method also includes: receiving the arrival position information fed back by the AR device; comparing the arrival position information with the target position information to determine whether the AR device has reached the target position; and ending navigation when it is determined that the AR device has reached the target position.
  • by comparing the arrival location information fed back by the AR device with the target location information, it is determined whether the two are the same; if they are, the AR device has reached the target location and navigation can end, so the real-time location information fed back by the AR device no longer needs to be obtained, reducing the amount of information processing and improving its efficiency.
  • the navigation beacon can be represented by a cartoon image (for example, a cartoon character and/or animal) to make AR navigation more engaging.
  • Fig. 3 shows another schematic flowchart of the indoor navigation method provided by the present application.
  • the indoor navigation method can be applied to an AR device, and the AR device can be installed on a terminal.
  • the indoor navigation method in the embodiment of the present application includes at least, but is not limited to, the following steps S301 to S303.
  • In step S301, the actual indoor location information of the AR device is sent to the server.
  • the actual indoor location information of the AR device may include: three-dimensional space coordinate information corresponding to the location of the AR device.
  • the actual position of the AR device in the building to be navigated can be reflected through the three-dimensional space coordinate information.
  • after the server obtains the actual location information, it matches it with the preset navigation map, determines real-time positioning information, and, according to the guidance route and the real-time positioning information, generates and sends virtual guidance information to the AR device; the guidance route is determined based on the acquired initial location information and target location information of the AR device, and the actual location information represents the location of the AR device in the world coordinate system.
  • in this way, the mapping position of the AR device in the preset navigation map can be determined, and the guidance route can be dynamically adjusted in real time to improve navigation accuracy.
  • In step S302, a virtual navigation image is generated in response to the virtual guidance information sent by the server.
  • the virtual navigation images may include: AR images or AR videos based on dynamic imaging. Through dynamic AR images or AR videos, the position information of the AR device in the building to be navigated and the target route information can be clearly and three-dimensionally viewed.
  • the virtual guidance information includes camera pose estimation information, environment perception information and light source perception information.
  • the environment perception information is used to represent the position information of the AR device in the building to be navigated.
  • the camera pose estimation information is used to represent the direction information corresponding to the AR device.
  • the light source perception information is used to represent the light source information acquired by the AR device.
  • the camera pose estimation information may include: orientation information corresponding to the AR device, for example, the relative position of the AR device (for example, the camera of the AR device faces the direction opposite to the user's face, or the camera of the AR device faces the ground, etc.).
  • the light source perception information may include: light information of multiple angles received by the AR device.
  • generating the virtual navigation image in response to the virtual guidance information sent by the server includes: receiving the virtual guidance information sent by the server; processing the environment perception information and light source perception information according to a preset three-dimensional reconstruction algorithm to obtain an AR virtual image; and matching the camera pose estimation information with the AR virtual image to determine the virtual navigation image.
  • the preset 3D reconstruction algorithm may include an open multiple view geometry (OpenMVG) algorithm and an open multi-view stereo reconstruction library (OpenMVS) algorithm.
  • the OpenMVG algorithm can accurately solve common problems in multi-view geometry: for example, calibration based on scene structure information; self-calibration based on camera motion information (e.g., pure rotation); and self-calibration independent of both scene structure and camera motion information.
  • the OpenMVS algorithm is suitable for scenarios such as dense point cloud reconstruction, surface reconstruction, surface refinement and texture mapping, and surface refinement can make images clearer.
  • the OpenMVS algorithm performs surface reconstruction and surface refinement on the environment perception information, and applies texture mapping and related processing to the light source perception information, to obtain the AR virtual image; the AR virtual image can thus reflect in greater detail the environment of the AR device and the projection of the acquired light sources onto it.
  • the OpenMVG algorithm processes the camera pose estimation information to obtain multiple view information based on the AR device; this view information is matched with the AR virtual image to determine the virtual navigation image and improve its accuracy.
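The patent names OpenMVG and OpenMVS but not how they are invoked; both ship as command-line tools, so a reconstruction pipeline is typically scripted around them. The sketch below chains the standard binaries via subprocess. The tool names are real, but exact flags differ between versions and are abbreviated here, so treat this as an outline rather than a working recipe.

```python
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# OpenMVG: structure-from-motion (camera poses + sparse cloud).
run(["openMVG_main_SfMInit_ImageListing", "-i", "images/", "-o", "sfm/"])
run(["openMVG_main_ComputeFeatures", "-i", "sfm/sfm_data.json", "-o", "sfm/"])
run(["openMVG_main_ComputeMatches", "-i", "sfm/sfm_data.json", "-o", "sfm/"])
run(["openMVG_main_IncrementalSfM", "-i", "sfm/sfm_data.json", "-m", "sfm/", "-o", "recon/"])
run(["openMVG_main_openMVG2openMVS", "-i", "recon/sfm_data.bin", "-o", "scene.mvs"])

# OpenMVS: densify, mesh, refine ("surface refinement"), and texture.
run(["DensifyPointCloud", "scene.mvs"])
run(["ReconstructMesh", "scene_dense.mvs"])
run(["RefineMesh", "scene_dense_mesh.mvs"])
run(["TextureMesh", "scene_dense_mesh_refine.mvs"])
```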
  • In step S303, the virtual navigation image is dynamically displayed.
  • the obtained virtual navigation image can be displayed in real time or played back dynamically frame by frame, making it convenient for users to view navigation information three-dimensionally and intuitively and improving navigation accuracy.
  • In summary, the server matches the actual location information with the preset navigation map, determines real-time positioning information, and generates and sends virtual guidance information to the AR device; the guidance route makes the path to be navigated explicit, and matching the guidance route with the real-time positioning information determines the mapping position of the AR device in the preset navigation map, allowing the guidance route to be dynamically adjusted in real time and navigation accuracy improved; in response to the virtual guidance information sent by the server, a virtual navigation image is generated and dynamically displayed, making it convenient for users to view navigation information three-dimensionally and intuitively.
  • FIG. 4 shows a block diagram of the server provided by the present application.
  • the server 400 includes an acquisition module 401 , a matching module 402 , a determination module 403 and a first sending module 404 .
  • the obtaining module 401 is configured to obtain real-time indoor actual position information of the AR device, and the actual position information is used to represent the position information of the AR device in the world coordinate system;
  • the matching module 402 is configured to match the actual position information with the preset navigation map to determine real-time positioning information;
  • the determining module 403 is configured to determine virtual guidance information according to the real-time positioning information and the obtained guidance route, and the guidance route is a route determined based on the obtained initial position information and target position information of the AR device;
  • the first sending module 404 is configured to send virtual guide information to the AR device, so that the AR device generates and dynamically displays a virtual navigation image according to the virtual guide information.
  • the actual location information includes the real scene view corresponding to the actual indoor position of the AR device. The matching module 402 is specifically configured to: process the real scene view corresponding to the actual indoor position of the AR device based on a deep learning neural network to obtain location information to be matched; search the preset navigation map according to the location information to be matched and determine whether matching location information exists in the preset navigation map; and, when it does, use the location information in the map that matches the location information to be matched as the real-time positioning information.
  • processing the real scene view corresponding to the actual indoor position of the AR device to obtain the location information to be matched includes: extracting local features from the real scene view corresponding to the actual indoor position of the AR device; inputting the local features into the deep learning neural network to obtain the global features corresponding to the actual indoor position of the AR device, where the global features represent the position of the AR device in the building to be navigated; and determining the location information to be matched based on the global features corresponding to the actual indoor position of the AR device.
  • the server 400 also includes a preset navigation map generation module configured to: acquire panoramic data of the building to be navigated; perform point cloud mapping based on the panoramic data and a preset algorithm to generate a dense map; and determine the preset navigation map from the dense map and the plane view corresponding to the building to be navigated.
  • point cloud mapping is performed based on the panoramic data and a preset algorithm to generate the dense map, including: processing the panoramic data according to the principles of photogrammetry to generate point cloud data, which includes three-dimensional coordinate information and color information; and processing the point cloud data according to a preset 3D reconstruction algorithm to generate the dense map.
  • the preset navigation map is determined according to the dense map and the plane view corresponding to the building to be navigated, including: mapping the dense map to a plane view to be processed using an orthographic projection according to a preset scale factor; and matching the plane view to be processed with the plane view corresponding to the building to be navigated to determine the preset navigation map.
  • the plane view corresponding to the building to be navigated includes a CAD view, where the CAD view is a vector plane view determined based on a preset scale factor.
  • the guidance route includes a plurality of pieces of guidance location information. The determination module 403 is specifically configured to: match the real-time positioning information with the plurality of pieces of guidance location information to determine the direction information to be moved and the distance information to be moved corresponding to the AR device; update the guidance route according to that direction and distance information and the plurality of pieces of guidance position information; and determine the virtual guidance information according to the updated guidance route.
  • the server 400 further includes a confirmation module configured to: receive the arrival location information fed back by the AR device; compare the arrival location information with the target location information to determine whether the AR device has reached the target location; and end navigation when it is determined that the AR device has reached the target location.
  • the virtual guidance information includes: at least one of camera pose estimation information, environment perception information, and light source perception information; wherein, the environment perception information is used to represent the position information of the AR device in the building to be navigated, and the camera The attitude estimation information is used to represent the direction information corresponding to the AR device, and the light source perception information is used to represent the light source information acquired by the AR device.
  • In this way, the acquisition module obtains the actual position information of the AR device indoors in real time, determining the actual position of the AR device in the world coordinate system; matching that information with the preset navigation map maps the actual location into the two-dimensional preset navigation map, determining the real-time positioning information corresponding to the AR device and facilitating subsequent processing; the determination module determines the virtual guidance information according to the real-time positioning information and the obtained guidance route, where the guidance route is determined based on the acquired initial position information and target position information of the AR device, and matching the real-time positioning information with the guidance route determines the virtual guidance information to be provided to the AR device, improving navigation accuracy; and the first sending module sends the virtual guidance information to the AR device, so that the AR device can generate and dynamically display high-precision virtual navigation images, intuitively guiding users to the target location quickly.
  • FIG. 5 shows a block diagram of the composition of the AR device provided by the present application.
  • the AR device 500 includes a second sending module 501 , a generating module 502 and a displaying module 503 .
  • the second sending module 501 is configured to send the actual indoor location information of the AR device to the server, so that the server matches the actual location information with the preset navigation map, determines real-time positioning information, and generates and sends virtual guide information to the AR device, where the guide route is determined based on the acquired initial position information and target position information of the AR device and the actual position information represents the position of the AR device in the world coordinate system; the generation module 502 is configured to generate a virtual navigation image in response to the virtual guidance information sent by the server; and the presentation module 503 is configured to dynamically display the virtual navigation image.
  • the actual indoor location information of the AR device is sent to the server through the second sending module, so that the server matches it with the preset navigation map, determines real-time positioning information, and generates and sends virtual guidance information to the AR device according to the guidance route and the real-time positioning information; the guidance route makes the path to be navigated explicit, and matching it with the real-time positioning information determines the mapping position of the AR device in the preset navigation map, so the guidance route can be dynamically adjusted in real time and navigation accuracy improved; the generation module generates virtual navigation images in response to the virtual guidance information sent by the server, and the display module dynamically displays them, making it convenient for users to view navigation information three-dimensionally and intuitively.
  • FIG. 6 shows a block diagram of a terminal provided by the present application.
  • a terminal 600 includes: at least one AR device 500 , and the AR device 500 is configured to implement any indoor navigation method according to the embodiments of the present application.
  • the AR device 500 includes: a second sending module 501 configured to send the actual indoor location information of the AR device 500 to the server, so that the server matches the actual location information with the preset navigation map, determines real-time positioning information, and generates and sends virtual guidance information to the AR device 500 according to the guidance route and the real-time positioning information, where the guidance route is determined based on the obtained initial position information and target position information of the AR device 500 and the actual position information represents the position of the AR device 500 in the world coordinate system; a generation module 502 configured to generate a virtual navigation image in response to the virtual guidance information sent by the server; and a presentation module 503 configured to dynamically display the virtual navigation image.
  • the actual indoor location information of the AR device 500 is sent to the server through the second sending module 501, so that the server matches it with the preset navigation map, determines real-time positioning information, and generates and sends virtual guidance information to the AR device 500 according to the guidance route and the real-time positioning information; the guidance route makes the path to be navigated explicit, and matching it with the real-time positioning information determines the mapping position of the AR device 500 in the preset navigation map, so the guidance route can be dynamically adjusted in real time and navigation accuracy improved; the generation module 502 generates a virtual navigation image in response to the virtual guidance information sent by the server, and the display module 503 dynamically displays it, making it convenient for users to view navigation information three-dimensionally and intuitively.
  • Fig. 7 shows a block diagram of the composition of the indoor navigation system provided by the present application.
  • the indoor navigation system includes: an offline map creation device 710 , a cloud navigation server 720 , a terminal 730 and a preset navigation map generation device 740 .
  • the offline map creation device 710 includes: a panorama image acquisition device 711 and a point cloud map creation device 712 .
  • the cloud navigation server 720 includes: an identification and positioning module 721 , a path planning module 722 and a real-time navigation module 723 .
  • the terminal 730 includes: an initial positioning module 731 , a destination selection module 732 , a real-time navigation image feedback module 733 , a virtual navigation image generation module 734 and a display module 735 .
  • the terminal 730 may be a mobile phone terminal supporting AR functions (for example, at least one of motion capture, environment perception, and light source perception), and the panoramic image acquisition device 711 may include a panoramic camera and/or panoramic video device.
  • Fig. 8 shows a schematic flow chart of the navigation method of the indoor navigation system provided by the present application. As shown in Fig. 8, the navigation method of the indoor navigation system includes at least, but is not limited to, the following steps S801 to S812.
  • In step S801, the terminal 730 sends a map download request to the preset navigation map generation device 740.
  • the download request is used to request to obtain a preset navigation map
  • the preset navigation map is a map determined based on image data pre-collected by the panoramic image acquisition device 711 in the offline map creation device 710 .
  • the download request may include information such as the identification of the terminal 730 and the number of the preset navigation map, where the number may be obtained from the historical interaction information of the terminal 730 or identified from real-time image information uploaded by the terminal.
  • the panoramic image collection device 711 can send the image data collected from the indoor environment (for example, the indoor environment of a shopping mall) to the point cloud map creation device 712, so that the point cloud map creation device 712 can process the collected image data according to a preset 3D reconstruction algorithm to obtain the preset navigation map.
  • the point cloud map creation device 712 generates a dense map by calling an optimized three-dimensional reconstruction algorithm, and determines the preset navigation map from the dense map and the corresponding plane view.
  • the optimized 3D reconstruction algorithm may include OpenMVG algorithm and OpenMVS algorithm.
  • the OpenMVG algorithm can accurately solve common problems in multi-view geometry: for example, calibration based on scene structure information; self-calibration based on camera motion information (e.g., pure rotation); and self-calibration independent of both scene structure and camera motion information.
  • the OpenMVS algorithm is suitable for scenarios such as dense point cloud reconstruction, surface reconstruction, surface refinement and texture mapping, and surface refinement can make images clearer.
  • the two optimized algorithms can be used in conjunction to realize three-dimensional reconstruction of the image.
  • the point cloud map creation device 712 may reduce the dimensionality of the three-dimensional map by orthographic projection to obtain a two-dimensional planar map, for example, mapping the 3D map onto a 2D planar map in an orthographic projection.
  • the three-dimensional map and the two-dimensional planar map keep the horizontal and vertical coordinates (for example, the x and y coordinates) unchanged, and the two-dimensional planar map can be aligned with the preset CAD view to keep the coordinates consistent; the generated preset navigation map is therefore scale-invariant and is a directional vector plane view.
  • the preset navigation map can be accurately displayed on display screens of different sizes (for example, display screens of mobile phones or display screens of tablet computers of different sizes, etc.), so as to improve user experience.
  • In step S802, the preset navigation map generation device 740 sends the preset navigation map corresponding to the current scene of the terminal 730 to the terminal 730, completing the map initialization of the terminal 730.
  • In step S803, the terminal 730 uploads the real scene image of its current location to the cloud navigation server 720.
  • In step S804, the cloud navigation server 720 processes the real-scene image uploaded by the terminal 730 through the identification and positioning module 721 and matches it with the preset navigation map to determine real-time positioning information.
  • the identification and positioning module 721 invokes a deep learning hierarchical semantic description algorithm to classify the real-scene images uploaded by the terminal 730 and preliminarily identify the major categories they depict (for example, house images or person images); further refinement analysis is then performed to obtain the initial position of the terminal 730.
  • the real scene image uploaded by the terminal 730 may include: a partial area view or a panoramic image.
  • the panoramic image can be a 360-degree image captured by a panoramic camera; the 360-degree image is evenly divided into 12 projection surfaces, and a NetVLAD-based encoding method is then used to perform image retrieval and scene recognition on the 12 projection surfaces to obtain the initial location of the terminal 730, for example, the three-dimensional space coordinate information corresponding to its location.
  • the real-time positioning information may include the two-dimensional coordinate information of the terminal 730's location in the preset navigation map, obtained by mapping the three-dimensional space coordinates of the terminal's location into the preset navigation map.
  • In step S805, the cloud navigation server 720 sends the real-time positioning information to the terminal 730, so that the terminal 730 displays it using the display module 735.
  • the real-time positioning information can represent the real-time position of the terminal 730 in the preset navigation map.
  • In step S806, the destination selection module 732 in the terminal 730 obtains the target location information input by the user and, based on the initial location information of the terminal 730 and the target location information, generates and sends a route navigation message to the cloud navigation server 720.
  • the target location information may be determined by the user tapping a specific location directly on the map displayed on the mobile phone terminal, which keeps target selection simple and makes address selection easy to use (an illustrative request message is sketched below).
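An illustrative route-navigation request might look like the following sketch; every field name here is an assumption for illustration, not a message format defined by this application:

```python
import json

# Hypothetical route-navigation message assembled by the destination
# selection module: initial position from step S804, target from the map tap.
route_request = {
    "terminal_id": "terminal-730",
    "initial_position": {"x": 12.4, "y": 3.1},
    "target_position": {"x": 48.0, "y": 17.5},
}
payload = json.dumps(route_request)  # body sent to the cloud navigation server
```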
  • Step S807, the cloud navigation server 720 parses the received route navigation message to obtain the initial location information and target location information of the terminal 730, and then uses the path planning module 722 to invoke, for example, the Rapidly-exploring Random Tree (RRT) algorithm to process the initial location information and target location information and obtain the guidance route.
  • the RRT algorithm is a tree-based data structure and search algorithm: the tree is built incrementally by extending paths toward randomly sampled points, which quickly reduces the distance between a sampled point and the tree. RRT can effectively search non-convex, high-dimensional spaces and is especially suitable for path planning with obstacles and under non-holonomic or kinodynamic differential constraints (a minimal sketch follows).
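A minimal RRT sketch in the spirit of the description above, assuming a 2D floor plan with a point-wise collision check; step size, goal bias, and tolerances are assumed parameters:

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.5, max_iter=5000, goal_tol=0.5):
    """Grow a tree from `start` by extending toward random samples and
    return the node path once the tree reaches the goal region.

    is_free(p): True if point p does not lie inside an obstacle.
    bounds: ((xmin, xmax), (ymin, ymax)) sampling region.
    """
    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        # sample a random point, occasionally biased toward the goal
        sample = goal if random.random() < 0.05 else (
            random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # nearest tree node to the sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d == 0:
            continue
        # extend one step from the nearest node toward the sample
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:      # reached the goal region
            path, k = [], len(nodes) - 1
            while k is not None:                 # walk back to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None                                  # no route found

# Example: a 20 m x 20 m floor with one rectangular obstacle
free = lambda p: not (8.0 <= p[0] <= 12.0 and 0.0 <= p[1] <= 14.0)
route = rrt((1.0, 1.0), (19.0, 19.0), free, ((0.0, 20.0), (0.0, 20.0)))
```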
  • Step S808, the cloud navigation server 720 sends the guidance route to the terminal 730, so that the terminal 730 uses the display module 735 to display the guidance route.
  • Step S809, the terminal 730 uses the real-time navigation image feedback module 733 to upload the location information and scene images acquired in real time to the real-time navigation module 723 in the cloud navigation server 720.
  • Step S810, the real-time navigation module 723 determines virtual navigation information by performing motion capture, environment perception, and light source perception on the location information and scene images acquired in real time.
  • the virtual navigation information may include at least one of camera pose estimation information, environment perception information, and light source perception information; the environment perception information represents the position of the terminal 730 within the building to be navigated, the camera pose estimation information represents the direction information corresponding to the terminal 730, and the light source perception information represents the light source information acquired by the terminal 730. Together these comprehensively characterize the terminal 730 during real-time navigation and improve navigation accuracy. One possible container for these fields is sketched below.
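The field names and types in this container are assumptions for the sketch, not a structure defined by the application:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualNavigationInfo:
    """Hypothetical bundle for the outputs of step S810."""
    camera_pose: tuple           # orientation of the terminal, e.g. a quaternion
    environment_position: tuple  # position inside the building to be navigated
    light_sources: list = field(default_factory=list)  # per-source direction/intensity
```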
  • Step S811, the cloud navigation server 720 sends the updated virtual guidance information to the terminal 730, so that the terminal 730 uses the virtual navigation image generation module 734 to generate an updated virtual navigation image and uses the display module 735 to dynamically display the updated virtual navigation image in real time.
  • steps S809 to S811 may be performed repeatedly during the navigation process, so as to adjust the real-time virtual navigation image.
  • the terminal 730 uploads the image information corresponding to its location to the cloud navigation server 720 in real time, so that the cloud navigation server 720 can match that image information against the guidance route and dynamically adjust the guidance route in real time, ensuring the consistency and accuracy of the guidance route.
  • Step S812, after the terminal 730 arrives at the target location (i.e., the navigation destination), it continues to upload the destination image corresponding to the navigation destination to the cloud navigation server 720, so that the cloud navigation server 720 can, in combination with the previous guidance route, determine whether the navigation endpoint matches the preset target location information; once a match is determined, the navigation process ends (a simple end-of-navigation test is sketched below).
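A simple end-of-navigation test consistent with this step, assuming the destination image has already been resolved to map coordinates; the tolerance is an assumed threshold:

```python
import math

def arrived(destination_fix, target, tol=1.0):
    """Compare the position recovered from the destination image with the
    preset target; `tol` (in map units) is an assumed matching threshold."""
    return math.dist(destination_fix, target) < tol

if arrived((47.6, 17.2), (48.0, 17.5)):
    print("navigation finished")
```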
  • the navigation beacons in the AR navigation process can be represented by cartoon characters to increase the fun of AR navigation.
  • by using the panoramic camera device in the portable terminal to collect visual data of the scenes in the building to be navigated and performing three-dimensional reconstruction on the collected visual data, a dense map corresponding to the physical space in the building to be navigated is generated; the visual data in the building are mapped segment by segment to generate a point cloud map, which is then aligned with the preset CAD view, so that the generated preset navigation map is scale-invariant under zooming and is a directional vector plane view.
  • the terminal uploads the image information corresponding to its location to the cloud navigation server in real time, so that the cloud navigation server can adjust the pre-planned guidance route in real time, generate virtual guidance information, and send the virtual guidance information to the terminal.
  • the terminal can generate an updated virtual navigation image based on the virtual guidance information and dynamically display it through AR, so that users can view the navigation information dynamically and three-dimensionally, which facilitates positioning and navigation and improves navigation accuracy.
  • FIG. 9 shows a block diagram of an exemplary hardware architecture of a computing device capable of implementing the indoor navigation method and apparatus according to the present application.
  • the computing device 900 includes an input device 901, an input interface 902, a central processing unit 903, a memory 904, an output interface 905, and an output device 906.
  • the input interface 902, the central processing unit 903, the memory 904, and the output interface 905 are connected to each other through the bus 907; the input device 901 and the output device 906 are connected to the bus 907 through the input interface 902 and the output interface 905, respectively, and thereby to the other components of the computing device 900.
  • the input device 901 receives input information from the outside, and transmits the input information to the central processing unit 903 through the input interface 902; the central processing unit 903 processes the input information based on computer-executable instructions stored in the memory 904 to generate output information, temporarily or permanently store the output information in the memory 904, and then transmit the output information to the output device 906 through the output interface 905; the output device 906 outputs the output information to the outside of the computing device 900 for the user to use.
  • the computing device shown in FIG. 9 may be implemented as an electronic device, and the electronic device may include: a memory configured to store a program; and a processor configured to run the program stored in the memory to execute the indoor navigation method according to the embodiments of the present application.
  • the computing device shown in FIG. 9 may also be implemented as an indoor navigation system, and the indoor navigation system may include: a memory configured to store a program; and a processor configured to run the program stored in the memory to execute the indoor navigation method according to the embodiments of the present application.
  • Another embodiment of the present application provides a readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the indoor navigation method according to the embodiments of the present application.
  • Embodiments of the present application may be realized by a data processor of a mobile device executing computer program instructions, for example in a processor entity, or by hardware, or by a combination of software and hardware.
  • Computer program instructions may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages.
  • Any logic flow block diagrams in the drawings of the present application may represent program steps, or may represent interconnected logic circuits, modules and functions, or may represent a combination of program steps and logic circuits, modules and functions.
  • Computer programs can be stored on a memory.
  • the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as, but not limited to, read-only memory (ROM), random-access memory (RAM), and optical memory devices and systems (digital versatile discs (DVD) or CDs).
  • Computer readable media may include non-transitory storage media.
  • the data processor can be of any type suitable to the local technical environment, such as but not limited to general-purpose computers, special-purpose computers, microprocessors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), and processors based on multi-core processor architectures.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

An indoor navigation method, a server (400), an AR apparatus (500) and a terminal (600), relating to the technical field of augmented reality (AR). The indoor navigation method comprises: acquiring in real time actual position information of the AR apparatus (500) indoors (S101), the actual position information being used to represent position information of the AR apparatus (500) in a world coordinate system; matching the actual position information with a preset navigation map to determine real-time positioning information (S102); determining virtual guidance information according to the real-time positioning information and an acquired guidance route (S103), the guidance route being a route determined on the basis of acquired initial position information and target position information of the AR apparatus (500); and sending the virtual guidance information to the AR apparatus (500) (S104), so that the AR apparatus (500) generates and dynamically displays a virtual navigation image according to the virtual guidance information.

Description

Indoor navigation method, server, apparatus and terminal. Technical Field
The present application relates to the field of augmented reality (AR) technology, and in particular to an indoor navigation method, a server, an apparatus, and a terminal.
Background
With the continuous development of cities, large buildings (for example, airports, high-speed railway stations, shopping malls, and high-rise office buildings) keep emerging. When moving inside such large buildings, people can easily find themselves unable to determine their position. As the means of acquiring spatial data multiply, a portable panoramic camera can be used to collect visual data, which can then be processed to achieve indoor positioning.
However, given the complexity of indoor environments, the demand for accurate indoor navigation keeps growing. Existing indoor planar navigation methods cannot intuitively guide users to the target location quickly; they also suffer from low navigation accuracy and cannot meet users' demand for high-precision navigation.
Summary
The present application provides an indoor navigation method, a server, an apparatus, and a terminal.
An embodiment of the present application provides an indoor navigation method, including: acquiring in real time the actual position information of an AR apparatus indoors, the actual position information being used to represent the position information of the AR apparatus in the world coordinate system; matching the actual position information with a preset navigation map to determine real-time positioning information; determining virtual guidance information according to the real-time positioning information and an acquired guidance route, the guidance route being a route determined on the basis of the acquired initial position information and target position information of the AR apparatus; and sending the virtual guidance information to the AR apparatus, so that the AR apparatus generates and dynamically displays a virtual navigation image according to the virtual guidance information.
An embodiment of the present application further provides an indoor navigation method, including: sending the actual indoor position information of an AR apparatus to a server, so that the server matches the actual position information with a preset navigation map to determine real-time positioning information and, according to a guidance route and the real-time positioning information, generates and sends virtual guidance information to the AR apparatus, where the guidance route is a route determined on the basis of the acquired initial position information and target position information of the AR apparatus, and the actual position information is used to represent the position information of the AR apparatus in the world coordinate system; generating a virtual navigation image in response to the virtual guidance information sent by the server; and dynamically displaying the virtual navigation image.
An embodiment of the present application further provides a server, including: an acquisition module configured to acquire in real time the actual position information of an AR apparatus indoors, the actual position information being used to represent the position information of the AR apparatus in the world coordinate system; a matching module configured to match the actual position information with a preset navigation map to determine real-time positioning information; a determination module configured to determine virtual guidance information according to the real-time positioning information and an acquired guidance route, the guidance route being a route determined on the basis of the acquired initial position information and target position information of the AR apparatus; and a first sending module configured to send the virtual guidance information to the AR apparatus, so that the AR apparatus generates and dynamically displays a virtual navigation image according to the virtual guidance information.
An embodiment of the present application further provides an AR apparatus, including: a second sending module configured to send the actual indoor position information of the AR apparatus to a server, so that the server matches the actual position information with a preset navigation map to determine real-time positioning information and, according to a guidance route and the real-time positioning information, generates and sends virtual guidance information to the AR apparatus, where the guidance route is a route determined on the basis of the acquired initial position information and target position information of the AR apparatus, and the actual position information is used to represent the position information of the AR apparatus in the world coordinate system; a generation module configured to generate a virtual navigation image in response to the virtual guidance information sent by the server; and a display module configured to dynamically display the virtual navigation image.
An embodiment of the present application further provides a terminal, including at least one AR apparatus, the AR apparatus being used to implement any indoor navigation method according to the present application.
An embodiment of the present application further provides an electronic device, including: one or more processors; and a memory on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement any indoor navigation method according to the present application.
An embodiment of the present application further provides a readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement any indoor navigation method according to the present application.
Further explanation of the above and other aspects of the present application and their implementations is provided in the brief description of the drawings, the detailed description, and the claims.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the indoor navigation method provided by the present application.
FIG. 2 is another schematic flowchart of the indoor navigation method provided by the present application.
FIG. 3 is another schematic flowchart of the indoor navigation method provided by the present application.
FIG. 4 is a block diagram of the server provided by the present application.
FIG. 5 is a block diagram of the AR apparatus provided by the present application.
FIG. 6 is a block diagram of the terminal provided by the present application.
FIG. 7 is a block diagram of the indoor navigation system provided by the present application.
FIG. 8 is a schematic flowchart of the navigation method of the indoor navigation system provided by the present application.
FIG. 9 is a structural diagram of an exemplary hardware architecture of a computing device capable of implementing the indoor navigation method and apparatus according to the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in detail below with reference to the drawings. It should be noted that, where there is no conflict, the embodiments of the present application and the features in them may be combined with one another arbitrarily.
As users' demands on navigation technology keep evolving, traditional positioning and navigation technologies based on the Global Positioning System (GPS) or on radio-frequency signals cannot satisfy users' demand for high-accuracy navigation.
In indoor environments, the illumination provided by different light sources varies over time, as does the indoor deployment environment itself; combined with occlusion by obstacles and changes in the user's viewing angle, these factors easily make the collected positioning and navigation information inaccurate, failing to meet users' demand for high-precision navigation.
FIG. 1 is a schematic flowchart of the indoor navigation method provided by the present application. The indoor navigation method may be applied to a server. As shown in FIG. 1, the indoor navigation method according to the present application includes at least, but is not limited to, the following steps S101 to S104.
Step S101, acquiring in real time the actual position information of the AR apparatus indoors.
The actual position information is used to represent the position information of the AR apparatus in the world coordinate system. The world coordinate system may be defined as follows: with the center of the marked circle as the origin o, the x axis points horizontally to the right, the y axis points vertically downward, and the direction of the z axis is determined by the right-hand rule. When performing graphics transformations, the world coordinate system may be used as the initial coordinate space.
By obtaining the actual position information in real time, the position of the AR apparatus indoors can be updated promptly, and this actual position information can then be processed to improve the positioning accuracy of the AR apparatus.
Step S102, matching the actual position information with a preset navigation map to determine real-time positioning information.
The preset navigation map may include a planar map of the building to be navigated; it may be a two-dimensional planar map.
The actual position information represents the position of the AR apparatus in three-dimensional space. By mapping the three-dimensional actual position information into the two-dimensional preset navigation map, the real-time positioning information of the AR apparatus in the two-dimensional planar map is determined, so that the positioning information of the AR apparatus can be determined in real time and the accuracy of its real-time information is guaranteed. Moreover, the real-time positioning information obtained facilitates subsequent auxiliary positioning and navigation updates.
Step S103, determining virtual guidance information according to the real-time positioning information and the acquired guidance route.
The guidance route is a route determined on the basis of the acquired initial position information and target position information of the AR apparatus.
For example, the initial position information may include the position of the AR apparatus when it first enters the navigation system, while the target position information represents the end position the AR apparatus needs to reach. By planning over the initial position information and the target position information, multiple suitable navigation paths are found, the shortest of them is selected, and that route is taken as the guidance route, so that the AR apparatus can reach the end position as quickly as possible.
In the process of providing navigation information to the AR apparatus, the AR apparatus needs to provide real-time positioning information, so that the server can match the real-time positioning information corresponding to the AR apparatus against the guidance route, dynamically adjust the guidance route in real time, and generate virtual guidance information. The virtual guidance information can mitigate the influence of factors such as occlusion by obstacles and changes in viewing angle; accurate navigation information is sent to the AR apparatus, which is prompted in time by the virtual guidance information so as to avoid going the wrong way, improving navigation accuracy and meeting users' demand for high-precision navigation.
Step S104, sending the virtual guidance information to the AR apparatus.
After obtaining the virtual guidance information, the AR apparatus generates and dynamically displays a virtual navigation image according to it. Since the AR apparatus can display image or video information dynamically and stereoscopically, the navigation information can be presented dynamically in real time as a virtual navigation image, making it convenient for the user to view the navigation information and the position of the AR apparatus in the real environment intuitively, improving navigation precision.
In this embodiment, by acquiring the actual position information of the AR apparatus indoors in real time, the actual position of the AR apparatus in the world coordinate system can be determined; by matching this actual position information with the preset navigation map, the actual position information can be mapped into the two-dimensional preset navigation map to determine the corresponding real-time positioning information of the AR apparatus in the preset navigation map, facilitating subsequent processing; virtual guidance information is determined according to the real-time positioning information and the acquired guidance route, where the guidance route is a route determined on the basis of the acquired initial position information and target position information of the AR apparatus, and matching the real-time positioning information with the guidance route determines the virtual guidance information to be provided to the AR apparatus, improving navigation accuracy; the virtual guidance information is sent to the AR apparatus, so that the AR apparatus generates and dynamically displays a high-precision virtual navigation image according to it, intuitively guiding the user to the target location quickly.
FIG. 2 is another schematic flowchart of the indoor navigation method provided by the present application. The method may be applied to a server. This embodiment differs from the previous one in that the actual position information includes a real-scene view corresponding to the actual indoor position of the AR apparatus; processing that view with a deep learning neural network refines the positioning features in it and improves positioning accuracy.
As shown in FIG. 2, the indoor navigation method in this embodiment of the present application includes at least, but is not limited to, the following steps S201 to S206.
Step S201, acquiring in real time the actual position information of the AR apparatus indoors.
The actual position information includes a real-scene view corresponding to the actual indoor position of the AR apparatus, which may comprise views at multiple levels. For example, the real-scene view may be a panoramic view captured by a panoramic video camera and/or a panoramic still camera, or a partial-area view, i.e., what an observer can see, captured by an ordinary camera. These are merely examples; the field of view can be set as actually needed, and other real-scene views not described here also fall within the protection scope of the present application and are not repeated.
Real-scene views can present the actual indoor position information of the AR apparatus from multiple angles, avoiding the omission of orientation information, making the obtained actual position information more complete and facilitating subsequent processing.
Step S202, processing the real-scene view corresponding to the actual indoor position of the AR apparatus on the basis of a deep learning neural network to obtain position information to be matched.
The real-scene view is a view collected by the AR apparatus and may include a partial-area view or a panoramic view. The position information to be matched is used to represent the position information of the AR apparatus within the building to be navigated.
In some embodiments, a deep learning neural network can extract the image information in the partial-area view and/or the panoramic view and then extract features from that image information to obtain processed image information that better reflects the relative position of the AR apparatus, so that this relative position can represent the position information to be matched.
For example, the partial-area view captured by an ordinary camera may be divided into layers, each layer's view divided into blocks, and each block classified, further refining the category of each layer's view and the relative position of the AR apparatus, thereby obtaining the position information to be matched.
For another example, if the real-scene view includes a panoramic view (i.e., a 360-degree image), the 360-degree image may be evenly divided into 12 projection faces, and an encoding scheme based on the Net Vector of Locally Aggregated Descriptors (NetVLAD) used to perform image retrieval and scene recognition on each of the 12 faces, obtaining the position information to be matched and improving its accuracy.
Step S203, searching the preset navigation map according to the position information to be matched, and determining whether the position information to be matched exists in the preset navigation map.
The preset navigation map includes multiple pieces of position information; the position information to be matched is matched against each of them to determine whether it exists in the navigation map.
If it is determined that the position information to be matched exists in the navigation map, the heading of the AR apparatus and its relative position are correct; if it does not, they are wrong, and the AR apparatus must be prompted to adjust its heading or relative position as soon as possible, so that it keeps moving along the correct path and reaches the target position quickly.
Step S204, when it is determined that the position information to be matched exists in the preset navigation map, taking the position information in the preset navigation map that matches it as the real-time positioning information.
The real-time positioning information can indicate that the current heading of the AR apparatus and its relative position are correct.
Taking the matching position information in the preset navigation map as the real-time positioning information lets the latter reflect the position of the AR apparatus in the preset navigation map, facilitating subsequent processing.
Step S205, determining virtual guidance information according to the real-time positioning information and the acquired guidance route.
The guidance route includes multiple pieces of guidance position information, each representing the direction in which the AR apparatus should travel and its relative position; through these, the travel path of the AR apparatus can be corrected in time and the AR apparatus guided to the target position as quickly as possible.
For example, determining the virtual guidance information according to the real-time positioning information and the acquired guidance route includes: matching the real-time positioning information with the multiple pieces of guidance position information to determine the direction-to-move information and distance-to-move information of the AR apparatus; updating the guidance route according to the direction-to-move information, the distance-to-move information, and the multiple pieces of guidance position information; and determining the virtual guidance information according to the updated guidance route.
The direction-to-move information can be matched against the travel direction in the guidance position information to obtain a direction match result, and the distance-to-move information against the relative position information to obtain a position match result; together these indicate whether the current travel state of the AR apparatus matches the guidance route. When an error between the two is found to exceed a preset threshold, the guidance route is updated and the virtual guidance information determined from the updated route, calibrating the AR apparatus in time and improving navigation precision (a minimal sketch of this matching step follows).
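A minimal sketch of this matching step, assuming 2D map coordinates, a heading angle in radians, and assumed tolerance thresholds:

```python
import math

def guidance_update(position, heading, waypoints, pos_tol=1.5, dir_tol=0.5):
    """Match the real-time fix against the guidance positions: return the
    direction and distance still to move toward the next waypoint, plus a
    flag that is True when the heading error exceeds the preset threshold."""
    # drop guidance positions that have already been reached
    while waypoints and math.dist(position, waypoints[0]) < pos_tol:
        waypoints = waypoints[1:]
    if not waypoints:
        return None, 0.0, False                  # route completed
    nxt = waypoints[0]
    to_move_dir = math.atan2(nxt[1] - position[1], nxt[0] - position[0])
    to_move_dist = math.dist(position, nxt)
    # wrap the heading error into (-pi, pi] before thresholding
    err = abs((to_move_dir - heading + math.pi) % (2 * math.pi) - math.pi)
    return to_move_dir, to_move_dist, err > dir_tol
```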
For example, calibrating the AR apparatus may include adjusting its relevant shooting parameters (for example, the shooting angle or the image resolution), thereby improving navigation precision.
In some specific implementations, the virtual guidance information includes at least one of camera pose estimation information, environment perception information, and light source perception information.
The environment perception information is used to represent the position information of the AR apparatus within the building to be navigated, the camera pose estimation information is used to represent the direction information corresponding to the AR apparatus, and the light source perception information is used to represent the light source information acquired by the AR apparatus.
Describing the AR apparatus during navigation with information of different dimensions improves navigation accuracy and mitigates the influence of factors such as occlusion by obstacles and changes in viewing angle.
Step S206, sending the virtual guidance information to the AR apparatus.
It should be noted that step S206 in this embodiment is the same as step S104 in the previous embodiment and is not repeated here.
In this embodiment, the real-scene view corresponding to the actual indoor position of the AR apparatus is processed on the basis of a deep learning neural network to obtain the position information to be matched, improving its accuracy; the preset navigation map is searched according to the position information to be matched to determine whether it exists there, judging whether the heading and relative position of the AR apparatus are correct; when it does exist, the matching position information in the preset navigation map is taken as the real-time positioning information, which then reflects the position of the AR apparatus in the preset navigation map and facilitates subsequent processing; the real-time positioning information is matched with the acquired guidance route to determine the virtual guidance information, which is sent to the AR apparatus, so that the travel route of the AR apparatus can be calibrated in time and navigation precision improved.
According to an embodiment of the present application, processing the real-scene view corresponding to the actual indoor position of the AR apparatus on the basis of a deep learning neural network to obtain the position information to be matched (i.e., step S202) includes: extracting local features from the real-scene view corresponding to the actual indoor position of the AR apparatus; inputting the local features into the deep learning neural network to obtain global features corresponding to the actual indoor position of the AR apparatus; and determining the position information to be matched on the basis of those global features.
The global features are used to represent the position information of the AR apparatus within the building to be navigated; the local features are used to represent the relative position information of the AR apparatus within the partial-area views it has acquired (for example, images and photos uploaded by the AR apparatus). Together, the global and local features act as environmental reference points for positioning the AR apparatus, making positioning more accurate.
Analyzing the local features with the deep learning neural network (for example, learning and matching multiple local positions within the building to be navigated by machine learning) makes clear exactly which local position in the building the AR apparatus's local features match, and the position information to be matched is determined from the actual position of that matched local position within the building. The position information to be matched can thus reflect the actual position of the AR apparatus within the building, improving positioning accuracy.
According to an embodiment of the present application, before acquiring in real time the actual position information of the AR apparatus indoors (i.e., step S101 or step S201), the method further includes: acquiring panoramic data within the building to be navigated; performing point cloud mapping on the basis of the panoramic data and a preset algorithm to generate a dense map; and determining the preset navigation map according to the dense map and a plan view corresponding to the building to be navigated.
The panoramic data may include view data corresponding to all the scenery inside the entire building to be navigated, or panoramic data corresponding to the areas of the building in which navigation is required.
For example, the panoramic data may include multiple frames of point cloud data collected by a panoramic camera. By applying a rotation transformation between two consecutive frames of point cloud data, their coordinate systems are made identical; by continuously superimposing frames in this way, point cloud mapping can be performed (for example, aligning the orthographic projection view corresponding to the multi-frame point cloud views with the plan view corresponding to the building to be navigated) to generate a dense map. The dense map can comprehensively reflect the position and direction characteristics of the building to be navigated; the superposition idea is sketched below.
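The frame-superposition idea can be sketched as follows, assuming the rotation and translation between frames are already known, for example from photogrammetric registration:

```python
import numpy as np

def accumulate_frames(frames, rotations, translations):
    """Stack consecutive point-cloud frames in one coordinate system.

    rotations[i] (3x3) and translations[i] (3,) bring frame i+1 into the
    coordinate frame of frame 0, so superimposing the transformed frames
    yields the raw point set of the dense map.
    """
    merged = [frames[0]]
    for pts, R, t in zip(frames[1:], rotations, translations):
        merged.append(pts @ R.T + t)   # rotate, then translate each frame
    return np.vstack(merged)
```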
Matching the plan view corresponding to the building to be navigated with the dense map yields a two-dimensional plan view, namely the preset navigation map, which inherits the position and direction characteristics of the dense map, guaranteeing the comprehensiveness and completeness of the map and facilitating subsequent positioning and navigation.
In some specific implementations, performing point cloud mapping on the basis of the panoramic data and a preset algorithm to generate a dense map includes: processing the panoramic data according to the principles of photogrammetry to generate point cloud data, the point cloud data including three-dimensional coordinate information and color information; and processing the point cloud data according to a preset three-dimensional reconstruction algorithm to generate the dense map.
The principle of photogrammetry is to capture images with an optical camera and process the captured images to obtain the shape, size, position, characteristics, and mutual relationships of the photographed objects. For example, multiple images of an object are acquired, each image is measured and analyzed, and the analysis results are output graphically or as numerical data.
The position of a photographed object can be represented by three-dimensional coordinate information; by repeatedly analyzing and matching the three-dimensional coordinate information across multiple frames of point cloud data, the position of the object in three-dimensional space can be obtained accurately. By sampling the object's color repeatedly and analyzing the color information, the object's color characteristics can also be obtained accurately (for example, color characteristics based on the red-green-blue (RGB) color space or on the YUV color space).
In the YUV color space, 'Y' denotes luminance (or luma), i.e., the grayscale value, while 'U' and 'V' denote chrominance (or chroma) and describe the color and saturation of the image.
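For reference, one common (analog BT.601) form of the RGB-to-YUV relation mentioned here is:

```latex
\begin{aligned}
Y &= 0.299\,R + 0.587\,G + 0.114\,B\\
U &\approx 0.492\,(B - Y)\\
V &\approx 0.877\,(R - Y)
\end{aligned}
```

The luminance weights are the widely used BT.601 values; the chrominance scale factors differ between standards, so the U and V lines are one convention rather than the only one.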
The three-dimensional coordinate information and color information in the point cloud data are processed by the preset three-dimensional reconstruction algorithm, so that the generated dense map reflects the position and color information of each photographed object in three-dimensional space, refining the stereoscopic image of each object in the map, making the dense map more accurate and facilitating navigation and positioning.
In some specific implementations, determining the preset navigation map according to the dense map and the plan view corresponding to the building to be navigated includes: mapping the dense map to a plan view to be processed by orthographic projection according to a preset scale factor; and matching the plan view to be processed with the plan view corresponding to the building to be navigated to determine the preset navigation map.
Matching the plan view to be processed with the plan view corresponding to the building to be navigated may be implemented as follows: comparing the two views, or aligning them, thereby determining the preset navigation map.
The preset scale factor is a coefficient that can be calibrated; through it, the dense map can be scaled appropriately so that the scaled dense map fits display devices of different sizes.
It should be noted that, since the dense map is three-dimensional while the plan view corresponding to the building to be navigated is two-dimensional, the dense map needs to be mapped to the plan view to be processed by orthographic projection, to facilitate matching against the plan view of the building; the plan view of the building is used to verify the accuracy of the plan view to be processed. When the two are determined to match, the preset navigation map is obtained; it reflects the accuracy of the dense map while also embodying the characteristics of the plan view of the building, and using it to navigate and position the AR apparatus can further improve navigation and positioning accuracy.
In some specific implementations, the plan view corresponding to the building to be navigated includes a computer-aided design (CAD) view, which is a vector plan view determined on the basis of the preset scale factor.
Since the preset scale factor can be calibrated, as long as it remains unchanged, displaying the CAD view on different devices (for example, mobile phones or tablets of different sizes) can guarantee that the clarity of the view meets expectations; that is, the CAD view is scale-invariant under zooming, and it is a directional vector plan view.
By designing both the plan view to be processed and the plan view corresponding to the building to be navigated as CAD views, it is guaranteed that both are scale-invariant under zooming and are directional vector plan views, improving the terminal's display quality and giving the user a better experience.
According to an embodiment of the present application, after sending the virtual guidance information to the AR apparatus (i.e., step S104 or step S206), the method further includes: receiving arrival position information fed back by the AR apparatus; comparing the arrival position information with the target position information to determine whether the AR apparatus has reached the target position; and ending the navigation when it is determined that the AR apparatus has reached the target position.
By comparing the arrival position information fed back by the AR apparatus with the target position information, it is determined whether the two are the same. If they are, the AR apparatus has reached the target position and navigation can end; there is then no need to keep acquiring the real-time position information fed back by the AR apparatus, reducing the amount of information to be processed and improving processing efficiency.
When it is determined that the AR apparatus has not yet reached the target position, the real-time position information fed back by the AR apparatus still needs to be acquired, to assist the AR apparatus in navigation and positioning and improve navigation precision.
It should be noted that, during navigation of the AR apparatus, the navigation beacon may be represented by a cartoon figure (for example, a cartoon character and/or a cartoon animal) to make AR navigation more fun.
FIG. 3 is another schematic flowchart of the indoor navigation method provided by the present application. This method may be applied to an AR apparatus, which may be installed in a terminal. As shown in FIG. 3, the indoor navigation method in this embodiment includes at least, but is not limited to, the following steps S301 to S303.
Step S301, sending the actual indoor position information of the AR apparatus to the server.
The actual indoor position information of the AR apparatus may include the three-dimensional space coordinate information corresponding to the location of the AR apparatus, which reflects its actual position within the building to be navigated.
On obtaining the actual position information, the server matches it with the preset navigation map to determine real-time positioning information and, according to the guidance route and the real-time positioning information, generates and sends virtual guidance information to the AR apparatus, where the guidance route is a route determined on the basis of the acquired initial position information and target position information of the AR apparatus, and the actual position information is used to represent the position information of the AR apparatus in the world coordinate system.
The guidance route specifies the path to be navigated; matching it with the real-time positioning information makes clear the mapped position of the AR apparatus in the preset navigation map, so the guidance route can be adjusted dynamically in real time, improving navigation accuracy.
Step S302, generating a virtual navigation image in response to the virtual guidance information sent by the server.
The virtual navigation image may include a dynamically rendered AR image or AR video, through which the position of the AR apparatus within the building to be navigated and the target route information can be viewed clearly and stereoscopically.
The virtual guidance information includes camera pose estimation information, environment perception information, and light source perception information; the environment perception information represents the position of the AR apparatus within the building to be navigated, the camera pose estimation information represents the direction information of the AR apparatus, and the light source perception information represents the light source information acquired by the AR apparatus.
For example, the camera pose estimation information may include the orientation of the AR apparatus, for example its relative pose (such as the camera of the AR apparatus facing away from the user's face, or facing the ground). The light source perception information may include light information received by the AR apparatus from multiple angles.
In some specific implementations, generating a virtual navigation image in response to the virtual guidance information sent by the server includes: receiving the virtual guidance information sent by the server; processing the environment perception information and the light source perception information according to a preset three-dimensional reconstruction algorithm to obtain an AR virtual image; and matching the camera pose estimation information with the AR virtual image to determine the virtual navigation image.
The preset three-dimensional reconstruction algorithm may include the Open Multiple View Geometry (OpenMVG) algorithm and the Open Multi-View Stereo reconstruction library (OpenMVS) algorithm.
OpenMVG can accurately solve common problems in multi-view geometry, for example calibration based on scene structure information, self-calibration based on active camera information (for example, pure rotation), and self-calibration that relies on neither scene structure nor active camera information. OpenMVS is suited to scenarios such as dense point cloud reconstruction, surface reconstruction, surface refinement, and texture mapping, where surface refinement makes images clearer. Optimized versions of the OpenMVG and OpenMVS algorithms can be used together to achieve three-dimensional reconstruction.
OpenMVS is used to perform surface reconstruction and surface refinement on the environment perception information and texture mapping on the light source perception information, yielding an AR virtual image that reflects in finer detail the environment of the AR apparatus and the projection of the acquired light sources onto it; OpenMVG is used to process the camera pose estimation information to obtain multiple views based on the AR apparatus, and these views are matched with the AR virtual image to determine the virtual navigation image, improving its precision.
Step S303, dynamically displaying the virtual navigation image.
For example, the obtained virtual navigation image may be displayed in real time, or played back dynamically frame by frame, so the user can view the navigation information stereoscopically and intuitively, improving navigation accuracy.
In this embodiment, the actual indoor position information of the AR apparatus is sent to the server, so that the server can match it with the preset navigation map to determine real-time positioning information and, according to the guidance route and the real-time positioning information, generate and send virtual guidance information to the AR apparatus; the guidance route specifies the path to be navigated, and matching it with the real-time positioning information makes clear the mapped position of the AR apparatus in the preset navigation map, so the guidance route can be adjusted dynamically in real time, improving navigation accuracy; a virtual navigation image is generated and dynamically displayed in response to the virtual guidance information sent by the server, making it convenient for the user to view the navigation information stereoscopically and intuitively and improving navigation accuracy.
The devices according to the embodiments of the present application are described in detail below with reference to the drawings. FIG. 4 is a block diagram of the server provided by the present application. As shown in FIG. 4, the server 400 includes an acquisition module 401, a matching module 402, a determination module 403, and a first sending module 404.
The acquisition module 401 is configured to acquire in real time the actual position information of the AR apparatus indoors, the actual position information being used to represent the position information of the AR apparatus in the world coordinate system; the matching module 402 is configured to match the actual position information with the preset navigation map to determine real-time positioning information; the determination module 403 is configured to determine virtual guidance information according to the real-time positioning information and the acquired guidance route, the guidance route being a route determined on the basis of the acquired initial position information and target position information of the AR apparatus; the first sending module 404 is configured to send the virtual guidance information to the AR apparatus, so that the AR apparatus generates and dynamically displays a virtual navigation image according to it.
In some specific implementations, the actual position information includes a real-scene view corresponding to the actual indoor position of the AR apparatus; the matching module 402 is specifically configured to: process that real-scene view on the basis of a deep learning neural network to obtain position information to be matched; search the preset navigation map according to the position information to be matched and determine whether it exists there; and, when it does, take the matching position information in the preset navigation map as the real-time positioning information.
In some specific implementations, processing the real-scene view corresponding to the actual indoor position of the AR apparatus on the basis of a deep learning neural network to obtain the position information to be matched includes: extracting local features from the real-scene view; inputting them into the deep learning neural network to obtain global features corresponding to the actual indoor position of the AR apparatus, the global features representing the position of the AR apparatus within the building to be navigated; and determining the position information to be matched on the basis of the global features.
In some specific implementations, the server 400 further includes a preset navigation map generation module configured to: acquire panoramic data within the building to be navigated; perform point cloud mapping on the basis of the panoramic data and a preset algorithm to generate a dense map; and determine the preset navigation map according to the dense map and the plan view corresponding to the building.
In some specific implementations, performing point cloud mapping on the basis of the panoramic data and a preset algorithm to generate a dense map includes: processing the panoramic data according to the principles of photogrammetry to generate point cloud data including three-dimensional coordinate information and color information; and processing the point cloud data with a preset three-dimensional reconstruction algorithm to generate the dense map.
In some specific implementations, determining the preset navigation map according to the dense map and the plan view corresponding to the building to be navigated includes: mapping the dense map to a plan view to be processed by orthographic projection according to a preset scale factor; and matching that view with the plan view of the building to determine the preset navigation map.
In some specific implementations, the plan view corresponding to the building to be navigated includes a CAD view, which is a vector plan view determined on the basis of the preset scale factor.
In some specific implementations, the guidance route includes multiple pieces of guidance position information; the determination module 403 is specifically configured to: match the real-time positioning information with the multiple pieces of guidance position information to determine the direction-to-move and distance-to-move information of the AR apparatus; update the guidance route according to that information and the multiple pieces of guidance position information; and determine the virtual guidance information from the updated route.
In some specific implementations, the server 400 further includes a confirmation module configured to: receive the arrival position information fed back by the AR apparatus; compare it with the target position information to determine whether the AR apparatus has reached the target position; and end navigation when it has.
In some specific implementations, the virtual guidance information includes at least one of camera pose estimation information, environment perception information, and light source perception information; the environment perception information represents the position of the AR apparatus within the building to be navigated, the camera pose estimation information represents the direction information of the AR apparatus, and the light source perception information represents the light source information acquired by the AR apparatus.
In this embodiment, the acquisition module acquires in real time the actual position information of the AR apparatus indoors, determining its actual position in the world coordinate system; the matching module matches this with the preset navigation map, mapping the actual position into the two-dimensional preset navigation map to determine the corresponding real-time positioning information and facilitate subsequent processing; the determination module determines the virtual guidance information according to the real-time positioning information and the acquired guidance route, matching the two to determine the virtual guidance information to provide to the AR apparatus and improve navigation accuracy; the first sending module sends the virtual guidance information to the AR apparatus, so that it generates and dynamically displays a high-precision virtual navigation image, intuitively guiding the user quickly to the target location.
FIG. 5 is a block diagram of the AR apparatus provided by the present application. As shown in FIG. 5, the AR apparatus 500 includes a second sending module 501, a generation module 502, and a display module 503.
The second sending module 501 is configured to send the actual indoor position information of the AR apparatus to the server, so that the server matches the actual position information with the preset navigation map to determine real-time positioning information and, according to the guidance route and the real-time positioning information, generates and sends virtual guidance information to the AR apparatus, where the guidance route is a route determined on the basis of the acquired initial position information and target position information of the AR apparatus, and the actual position information is used to represent the position information of the AR apparatus in the world coordinate system; the generation module 502 is configured to generate a virtual navigation image in response to the virtual guidance information sent by the server; the display module 503 is configured to dynamically display the virtual navigation image.
In this embodiment, the second sending module sends the actual indoor position information of the AR apparatus to the server, so that the server can match it with the preset navigation map to determine real-time positioning information and, according to the guidance route and the real-time positioning information, generate and send virtual guidance information to the AR apparatus; the guidance route specifies the path to be navigated, and matching it with the real-time positioning information makes clear the mapped position of the AR apparatus in the preset navigation map, so the guidance route can be adjusted dynamically in real time, improving navigation accuracy; the generation module generates a virtual navigation image in response to the virtual guidance information sent by the server, and the display module displays it dynamically, making it convenient for the user to view the navigation information stereoscopically and intuitively and improving navigation accuracy.
FIG. 6 is a block diagram of the terminal provided by the present application. As shown in FIG. 6, the terminal 600 includes at least one AR apparatus 500, the AR apparatus 500 being used to implement any indoor navigation method according to the embodiments of the present application.
For example, the AR apparatus 500 includes: a second sending module 501 configured to send the actual indoor position information of the AR apparatus 500 to the server, so that the server matches it with the preset navigation map to determine real-time positioning information and, according to the guidance route and the real-time positioning information, generates and sends virtual guidance information to the AR apparatus 500, where the guidance route is a route determined on the basis of the acquired initial position information and target position information of the AR apparatus 500, and the actual position information is used to represent the position information of the AR apparatus 500 in the world coordinate system; a generation module 502 configured to generate a virtual navigation image in response to the virtual guidance information sent by the server; and a display module 503 configured to dynamically display the virtual navigation image.
In this embodiment, the second sending module 501 sends the actual indoor position information of the AR apparatus 500 to the server, so that the server can match it with the preset navigation map to determine real-time positioning information and, according to the guidance route and the real-time positioning information, generate and send virtual guidance information to the AR apparatus 500; the guidance route specifies the path to be navigated, and matching it with the real-time positioning information makes clear the mapped position of the AR apparatus 500 in the preset navigation map, so the guidance route can be adjusted dynamically in real time, improving navigation accuracy; the generation module 502 generates a virtual navigation image in response to the virtual guidance information sent by the server, and the display module 503 displays it dynamically, making it convenient for the user to view the navigation information stereoscopically and intuitively and improving navigation accuracy.
It should be made clear that the present application is not limited to the specific configurations and processing described in the above embodiments and shown in the figures. For convenience and brevity of description, detailed descriptions of known methods are omitted here, and for the specific working processes of the systems, modules, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
FIG. 7 is a block diagram of the indoor navigation system provided by the present application. As shown in FIG. 7, the indoor navigation system includes an offline map creation device 710, a cloud navigation server 720, a terminal 730, and a preset navigation map generation device 740.
The offline map creation device 710 includes a panoramic image collection device 711 and a point cloud map creation device 712. The cloud navigation server 720 includes an identification and positioning module 721, a path planning module 722, and a real-time navigation module 723. The terminal 730 includes an initial positioning module 731, a destination selection module 732, a real-time navigation image feedback module 733, a virtual navigation image generation module 734, and a display module 735.
The terminal 730 may be a mobile phone terminal supporting AR functions (for example, at least one of motion capture, environment perception, and light source perception), and the panoramic image collection device 711 may include a panoramic video camera and/or a panoramic still camera.
FIG. 8 is a schematic flowchart of the navigation method of the indoor navigation system provided by the present application. As shown in FIG. 8, the navigation method of the indoor navigation system includes at least, but is not limited to, the following steps S801 to S812.
Step S801, the terminal 730 sends a map download request to the preset navigation map generation device 740.
The download request is used to request the preset navigation map, which is a map determined on the basis of image data pre-collected by the panoramic image collection device 711 in the offline map creation device 710.
For example, the download request may include information such as the identifier of the terminal 730 and the number of the preset navigation map, where the number may be obtained from the historical interaction information of the terminal 730 or determined from real-time image information uploaded by the terminal.
For example, the panoramic image collection device 711 may send collected image data of an indoor environment (for example, the interior of a shopping mall) to the point cloud map creation device 712, so that the latter can process the collected indoor image data according to a preset three-dimensional reconstruction algorithm to obtain the preset navigation map.
For example, based on the image data collected by the panoramic image collection device 711, the point cloud map creation device 712 generates a dense map by invoking an optimized three-dimensional reconstruction algorithm; the dense map is then matched with the plan view corresponding to a given floor of the shopping mall to be navigated, determining the preset navigation map.
The optimized three-dimensional reconstruction algorithms may include the OpenMVG and OpenMVS algorithms. OpenMVG can accurately solve common problems in multi-view geometry, for example calibration based on scene structure information, self-calibration based on active camera information (for example, pure rotation), and self-calibration that relies on neither. OpenMVS is suited to scenarios such as dense point cloud reconstruction, surface reconstruction, surface refinement, and texture mapping, where surface refinement makes images clearer; optimized versions of OpenMVG and OpenMVS can be used together to achieve three-dimensional reconstruction of the images (the customary tool chain is sketched below).
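A hedged sketch of the customary OpenMVG-to-OpenMVS tool chain follows. The binary names are the two projects' documented command-line utilities, but their argument lists are version-dependent, so they are deliberately left as placeholders rather than guessed:

```python
import subprocess

# Arguments (input/output paths, match settings) vary by version and are
# placeholders to be filled in per installation.
ARGS: dict = {}

PIPELINE = [
    "openMVG_main_SfMInit_ImageListing",  # register the captured images
    "openMVG_main_ComputeFeatures",       # extract per-image local features
    "openMVG_main_ComputeMatches",        # match features across views
    "openMVG_main_IncrementalSfM",        # recover camera poses + sparse points
    "openMVG_main_openMVG2openMVS",       # convert the scene for OpenMVS
    "DensifyPointCloud",                  # OpenMVS: dense point-cloud reconstruction
    "ReconstructMesh",                    # OpenMVS: surface reconstruction
    "RefineMesh",                         # OpenMVS: surface refinement
    "TextureMesh",                        # OpenMVS: texture mapping
]

for tool in PIPELINE:
    cmd = [tool, *ARGS.get(tool, [])]
    print("would run:", " ".join(cmd))    # swap for subprocess.run(cmd, check=True)
```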
It should be noted that the point cloud map creation device 712 may reduce the dimensionality of the three-dimensional map by orthographic projection to obtain a two-dimensional planar map, i.e., project the three-dimensional map orthographically onto a two-dimensional plane. The three-dimensional map and the two-dimensional planar map keep their horizontal and vertical coordinates (for example, x and y) unchanged, and the two-dimensional planar map can be aligned with the preset CAD view to keep the coordinates consistent, so that the generated preset navigation map is scale-invariant under zooming and is a directional vector plan view.
In some specific implementations, the preset navigation map can be displayed accurately on display screens of different sizes (for example, mobile phone screens or tablet screens of various sizes), improving user experience.
For example, the two-dimensional planar CAD view corresponding to a given floor of a shopping mall can also be used in reverse, to improve the precision of the dense map constructed from the point cloud data and to remove noise from the dense map, making it more accurate.
Step S802, the preset navigation map generation device 740 sends the preset navigation map corresponding to the current scene of the terminal 730 to the terminal 730, completing the map initialization of the terminal 730.
Step S803, the terminal 730 uploads a real-scene image of its current location to the cloud navigation server 720.
Step S804, the cloud navigation server 720 processes the real-scene image uploaded by the terminal 730 through the identification and positioning module 721 and matches it with the preset navigation map to determine real-time positioning information.
For example, the identification and positioning module 721 invokes a deep learning hierarchical semantic description algorithm to classify the real-scene image uploaded by the terminal 730, first determining the broad category the image depicts (for example, a building image or a person image); the broad category is then analyzed in finer detail to obtain the initial position of the terminal 730.
Further, the real-scene image uploaded by the terminal 730 may include a partial-area view or a panoramic image. The panoramic image may be a 360-degree image taken by a panoramic camera; the 360-degree image is evenly divided into 12 projection faces, and a NetVLAD-based encoding scheme is used to perform image retrieval and scene recognition on each face to obtain the initial position of the terminal 730, for example the three-dimensional space coordinates corresponding to its location.
In some specific implementations, the real-time positioning information may include the two-dimensional coordinates of the terminal 730's location in the preset navigation map, obtained by mapping the three-dimensional space coordinates of that location into the preset navigation map.
Step S805, the cloud navigation server 720 sends the real-time positioning information to the terminal 730, so that the terminal 730 displays it with the display module 735.
The real-time positioning information can represent the real-time position of the terminal 730 in the preset navigation map.
Step S806, the destination selection module 732 in the terminal 730 obtains the target position information input by the user and, on the basis of the initial position information of the terminal 730 and the target position information, generates and sends a route navigation message to the cloud navigation server 720.
For example, the target position information may be determined by the user tapping a specific location directly on the map displayed on the mobile terminal, keeping target selection simple and address selection easy to use.
Step S807, the cloud navigation server 720 parses the received route navigation message to obtain the initial position information and target position information of the terminal 730, and then uses the path planning module 722 to invoke, for example, the Rapidly-exploring Random Tree (RRT) algorithm to process them and obtain the guidance route.
It should be noted that RRT is a tree-based data structure and algorithm: the tree is built incrementally by extending paths, quickly reducing the distance between a randomly sampled point and the tree. RRT can effectively search non-convex, high-dimensional spaces and is particularly suitable for path planning with obstacles and under non-holonomic or kinodynamic differential constraints.
Step S808, the cloud navigation server 720 sends the guidance route to the terminal 730, so that the terminal 730 displays it with the display module 735.
Step S809, the terminal 730 uses the real-time navigation image feedback module 733 to upload the position information and scene images acquired in real time to the real-time navigation module 723 in the cloud navigation server 720.
Step S810, the real-time navigation module 723 determines virtual navigation information by performing processing such as motion capture, environment perception, and light source perception on the position information and scene images acquired in real time.
The virtual navigation information may include at least one of camera pose estimation information, environment perception information, and light source perception information; the environment perception information represents the position of the terminal 730 within the building to be navigated, the camera pose estimation information represents the direction information of the terminal 730, and the light source perception information represents the light source information acquired by the terminal 730. Together they comprehensively characterize the terminal 730 during real-time navigation and improve navigation accuracy.
Step S811, the cloud navigation server 720 sends the updated virtual guidance information to the terminal 730, so that the terminal 730 generates an updated virtual navigation image with the virtual navigation image generation module 734 and dynamically displays it in real time with the display module 735.
It should be noted that steps S809 to S811 may be executed repeatedly during navigation to adjust the real-time virtual navigation image.
For example, during navigation the terminal 730 uploads the image information corresponding to its location to the cloud navigation server 720 in real time, so that the server can match that image information against the guidance route and dynamically adjust the route in real time, ensuring its consistency and accuracy.
Step S812, after the terminal 730 arrives at the target location (i.e., the navigation destination), it continues to upload the destination image corresponding to the navigation destination to the cloud navigation server 720, so that the server, in combination with the previous guidance route, can judge whether the navigation endpoint matches the preset target position information, and end the navigation process once a match is determined.
In some specific implementations, the navigation beacon in the AR navigation process may be represented by a cartoon character or the like, making AR navigation more fun.
In this embodiment, the panoramic camera in a portable terminal collects visual data of the scenes in the building to be navigated, and three-dimensional reconstruction is performed on the collected visual data to generate a dense map corresponding to the physical space of the building; the visual data in the building are mapped segment by segment to generate a point cloud map, which is then aligned with the preset CAD view, so that the generated preset navigation map is scale-invariant under zooming and is a directional vector plan view. During navigation, the terminal uploads the image information corresponding to its location to the cloud navigation server in real time, so that the server can adjust the pre-planned guidance route in real time, generate virtual guidance information, and send it to the terminal; the terminal can then generate an updated virtual navigation image from the virtual guidance information and display it dynamically through AR, allowing users to view the navigation information dynamically and stereoscopically, facilitating positioning and navigation and improving navigation accuracy.
FIG. 9 is a structural diagram of an exemplary hardware architecture of a computing device capable of implementing the indoor navigation method and apparatus according to the present application.
As shown in FIG. 9, the computing device 900 includes an input device 901, an input interface 902, a central processing unit 903, a memory 904, an output interface 905, and an output device 906. The input interface 902, the central processing unit 903, the memory 904, and the output interface 905 are connected to each other through a bus 907, and the input device 901 and the output device 906 are connected to the bus 907 through the input interface 902 and the output interface 905, respectively, and thereby to the other components of the computing device 900.
Specifically, the input device 901 receives input information from outside and transmits it through the input interface 902 to the central processing unit 903; the central processing unit 903 processes the input information on the basis of computer-executable instructions stored in the memory 904 to generate output information, stores the output information temporarily or permanently in the memory 904, and then transmits it through the output interface 905 to the output device 906; the output device 906 outputs the output information outside the computing device 900 for the user.
In one embodiment, the computing device shown in FIG. 9 may be implemented as an electronic device, which may include: a memory configured to store a program; and a processor configured to run the program stored in the memory, so as to execute the indoor navigation method according to the embodiments of the present application.
In one embodiment, the computing device shown in FIG. 9 may be implemented as an indoor navigation system, which may include: a memory configured to store a program; and a processor configured to run the program stored in the memory, so as to execute the indoor navigation method according to the embodiments of the present application.
An embodiment of the present application further provides a readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the indoor navigation method according to the embodiments of the present application.
The above are merely exemplary embodiments of the present application and are not intended to limit its scope of protection. In general, the various embodiments of the present application may be implemented in hardware or dedicated circuits, software, logic, or any combination thereof. For example, some aspects may be implemented in hardware while others may be implemented in firmware or software executable by a controller, microprocessor, or other computing device, although the present application is not limited thereto.
Embodiments of the present application may be realized by a data processor of a mobile device executing computer program instructions, for example in a processor entity, by hardware, or by a combination of software and hardware. The computer program instructions may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages.
Any logic flow block diagram in the drawings of the present application may represent program steps, or interconnected logic circuits, modules, and functions, or a combination of program steps and logic circuits, modules, and functions. Computer programs may be stored on a memory, which may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as, but not limited to, read-only memory (ROM), random-access memory (RAM), and optical memory devices and systems (digital versatile discs (DVD) or CDs). Computer-readable media may include non-transitory storage media. The data processor may be of any type suitable to the local technical environment, such as, but not limited to, general-purpose computers, special-purpose computers, microprocessors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field-programmable gate arrays (FPGA), and processors based on multi-core processor architectures.
A detailed description of exemplary embodiments of the present application has been provided above by way of exemplary, non-limiting examples. Considered together with the drawings and the claims, various modifications and adaptations of the above embodiments will be apparent to those skilled in the art without departing from the scope of the present application; accordingly, the proper scope of the present application is to be determined according to the claims.

Claims (17)

  1. An indoor navigation method, comprising:
    acquiring, in real time, actual position information of an augmented reality (AR) apparatus indoors, wherein the actual position information is used to represent position information of the AR apparatus in a world coordinate system;
    matching the actual position information with a preset navigation map to determine real-time positioning information;
    determining virtual guidance information according to the real-time positioning information and an acquired guidance route, wherein the guidance route is a route determined on the basis of acquired initial position information and target position information of the AR apparatus; and
    sending the virtual guidance information to the AR apparatus, so that the AR apparatus generates and dynamically displays a virtual navigation image according to the virtual guidance information.
  2. The method according to claim 1, wherein the actual position information comprises: a real-scene view corresponding to an actual indoor position of the AR apparatus;
    matching the actual position information with the preset navigation map to determine the real-time positioning information comprises:
    processing the real-scene view corresponding to the actual indoor position of the AR apparatus on the basis of a deep learning neural network to obtain position information to be matched;
    searching the preset navigation map according to the position information to be matched, and determining whether the position information to be matched exists in the preset navigation map; and
    in a case where the position information to be matched exists in the preset navigation map, taking position information in the preset navigation map that matches the position information to be matched as the real-time positioning information.
  3. The method according to claim 2, wherein processing the real-scene view corresponding to the actual indoor position of the AR apparatus on the basis of the deep learning neural network to obtain the position information to be matched comprises:
    extracting local features from the real-scene view corresponding to the actual indoor position of the AR apparatus;
    inputting the local features into the deep learning neural network to obtain global features corresponding to the actual indoor position of the AR apparatus, wherein the global features are used to represent position information of the AR apparatus within a building to be navigated; and
    determining the position information to be matched on the basis of the global features corresponding to the actual indoor position of the AR apparatus.
  4. The method according to claim 1, wherein before acquiring, in real time, the actual position information of the AR apparatus indoors, the method further comprises:
    acquiring panoramic data within a building to be navigated;
    performing point cloud mapping on the basis of the panoramic data and a preset algorithm to generate a dense map; and
    determining the preset navigation map according to the dense map and a plan view corresponding to the building to be navigated.
  5. The method according to claim 4, wherein performing point cloud mapping on the basis of the panoramic data and the preset algorithm to generate the dense map comprises:
    processing the panoramic data according to principles of photogrammetry to generate point cloud data, wherein the point cloud data comprises three-dimensional coordinate information and color information; and
    processing the point cloud data according to a preset three-dimensional reconstruction algorithm to generate the dense map.
  6. The method according to claim 4, wherein determining the preset navigation map according to the dense map and the plan view corresponding to the building to be navigated comprises:
    mapping the dense map to a plan view to be processed by orthographic projection according to a preset scale factor; and
    matching the plan view to be processed with the plan view corresponding to the building to be navigated to determine the preset navigation map.
  7. The method according to claim 6, wherein the plan view corresponding to the building to be navigated comprises: a computer-aided design (CAD) view, the CAD view being a vector plan view determined on the basis of the preset scale factor.
  8. The method according to claim 1, wherein the guidance route comprises a plurality of pieces of guidance position information;
    determining the virtual guidance information according to the real-time positioning information and the acquired guidance route comprises:
    matching the real-time positioning information with the plurality of pieces of guidance position information to determine direction-to-move information and distance-to-move information corresponding to the AR apparatus;
    updating the guidance route according to the direction-to-move information and the distance-to-move information corresponding to the AR apparatus and the plurality of pieces of guidance position information; and
    determining the virtual guidance information according to the updated guidance route.
  9. The method according to claim 1, wherein after sending the virtual guidance information to the AR apparatus so that the AR apparatus generates and dynamically displays the virtual navigation image according to the virtual guidance information, the method further comprises:
    receiving arrival position information fed back by the AR apparatus;
    comparing the arrival position information with the target position information to determine whether the AR apparatus has reached a target position; and
    ending navigation in a case where it is determined that the AR apparatus has reached the target position.
  10. The method according to any one of claims 1 to 9, wherein the virtual guidance information comprises at least one of the following: camera pose estimation information, environment perception information, and light source perception information;
    wherein the environment perception information is used to represent position information of the AR apparatus within a building to be navigated, the camera pose estimation information is used to represent direction information corresponding to the AR apparatus, and the light source perception information is used to represent light source information acquired by the AR apparatus.
  11. An indoor navigation method, comprising:
    sending actual position information of an augmented reality (AR) apparatus indoors to a server, so that the server matches the actual position information with a preset navigation map to determine real-time positioning information and, according to a guidance route and the real-time positioning information, generates and sends virtual guidance information to the AR apparatus, wherein the guidance route is a route determined on the basis of acquired initial position information and target position information of the AR apparatus, and the actual position information is used to represent position information of the AR apparatus in a world coordinate system;
    generating a virtual navigation image in response to the virtual guidance information sent by the server; and
    dynamically displaying the virtual navigation image.
  12. The method according to claim 11, wherein generating the virtual navigation image in response to the virtual guidance information sent by the server comprises:
    receiving the virtual guidance information sent by the server, wherein the virtual guidance information comprises camera pose estimation information, environment perception information, and light source perception information, the environment perception information is used to represent position information of the AR apparatus within a building to be navigated, the camera pose estimation information is used to represent direction information corresponding to the AR apparatus, and the light source perception information is used to represent light source information acquired by the AR apparatus;
    processing the environment perception information and the light source perception information according to a preset three-dimensional reconstruction algorithm to obtain an AR virtual image; and
    matching the camera pose estimation information with the AR virtual image to determine the virtual navigation image.
  13. A server, comprising:
    an acquisition module configured to acquire, in real time, actual position information of an augmented reality (AR) apparatus indoors, wherein the actual position information is used to represent position information of the AR apparatus in a world coordinate system;
    a matching module configured to match the actual position information with a preset navigation map to determine real-time positioning information;
    a determination module configured to determine virtual guidance information according to the real-time positioning information and an acquired guidance route, wherein the guidance route is a route determined on the basis of acquired initial position information and target position information of the AR apparatus; and
    a first sending module configured to send the virtual guidance information to the AR apparatus, so that the AR apparatus generates and dynamically displays a virtual navigation image according to the virtual guidance information.
  14. An augmented reality (AR) apparatus, comprising:
    a second sending module configured to send actual position information of the AR apparatus indoors to a server, so that the server matches the actual position information with a preset navigation map to determine real-time positioning information and, according to a guidance route and the real-time positioning information, generates and sends virtual guidance information to the AR apparatus, wherein the guidance route is a route determined on the basis of acquired initial position information and target position information of the AR apparatus, and the actual position information is used to represent position information of the AR apparatus in a world coordinate system;
    a generation module configured to generate a virtual navigation image in response to the virtual guidance information sent by the server; and
    a display module configured to dynamically display the virtual navigation image.
  15. A terminal, comprising:
    at least one AR apparatus according to claim 14.
  16. An electronic device, comprising:
    one or more processors; and
    a memory on which one or more programs are stored, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the indoor navigation method according to any one of claims 1 to 10, or the indoor navigation method according to any one of claims 11 to 12.
  17. A readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, causes the processor to implement the indoor navigation method according to any one of claims 1 to 10, or the indoor navigation method according to any one of claims 11 to 12.
PCT/CN2022/130486 2021-11-18 2022-11-08 Indoor navigation method, server, apparatus and terminal WO2023088127A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111369855.8A 2021-11-18 Indoor navigation method, server, apparatus and terminal
CN202111369855.8 2021-11-18

Publications (1)

Publication Number Publication Date
WO2023088127A1 true WO2023088127A1 (zh) 2023-05-25

Family

ID=86333159

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/130486 WO2023088127A1 (zh) Indoor navigation method, server, apparatus and terminal

Country Status (2)

Country Link
CN (1) CN116136408A (zh)
WO (1) WO2023088127A1 (zh)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW202001191A (zh) * 2018-06-08 2020-01-01 林器弘 Indoor navigation system
KR20200002219A (ko) * 2018-06-29 2020-01-08 현대엠엔소프트 주식회사 Indoor route guidance device and method therefor
CN111065891A (zh) * 2018-08-16 2020-04-24 北京嘀嘀无限科技发展有限公司 Augmented-reality-based indoor navigation system
TW202028699A (zh) * 2019-01-28 2020-08-01 林器弘 Indoor positioning and navigation system for mobile communication devices
CN111583335A (zh) * 2019-02-18 2020-08-25 上海欧菲智能车联科技有限公司 Positioning system, positioning method, and non-volatile computer-readable storage medium
CN113628349A (zh) * 2021-08-06 2021-11-09 西安电子科技大学 Scene-content-adaptive AR navigation method, device, and readable storage medium
CN113532442A (zh) * 2021-08-26 2021-10-22 杭州北斗时空研究院 Indoor AR pedestrian navigation method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579791A (zh) * 2024-01-16 2024-02-20 安科优选(深圳)技术有限公司 Information display system having a camera function and information display method
CN117579791B (zh) * 2024-01-16 2024-04-02 安科优选(深圳)技术有限公司 Information display system having a camera function and information display method

Also Published As

Publication number Publication date
CN116136408A (zh) 2023-05-19

Similar Documents

Publication Publication Date Title
US12056837B2 (en) Employing three-dimensional (3D) data predicted from two-dimensional (2D) images using neural networks for 3D modeling applications and other applications
US10740975B2 (en) Mobile augmented reality system
CN110568447B (zh) Visual positioning method and apparatus, and computer-readable medium
CN112771539B (zh) Employing three-dimensional data predicted from two-dimensional images using neural networks for 3D modeling applications
CN107223269B (zh) Three-dimensional scene positioning method and apparatus
US11127203B2 (en) Leveraging crowdsourced data for localization and mapping within an environment
CN110383343B (zh) Inconsistency detection system, mixed reality system, program, and inconsistency detection method
US10482659B2 (en) System and method for superimposing spatially correlated data over live real-world images
KR20220028042A (ko) Pose determination method and apparatus, electronic device, storage medium, and program
WO2023056544A1 (en) Object and camera localization system and localization method for mapping of the real world
TW201715476A (zh) Navigation system employing augmented reality technology
CN110361005B (zh) Positioning method, positioning apparatus, readable storage medium, and electronic device
CN108629799B (zh) Method and device for implementing augmented reality
CN113454685A (zh) Cloud-based camera calibration
US20180350137A1 (en) Methods and systems for changing virtual models with elevation information from real world image processing
WO2023088127A1 (zh) Indoor navigation method, server, apparatus and terminal
CN116109684A (zh) Method and apparatus for mapping two- and three-dimensional data for online video monitoring of substations
CN113610702B (zh) Mapping method and apparatus, electronic device, and storage medium
US11385856B2 (en) Synchronizing positioning systems and content sharing between multiple devices
JP7334460B2 (ja) Work support device and work support method
CN114089836B (zh) Annotation method, terminal, server, and storage medium
CA3172195A1 (en) Object and camera localization system and localization method for mapping of the real world
Laskar et al. Robust loop closures for scene reconstruction by combining odometry and visual correspondences
US20230215092A1 (en) Method and system for providing user interface for map target creation
Shuai et al. Multi-sensor Fusion for Autonomous Positioning of Indoor Robots

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22894664

Country of ref document: EP

Kind code of ref document: A1