WO2021057886A1 - Navigation method, system, device and medium based on optical communication apparatus - Google Patents

Navigation method, system, device and medium based on optical communication apparatus

Info

Publication number
WO2021057886A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
navigation
optical communication
position information
communication device
Prior art date
Application number
PCT/CN2020/117639
Other languages
English (en)
French (fr)
Inventor
方俊
牛旭恒
李江亮
Original Assignee
北京外号信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京外号信息技术有限公司
Publication of WO2021057886A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B10/00 Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/11 Arrangements specific to free-space transmission, i.e. transmission through air or vacuum
    • H04B10/114 Indoor or close-range type systems
    • H04B10/116 Visible light communication

Definitions

  • The present invention relates to the field of optical information technology and location services, and more specifically to a navigation method, system, device, and medium based on an optical communication device.
  • GPS signals do not provide altitude information, have limited positioning accuracy, and are blocked indoors, making it difficult to meet the accuracy requirements for navigation in scenes such as bustling, dense commercial districts or large shopping malls with several floors.
  • The purpose of the embodiments of the present invention is to provide a navigation method, system, device, and medium based on an optical communication device, which can accurately obtain the position information and posture information of a device, so as to provide accurate navigation prompt information for the device.
  • Preferably, the present invention can also accurately provide real-scene route guidance by instantly superimposing virtual navigation instructions in the current real scene acquired by the device in real time.
  • According to the first aspect of the embodiments of the present invention, a navigation method based on an optical communication device is provided, comprising: S1) identifying the identification information transmitted by an optical communication device according to an image containing the optical communication device collected by a device, and determining the position information and posture information of the device relative to the optical communication device; S2) using the identification information to obtain the preset position information of the optical communication device; S3) determining the current position information and posture information of the device based on the obtained position information of the optical communication device and the position information and posture information of the device relative to the optical communication device; S4) obtaining navigation prompt information, wherein the navigation prompt information is generated based on the destination position information and the current position information and posture information of the device.
  • the method may further include reacquiring an image of any optical communication device through the device, and returning to the step S1) to continue execution.
  • the method may further include monitoring changes in the position and posture of the device through multiple sensors built into the device, and updating the current position and posture information of the device based on the monitored changes in the position and posture.
  • the method may further include updating the current position information and posture information of the device by comparing the real scene in the field of view of the device with a scene model previously established for the real scene.
  • Step S4) includes: S41) obtaining the superimposed position information of one or more virtual navigation instructions to be superimposed, wherein the superimposed position information is determined based on the destination position information and the current position information of the device; and S42) superimposing one or more virtual navigation instructions on the real scene presented by the display medium of the device, based on the current position information and posture information of the device and the superimposed position information of the one or more virtual navigation instructions.
  • The method may further include, in response to updated current position information and posture information of the device, continuing to execute step S42), or continuing to execute steps S41) and S42).
  • The destination location information may be obtained through the following steps: presenting a destination list on the display medium of the device; and, in response to a selection of one of the destinations in the presented list, obtaining the destination location information related to the selected destination.
  • The destination location information may be determined at least in part based on information related to the destination, where the information related to the destination includes one or more of the following or a combination thereof: destination name, destination type, destination function, destination status.
  • the destination location information may be determined based on information related to the destination type or destination function received by the device and combined with the current location information of the device.
  • The destination location information may be determined based on information related to the destination type or destination function received by the device, in combination with the current location information of the device and the current status information of the destination.
  • the destination location information may be determined based on pre-stored information related to the destination.
  • In step S2), the preset posture information of the optical communication device may also be obtained; in that case, step S3) includes: determining the current position information and posture information of the device based on the obtained position information and posture information of the optical communication device and the position information and posture information of the device relative to the optical communication device.
  • According to the second aspect, a storage medium is provided in which a computer program is stored; when executed, the computer program can be used to implement the method according to the first aspect of the embodiments of the present invention.
  • An electronic device is provided, including a processor and a memory, wherein a computer program is stored in the memory and, when executed by the processor, can be used to implement the method described in the first aspect of the embodiments of the present invention.
  • A navigation system based on an optical communication device is provided, including an optical communication device, an optical communication device server, and a navigation server, wherein: the optical communication device server is configured to receive, from a navigation device, the identification information transmitted by the optical communication device, and to provide the location information of the optical communication device to the navigation device; and the navigation server is configured to provide navigation prompt information to the navigation device based on the destination location information and the current location information and posture information of the navigation device, wherein the current location information and posture information of the navigation device are determined based on the location information of the optical communication device and the location information and posture information of the navigation device relative to the optical communication device.
  • the optical communication device server is further configured to provide the navigation device with posture information of the optical communication device, and wherein the current position information and posture information of the navigation device are based on The position information and posture information of the optical communication device and the position information and posture information of the navigation device relative to the optical communication device are determined.
  • The navigation server is further configured to determine, based on the destination position information and the current position information of the navigation device, the superimposed position information of one or more virtual navigation instructions to be superimposed, and this superimposed position information can be used by the navigation device to superimpose, based on its current location information and posture information, one or more virtual navigation instructions on the real scene presented by the display medium of the navigation device.
  • The navigation system further includes the navigation device, which is configured to: collect an image of the optical communication device; identify the identification information transmitted by the optical communication device based on the collected image and determine the position information and posture information of the navigation device relative to the optical communication device; use the identification information to obtain the position information of the optical communication device from the optical communication device server; determine the current position information and posture information of the navigation device based on the obtained position information of the optical communication device and the position information and posture information of the navigation device relative to the optical communication device; and superimpose one or more virtual navigation instructions on the real scene presented by the display medium of the navigation device, based on the current position information and posture information of the navigation device and the superimposed position information of the one or more virtual navigation instructions.
  • the navigation device is further configured to use the identification information to obtain the posture information of the optical communication device from the optical communication device server, and based on the obtained position information and posture information of the optical communication device and The position information and posture information of the navigation device relative to the optical communication device determine the current position information and posture information of the navigation device.
  • the precise identification of the position and posture of the navigation object is realized through the optical tags arranged in the environment, so that accurate navigation prompt information can be provided for the navigation object.
  • In addition, some solutions of the present invention can, as the real scene presented on the display medium of the navigation object continuously changes, superimpose corresponding navigation instructions in the currently presented scene in real time to realize fast and flexible real-scene route guidance, which is applicable not only to outdoor navigation but even more so to indoor navigation.
  • Figure 1 shows an exemplary optical label.
  • Figure 2 is a schematic diagram of an optical label network according to an embodiment of the present invention.
  • Figure 3 shows a schematic flowchart of a method for superimposing virtual objects in a real scene based on light tags according to an embodiment of the present invention.
  • Figure 4 is a schematic flowchart of a navigation method based on optical tags according to an embodiment of the present invention.
  • Figure 5 is a schematic diagram of parking space navigation using the optical tag-based navigation method according to an embodiment of the present invention.
  • Augmented Reality (AR) is also called mixed reality technology. It applies virtual objects to real scenes through computer technology, so that real scenes and virtual objects are presented in the same picture or space in real time, thereby enhancing users' perception of the real world.
  • In one augmented reality application, some data information can be superimposed at a fixed position in the field of view. For example, when learning to fly an airplane, a pilot can view flight data superimposed on the real scene by wearing a display helmet; these data are typically displayed at a fixed position in the field of view (for example, always in the upper left corner).
  • This augmented reality technology lacks sufficient flexibility.
  • a real object in a real scene can be recognized first, and then a virtual object can be superimposed on or near the real object displayed on the screen.
  • However, it is difficult for current augmented reality technology to superimpose virtual objects at precise positions in the real scene, especially when the superimposed position of the virtual object is far away from the recognized real object.
  • Optical communication devices are also called optical tags, and these two terms can be used interchangeably in this article.
  • Optical tags can transmit information through different light-emitting methods. They have the advantages of long recognition distance and relaxed requirements for visible light conditions, and the information transmitted by optical tags can change over time, which can provide large information capacity and flexible configuration capabilities (for example, the optical communication devices described in Chinese Patent Publications CN105740936A, CN109661666A, CN109936694A, etc.).
  • The optical tag usually includes a controller and at least one light source, and the controller can drive the light source through different driving modes to transmit different information to the outside. Figure 1 shows an exemplary optical label 100, which includes three light sources (a first light source 101, a second light source 102, and a third light source 103).
  • The optical label 100 also includes a controller (not shown in Figure 1), which is used to select a corresponding driving mode for each light source according to the information to be transmitted. For example, in different driving modes, the controller can use different driving signals to control the light-emitting mode of a light source, so that when a device with an imaging function is used to photograph the optical label 100, the imaging of the light sources therein can present different appearances (for example, different colors, patterns, brightness, etc.).
  • By analyzing the imaging of the light sources in the optical label 100, the driving mode of each light source at the moment can be resolved, so that the information transmitted by the optical label 100 at that moment can be parsed.
  • the controller of the light tag can control the properties of the light emitted by each light source in order to transmit information.
  • the "0" or "1" of the binary digital information can be expressed by controlling the on and off of each light source, so that multiple light sources in the light tag can be used to express a sequence of binary digital information.
  • In order to provide corresponding services to users and merchants based on optical tags, each optical tag can be assigned identification information (an ID), which is used by the manufacturer, manager, or user of the optical tag to uniquely identify the optical tag.
  • The identification information can be published by the optical tag; a user can use, for example, the image acquisition device or imaging apparatus built into a mobile phone to capture an image of the optical tag and obtain the information (such as the identification information) conveyed by it, and can then access a corresponding service based on that information, for example, visiting a web page associated with the identification information of the optical tag, obtaining other information associated with the identification information (for example, the location information of the optical tag corresponding to the identification information), and so on.
  • The device with an image capture function mentioned in this article can be, for example, a device carried or controlled by a user (for example, a mobile phone with a camera, a tablet computer, smart glasses, AR glasses, a smart helmet, a smart watch, etc.), or a machine capable of autonomous movement (for example, a drone, a driverless car, a robot, etc.).
  • For example, the device can capture an image of the optical label through its camera to obtain an image containing the optical label, and use a built-in application to analyze the imaging of the optical label (or of each light source in the optical label) to identify the information transmitted by the optical label.
  • the optical tag can be installed in a fixed or variable location, and the identification information (ID) of the optical tag and any other information (such as location information) can be stored in the server.
  • Fig. 2 shows an exemplary optical label network.
  • the optical label network includes a plurality of optical labels and at least one server, wherein the information related to each optical label can be stored on the server.
  • For example, the identification information (ID) or any other information of each light tag can be saved on the server, such as service information related to the light tag, or description information or attributes related to the light tag, such as the light tag's location information, model information, physical size information, physical shape information, and posture or orientation information.
  • the optical label may also have uniform or default physical size information and physical shape information.
  • the device can use the identified identification information of the optical tag to query the server to obtain other information related to the optical tag.
  • the location information of the optical tag may refer to the actual location of the optical tag in the physical world, which may be indicated by geographic coordinate information.
  • the server may be a software program running on a computing device, a computing device, or a cluster composed of multiple computing devices.
  • the optical tag may be offline, that is, the optical tag does not need to communicate with the server. Of course, it can be understood that online optical tags that can communicate with the server are also feasible.
  • the light tag can be used as an anchor point to determine the position and posture of the device, so as to realize the superposition of the virtual object into the real scene.
  • the virtual object can be, for example, an icon, a picture, a text, an emoticon, a virtual three-dimensional object, a three-dimensional scene model, an animation, a video, a web page link that can be jumped, and so on.
  • Fig. 3 shows a method for superimposing virtual objects in a real scene based on light tags according to an embodiment, and the method includes the following steps:
  • Step 301: The device obtains the identification information of the optical label.
  • the device can recognize the identification information conveyed by the optical label by collecting and analyzing the image of the optical label.
  • the identification information may be associated with one or more virtual objects.
  • Step 302: The device uses the identification information of the optical tag to make a query to obtain the virtual object to be superimposed and the superimposition information of the virtual object, where the superimposition information includes superimposed position information.
  • After the device recognizes the identification information of the optical tag, it can use the identification information to send a query request to the server.
  • Information related to the optical tag can be pre-stored at the server, such as the identification information of the optical tag, the location information of the optical tag, the description information of one or more virtual objects associated with the optical tag (or with its identification information), the superimposed position information of each virtual object, and so on.
  • The description information of the virtual object is information used to describe the virtual object; for example, it may include pictures, text, or icons contained in the virtual object, as well as the virtual object's identification information, shape information, color information, size information, etc. Based on the description information, the device can present the corresponding virtual object.
  • the superimposed position information of the virtual object may be position information relative to the light tag (for example, distance information of the superimposed position of the virtual object relative to the light tag and direction information relative to the light tag), which is used to indicate the superimposed position of the virtual object.
  • the device can obtain the description information of the virtual object to be superimposed in the real scene currently presented by the device and the superimposition information of the virtual object by sending a query request to the server.
  • the superimposed information of the virtual object may also include the superimposed posture information of the virtual object.
  • The superimposed posture information can be the posture information of the virtual object relative to the light tag, or the posture information of the virtual object in the real-world coordinate system.
  • the superimposed position information of the virtual object can also be used to determine the superimposed posture of the virtual object.
  • For example, for a virtual object, the superimposed position information of several points on it can be determined, and the superimposed position information of these different points can be used to determine the posture of the virtual object relative to the light tag or its posture in the real-world coordinate system.
  • Step 303: The device determines its position information relative to the optical tag.
  • the device may determine its position information relative to the optical tag in various ways, and the relative position information may include distance information and direction information of the device relative to the optical tag. Normally, the position information of the device relative to the optical label is actually the position information of the image capture device of the device relative to the optical label.
  • The device can determine its position information relative to the optical label by acquiring an image including the optical label and analyzing the image. For example, the device can determine the relative distance between the optical tag and the device through the imaging size of the optical tag in the image and, optionally, other information (for example, the actual physical size of the optical tag, or the focal length of the device's camera): the larger the imaging, the closer the distance; the smaller the imaging, the farther the distance.
  • the device may use the identification information of the optical tag to obtain the actual physical size information of the optical tag from the server, or the optical tag may have a uniform physical size and store the physical size on the device.
  • The device may determine its orientation information relative to the optical label based on the perspective distortion of the optical label's imaging in the image and, optionally, other information (for example, the imaging position of the optical label).
  • the device may use the identification information of the optical tag to obtain the physical shape information of the optical tag from the server, or the optical tag may have a unified physical shape and store the physical shape on the device.
  • The device can also directly obtain the relative distance between the optical label and the device through a depth camera or binocular camera installed on it.
  • the device can also use any other existing positioning method to determine its position information relative to the optical tag.
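  • The imaging-size relationship described above is the standard pinhole-camera model. A minimal sketch, with assumed numbers rather than values from the patent:

```python
def distance_from_imaging_size(focal_length_px: float,
                               physical_size_m: float,
                               imaging_size_px: float) -> float:
    """Pinhole-model estimate: the larger the imaging, the closer the tag."""
    return focal_length_px * physical_size_m / imaging_size_px

# Assumed example: 1000 px focal length, 0.30 m tag imaged at 60 px -> 5.0 m.
print(distance_from_imaging_size(1000.0, 0.30, 60.0))
```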
  • In this way, the device can not only determine its location information relative to the optical tag, but can also obtain its current location information based on the determined location information relative to the optical tag and the location information of the optical tag itself, which facilitates precise positioning or navigation of the user. Since the optical tag itself has accurate location information, the device location obtained based on the optical tag is more accurate than traditional GPS positioning.
  • the object for positioning or navigation may not be the user, but may be a machine that can move autonomously, such as a drone, an unmanned car, a robot, and so on.
  • the machine that can move autonomously can be equipped with an image acquisition device, and can interact with the optical tag in a similar way to a mobile phone, so as to obtain its own location information.
  • Step 304: The device determines its posture information relative to the optical tag.
  • the posture information of the device is actually the posture information of the image acquisition device of the device.
  • the device can determine its posture information relative to the optical tag according to the imaging of the optical tag.
  • When the imaging position or imaging area of the optical tag is located at the center of the device's imaging field, it can be considered that the device is currently facing the optical tag.
  • the imaging direction of the optical tag can be further considered. As the posture of the device changes, the imaging position and/or imaging direction of the optical tag on the device will change accordingly. Therefore, the posture information of the device relative to the optical tag can be obtained according to the imaging of the optical tag on the device.
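  • For example, under pinhole assumptions, the horizontal offset of the tag's imaging from the image center already gives an approximate bearing of the tag relative to the camera's optical axis (a sketch with illustrative names and numbers):

```python
import math

def bearing_to_tag(u_px: float, cx_px: float, fx_px: float) -> float:
    """Approximate horizontal angle (radians) between the camera's optical
    axis and the tag, from the tag's imaging column u and intrinsics cx, fx."""
    return math.atan2(u_px - cx_px, fx_px)

# Tag imaged 320 px right of the image center with fx = 1000 px -> ~17.7 deg.
print(math.degrees(bearing_to_tag(960.0, 640.0, 1000.0)))
```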
  • the position and posture information (also collectively referred to as pose information) of the device relative to the optical tag can also be determined in the following manner.
  • a coordinate system can be established based on the optical label, and the coordinate system can be referred to as the optical label coordinate system.
  • Some points on the optical label may be determined as some spatial points in the optical label coordinate system, and the coordinates of these spatial points in the optical label coordinate system may be determined according to the physical size information and/or physical shape information of the optical label.
  • Some points on the optical label may be, for example, the corners of the housing of the optical label, the end of the light source in the optical label, some identification points in the optical label, and so on.
  • the image points corresponding to these spatial points can be found in the image taken by the device camera, and the position of each image point in the image can be determined.
  • Based on this, the pose information (R, t) of the device camera in the optical label coordinate system at the time the image was taken can be calculated, where R is the rotation matrix, which can be used to represent the posture information of the device camera in the optical label coordinate system, and t is the displacement vector, which can be used to represent the position information of the device camera in the optical label coordinate system.
  • the method of calculating R and t is known in the prior art.
  • For example, the 3D-2D PnP (Perspective-n-Point) method can be used to calculate R and t. In order not to obscure the present invention, a detailed introduction is omitted here.
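  • A minimal OpenCV sketch of this 3D-2D PnP calculation; the corner coordinates, image points, and camera intrinsics below are assumed values for illustration:

```python
import cv2
import numpy as np

# Assumed spatial points: four housing corners of a 0.3 m x 0.3 m tag,
# expressed in the optical label coordinate system (z = 0 on the tag plane).
object_points = np.array([[-0.15,  0.15, 0.0], [0.15,  0.15, 0.0],
                          [ 0.15, -0.15, 0.0], [-0.15, -0.15, 0.0]])

# Corresponding image points found in the captured image (assumed pixels).
image_points = np.array([[588., 332.], [692., 330.],
                         [694., 434.], [590., 436.]])

# Assumed camera intrinsics; passing None for the distortion coefficients
# treats the lens as distortion-free.
camera_matrix = np.array([[1000., 0., 640.],
                          [0., 1000., 360.],
                          [0., 0., 1.]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix R from the rotation vector

# In OpenCV's convention, (R, t) map tag-frame coordinates into the camera
# frame; the camera's position in the optical label coordinate system can be
# recovered as -R.T @ t if needed.
print(R, tvec)
```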
  • using the rotation matrix R and the displacement vector t can also describe how to transform the coordinates of a certain point between the optical tag coordinate system and the device camera coordinate system.
  • the coordinates of a certain point in the optical label coordinate system can be converted to the coordinates in the device camera coordinate system, and can be further converted to the position of the image point in the image.
  • Based on the superimposed position information of the virtual object, the coordinates of multiple feature points of the virtual object in the optical label coordinate system (that is, their positions relative to the optical tag) can be obtained; using R and t, the coordinates of these feature points in the device camera coordinate system can be determined, so that the respective imaging positions of these feature points on the device can be determined.
  • the overall imaging position, size, or posture of the virtual object can be determined accordingly.
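  • Continuing the same conventions, a tag-frame feature point of the virtual object can be mapped into the camera frame with (R, t) and then projected to its imaging position; the pose and intrinsics below are placeholders for illustration:

```python
import numpy as np

def project_tag_point(p_tag, R, t, K):
    """Transform a point from the optical label coordinate system into the
    device camera coordinate system, then project it to image pixels."""
    p_cam = R @ p_tag + t      # optical label frame -> camera frame
    u, v, w = K @ p_cam        # pinhole projection
    return u / w, v / w        # imaging position in pixels

K = np.array([[1000., 0., 640.], [0., 1000., 360.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 5.])  # placeholder pose: tag 5 m ahead

# Assumed feature point: 0.5 m below and 2 m in front of the tag.
print(project_tag_point(np.array([0., -0.5, 2.0]), R, t, K))
```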
  • Step 305: Based on the superimposition information of the virtual object and the position information and posture information of the device relative to the light tag, the virtual object is presented on the display medium of the device, thereby superimposing the virtual object in the real scene.
  • the superimposed position information of the virtual object reflects the position information of the virtual object to be superimposed relative to the optical label.
  • For example, a three-dimensional coordinate system with the light tag as the origin can actually be created, in which both the device and the virtual object to be superimposed have exact spatial coordinates.
  • the position information of the virtual object to be superimposed relative to the device may also be determined based on the superimposed position information of the virtual object and the position information of the device relative to the light tag. Based on the above, the virtual object can be superimposed in the real scene based on the posture information of the device.
  • For example, the imaging size of the virtual object to be superimposed can be determined based on the relative distance between the device and the virtual object, and the imaging position of the virtual object on the device can be determined based on the relative orientation between the device and the virtual object together with the posture information of the device. Based on the imaging position and imaging size, accurate superposition of the virtual object in the real scene can be realized.
  • the virtual object to be superimposed may have a default imaging size. In this case, only the imaging position of the virtual object to be superimposed on the device may be determined, but the imaging size of the virtual object may not be determined.
  • the posture of the superimposed virtual object can be further determined.
  • In one embodiment, the imaging position, size, and posture of the virtual object to be superimposed on the device can be determined according to the pose information (R, t) of the device (more precisely, of the device's camera) relative to the light tag calculated above. In one case, if it is determined that the virtual object to be superimposed is not currently in the field of view of the device (for example, the imaging position of the virtual object is outside the display screen), the virtual object is not displayed.
  • the device can use various feasible ways to present the real scene.
  • the device can collect real-world information through a camera and use the above-mentioned information to reproduce the real scene on the display screen, and the image of the virtual object can be superimposed on the display screen.
  • The device (such as smart glasses) may also reproduce the real scene not through a display screen but simply through a prism, lens, mirror, transparent object (such as glass), etc., and the image of the virtual object can then be optically superimposed on the real scene.
  • the above-mentioned display screens, prisms, lenses, mirrors, transparent objects, etc. may be collectively referred to as the display medium of the device, and virtual objects may be presented on the display medium.
  • the user observes the real scene through a specific lens, and the lens can reflect the image of the virtual object into the user's eyes.
  • In this case, the user of the device can directly observe the real scene or part of it; the real scene or part of it does not need to be reproduced through any medium before being observed by the user's eyes, and the virtual object can be optically superimposed onto the real scene. Therefore, the real scene or part of it does not necessarily need to be presented or reproduced by the device before being observed by the user's eyes.
  • the device may translate and/or rotate.
  • In some cases, the device can also re-determine its position information and posture information relative to the light tag (for example, when the optical tag leaves the field of view of the device and then re-enters it, or at regular intervals while the optical tag remains in the field of view), and, based on the superimposed position information of the virtual object and the position information and posture information of the device relative to the light tag, re-determine the imaging position and/or imaging size of the virtual object, so as to correct the superposition of the virtual object in the real scene.
  • In the above description, the superposition of the virtual object in the real scene presented by the display medium of the device is realized using position and posture information relative to the optical tag, but it is understandable that this is not necessary; position information or posture information in other coordinate systems can also be used to realize the superposition of virtual objects.
  • a navigation method based on optical tags is provided, and the flow chart is shown in FIG. 4.
  • The method can be executed by the device and mainly includes: step S401, identifying the identification information transmitted by an optical communication device according to an image containing the optical communication device collected by the device, and determining the position information and posture information of the device relative to the optical communication device; step S402, using the identification information to obtain the preset position information of the optical communication device; step S403, determining the current position information and posture information of the device based on the obtained position information of the optical communication device and the position information and posture information of the device relative to the optical communication device; step S404, obtaining the superimposed position information of one or more virtual navigation instructions to be superimposed, where the superimposed position information is determined based on the destination position information and the current position information of the device; and step S405, superimposing one or more virtual navigation instructions on the real scene presented by the display medium of the device, based on the current position information and posture information of the device and the superimposed position information of the one or more virtual navigation instructions.
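  • Putting steps S401-S405 together, the overall flow can be sketched as below; every helper function is hypothetical and merely names an operation described in the text, rather than any real API:

```python
# Hypothetical end-to-end sketch of steps S401-S405. None of these helpers
# exist in a real library; each stands in for an operation from the method.
def navigate(device, destination_info, server):
    image = device.capture_image()                      # image containing the tag
    tag_id = identify_tag_id(image)                     # S401: decode identification info
    rel_pos, rel_pose = estimate_relative_pose(image)   # S401: device pose w.r.t. the tag
    tag_pos = server.query_tag_position(tag_id)         # S402: preset tag position
    cur_pos, cur_pose = compose_pose(tag_pos, rel_pos, rel_pose)   # S403
    overlays = plan_overlay_positions(destination_info, cur_pos)   # S404
    for overlay in overlays:                            # S405: draw into the real scene
        device.display.superimpose(overlay, cur_pos, cur_pose)
```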
  • a device with an image capturing component or a person carrying the device can use the device to perform image capture on one or some light tags within the field of view while traveling.
  • The identification information of the optical tag is identified according to the captured image, and the position information and posture information of the device relative to the optical tag can also be determined based on the captured image.
  • the location information of the optical tag may be obtained from the server based on the identification information of the optical tag.
  • the posture information of the optical tag can also be obtained.
  • The identification information of an optical tag serves to indicate a specific optical tag. When two or more optical tags are arranged in the environment to be navigated, the identification information of the optical tag needs to be identified; if only one optical label is arranged in a specific venue or environment, there is no need to identify its identification information, and the device can directly access the preset server to obtain the relevant information of that unique optical label.
  • In step S403, the current position information and posture information of the device are determined.
  • the current position information and posture information of the device may be the position and posture in the coordinate system used for navigation.
  • If the coordinate system used for navigation is the optical tag coordinate system mentioned above, then the current position information and posture information of the device are simply the position information and posture information of the device relative to the optical tag.
  • Otherwise, the preset position information of the actual light tag can be combined with the position information and posture information of the device relative to the light tag to calculate the position information and posture information of the device in the coordinate system used for navigation, which are taken as the current position information and posture information of the device.
  • the current position information and posture information of the device can be determined based on the obtained position information and posture information of the optical tag and the position information and posture information of the device relative to the optical tag.
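  • Concretely, if the tag's preset pose in the navigation (e.g., world) coordinate system is (R_wt, t_wt) and the device's pose relative to the tag is (R_td, t_td), the two rigid transforms can be chained; a sketch of one way step S403 may be realized, with assumed numbers:

```python
import numpy as np

def compose_world_pose(R_wt, t_wt, R_td, t_td):
    """Chain tag-in-world and device-in-tag poses into the device's world pose."""
    R_wd = R_wt @ R_td            # current posture of the device in the world
    t_wd = R_wt @ t_td + t_wt     # current position of the device in the world
    return R_wd, t_wd

# Assumed: tag at (10, 2, 3) m, axis-aligned; device 5 m along the tag's z axis.
R_wd, t_wd = compose_world_pose(np.eye(3), np.array([10., 2., 3.]),
                                np.eye(3), np.array([0., 0., 5.]))
print(t_wd)  # -> [10. 2. 8.]
```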
  • In step S404, the superimposed position information of one or more virtual navigation instructions to be superimposed is obtained, wherein the superimposed position information is determined based on the destination position information and the current position information of the device.
  • the device can obtain destination location information in a variety of feasible ways.
  • the destination location information may be directly set according to the user's input or selected destination address.
  • a list of destinations may be presented on the display medium of the device for selection by the user using the device, or an input box may be presented for the user to input information related to the destination.
  • Virtual objects indicating possible nearby destinations may be presented in a more friendly manner on the display medium of the device for the user to select, such as virtual road signs or images or icons reflecting the destination type (for example, gas stations, restaurants, bookstores, etc.), so that the user can select the desired destination by clicking on the relevant virtual object.
  • the destination location information may also be determined based at least in part on information related to the destination.
  • the user may not know the specific destination address, but enter or select information related to the destination, such as the entered or selected "flight number", "gate 1", restaurant name, etc.
  • the device can query or retrieve a pre-established database for navigation, a scene map, or a scene information library to determine the corresponding destination location information.
  • the database, scene map, or scene information library used for navigation can be stored on a server (navigation server) that provides navigation services, or on a device.
  • the server can send the determined destination location information to the device.
  • the information related to the destination may include information related to the destination name, destination type, destination function, and so on.
  • The information related to the destination type or function (for example, a "washroom" input or selected by the user) may be combined with the current location information of the device to determine the destination location information, for example, the location information of the restroom nearest to the device.
  • Similarly, the information provided by the user related to the destination type or function (for example, a "parking space" input or selected by the user) may be combined with the current location information of the device and the current status information of the destination (for example, the availability of each parking space) to determine the destination location information, as in the sketch below.
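  • A toy sketch of this kind of destination resolution, choosing the nearest point of interest of the requested type and optionally requiring it to be currently available; the POI data here is invented for illustration:

```python
import math

# Invented points of interest: (name, type, (x, y) in meters, available?).
POIS = [("Restroom A", "washroom", (5.0, 40.0), True),
        ("Restroom B", "washroom", (60.0, 8.0), True),
        ("Space 101", "parking_space", (12.0, 3.0), False),
        ("Space 102", "parking_space", (15.0, 6.0), True)]

def resolve_destination(poi_type, device_xy, require_available=False):
    """Pick the closest POI of the requested type, optionally only if free."""
    candidates = [p for p in POIS
                  if p[1] == poi_type and (p[3] or not require_available)]
    return min(candidates,
               key=lambda p: math.dist(device_xy, p[2]), default=None)

print(resolve_destination("washroom", (10.0, 10.0)))             # nearest restroom
print(resolve_destination("parking_space", (10.0, 10.0), True))  # nearest free space
```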
  • Information related to a preset navigation purpose can also be used to determine the destination location information. For example, the user can click "one-tap car finding", "go to work", "go home", "find nearby historical footprints", and so on to obtain destination location information. When the user clicks "one-tap car finding", the destination location information can be determined by combining information about the user's previous parking position stored in the device or server, for example, the location determined when the user scanned a light tag while parking, the parking space number recorded when the user parked, or a photo containing the parking space information taken when the user parked.
  • The current location information of the device can also be combined with historical data, stored in the device or server, about destinations the user has previously visited near the current location to determine the destination location information. For example, when a user arrives in a certain business area, the restaurants, stores, coffee shops, etc. that the user visited last time in that area can be suggested.
  • the travel route can be determined for the device based on the destination location information and the current location information of the device.
  • The deployment of optical tags in the optical tag network can also be taken into account to provide the device with a planned travel route that has one or more light tags along the way. For example, after several feasible routes are determined using the starting point and the destination point, one or more recommended planned travel routes can be provided for the device based on the deployment of optical tags on each route.
  • For example, preference may be given to a travel route with more optical tags deployed along the way, so that the device can be continuously navigated by the optical tags along the way while traveling.
  • Information related to the scene can also be taken into account when providing the device with a travel route; for example, road information, building information, elevator information, staircase information, access control information, etc. in the scene can be used to determine which areas or routes are passable.
  • the specific method of determining the travel route in this embodiment is similar to the existing navigation method, and will not be repeated here.
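  • One way to realize the tag-aware route preference sketched above is to penalize segments that carry no optical tags in an otherwise standard shortest-path search; the graph and weighting below are assumptions for illustration:

```python
import heapq

# Invented graph: node -> [(neighbor, length_m, optical tags along the edge)].
GRAPH = {"start": [("a", 50, 2), ("b", 40, 0)],
         "a": [("goal", 50, 3)],
         "b": [("goal", 45, 0)],
         "goal": []}

def best_route(src, dst, no_tag_penalty=30.0):
    """Dijkstra over length plus a penalty for tag-less edges, so that routes
    passing more optical tags are preferred for continuous navigation."""
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, tags in GRAPH[node]:
            penalty = no_tag_penalty if tags == 0 else 0.0
            heapq.heappush(heap, (cost + length + penalty, nxt, path + [nxt]))
    return None, float("inf")

print(best_route("start", "goal"))  # prefers the tag-covered start-a-goal route
```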
  • a straight-line travel route can be established between the navigation start point and the destination point.
  • If the travel route passes through an obstacle, manual obstacle avoidance can be performed.
  • After the device bypasses the obstacle, the travel route between the current location of the device and the destination location can be re-determined based on the device's new location (detailed below).
  • the superimposed position information of one or more virtual navigation instructions to be superimposed can be determined along the travel route.
  • The virtual navigation indication can be a virtual object as introduced above in conjunction with Figure 3, and can take any form convenient for guiding the user and identifying the destination, such as arrow-shaped icons, direction instructions or virtual road signs displayed in the language corresponding to the device's current language setting, virtual navigation characters or animals, building information on both sides of the route, and so on.
  • the superimposed position information of the virtual navigation indication may be the position information of the virtual navigation indication relative to the light tag, but it can be understood that it may also be the position information in the world coordinate system or the coordinate system of a specific venue.
  • For example, position points can be set at intervals along the travel route for superimposing virtual navigation instructions, as in the sketch below.
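  • A small sketch of such waypoint placement, emitting one candidate superimposition point every few meters along the planned polyline (the spacing value is assumed):

```python
import math

def waypoints_along(route, spacing_m=5.0):
    """Walk the route polyline and emit a point every spacing_m meters; each
    point is a candidate superimposed position for a navigation instruction."""
    points, carry = [], 0.0
    for (x0, y0), (x1, y1) in zip(route, route[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = spacing_m - carry
        while d <= seg:
            f = d / seg
            points.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
            d += spacing_m
        carry = (carry + seg) % spacing_m
    return points

# Two-leg route: 12 m north, then 9 m east -> points every 5 m of path length.
print(waypoints_along([(0.0, 0.0), (0.0, 12.0), (9.0, 12.0)]))
```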
  • the device itself may determine the superimposed position information of the virtual navigation indication, so as to obtain the superimposed position information.
  • the navigation server may determine the superimposed position information of the virtual navigation indication and send it to the device.
  • In step S405, one or more virtual navigation instructions may be superimposed on the real scene presented by the display medium of the device.
  • For example, a virtual direction arrow can be superimposed on the real-time reality scene presented by the user's handheld device to give the user travel direction and route guidance, making it convenient for the user to reach the destination quickly.
  • The method may further include continuously tracking and acquiring the current location information and posture information of the device during travel, and re-superimposing each virtual navigation instruction on the real scene presented by the display medium of the device based on the newly acquired current location information and posture information of the device and the superimposed position information of the one or more virtual navigation instructions.
  • the current position information and posture information of the device re-acquired during the traveling process may be obtained on the basis of the position information and posture information determined in step S403.
  • the device may generally be translated or rotated.
  • Various sensors built into the device can be used to monitor changes in the device's own posture, and the device posture information determined in step S403 can be adjusted based on these posture changes to obtain the current posture information of the device.
  • changes in the location of the device can be monitored through the built-in location sensor of the device, and the device location information determined in step S403 can be adjusted based on the changes in these locations to obtain the current location information of the device.
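  • A rough sketch of such dead reckoning between tag scans: propagate the posture with the gyroscope's incremental rotation, then advance the position along the device's current forward axis. All inputs here are assumed; a real implementation would read the device's actual motion sensors:

```python
import numpy as np

def update_pose(R, t, gyro_delta_R, step_m):
    """Dead-reckoning update between optical-tag scans: rotate the posture by
    the gyroscope increment, then step the position along the forward axis."""
    R_new = R @ gyro_delta_R                      # adjust the S403 posture
    forward = R_new @ np.array([0.0, 0.0, 1.0])   # assume forward = camera +z
    t_new = t + step_m * forward                  # adjust the S403 position
    return R_new, t_new

# Assumed example: a 10-degree yaw increment followed by a 0.8 m step.
yaw = np.radians(10.0)
dR = np.array([[np.cos(yaw), 0., np.sin(yaw)],
               [0., 1., 0.],
               [-np.sin(yaw), 0., np.cos(yaw)]])
print(update_pose(np.eye(3), np.zeros(3), dR, 0.8))
```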
  • a scene model of the navigation environment may be established in advance, and then during the traveling process, the current position information and posture information of the device can be calibrated by comparing the real scene in the field of view of the device with the scene model. After re-determining the current position information and posture information of the device, it may return to step S404 or step S405 to continue execution.
  • the current position information and posture information of the device can also be calibrated or re-determined by scanning the optical tag, and then return to step S404 or step S405 to continue execution.
  • the scanned optical label may be the same optical label as the optical label scanned in step S401 last time, or may be another optical label.
  • the optical tags scanned by the device during the traveling process may not necessarily be the optical tags along the originally planned traveling route. For example, the user may have deviated from the planned traveling route during the traveling process.
  • the device does not necessarily scan all optical tags along the planned traveling route during the traveling process, but may selectively scan based on actual needs, for example, scan a nearby optical tag when arriving at an intersection.
  • The navigation method in the above-mentioned embodiments of the present invention can provide higher accuracy, especially where the GPS signal is absent or poor, such as when navigating in bustling commercial districts or shopping malls.
  • For example, users can achieve precise navigation while walking through light tags installed on store doors or on buildings, whereas GPS navigation usually has difficulty meeting the accuracy required in this case.
  • the navigation method provided by the embodiments of the present invention realizes real-world navigation in a real sense, and can superimpose corresponding virtual navigation instructions in real time in the real scene obtained by the device in real time.
  • panoramic maps not only have high production costs and high update and maintenance costs, but also have high requirements for network traffic and storage and computing capabilities of terminal equipment, making it difficult to effectively conduct fast and real-time guidance.
  • An embodiment of the present invention relates to an optical tag-based navigation system, which may include an optical tag, an optical tag server, and a navigation server.
  • the optical label server is used to receive the identification information transmitted by the optical label from the navigation device, and provide the position information of the optical label to the navigation device.
  • the navigation server is configured to determine the superimposed position information of one or more virtual navigation instructions to be superimposed based on the destination position information and the current position information of the navigation device.
  • the optical label server and the navigation server may be two physically separated servers, but may also be integrated together, that is, as different functional modules of the same physical server.
  • the above-mentioned navigation system may also include a navigation device, which may be used to perform the method shown in FIG. 4.
  • In this way, the current position of the device is obtained.
  • The device mentioned in this article can be a device carried or controlled by a user (for example, a mobile phone, a tablet computer, smart glasses, AR glasses, a smart helmet, a smart watch, etc.), but it is understood that the device can also be a machine capable of autonomous movement, such as a drone, a driverless car, a robot, etc.
  • the device can be equipped with image acquisition devices (such as a camera) and a display medium (such as a display screen).
  • the present invention can be implemented in the form of a computer program.
  • the computer program can be stored in various storage media (for example, a hard disk, an optical disk, a flash memory, etc.), and when the computer program is executed by a processor, it can be used to implement the method of the present invention.
  • the present invention may be implemented in the form of an electronic device.
  • the electronic device includes a processor and a memory, and a computer program is stored in the memory.
  • When the computer program is executed by the processor, it can be used to implement the method of the present invention.
  • References herein to "various embodiments", "some embodiments", "one embodiment", or "an embodiment", etc. refer to specific features, structures, or properties described in connection with the embodiment that are included in at least one embodiment. Therefore, appearances of the phrases "in various embodiments", "in some embodiments", "in one embodiment", or "in an embodiment" in various places throughout this document do not necessarily refer to the same embodiment.
  • Furthermore, specific features, structures, or properties can be combined in any suitable manner in one or more embodiments. Therefore, a specific feature, structure, or property shown or described in connection with one embodiment can be combined, in whole or in part, with the features, structures, or properties of one or more other embodiments without limitation, as long as the combination is not illogical or inoperative.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A navigation method, system, device, and medium based on an optical communication apparatus. The method includes: identifying the identification information transmitted by an optical communication apparatus according to an image containing the optical communication apparatus collected by a device, and determining the position information and posture information of the device relative to the optical communication apparatus (S401); using the identification information to obtain the preset position information of the optical communication apparatus (S402); determining the current position information and posture information of the device based on the obtained position information of the optical communication apparatus and the position information and posture information of the device relative to the optical communication apparatus (S403); obtaining the superimposed position information of one or more virtual navigation instructions to be superimposed, where the superimposed position information is determined based on the destination position information and the current position information of the device (S404); and superimposing one or more virtual navigation instructions on the real scene presented by the display medium of the device based on the current position information and posture information of the device and the superimposed position information of the one or more virtual navigation instructions (S405).

Description

Navigation method, system, device and medium based on optical communication apparatus
Technical Field
The present invention relates to the field of optical information technology and location services, and more specifically to a navigation method, system, device, and medium based on an optical communication apparatus.
Background Art
Existing navigation technologies (for example, GPS navigation) have been widely applied; a user can reach a destination by following route guidance in a planar map or panoramic map presented on a terminal device the user carries. However, GPS signals cannot provide the posture information or direction information of a device; direction usually has to be obtained by means of a gravity sensor or compass in the terminal device (for example, a mobile phone) in order to give relevant guidance, but these sensors are usually not very accurate and can easily give wrong guidance as the device moves or the holding posture changes. Although the stereoscopic real-scene model presented by a panoramic map can help the user make corrections to a certain degree, panoramic maps not only have high production costs and high update and maintenance costs, but also need to be transmitted or loaded in advance, which places high demands on the storage and computing capabilities of the terminal device and makes fast, real-time guidance difficult. In addition, GPS signals do not provide altitude information and have limited positioning accuracy, and they are blocked indoors, making it difficult to meet the accuracy requirements for navigation in scenes such as bustling, dense commercial districts or large shopping malls with several floors.
Summary of the Invention
The purpose of the embodiments of the present invention is to provide a navigation method, system, device, and medium based on an optical communication apparatus, which can accurately obtain the position information and posture information of a device, so as to provide accurate navigation prompt information for the device. Preferably, the present invention can also accurately provide real-scene route guidance by instantly superimposing virtual navigation instructions in the current real scene acquired by the device in real time.
According to the first aspect of the embodiments of the present invention, a navigation method based on an optical communication apparatus is provided, the method comprising: S1) identifying the identification information transmitted by an optical communication apparatus according to an image containing the optical communication apparatus collected by a device, and determining the position information and posture information of the device relative to the optical communication apparatus; S2) using the identification information to obtain the preset position information of the optical communication apparatus; S3) determining the current position information and posture information of the device based on the obtained position information of the optical communication apparatus and the position information and posture information of the device relative to the optical communication apparatus; S4) obtaining navigation prompt information, wherein the navigation prompt information is generated based on the destination position information and the current position information and posture information of the device.
In some embodiments of the present invention, the method may further comprise re-collecting an image of any optical communication apparatus through the device and returning to step S1) to continue execution.
In some embodiments of the present invention, the method may further comprise monitoring changes in the position and posture of the device through multiple sensors built into the device, and updating the current position information and posture information of the device based on the monitored changes in position and posture.
In some embodiments of the present invention, the method may further comprise updating the current position information and posture information of the device by comparing the real scene in the field of view of the device with a scene model established in advance for that real scene.
In some embodiments of the present invention, step S4) comprises: S41) obtaining the superimposed position information of one or more virtual navigation instructions to be superimposed, wherein the superimposed position information is determined based on the destination position information and the current position information of the device; S42) superimposing one or more virtual navigation instructions on the real scene presented by the display medium of the device, based on the current position information and posture information of the device and the superimposed position information of the one or more virtual navigation instructions.
In some embodiments of the present invention, the method may further comprise, in response to updated current position information and posture information of the device, continuing to execute step S42), or continuing to execute steps S41) and S42).
In some embodiments of the present invention, the destination position information may be obtained through the following steps: presenting a destination list on the display medium of the device; and obtaining, in response to a selection of one of the destinations in the presented destination list, the destination position information related to the selected destination.
In some embodiments of the present invention, the destination position information may be determined at least in part based on information related to the destination, the information related to the destination including one or more of the following or a combination thereof: destination name, destination type, destination function, destination status.
In some embodiments of the present invention, the destination position information may be determined based on information related to the destination type or destination function received by the device, in combination with the current position information of the device.
In some embodiments of the present invention, the destination position information may be determined based on information related to the destination type or destination function received by the device, in combination with the current position information of the device and the current status information of the destination.
In some embodiments of the present invention, the destination position information may be determined based on pre-stored information related to the destination.
In some embodiments of the present invention, in step S2) the preset posture information of the optical communication apparatus is also obtained; and step S3) comprises: determining the current position information and posture information of the device based on the obtained position information and posture information of the optical communication apparatus and the position information and posture information of the device relative to the optical communication apparatus.
According to a second aspect of the present invention, a storage medium is provided, in which a computer program is stored, the computer program, when executed, being capable of implementing the method according to the first aspect of the embodiments of the present invention.

According to a third aspect of the embodiments of the present invention, an electronic device is provided, comprising a processor and a memory, wherein a computer program is stored in the memory, the computer program, when executed by the processor, being capable of implementing the method according to the first aspect of the embodiments of the present invention.

According to a fourth aspect of the embodiments of the present invention, a navigation system based on an optical communication device is provided, comprising an optical communication device, an optical communication device server, and a navigation server, wherein: the optical communication device server is configured to receive, from a navigation device, the identification information conveyed by the optical communication device, and to provide the position information of the optical communication device to the navigation device; and the navigation server is configured to provide navigation prompt information to the navigation device based on destination position information and the current position information and posture information of the navigation device, wherein the current position information and posture information of the navigation device are determined based on the position information of the optical communication device and the position information and posture information of the navigation device relative to the optical communication device.

In some embodiments of the present invention, the optical communication device server is further configured to provide the posture information of the optical communication device to the navigation device, and the current position information and posture information of the navigation device are determined based on the position information and posture information of the optical communication device and the position information and posture information of the navigation device relative to the optical communication device.

In some embodiments of the present invention, the navigation server is further configured to determine, based on the destination position information and the current position information of the navigation device, superimposition position information of one or more virtual navigation indications to be superimposed, wherein the superimposition position information can be used by the navigation device to superimpose, based on its current position information and posture information, one or more virtual navigation indications in the real scene presented on the display medium of the navigation device.

In some embodiments of the present invention, the navigation system further comprises the navigation device, which is configured to: capture an image of the optical communication device; identify, based on the captured image, the identification information conveyed by the optical communication device and determine the position information and posture information of the navigation device relative to the optical communication device; obtain the position information of the optical communication device from the optical communication device server by means of the identification information; determine the current position information and posture information of the navigation device based on the obtained position information of the optical communication device and the position information and posture information of the navigation device relative to the optical communication device; and superimpose one or more virtual navigation indications in the real scene presented on the display medium of the navigation device, based on the current position information and posture information of the navigation device and the superimposition position information of the one or more virtual navigation indications.

In some embodiments of the present invention, the navigation device is further configured to obtain the posture information of the optical communication device from the optical communication device server by means of the identification information, and to determine the current position information and posture information of the navigation device based on the obtained position information and posture information of the optical communication device and the position information and posture information of the navigation device relative to the optical communication device.
The technical solutions provided by the embodiments of the present invention have, but are not limited to, the following beneficial effects:

Accurate identification of the position and posture of a navigation object is achieved by means of optical labels arranged in the environment, so that accurate navigation prompt information can be provided for the navigation object. In addition, some solutions of the present invention can, as the real scene presented on the display medium of the navigation object keeps changing, superimpose the corresponding navigation indications in the currently presented scene in real time to achieve fast and flexible real-scene route guidance, which is applicable not only to outdoor navigation but even more so to indoor navigation.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present invention.
Brief Description of the Drawings

The embodiments of the present invention are further described below with reference to the accompanying drawings, in which:

Fig. 1 shows an exemplary optical label;

Fig. 2 is a schematic diagram of an optical label network according to an embodiment of the present invention;

Fig. 3 shows a schematic flowchart of a method for superimposing a virtual object in a real scene based on an optical label according to an embodiment of the present invention;

Fig. 4 is a schematic flowchart of a navigation method based on an optical label according to an embodiment of the present invention;

Fig. 5 is a schematic diagram of parking-space navigation using the navigation method based on an optical label according to an embodiment of the present invention.
Detailed Description

In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below through specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely intended to explain the present invention and are not intended to limit the present invention.

For convenience of description, the technologies related to the present invention are first briefly described to help understand the embodiments of the present invention, but it should be pointed out that these technical descriptions do not necessarily constitute prior art.

Augmented reality (AR), also known as mixed reality, applies virtual objects to a real scene by means of computer technology, so that the real scene and the virtual objects are presented in the same image or space in real time, thereby enhancing the user's perception of the real world. In one augmented reality application, some data information can be superimposed at a fixed position in the field of view; for example, when learning to fly an aircraft, a pilot can view flight data superimposed on the real scene by wearing a display helmet, and such data are usually displayed at a fixed position in the field of view (for example, always in the upper-left corner). This kind of augmented reality technology lacks sufficient flexibility. In another augmented reality application, a real object in the real scene can first be recognized, and a virtual object is then superimposed on or near that real object as displayed on the screen. However, it is difficult for current augmented reality technology to superimpose a virtual object at a precise position in the real scene, especially when the superimposition position of the virtual object is far away from the recognized real object.
An optical communication device is also called an optical label, and these two terms are used interchangeably herein. An optical label can convey information through different light-emitting manners, and has the advantages of a long recognition distance and relaxed requirements on visible-light conditions; moreover, the information conveyed by an optical label can change over time, thereby providing a large information capacity and flexible configuration capability (for example, the optical communication devices described in Chinese patent publications CN105740936A, CN109661666A, CN109936694A, etc.). An optical label may generally include a controller and at least one light source, and the controller can drive the light source through different driving modes to convey different information to the outside. Fig. 1 shows an exemplary optical label 100, which includes three light sources (a first light source 101, a second light source 102, and a third light source 103, respectively). The optical label 100 further includes a controller (not shown in Fig. 1), which is configured to select a corresponding driving mode for each light source according to the information to be conveyed. For example, in different driving modes, the controller can use different driving signals to control the light-emitting manner of a light source, so that when the optical label 100 is photographed with a device having an imaging function, the imaging of the light sources therein can present different appearances (for example, different colors, patterns, brightness, and so on). By analyzing the imaging of the light sources in the optical label 100, the driving mode of each light source at this moment can be resolved, and the information conveyed by the optical label 100 at this moment can thus be resolved. For example, the controller of the optical label can control the properties of the light emitted by each light source so as to convey information. For example, turning each light source on or off can represent a "0" or "1" of binary digital information, so that the plurality of light sources in the optical label can be used to represent a sequence of binary digital information.
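By way of illustration only, the binary on/off scheme just described can be sketched in a few lines of code. The following Python snippet is a minimal, hypothetical sketch — the brightness threshold and the three-source frame layout are assumptions introduced here for illustration, not part of this disclosure:

```python
# Minimal sketch: decoding on/off light-source states into bits.
# The brightness threshold (128) and the 3-source frame layout are
# illustrative assumptions, not part of this disclosure.

def decode_frame(brightness_values, threshold=128):
    """Map per-light-source brightness samples to 0/1 bits."""
    return [1 if b >= threshold else 0 for b in brightness_values]

def decode_sequence(frames, threshold=128):
    """Concatenate the bits of successive frames into one bit string."""
    bits = []
    for frame in frames:
        bits.extend(decode_frame(frame, threshold))
    return "".join(str(b) for b in bits)

if __name__ == "__main__":
    # Three light sources sampled over two frames: "101", then "110".
    frames = [(200, 40, 190), (210, 180, 30)]
    print(decode_sequence(frames))  # -> "101110"
```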
In order to provide corresponding services to users and merchants based on an optical label, each optical label may be assigned identification information (an ID), which is used by the manufacturer, manager, user, etc. of the optical label to uniquely recognize or identify the optical label. Generally, the identification information can be published by the optical label, and a user can capture an image of the optical label using, for example, an image capture device or imaging means built into a mobile phone, to obtain the information (for example, the identification information) conveyed by the optical label, so that the corresponding services can be accessed based on the information, for example, visiting a web page associated with the identification information of the optical label, obtaining other information associated with the identification information (for example, the position information of the optical label corresponding to the identification information), and so on. The device with an image capture function mentioned herein may be, for example, a device carried or controlled by a user (for example, a mobile phone with a camera, a tablet computer, smart glasses, AR glasses, a smart helmet, a smart watch, etc.), or a machine capable of moving autonomously (for example, a drone, a driverless car, a robot, etc.). The device can, for example, capture an image of the optical label through a camera thereon to obtain an image containing the optical label, and analyze the imaging of the optical label (or of each light source in the optical label) in the image through a built-in application to identify the information conveyed by the optical label.
An optical label may be installed at a fixed or variable position, and the identification information (ID) of the optical label as well as any other information (for example, position information) may be stored in a server. In reality, a large number of optical labels can be constructed into an optical label network. Fig. 2 shows an exemplary optical label network, which includes a plurality of optical labels and at least one server, wherein information related to each optical label can be stored on the server. For example, the identification information (ID) or any other information of each optical label can be stored on the server, such as service information related to the optical label, and description information or attributes related to the optical label, such as the position information, model information, physical size information, physical shape information, and posture or orientation information of the optical label. Optical labels may also have uniform or default physical size information, physical shape information, and the like. A device can use the identified identification information of an optical label to query the server for other information related to the optical label. The position information of an optical label may refer to the actual position of the optical label in the physical world, which can be indicated by geographic coordinate information. A server may be a software program running on a computing apparatus, a computing apparatus, or a cluster composed of multiple computing apparatuses. An optical label may be offline, that is, the optical label does not need to communicate with the server. Of course, it can be understood that an online optical label capable of communicating with the server is also feasible.
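The server-side mapping from an optical label ID to its stored attributes can be pictured as a simple keyed lookup. The sketch below is purely illustrative: the record fields (position, posture, physical size, model) merely mirror the attributes listed above, and the registry structure is an assumption rather than an actual interface of any server described herein:

```python
# Illustrative sketch of an optical-label registry keyed by ID.
# Field names are assumptions mirroring the attributes listed above,
# not an actual interface of any server described herein.
from dataclasses import dataclass

@dataclass
class OpticalLabelRecord:
    label_id: str
    position: tuple                     # geographic/venue coordinates
    posture: tuple = (0.0, 0.0, 0.0)    # orientation, e.g. Euler angles
    physical_size_m: float = 0.30       # uniform/default size is allowed
    model: str = "default"

REGISTRY = {
    "tag-001": OpticalLabelRecord("tag-001", (116.32, 39.98, 12.0)),
}

def query_label(label_id):
    """Return the stored record for a recognized label ID, if any."""
    return REGISTRY.get(label_id)

print(query_label("tag-001").position)  # -> (116.32, 39.98, 12.0)
```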
In one embodiment, an optical label can be used as an anchor to determine the position and posture of a device, thereby achieving the superimposition of a virtual object into a real scene. The virtual object may be, for example, an icon, a picture, text, an emoji, a virtual three-dimensional object, a three-dimensional scene model, an animation, a video, a clickable web page link, and so on. Fig. 3 shows a method for superimposing a virtual object in a real scene based on an optical label according to an embodiment, and the method includes the following steps:

Step 301: the device obtains the identification information of the optical label.

For example, the device can identify the identification information conveyed by the optical label by capturing and analyzing an image of the optical label. The identification information may be associated with one or more virtual objects.

Step 302: the device performs a query using the identification information of the optical label to obtain the virtual object to be superimposed and the superimposition information of the virtual object, the superimposition information including superimposition position information.

After identifying the identification information of the optical label, the device can use the identification information to send a query request to the server. Information related to the optical label can be pre-stored at the server, for example, the identification information of the optical label, the position information of the optical label, the description information of one or more virtual objects associated with the optical label (or with the identification information of the optical label), the superimposition position information of each virtual object, and so on. The description information of a virtual object is information used to describe the virtual object, and may include, for example, the pictures, text, and icons contained in the virtual object, as well as the identification information, shape information, color information, size information, etc. of the virtual object. Based on the description information, the device can present the corresponding virtual object. The superimposition position information of a virtual object may be position information relative to the optical label (for example, the distance information of the superimposition position of the virtual object relative to the optical label and its direction information relative to the optical label), which is used to indicate the superimposition position of the virtual object. By sending a query request to the server, the device can obtain the description information of the virtual object to be superimposed in the real scene currently presented by the device as well as the superimposition information of the virtual object. In one embodiment, the superimposition information of the virtual object may further include superimposition posture information of the virtual object, which may be posture information of the virtual object relative to the optical label, or its posture information in a coordinate system of the real world.

It should be noted that, in order to determine the superimposition posture of a virtual object, it is not necessary to use the superimposition posture information of the virtual object; instead, the superimposition position information of the virtual object can also be used to determine the superimposition posture of the virtual object. For example, for a virtual object, the superimposition position information of several points on it can be determined, and the superimposition position information of these different points can be used to determine the posture of the virtual object relative to the optical label or its posture in the real-world coordinate system.

Step 303: the device determines its position information relative to the optical label.
The device can determine its position information relative to the optical label in a variety of ways, and the relative position information may include distance information and direction information of the device relative to the optical label. In general, the position information of the device relative to the optical label is actually the position information of the image capture component of the device relative to the optical label. In one embodiment, the device can determine its position information relative to the optical label by capturing an image that includes the optical label and analyzing the image. For example, the device can determine the relative distance between the optical label and the identifying device through the imaging size of the optical label in the image and, optionally, other information (for example, the actual physical size information of the optical label and the focal length of the camera of the device) — the larger the imaging, the closer the distance; the smaller the imaging, the farther the distance. The device can use the identification information of the optical label to obtain the actual physical size information of the optical label from the server, or the optical label may have a uniform physical size that is stored on the device. The device can determine its direction information relative to the optical label through the perspective distortion of the imaging of the optical label in the image that includes the optical label and, optionally, other information (for example, the imaging position of the optical label). The device can use the identification information of the optical label to obtain the physical shape information of the optical label from the server, or the optical label may have a uniform physical shape that is stored on the device. In yet another embodiment, the device can also directly obtain the relative distance between the optical label and the identifying device through a depth camera or a binocular camera installed on it. The device can also use any other existing positioning method to determine its position information relative to the optical label.
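Under a simple pinhole-camera assumption, the "larger imaging means closer" relation described above reduces to distance ≈ physical size × focal length / imaging size. A minimal sketch with illustrative numbers:

```python
# Minimal pinhole-model sketch of the distance estimate described above.
# Values are illustrative; a real device would read the focal length from
# the camera intrinsics and the imaging size from image analysis.

def estimate_distance(physical_size_m, focal_length_px, imaging_size_px):
    """Distance to the optical label from its apparent size in the image."""
    return physical_size_m * focal_length_px / imaging_size_px

# A 0.30 m label imaged 60 px tall with a 1500 px focal length:
print(estimate_distance(0.30, 1500.0, 60.0))  # -> 7.5 (meters)
```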
In still other embodiments, the device can not only determine its position information relative to the optical label, but also obtain the current position information of the device based on the determined position information of the device relative to the optical label and the position information of the optical label itself, so as to facilitate accurate positioning or navigation for the user. Since the optical label itself has accurate position information, the device position obtained based on the optical label is more accurate than traditional GPS positioning. It should be noted that the object being positioned or navigated need not be a user, but may be a machine capable of moving autonomously, for example, a drone, a driverless car, a robot, etc. An image capture device can be installed on such a machine capable of moving autonomously, and the machine can interact with the optical label in a manner similar to a mobile phone, so as to obtain its own position information.

Step 304: the device determines its posture information relative to the optical label.

In general, the posture information of the device is actually the posture information of the image capture component of the device. The device can determine its posture information relative to the optical label according to the imaging of the optical label; when the imaging position or imaging region of the optical label is located at the center of the imaging field of view of the device, it can be considered that the device is currently facing the optical label. The imaging direction of the optical label can further be taken into account when determining the posture of the device. As the posture of the device changes, the imaging position and/or imaging direction of the optical label on the device will change accordingly; therefore, the posture information of the device relative to the optical label can be obtained according to the imaging of the optical label on the device.
In still other embodiments, the position and posture information of the device relative to the optical label (which may also be collectively referred to as pose information) can also be determined in the following manner. Specifically, a coordinate system can be established according to the optical label, and this coordinate system can be called the optical label coordinate system. Some points on the optical label can be determined as spatial points in the optical label coordinate system, and the coordinates of these spatial points in the optical label coordinate system can be determined according to the physical size information and/or physical shape information of the optical label. The points on the optical label may be, for example, the corners of the housing of the optical label, the ends of the light sources in the optical label, some marker points on the optical label, and so on. According to the object structure features or geometric structure features of the optical label, the image points corresponding to these spatial points can be found in the image captured by the camera of the device, and the position of each image point in the image can be determined. According to the coordinates of each spatial point in the optical label coordinate system and the position of each corresponding image point in the image, combined with the intrinsic parameter information of the camera of the device, the pose information (R, t) of the camera of the device in the optical label coordinate system when the image was captured can be computed, where R is a rotation matrix that can be used to represent the posture information of the camera of the device in the optical label coordinate system, and t is a displacement vector that can be used to represent the position information of the camera of the device in the optical label coordinate system. Methods for computing R and t are known in the prior art; for example, the 3D-2D PnP (Perspective-n-Point) method can be used to compute R and t, which, in order not to obscure the present invention, is not described in detail here.
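For concreteness, the PnP computation of (R, t) can be carried out with an off-the-shelf solver. The sketch below uses OpenCV's cv2.solvePnP as one possible implementation (not prescribed by this disclosure); the label corner coordinates, image points, and camera intrinsics are placeholders:

```python
# Sketch: recovering the camera pose in the optical-label coordinate
# system from 3D-2D correspondences via PnP. All numbers are placeholders.
import numpy as np
import cv2

# Four corners of a 0.30 m x 0.30 m label in the label coordinate system.
object_points = np.array([
    [-0.15,  0.15, 0.0],
    [ 0.15,  0.15, 0.0],
    [ 0.15, -0.15, 0.0],
    [-0.15, -0.15, 0.0],
], dtype=np.float64)

# The corresponding image points found in the captured image (pixels).
image_points = np.array([
    [980.0, 500.0], [1060.0, 505.0], [1055.0, 585.0], [975.0, 580.0],
], dtype=np.float64)

# Assumed camera intrinsics (focal length, principal point); no distortion.
camera_matrix = np.array([[1500.0, 0.0, 960.0],
                          [0.0, 1500.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)
# rvec/tvec map label-frame points into the camera frame: x_cam = R @ X + t.
# The camera's own pose in the label coordinate system then follows as:
R_cam_in_label = R.T
t_cam_in_label = -R.T @ tvec
print(ok, R_cam_in_label, t_cam_in_label)
```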
In fact, the rotation matrix R and the displacement vector t can also be used to describe how to transform the coordinates of a point between the optical label coordinate system and the device camera coordinate system. For example, through the rotation matrix R and the displacement vector t, the coordinates of a point in the optical label coordinate system can be transformed into coordinates in the device camera coordinate system, and can further be transformed into the position of an image point in the image. In this way, for a virtual object with multiple feature points (multiple points on the contour of the virtual object), the coordinates of these feature points in the optical label coordinate system (that is, their position information relative to the optical label) can be included in the superimposition information of the virtual object; based on the coordinates of the multiple feature points in the optical label coordinate system, the coordinates of these feature points in the device camera coordinate system can be determined, and the respective imaging positions of these feature points on the device can thus be determined. Once the respective imaging positions of the multiple feature points of the virtual object are determined, the position, size, posture, etc. of the imaging of the virtual object as a whole can be determined accordingly.
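Continuing in the same illustrative vein, transforming a virtual object's feature points from the optical label coordinate system into the camera coordinate system and then into pixel positions amounts to applying (R, t) followed by the intrinsic projection. The R, t, and intrinsics below are placeholder values under the x_cam = R·X + t convention:

```python
# Sketch: projecting feature points given in the label coordinate system
# onto the device image. R, t, and the intrinsics are placeholders.
import numpy as np

R = np.eye(3)                          # label frame aligned with camera frame
t = np.array([0.0, 0.0, 5.0])          # label origin 5 m in front of camera
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

def project(points_label):
    """Label-frame points (N x 3) -> pixel coordinates (N x 2)."""
    cam = (R @ np.asarray(points_label).T).T + t   # into the camera frame
    uvw = (K @ cam.T).T                            # intrinsic projection
    return uvw[:, :2] / uvw[:, 2:3]                # perspective divide

# Two feature points of a virtual object, 1 m left/right of the label:
print(project([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]))
# -> [[660. 540.] [1260. 540.]]
```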
Continuing to refer to Fig. 3, in step 305: based on the superimposition information of the virtual object and the position information and posture information of the device relative to the optical label, the virtual object is presented on the display medium of the device, thereby superimposing the virtual object in the real scene.

The superimposition position information of the virtual object reflects the position information of the virtual object to be superimposed relative to the optical label. After the superimposition position information of the virtual object and the position information of the device relative to the optical label have been obtained through the above steps, a three-dimensional spatial coordinate system with the optical label as the origin can in fact be created, in which both the device and the virtual object to be superimposed have accurate spatial coordinates. In one embodiment, the position information of the virtual object to be superimposed relative to the device can also be determined based on the superimposition position information of the virtual object and the position information of the device relative to the optical label. On the above basis, the virtual object can be superimposed in the real scene based on the posture information of the device. For example, the imaging size of the virtual object to be superimposed can be determined based on the relative distance between the device and the virtual object to be superimposed, and the imaging position of the virtual object to be superimposed on the device can be determined based on the relative direction between the device and the virtual object to be superimposed together with the posture information of the device. Based on the imaging position and imaging size, accurate superimposition of the virtual object in the real scene can be achieved. In one embodiment, the virtual object to be superimposed may have a default imaging size; in this case, only the imaging position of the virtual object to be superimposed on the device may be determined, without determining its imaging size. In the case where the superimposition information includes the superimposition posture information of the virtual object, the posture of the superimposed virtual object can further be determined. In one embodiment, the position, size, posture, etc. of the imaging of the virtual object to be superimposed on the device can be determined according to the pose information (R, t) of the device (more precisely, the camera of the device) relative to the optical label computed above. In one case, if it is determined that the virtual object to be superimposed is currently not in the field of view of the device (for example, the imaging position of the virtual object is outside the display screen), the virtual object is not displayed.

The device can present the real scene in various feasible ways. For example, the device can collect information about the real world through a camera and use that information to reproduce the real scene on a display screen, on which the image of the virtual object can be superimposed. The device (for example, smart glasses) may also reproduce the real scene not through a display screen but simply through a prism, a lens, a mirror, a transparent object (for example, glass), and the like, onto which the image of the virtual object can be optically superimposed. The above display screens, prisms, lenses, mirrors, transparent objects, etc. may collectively be referred to as the display medium of the device, on which the virtual object can be presented. For example, in an optical see-through augmented reality device, the user observes the real scene through a specific lens, and the lens can reflect the imaging of the virtual object into the user's eyes. In one embodiment, the user of the device can directly observe the real scene or a part thereof, which does not need to be reproduced through any medium before being observed by the user's eyes, and the virtual object can be optically superimposed into the real scene. Therefore, the real scene or a part thereof does not necessarily need to be presented or reproduced by the device before being observed by the user's eyes.

After the virtual object has been superimposed, the device may translate and/or rotate; in this case, methods known in the art (for example, inertial navigation, visual odometry, SLAM, VSLAM, SFM, etc.), using for example the built-in acceleration sensor, gyroscope, camera, etc. of the device, can be used to measure or track its position change and/or posture change, so as to adjust the display of the virtual object, for example, changing its imaging position, imaging size, or viewing angle, having the virtual object enter the field of view of the device, having the virtual object leave the field of view of the device, and so on. This is known in the art and is not described again. In some embodiments, the device can also re-determine (for example, when the optical label re-enters the field of view of the device after leaving it, or at certain intervals while the optical label remains in the field of view of the device) its position information relative to the optical label and its posture information relative to the optical label, and, based on the superimposition position information of the virtual object, the position information of the device relative to the optical label, and the posture information of the device relative to the optical label, re-determine the imaging position and/or imaging size of the virtual object, thereby correcting the superimposition of the virtual object in the real scene.

In the above, the superimposition of the virtual object in the real scene presented on the display medium of the device is achieved based on the position information of the virtual object relative to the optical label and the position information and posture information of the device relative to the optical label; it can be understood, however, that this is not essential, and position information or posture information in other coordinate systems can also be used to achieve the superimposition of the virtual object.
In yet another embodiment of the present invention, a navigation method based on an optical label is provided, a schematic flowchart of which is shown in Fig. 4. The method can be executed by a device and mainly includes: step S401, identifying, according to an image captured by the device that contains an optical communication device, the identification information conveyed by the optical communication device, and determining the position information and posture information of the device relative to the optical communication device; step S402, obtaining preset position information of the optical communication device by means of the identification information; step S403, determining the current position information and posture information of the device based on the obtained position information of the optical communication device and the position information and posture information of the device relative to the optical communication device; step S404, obtaining superimposition position information of one or more virtual navigation indications to be superimposed, wherein the superimposition position information is determined based on the destination position information and the current position information of the device; step S405, superimposing one or more virtual navigation indications in the real scene presented on the display medium of the device based on the current position information and posture information of the device and the superimposition position information of the one or more virtual navigation indications. Each step of the method is described in detail below.

In step S401, a device with an image capture component, or a person carrying the device, can, while traveling, use the device to capture images of one or more optical labels within the field of view. As described above with reference to Figs. 1-3, after an image containing an optical label has been captured via the device, the identification information of the optical label therein is identified from the captured image; and the position information and posture information of the device relative to the optical label can also be determined based on the captured image.

After the identification information of the optical label is obtained, in step S402, as introduced above, the position information of the optical label can be obtained from the server based on the identification information of the optical label. In one embodiment, the posture information of the optical label can also be obtained. It should be understood that the identification information of an optical label serves to indicate a specific optical label; when two or more optical labels are arranged in the environment to be navigated, the identification information of the optical label needs to be identified; but if only one optical label is arranged in a specific venue or specific environment, the identification information of the optical label need not be identified, and the device can directly access a preset server to obtain the information related to this unique optical label.
Next, in step S403, the current position information and posture information of the device are determined based on the obtained position information of the optical label and the position information and posture information of the device relative to the optical label. Here, the current position information and posture information of the device may be the position and posture in the coordinate system used for navigation. For example, if the coordinate system used for navigation is the optical label coordinate system mentioned above, then the current position information and posture information of the device are simply the position information and posture information of the device relative to the optical label. If the coordinate system used for navigation is the world coordinate system or the coordinate system of a specific venue, then, as mentioned above, the preset actual position information of the optical label can be combined with the position information and posture information of the device relative to the optical label to compute the position information and posture information of the device in the coordinate system used for navigation, which serve as the current position information and posture information of the device. In one embodiment, the current position information and posture information of the device can be determined based on the obtained position information and posture information of the optical label and the position information and posture information of the device relative to the optical label.
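As a purely illustrative sketch of the composition just described: if the label's preset pose in the venue coordinate system is (R_label, t_label) and the device's pose relative to the label is (R_rel, t_rel), the device's venue-frame pose follows by composing the two transforms. The frame convention below is an assumption chosen for illustration:

```python
# Sketch: composing the label's preset venue-frame pose with the device's
# label-frame pose to obtain the device's current venue-frame pose.
# Convention (an assumption): (R, t) maps local coordinates into the
# parent frame, p_parent = R @ p_local + t.
import numpy as np

def compose(R_a, t_a, R_b, t_b):
    """Pose of frame b in a's parent frame, given b's pose in frame a."""
    return R_a @ R_b, R_a @ t_b + t_a

# Label pose in the venue frame (placeholder values):
R_label = np.eye(3)
t_label = np.array([50.0, 20.0, 3.0])
# Device pose relative to the label (e.g. from a PnP computation):
R_rel = np.eye(3)
t_rel = np.array([0.0, 0.0, 5.0])

R_dev, t_dev = compose(R_label, t_label, R_rel, t_rel)
print(t_dev)  # device's current position in the venue frame
```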
In step S404, superimposition position information of one or more virtual navigation indications to be superimposed is obtained, wherein the superimposition position information is determined based on the destination position information and the current position information of the device.

The device can obtain the destination position information in a variety of feasible ways. For example, the destination position information may be set directly according to the destination address input or selected by the user. In one embodiment, a destination list can be presented on the display medium of the device for the user of the device to select from, or an input box can be presented for the user to input information related to the destination. In yet another embodiment, virtual objects indicating possible nearby destinations in a more friendly manner can be presented on the display medium of the device for the user to select from, for example, virtual signposts, or images or icons reflecting the destination type (for example, gas station, restaurant, bookstore, etc.), so that the user can select the desired destination by tapping the relevant virtual object.
In still other embodiments, the destination position information may also be determined at least in part based on information related to the destination. For example, the user may not know the specific destination address, but instead inputs or selects information related to the destination, such as an input or selected "flight number", "Gate 1", or the name of a restaurant. After obtaining the information related to the destination, the device can query or retrieve a pre-established database for navigation, a scene map, a scene information library, or the like to determine the corresponding destination position information. The database for navigation, scene map, or scene information library may be stored on a server that provides navigation services (a navigation server), or may be stored on the device. When the destination position information is determined by the server, the server can send the determined destination position information to the device. The information related to the destination may include information related to the destination name, destination type, destination function, and so on. In yet another embodiment, information related to the destination type or function (for example, "restroom" input or selected by the user) can be combined with the current position information of the device to determine the destination position information, for example, the position information of the restroom nearest to the device. In yet another embodiment, information related to the destination type or function provided by the user (for example, "parking space" input or selected by the user) can be used in combination with the current position information of the device and the current state information of the destination (for example, the vacancy status of each parking space) to determine the destination position information, for example, the position information of the vacant parking space closest to the device.
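The "nearest restroom" and "closest vacant parking space" examples above amount to filtering candidate destinations by type and current state and minimizing the distance to the device. A hedged sketch with made-up data:

```python
# Sketch: choosing a destination from type/function plus the device's
# current position and the destinations' current state. Data are made up.
import math

CANDIDATES = [
    {"name": "P-101", "type": "parking space", "pos": (12.0, 4.0), "vacant": False},
    {"name": "P-102", "type": "parking space", "pos": (30.0, 8.0), "vacant": True},
    {"name": "P-205", "type": "parking space", "pos": (55.0, 9.0), "vacant": True},
]

def pick_destination(dest_type, device_pos, candidates=CANDIDATES):
    """Nearest candidate of the requested type whose state permits it."""
    usable = [c for c in candidates
              if c["type"] == dest_type and c.get("vacant", True)]
    return min(usable,
               key=lambda c: math.dist(c["pos"], device_pos),
               default=None)

print(pick_destination("parking space", (10.0, 5.0))["name"])  # -> "P-102"
```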
In yet another embodiment, information related to a preset navigation purpose can also be used to determine the destination position information; for example, the destination position information can be obtained by tapping "find my car", "go to work", "go home", "find nearby historical footprints", and so on. For example, when the user taps "find my car", the destination position information can be determined in combination with information pre-stored in the device or the server that is related to the user's previous parking position, such as the position information determined when the user scanned an optical label while parking, the parking space number recorded by the user when parking, or a photo containing the parking space information taken by the user when parking. When the navigation purpose selected by the user is "find nearby historical footprints", the current position information of the device can be combined with historical data stored in the device or the server about destinations near the current position of the device that the user has visited before, so as to determine the destination position information; for example, when the user arrives at a certain commercial area, the restaurants, shops, coffee shops, etc. that the user most recently visited in that area can be given.
After the destination position information is determined, a travel route can be determined for the device based on the destination position information and the current position information of the device. In one embodiment, if an optical label network is arranged in the environment where the device is located, then, after the navigation starting point (the current position of the device) and the destination point have been obtained, a planned travel route with one or more optical labels along the way can be provided for the device based on the deployment of the optical labels in the optical label network. For example, after several feasible routes have been determined using the starting point and the destination point, one or more recommended planned travel routes can be provided for the device based on the deployment of optical labels along each route. Other conditions being equal, a travel route with more optical labels deployed along the way is preferably recommended, so that the device can be continuously navigated by the optical labels along the way during travel. In yet another embodiment, if scene map data, a scene model, a scene information library, or the like has been established for the environment where the device is located, the travel route can be provided for the device in combination with information related to the scene; for example, with reference to the road information, building information, elevator information, stair information, access control information, etc. in the scene, it can be determined which areas or routes are passable. The specific manner of determining the travel route in this embodiment is similar to existing navigation methods and is not described again here. In yet another embodiment, a straight-line travel route can be established between the navigation starting point and the destination point; if the travel route passes through an obstacle, the user can avoid the obstacle manually, and when the device has bypassed the obstacle, the travel route between the current position of the device and the destination position can be re-determined based on the new position of the device (described in detail below).
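One way to picture the preference for routes with more optical labels is as a tie-breaking score over otherwise comparable candidate routes. The scoring rule and length tolerance below are illustrative assumptions, not the claimed planning method:

```python
# Sketch: among feasible routes of comparable length, recommend the one
# with more optical labels deployed along the way. The scoring rule and
# the 20 m length tolerance are illustrative assumptions.

routes = [
    {"name": "via atrium",   "length_m": 210.0, "labels_on_route": 6},
    {"name": "via corridor", "length_m": 205.0, "labels_on_route": 2},
]

def recommend(routes, length_tolerance_m=20.0):
    shortest = min(r["length_m"] for r in routes)
    comparable = [r for r in routes
                  if r["length_m"] - shortest <= length_tolerance_m]
    # Other conditions being equal, prefer denser optical-label coverage.
    return max(comparable, key=lambda r: r["labels_on_route"])

print(recommend(routes)["name"])  # -> "via atrium"
```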
After the travel route between the destination position and the current position of the device has been determined, the superimposition position information of one or more virtual navigation indications to be superimposed can be determined along the travel route. The virtual navigation indication may be the virtual object described above with reference to Fig. 3, and may take any form that facilitates guiding the user's travel and identifying the destination, for example, an icon in the form of an arrow, a direction indication or virtual signpost displayed in the language corresponding to the current language option of the device, a virtual navigation character or animal, information about the buildings on both sides of the travel route, and so on. As introduced above, the superimposition position information of a virtual navigation indication may be the position information of the virtual navigation indication relative to the optical label, but it can be understood that it may also be position information in the world coordinate system or the coordinate system of a specific venue. A position point for superimposing a virtual navigation indication can be set at intervals of a certain distance along the travel route.
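Placing a virtual navigation indication every fixed distance along the route can be sketched as interpolation along the route polyline; the 5 m spacing and the planar route below are arbitrary illustrative simplifications:

```python
# Sketch: superimposition positions for arrow indications placed at a
# fixed spacing along a polyline travel route. The 5 m spacing and the
# 2D route are illustrative simplifications.
import math

def waypoints(route, spacing=5.0):
    """Points every `spacing` meters along the polyline `route`."""
    points, carry = [], 0.0
    for (x0, y0), (x1, y1) in zip(route, route[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = spacing - carry              # distance into this segment
        while d <= seg:
            f = d / seg
            points.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
            d += spacing
        carry = (carry + seg) % spacing  # distance since last waypoint
    return points

route = [(0.0, 0.0), (20.0, 0.0), (20.0, 15.0)]
print(waypoints(route))  # arrow positions every 5 m along the route
```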
In one embodiment, the superimposition position information of the virtual navigation indications can be determined by the device itself, whereby the superimposition position information is obtained. In another embodiment, the superimposition position information of the virtual navigation indications can be determined by the navigation server and sent to the device.

In step S405, based on the superimposition position information of the one or more virtual navigation indications and the current position information and posture information of the device, one or more virtual navigation indications can be superimposed in the real scene presented on the display medium of the device. As shown in Fig. 5, when a user performs parking-space navigation using the navigation method in the above embodiment, virtual direction arrows can be superimposed in the real-time real scene presented by the device held by the user to give the user direction and route guidance for traveling ahead, so that the user can reach the destination conveniently and quickly.
In some embodiments, the method may further include continuously tracking and obtaining the current position information and posture information of the device during travel, and superimposing each virtual navigation indication anew in the real scene presented on the display medium of the device based on the newly obtained current position information and posture information of the device and the superimposition position information of the one or more virtual navigation indications. The current position information and posture information of the device re-obtained during travel can be obtained on the basis of the position information and posture information determined in step S403. For example, during travel, the device is generally likely to translate or rotate; changes in the posture of the device itself can be monitored through various sensors built into the device, and the device posture information determined in step S403 can be adjusted based on these posture changes to obtain the current posture information of the device. Likewise, changes in the position of the device can be monitored through a position sensor built into the device, and the device position information determined in step S403 can be adjusted based on these position changes to obtain the current position information of the device. In yet another embodiment, a scene model of the navigation environment can also be established in advance, and then, during travel, the current position information and posture information of the device can be calibrated by comparing the real scene in the field of view of the device with the scene model. After the current position information and posture information of the device have been re-determined, the flow can return to step S404 or step S405 to continue execution.
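As a toy illustration of adjusting the pose determined in step S403 from monitored sensor changes (a real implementation would use proper IMU integration, visual odometry, or VSLAM, as noted above), consider a planar dead-reckoning update:

```python
# Toy sketch: adjusting the step-S403 pose from monitored changes. The
# planar model, speed, and yaw rate are illustrative simplifications.
import math

def update_pose(x, y, heading_rad, speed_mps, yaw_rate_rps, dt):
    """Advance a planar pose by one sensor interval."""
    heading = heading_rad + yaw_rate_rps * dt     # gyro-style yaw delta
    x += speed_mps * dt * math.cos(heading)
    y += speed_mps * dt * math.sin(heading)
    return x, y, heading

pose = (50.0, 20.0, 0.0)  # position and heading from step S403
pose = update_pose(*pose, speed_mps=1.2, yaw_rate_rps=0.1, dt=0.5)
print(pose)
```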
In a preferred embodiment, during travel, the current position information and posture information of the device can also be calibrated or re-determined by scanning an optical label, and the flow then returns to step S404 or step S405 to continue execution. The scanned optical label may be the same optical label as the one scanned last time in step S401, or may be another optical label. The optical label scanned by the device during travel is not necessarily an optical label along the originally planned travel route; for example, the user may have deviated from the planned travel route during travel. Moreover, the device does not necessarily scan all the optical labels along the planned travel route during travel, but can scan selectively based on actual needs, for example, scanning a nearby optical label when arriving at an intersection.

Compared with common GPS navigation, the navigation method in the above embodiments of the present invention can provide higher accuracy, especially when GPS signals are lacking or poor, for example, during navigation in a busy commercial district or in a shopping mall. In such a commercial district, the user can achieve accurate navigation while walking by means of the optical labels installed on shop fronts or buildings, whereas GPS navigation usually cannot meet the accuracy required in this case. Moreover, compared with existing GPS-based panoramic map navigation, the navigation method provided by the embodiments of the present invention achieves real-scene navigation in the true sense: the corresponding virtual navigation indications can be superimposed in a timely manner in the real scene obtained by the device in real time to give fast and flexible real-scene route guidance, without the need to produce, transmit, and load a panoramic map model in advance, which reduces the requirements on network transmission and on the storage and computing capabilities of the device. In contrast, a panoramic map is not only costly to produce, update, and maintain, but also places high demands on network traffic and on the storage and computing capabilities of the terminal device, making fast real-time guidance difficult.

An embodiment of the present invention relates to a navigation system based on an optical label, which may include an optical label, an optical label server, and a navigation server. The optical label server is configured to receive, from a navigation device, the identification information conveyed by the optical label, and to provide the position information of the optical label to the navigation device. The navigation server is configured to determine, based on the destination position information and the current position information of the navigation device, the superimposition position information of one or more virtual navigation indications to be superimposed. Those skilled in the art can understand that the optical label server and the navigation server may be two physically separate servers, but may also be integrated together, that is, serve as different functional modules of the same physical server. The above navigation system may further include the navigation device, which may be configured to execute the method shown in Fig. 4.

Some embodiments of the present invention have been described above in connection with virtual navigation indications, but it should be noted that superimposing virtual navigation indications in the real scene is not essential. In some embodiments of the present invention, after the current position information and posture information of the device have been obtained (refer to step S403 described above), navigation prompt information can also be provided for the device in various other feasible ways based on the destination position information and the current position information and posture information of the device; for example, a direction indication or route indication can be provided on a navigation map displayed by the device, navigation prompt information can be provided to the device user by voice, and so on. Since the method of the present invention can obtain accurate position information of the device and can additionally obtain the posture information of the device, it can provide more accurate navigation for the device than conventional navigation approaches in the prior art (for example, GPS navigation).

The device mentioned herein may be a device carried or controlled by a user (for example, a mobile phone, a tablet computer, smart glasses, AR glasses, a smart helmet, a smart watch, etc.), but it can be understood that the device may also be a machine capable of moving autonomously, for example, a drone, a driverless car, a robot, etc. An image capture component (for example, a camera) and a display medium (for example, a display screen) may be installed on the device.

In yet another embodiment of the present invention, the present invention can be implemented in the form of a computer program. The computer program can be stored in various storage media (for example, a hard disk, an optical disc, a flash memory, etc.), and when the computer program is executed by a processor, it can be used to implement the method of the present invention.

In yet another embodiment of the present invention, the present invention can be implemented in the form of an electronic device. The electronic device includes a processor and a memory, a computer program being stored in the memory; when the computer program is executed by the processor, it can be used to implement the method of the present invention.

References herein to "various embodiments", "some embodiments", "one embodiment", "an embodiment", and the like mean that a particular feature, structure, or property described in connection with the embodiment is included in at least one embodiment. Therefore, the appearance of the phrases "in various embodiments", "in some embodiments", "in one embodiment", "in an embodiment", and the like throughout this document does not necessarily refer to the same embodiment. Furthermore, particular features, structures, or properties can be combined in any suitable manner in one or more embodiments. Therefore, a particular feature, structure, or property shown or described in connection with one embodiment can be combined, in whole or in part, with the features, structures, or properties of one or more other embodiments without limitation, as long as the combination is not illogical or inoperative. Expressions herein similar to "according to A", "based on A", "through A", or "using A" are meant to be non-exclusive; that is, "according to A" may cover "according to A only" as well as "according to A and B", unless it is specifically stated, or clearly known from the context, that the meaning is "according to A only". In this application, for the sake of clarity, some illustrative operation steps are described in a certain order, but those skilled in the art can understand that each of these operation steps is not indispensable, and some of the steps may be omitted or replaced by other steps. These operation steps do not have to be executed sequentially in the manner shown; on the contrary, some of these operation steps can be executed in a different order according to actual needs, or executed in parallel, as long as the new manner of execution is not illogical or inoperative.

Having thus described several aspects of at least one embodiment of the present invention, it can be understood that various changes, modifications, and improvements will readily occur to those skilled in the art. Such changes, modifications, and improvements are intended to be within the spirit and scope of the present invention. Although the present invention has been described through preferred embodiments, the present invention is not limited to the embodiments described herein, and also covers various changes and variations made without departing from the scope of the present invention.

Claims (19)

  1. A navigation method based on an optical communication device, the method comprising:
    S1) identifying, according to an image captured by a device that contains an optical communication device, the identification information conveyed by the optical communication device, and determining position information and posture information of the device relative to the optical communication device;
    S2) obtaining preset position information of the optical communication device by means of the identification information;
    S3) determining current position information and posture information of the device based on the obtained position information of the optical communication device and the position information and posture information of the device relative to the optical communication device;
    S4) obtaining navigation prompt information, wherein the navigation prompt information is generated based on destination position information and the current position information and posture information of the device.
  2. The navigation method according to claim 1, further comprising re-capturing, by the device, an image of any optical communication device, and returning to step S1) to continue execution.
  3. The navigation method according to claim 1, further comprising monitoring changes in the position and posture of the device by means of a plurality of sensors built into the device, and updating the current position information and posture information of the device based on the monitored changes in position and posture.
  4. The navigation method according to claim 1, further comprising updating the current position information and posture information of the device by comparing the real scene in the field of view of the device with a scene model established in advance for the real scene.
  5. The navigation method according to any one of claims 1-4, wherein step S4) comprises:
    S41) obtaining superimposition position information of one or more virtual navigation indications to be superimposed, wherein the superimposition position information is determined based on the destination position information and the current position information of the device;
    S42) superimposing one or more virtual navigation indications in the real scene presented on the display medium of the device, based on the current position information and posture information of the device and the superimposition position information of the one or more virtual navigation indications.
  6. The navigation method according to claim 5, further comprising, in response to updated current position information and posture information of the device, continuing to execute step S42), or continuing to execute steps S41) and S42).
  7. The navigation method according to any one of claims 1-4, wherein the destination position information is obtained through the following steps:
    presenting a destination list on the display medium of the device;
    in response to a selection of one of the destinations in the presented destination list, obtaining destination position information related to the selected destination.
  8. The navigation method according to any one of claims 1-4, wherein the destination position information is determined at least in part based on information related to the destination, the information related to the destination including one or more of the following, or a combination thereof: destination name, destination type, destination function, destination state.
  9. The navigation method according to claim 8, wherein the destination position information is determined based on information related to the destination type or destination function received by the device, in combination with the current position information of the device.
  10. The navigation method according to claim 8, wherein the destination position information is determined based on information related to the destination type or destination function received by the device, in combination with the current position information of the device and the current state information of the destination.
  11. The navigation method according to any one of claims 1-4, wherein the destination position information is determined based on pre-stored information related to the destination.
  12. The navigation method according to claim 1, wherein:
    in step S2), preset posture information of the optical communication device is further obtained;
    and wherein step S3) comprises: determining the current position information and posture information of the device based on the obtained position information and posture information of the optical communication device and the position information and posture information of the device relative to the optical communication device.
  13. An electronic device, comprising a processor and a memory, wherein a computer program is stored in the memory, the computer program, when executed by the processor, being capable of implementing the navigation method according to any one of claims 1-12.
  14. A storage medium, in which a computer program is stored, the computer program, when executed, being capable of implementing the navigation method according to any one of claims 1-12.
  15. A navigation system based on an optical communication device, comprising an optical communication device, an optical communication device server, and a navigation server, wherein:
    the optical communication device server is configured to receive, from a navigation device, the identification information conveyed by the optical communication device, and to provide the position information of the optical communication device to the navigation device; and
    the navigation server is configured to provide navigation prompt information to the navigation device based on destination position information and the current position information and posture information of the navigation device, wherein the current position information and posture information of the navigation device are determined based on the position information of the optical communication device and the position information and posture information of the navigation device relative to the optical communication device.
  16. The navigation system according to claim 15, wherein the optical communication device server is further configured to provide the posture information of the optical communication device to the navigation device, and wherein the current position information and posture information of the navigation device are determined based on the position information and posture information of the optical communication device and the position information and posture information of the navigation device relative to the optical communication device.
  17. The navigation system according to claim 15 or 16, wherein the navigation server is further configured to determine, based on the destination position information and the current position information of the navigation device, superimposition position information of one or more virtual navigation indications to be superimposed, wherein the superimposition position information can be used by the navigation device to superimpose, based on its current position information and posture information, one or more virtual navigation indications in the real scene presented on the display medium of the navigation device.
  18. The navigation system according to claim 17, further comprising the navigation device, which is configured to:
    capture an image of the optical communication device;
    identify, based on the captured image, the identification information conveyed by the optical communication device, and determine the position information and posture information of the navigation device relative to the optical communication device;
    obtain the position information of the optical communication device from the optical communication device server by means of the identification information;
    determine the current position information and posture information of the navigation device based on the obtained position information of the optical communication device and the position information and posture information of the navigation device relative to the optical communication device; and
    superimpose one or more virtual navigation indications in the real scene presented on the display medium of the navigation device, based on the current position information and posture information of the navigation device and the superimposition position information of the one or more virtual navigation indications.
  19. The navigation system according to claim 18, wherein the navigation device is further configured to obtain the posture information of the optical communication device from the optical communication device server by means of the identification information, and to determine the current position information and posture information of the navigation device based on the obtained position information and posture information of the optical communication device and the position information and posture information of the navigation device relative to the optical communication device.
PCT/CN2020/117639 2019-09-26 2020-09-25 Navigation method, system, device and medium based on optical communication device WO2021057886A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910915679.X 2019-09-26
CN201910915679 2019-09-26
CN201911119692.0A CN112558008B (zh) 2019-11-15 2024-03-12 Navigation method, system, device and medium based on optical communication device
CN201911119692.0 2019-11-15

Publications (1)

Publication Number Publication Date
WO2021057886A1 true WO2021057886A1 (zh) 2021-04-01

Family

ID=75030234

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/117639 WO2021057886A1 (zh) 2019-09-26 2020-09-25 Navigation method, system, device and medium based on optical communication device

Country Status (3)

Country Link
CN (1) CN112558008B (zh)
TW (1) TWI750821B (zh)
WO (1) WO2021057886A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023246530A1 (zh) * 2022-06-20 2023-12-28 中兴通讯股份有限公司 AR navigation method, terminal, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596511A (zh) * 2022-02-10 2022-06-07 深圳市瑞立视多媒体科技有限公司 Active-light rigid-body recognition method, apparatus, device, and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102829775A (zh) * 2012-08-29 2012-12-19 成都理想境界科技有限公司 Indoor navigation method, system and device
CN105371847A (zh) * 2015-10-27 2016-03-02 深圳大学 Indoor real-scene navigation method and system
CN106441268A (zh) * 2016-08-30 2017-02-22 西安小光子网络科技有限公司 Optical label-based positioning method
US20170154424A1 (en) * 2015-12-01 2017-06-01 Canon Kabushiki Kaisha Position detection device, position detection method, and storage medium
CN107734449A (zh) * 2017-11-09 2018-02-23 陕西外号信息技术有限公司 Optical label-based outdoor auxiliary positioning method, system and device
CN109936712A (zh) * 2017-12-19 2019-06-25 陕西外号信息技术有限公司 Optical label-based positioning method and system
CN110470312A (zh) * 2018-05-09 2019-11-19 北京外号信息技术有限公司 Navigation method based on optical label network and corresponding computing device
CN111026107A (zh) * 2019-11-08 2020-04-17 北京外号信息技术有限公司 Method and system for determining position of movable object

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5987381A (en) * 1997-03-11 1999-11-16 Visteon Technologies, Llc Automobile navigation system using remote download of data
WO2001043104A1 (en) * 1999-12-10 2001-06-14 David Sitrick Methodology, apparatus, and system for electronic visualization of traffic conditions
WO2007077613A1 (ja) * 2005-12-28 2007-07-12 Fujitsu Limited Navigation information display system, navigation information display method, and program therefor
GB0822602D0 (en) * 2008-12-11 2009-01-21 Tomtom Int Bv Navigation device & Methods
KR102021050B1 (ko) * 2012-06-06 2019-09-11 삼성전자주식회사 Method for providing navigation information, machine-readable storage medium, mobile terminal, and server
CN103335657B (zh) * 2013-05-30 2016-03-02 佛山电视台南海分台 Method and system for enhancing navigation function based on image capture and recognition technology
KR101558060B1 (ko) * 2013-12-15 2015-10-19 광운대학교 산학협력단 Visible light communication-based indoor position recognition system, indoor navigation method, indoor navigation system, and server and electronic device performing indoor navigation
CN109099915B (zh) * 2018-06-27 2020-12-25 未来机器人(深圳)有限公司 Mobile robot positioning method and apparatus, computer device, and storage medium


Also Published As

Publication number Publication date
TW202113391A (zh) 2021-04-01
CN112558008B (zh) 2024-03-12
TWI750821B (zh) 2021-12-21
CN112558008A (zh) 2021-03-26

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20869314

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20869314

Country of ref document: EP

Kind code of ref document: A1