CN110794955A - Positioning tracking method, device, terminal equipment and computer readable storage medium - Google Patents

Positioning tracking method, device, terminal equipment and computer readable storage medium

Info

Publication number
CN110794955A
Authority
CN
China
Prior art keywords
marker
information
acquiring
position information
terminal device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810891134.5A
Other languages
Chinese (zh)
Other versions
CN110794955B (en)
Inventor
胡永涛
戴景文
贺杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd filed Critical Guangdong Virtual Reality Technology Co Ltd
Priority to CN201810891134.5A priority Critical patent/CN110794955B/en
Priority to PCT/CN2019/098200 priority patent/WO2020024909A1/en
Priority to US16/687,699 priority patent/US11127156B2/en
Publication of CN110794955A publication Critical patent/CN110794955A/en
Application granted granted Critical
Publication of CN110794955B publication Critical patent/CN110794955B/en
Legal status: Active (granted)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning

Abstract

The application discloses a positioning and tracking method and apparatus, a terminal device, and a computer-readable storage medium. The method includes: acquiring an image containing a marker; identifying the marker in the image and obtaining first position information; acquiring pose change information of the terminal device, the pose change information including position change information and attitude change information of the terminal device; acquiring second position information of the terminal device according to the pose change information; and acquiring current position information of the terminal device based on the first position information and/or the second position information. With this method and apparatus, the user's position can be determined through marker tracking and, when no marker can be detected, calculated from the pose information of the terminal device, improving the accuracy of indoor positioning and tracking.

Description

Positioning tracking method, device, terminal equipment and computer readable storage medium
Technical Field
The present application relates to the field of virtual reality technologies, and in particular, to a positioning and tracking method, an apparatus, a terminal device, and a computer-readable storage medium.
Background
With the development of Virtual Reality (VR) and Augmented Reality (AR) technologies, VR and AR terminal devices are gradually entering daily life. When an AR/VR device is used indoors, a marker captured by the camera assembly on the device can serve as a reference for determining the user's position in the room.
Disclosure of Invention
The present application provides a positioning and tracking method and apparatus, a terminal device, and a computer-readable storage medium. When a marker cannot be detected, the user's current position can still be calculated from the pose information of the terminal device, improving the accuracy of indoor positioning and tracking.
In a first aspect, an embodiment of the present application provides a positioning and tracking method, the method including: acquiring an image containing a marker; identifying the marker in the image and obtaining first position information; acquiring pose change information of the terminal device, the pose change information including position change information and attitude change information of the terminal device; acquiring second position information of the terminal device according to the pose change information; and acquiring current position information of the terminal device based on the first position information and/or the second position information.
In a second aspect, an embodiment of the present application provides a positioning and tracking apparatus, including: an acquisition module for acquiring an image containing a marker; an identification module for identifying the marker in the image and acquiring first position information; a first pose module for acquiring pose change information of the terminal device, the pose change information including position change information and attitude change information of the terminal device; a second pose module for acquiring second position information of the terminal device according to the pose change information; and a positioning module for acquiring current position information of the terminal device based on the first position information and/or the second position information.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a display, a memory, and a processor, where the display and the memory are coupled to the processor, and the memory stores instructions, and when the instructions are executed by the processor, the processor performs the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having program code executable by a processor, where the program code causes the processor to execute the method of the first aspect.
With the positioning and tracking method and apparatus, the terminal device, and the computer-readable storage medium described above, an image containing a marker is first acquired; the marker in the image is then identified and first position information is obtained; pose change information of the terminal device, including its position change information and attitude change information, is acquired; second position information of the terminal device is obtained from the pose change information; and finally the current position information of the terminal device is obtained based on the first position information and/or the second position information. In the embodiments of the present application, the user's position can be determined through marker tracking and, when no marker can be detected, calculated from the pose information of the terminal device, improving the accuracy of indoor positioning and tracking.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a diagram illustrating an application scenario of a positioning and tracking method according to an embodiment of the present application;
Fig. 2 shows a block diagram of a terminal device according to an embodiment of the present application;
Fig. 3 shows an interaction diagram of a terminal device and a server according to an embodiment of the present application;
Fig. 4 is a schematic flowchart illustrating a positioning and tracking method provided by an embodiment of the present application;
Fig. 5 is a diagram illustrating another application scenario of the positioning and tracking method according to an embodiment of the present application;
Fig. 6 is a schematic flowchart illustrating another positioning and tracking method provided by an embodiment of the present application;
Fig. 7 is a block diagram of a positioning and tracking apparatus provided by an embodiment of the present application;
Fig. 8 shows a block diagram of another positioning and tracking apparatus provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on these embodiments without creative effort fall within the protection scope of the present application.
With the development of technologies such as VR and AR, VR/AR terminal devices are gradually entering daily life. When a VR/AR device is in use, a camera assembly on the device (for example, a head-mounted display) can capture a marker (also called a tag) in the real environment; image processing then yields the position and rotation (posture) of the device relative to the marker, from which the position of the user (terminal device) in a virtual map corresponding to the real environment can be calculated.
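To make the image-processing step above concrete: the position and rotation of a device relative to a marker are typically recovered by solving a perspective-n-point (PnP) problem between the marker's known corner geometry and the corners detected in the image. The sketch below is illustrative only; it assumes OpenCV, a calibrated camera, and a square marker of known side length, none of which are prescribed by the patent:

```python
import cv2
import numpy as np

def estimate_marker_pose(image_corners, marker_size, camera_matrix, dist_coeffs):
    """Recover the marker's pose relative to the camera via PnP.

    image_corners: 4x2 array of the marker's corners detected in the image.
    marker_size:   physical side length of the square marker (e.g. in meters).
    """
    half = marker_size / 2.0
    # Marker corners in the marker's own coordinate frame (z = 0 plane).
    object_points = np.array([
        [-half,  half, 0.0],
        [ half,  half, 0.0],
        [ half, -half, 0.0],
        [-half, -half, 0.0],
    ], dtype=np.float64)

    ok, rvec, tvec = cv2.solvePnP(
        object_points,
        np.asarray(image_corners, dtype=np.float64),
        camera_matrix,
        dist_coeffs,
    )
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation of the marker w.r.t. camera
    return rotation, tvec              # pose of the marker in the camera frame
```

Inverting the returned transform gives the camera's pose in the marker frame, which is the kind of relative relationship used throughout the embodiments below.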
However, the inventors found in their research that in current VR/AR indoor environments equipped with markers, the markers are usually distributed sparsely in the real environment. For example, in a VR/AR museum the indoor space is large and the markers used to display images of virtual exhibits are scattered. If the camera of the terminal device cannot capture an image containing a marker while the user moves from one marker to the vicinity of another, the terminal device has no reference object with which to position itself; the user's position in the virtual map corresponding to the real environment becomes inaccurate or is lost entirely, and the virtual image shown on the display of the VR/AR device may drift or disappear.
In order to solve the above problems, the inventors have studied and proposed a positioning and tracking method, an apparatus, a terminal device, and a computer-readable storage medium in the embodiments of the present application.
The following describes in detail a positioning and tracking method, an apparatus, a terminal device, and a storage medium provided in the embodiments of the present application with specific embodiments.
Referring to fig. 1, a diagram of an application scenario of the positioning and tracking method provided in the embodiment of the present application is shown, where the application scenario includes a display system 10. The display system 10 includes: a terminal device 20 and a tag 30.
In this embodiment, the terminal device 20 may be a head-mounted display device, a mobile phone, a tablet, or the like; the head-mounted display device may be a standalone (integrated) head-mounted display. The terminal device 20 may also be an intelligent terminal, such as a mobile phone, connected to an external head-mounted display device. Referring to fig. 2, as an embodiment, the terminal device 20 may include: a processor 21, a memory 22, a display device 23, and a camera 24. The memory 22, the display device 23, and the camera 24 are all connected to the processor 21.
The camera 24 is used for acquiring an image of an object to be photographed and sending the image to the processor 21. The camera 24 may be an infrared camera, a color camera, etc., and the specific type of the camera 24 is not limited in the embodiment of the present application.
The processor 21 may comprise any suitable type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. The processor 21 may be configured to receive data and/or signals from various components of the system, for example over a network, and may process them to determine one or more operating conditions in the system. For example, the processor 21 may generate image data of the virtual world from pre-stored image data and send it to the display device 23 for display; it may receive image data sent by an intelligent terminal or a computer over a wired or wireless network and generate and display an image of the virtual world from the received data; or it may perform recognition and positioning on the image captured by the camera, determine the corresponding display content in the virtual world according to the positioning information, and send that content to the display device 23 for display.
The memory 22 may be used to store software programs and modules, and the processor 21 executes various functional applications and data processing by operating the software programs and modules stored in the memory 22. The memory 22 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
In some embodiments, the display device and the camera of the terminal device 20 are connected to an external terminal that provides the storage function of the memory and the processing function of the processor described above. In that case, the processing performed by the processor in the above embodiments is performed by the processor of the external terminal, and the data stored by the memory is stored by the memory of the external terminal.
In the embodiment of the present application, the terminal device 20 may further include a communication module, and the communication module is connected to the processor. The communication module is used for communication between the terminal device 20 and other terminals.
In the embodiment of the present application, the display system 10 further includes a marker 30 placed in the field of view of the camera 24 of the terminal device 20, i.e., the camera 24 can acquire an image of the marker 30. The image of the marker 30 is stored in the terminal device 20 for locating the position of the terminal device 20 relative to the marker 30.
The marker 30 may include at least one sub-marker therein, and the sub-marker may be a pattern having a certain shape. In one embodiment, each sub-marker may have one or more feature points, wherein the shape of the feature points is not limited, and may be a dot, a ring, a triangle, or other shapes. In the embodiment of the present application, the distribution rules of the sub-markers within different markers are different, and therefore, each marker 30 may have different identity information, and the terminal device 20 may obtain the identity information corresponding to the marker 30 by identifying the sub-markers included in the marker 30, where the identity information may be information that can be used to uniquely identify the marker 30, such as a code, but is not limited thereto.
In one embodiment, the outline of the marker 30 may be rectangular, and the shape of the marker 30 may be other shapes, which is not limited herein. In the embodiment of the present application, the marker 30 may be a pattern that the terminal device 20 can recognize. It should be noted that the specific marker 30 is not limited in the embodiment of the present application, and only needs to be identified and tracked by the terminal device 20.
The terminal device 20 further stores virtual objects corresponding to different markers 30, and the virtual objects may be buildings, scenes, trees, characters, and the like.
When the user uses the terminal device 20 and the marker 30 is within its field of view, the terminal device 20 can capture a marker image containing the marker 30. The processor of the terminal device 20 then identifies the marker 30 in the image and computes the position and rotation relationship between the marker 30 and the camera of the terminal device 20, and from it the position and rotation relationship of the marker 30 relative to the terminal device 20.
When the marker 30 is not within the field of view of the terminal device 20, the terminal device 20 may acquire its 6DOF (degrees of freedom) information in real time through VIO (Visual-Inertial Odometry); the 6DOF information may include the rotation and orientation of the terminal device 20. The terminal device 20 captures images in real time through the camera 24, and the VIO computes the relative 6DOF information of the terminal device 20 from the key points (feature points) contained in those images, from which the current position and posture of the terminal device 20 are calculated.
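As a purely illustrative representation of the 6DOF information mentioned above, a pose can be held as a position plus an orientation; the small helper below is an assumption of this write-up, not a structure defined by the patent, and it sets up the 4x4 transform convention reused in later examples:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose6DoF:
    position: np.ndarray  # 3-vector (x, y, z) in the map frame
    rotation: np.ndarray  # 3x3 rotation matrix (orientation of the device)

    def as_matrix(self) -> np.ndarray:
        """Return the pose as a 4x4 homogeneous transform."""
        T = np.eye(4)
        T[:3, :3] = self.rotation
        T[:3, 3] = self.position
        return T
```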
Referring to fig. 3, in the embodiment of the present application, the terminal device 20 may further be communicatively connected to a server 40 through a network. Wherein, a client of the AR/VR application runs on the terminal device 20, and a server of the AR/VR application corresponding to the client runs on the server 40. By one approach, the server 40 may store identity information corresponding to each marker, virtual image data bound to the marker corresponding to the identity information, and location information of the marker in a real environment or a virtual map.
For the above display system, an embodiment of the present application provides a positioning and tracking method performed by the above system, and specifically, please refer to the following embodiments.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a positioning and tracking method provided in an embodiment of the present application. The method first acquires an image containing a marker; it then identifies the marker in the image and obtains first position information; next it acquires pose change information of the terminal device and obtains second position information of the terminal device from that pose change information; finally it obtains the current position information of the terminal device based on the first position information and/or the second position information. In a specific embodiment, the positioning and tracking method can be applied to the positioning and tracking apparatus 300 shown in fig. 7 and to the terminal device 20 (fig. 1) equipped with the positioning and tracking apparatus 300. The flow shown in fig. 4 is described in detail below, taking a head-mounted display (HMD) as an example. The positioning and tracking method may specifically comprise the following steps:
step S101: an image containing the marker is acquired.
In this embodiment, the marker (also called a tag) may be any figure or object with an identifiable characteristic mark. The marker can be placed within the field of view of the terminal device's camera, so that the camera can capture an image of it. The image containing the marker can be stored in the terminal device after capture and used for locating the position or posture of the terminal device relative to the marker. The marker may include at least one sub-marker, and a sub-marker may be a pattern with a certain shape. In one embodiment, each sub-marker may have one or more feature points, where the shape of a feature point is not limited and may be a dot, a ring, a triangle, or another shape. In the embodiments of the present application, the distribution rules of the sub-markers differ between markers, so each marker can carry different identity information; the terminal device can obtain the identity information corresponding to a marker by identifying the sub-markers it contains. The identity information may be any information that uniquely identifies the marker, such as a code, but is not limited thereto.
In one embodiment, the outline of the marker may be rectangular, but the shape of the marker may be other shapes, and is not limited herein. It should be noted that the shape, style, color, feature point number and distribution of the specific marker are not limited in this embodiment, and it is only necessary that the marker can be identified and tracked by the terminal device, for example, in other possible embodiments, the marker may also be a barcode, a two-dimensional code or other identifiable graphics.
Step S102: markers in the image are identified and first position information is obtained.
In this embodiment, after the image including the marker is collected by the camera of the terminal device, the position and the posture of the terminal device relative to the marker can be obtained according to the position information and the rotation information of the marker in the image.
As one mode, the first location information may be a location of the terminal device in the real environment (or a virtual map constructed based on the real environment), where the location is obtained after the terminal device is located based on the marker. Wherein the first location information may include a location and a posture of the terminal device.
In some embodiments, multiple markers may be discretely located at multiple locations in a real-world environment (which may be indoor or outdoor), where one marker may be located as a target marker near an entrance to the real-world environment (which may be at a doorway of a room or near an entrance to an area), i.e., near a starting location where a user enters the environment. After the terminal device detects the target marker, the camera can acquire an image containing the target marker to perform initial positioning on the position of the terminal device in the real environment, and at this time, the position of the terminal device determined according to the target marker is the initial position of the terminal device in the real environment (or a virtual map corresponding to the real environment).
It can be understood that the first location information may represent an initial location of the terminal device in the virtual map, or may represent a location of the terminal device obtained after positioning based on other markers in the real environment during the movement of the terminal device in the real environment.
Step S103: acquiring pose change information of the terminal device.
In this embodiment, the pose change information may include the position change information and attitude change information of the terminal device. Markers are not placed in every area of the real environment, so when the user moves and the camera of the terminal device cannot capture an image containing a marker, the current position of the terminal device can instead be calculated from its pose change relative to an initial pose.
As one mode, the terminal device may obtain its 6DOF (degrees of freedom) information in real time through VIO (Visual-Inertial Odometry); the 6DOF information may include the rotation and orientation of the terminal device. The terminal device captures images in real time through its camera, and the VIO computes the terminal device's relative 6DOF information from the key points (feature points) contained in those images, from which the current position and posture of the terminal device are calculated. When the user enters the real environment, the terminal device detects the target marker placed near the entrance, positions itself, and obtains the first position information corresponding to that target marker; this first position information then serves as the reference for subsequent VIO position calculations. As the user moves on through the real environment, the VIO acquires, in real time, the terminal device's position and attitude change relative to that first position information, so the current position of the terminal device, i.e., of the user, can be calculated in real time.
Step S104: acquiring second position information of the terminal device according to the pose change information.
In this embodiment, the second location information may represent a current location and a current posture of the terminal device in the real environment (or a virtual map corresponding to the real environment), which are obtained by the terminal device through VIO positioning calculation. As one mode, the terminal device obtains pose change information of the terminal device with respect to an initial pose (first position information corresponding to the target marker) through the VIO, and then performs calculation according to the initial pose and the pose change information to obtain a current pose (second position information) of the terminal device.
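Expressed in the homogeneous-transform form sketched earlier, step S104 is a single composition: the marker-derived initial pose chained with the VIO-reported pose change. A minimal sketch, assuming 4x4 NumPy transforms (the interface is hypothetical, not taken from the patent):

```python
import numpy as np

def second_position_info(initial_pose: np.ndarray, pose_change: np.ndarray) -> np.ndarray:
    """Step S104: apply the VIO pose change to the initial, marker-derived pose.

    initial_pose: 4x4 transform of the device in the pre-stored map
                  (the first position information from the target marker).
    pose_change:  4x4 transform from the initial device frame to the
                  current device frame, as reported by the VIO.
    """
    return initial_pose @ pose_change
```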
Step S105: acquiring the current position information of the terminal device based on the first position information and/or the second position information.
In this embodiment, the current location information of the terminal device represents the most accurate available current location (including position and posture) that the terminal device can obtain by weighing the previously acquired first location information and/or second location information.
When the terminal device has only one of the first location information and the second location information, that information is used directly as the current location information. When the terminal device obtains both at a given location, the more accurate of the two is preferred as the current location information.
For example, when the terminal device obtains first location information from a marker but has not yet started the VIO (for example, when first location information is obtained for the first time), the first location information is used directly as the current location information. If the camera captures no image containing a marker and only second location information from the VIO is available, the second location information is used directly as the current location information. If the terminal device both positions itself from a marker (first location information) and runs the VIO (second location information), and marker-based positioning is more accurate than the VIO, the first location information is selected as the current location information.
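The preference order described in the last two paragraphs can be condensed into a few lines. The sketch below is one plausible reading of step S105; the availability checks and parameter names are added for illustration:

```python
def current_position_info(first_position=None, second_position=None):
    """Step S105: choose the current position of the terminal device.

    first_position:  marker-derived pose, present only when a marker
                     was detected and identified in the current image.
    second_position: VIO-derived pose, present only when the VIO is running.
    """
    if first_position is not None and second_position is not None:
        # Marker-based positioning is treated here as the more accurate source.
        return first_position
    if first_position is not None:
        return first_position
    return second_position  # None if neither source is available
```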
The positioning and tracking method provided by this embodiment can calculate the user's position from the pose change information acquired by the VIO when the camera of the terminal device cannot detect a marker, and is well suited to large virtual or augmented reality spaces in which markers are sparsely distributed.
For example, as shown in fig. 5, in a VR/AR museum the number and positions of the markers (e.g., M in fig. 5) are generally fixed, the space is large, and the markers are scattered. With the positioning and tracking method of this embodiment, when the camera detects a marker (e.g., at the position of user A or user B in fig. 5), the terminal device can display the virtual image corresponding to that marker on its display module and position its own pose from the marker (obtaining first position information). When the user wearing the terminal device moves from near one virtual exhibit (marker) to near another and the camera cannot detect a marker (e.g., at the position of user C in fig. 5), the terminal device can obtain the user's pose change relative to the first position information in real time through the VIO and locate the user's current position (obtaining second position information).
As one mode, while the user moves from the vicinity of one marker to the vicinity of the next, the terminal device can use the current position information obtained by VIO positioning to fetch associated virtual images and render them in the virtual scene for display on its display module. In this way, virtual images associated with the user's position and posture can be shown even when no marker is detected nearby. For example, in a VR/AR museum, a dynamic signpost line may be displayed on the display module of the terminal device to guide the user to the next marker (virtual exhibit).
The above examples are only part of practical applications of the positioning and tracking method provided by this embodiment, and it can be understood that, with further development and popularization of VR/AR technology, the positioning and tracking method provided by this embodiment can play a role in more practical application scenarios.
With the positioning and tracking method provided by the embodiments of the present application, the user's position can be determined through marker tracking and, when no marker can be detected, calculated from the pose information of the terminal device. Combining marker-based positioning with VIO positioning allows the user's position to be obtained accurately in real time and improves the accuracy of indoor positioning and tracking.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating another positioning and tracking method according to an embodiment of the present application. The following will describe in detail the flow shown in fig. 6 by taking a mobile phone as an example. The above positioning and tracking method may specifically comprise the steps of:
step S201: an image containing the marker is acquired.
Step S202: markers in the image are identified and first position information is obtained.
In this embodiment, step S202 may be further divided into steps S202a, S202b, S202c, and S202d.
Step S202a: identifying the marker in the image and acquiring the identity information of the marker.
In this embodiment, after the image including the marker is collected by the camera of the terminal device, the identity information corresponding to the marker can be acquired, that is, the identification of the marker in the image is completed.
As one way, when each marker includes a plurality of feature points, the number of feature points may be used as the identity information (ID) of the marker. For example, if a marker consists of a white background and 7 black feature points, the image captured by the camera of the terminal device contains 7 black regions corresponding to those feature points, and the count "7" may be used as the marker's ID, i.e., the identity information of the marker may be "No. 7".
It is understood that, in other possible embodiments, the identity information of the marker may also be set according to the color, shape, distribution area, and other characteristics of the feature points on the marker, and different markers correspond to different identity information.
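Following the "No. 7" example above, one illustrative way to obtain such an ID is to binarize a crop of the marker and count its dark connected regions. The sketch assumes OpenCV; the patent does not prescribe a particular algorithm:

```python
import cv2

def marker_id_from_blob_count(marker_gray):
    """Use the number of dark feature blobs as the marker's identity.

    marker_gray: grayscale image crop containing only the marker.
    """
    # Dark feature points become foreground (255) after inverse Otsu thresholding.
    _, binary = cv2.threshold(marker_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    num_labels, _ = cv2.connectedComponents(binary)
    return num_labels - 1  # subtract 1 for the background label
```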
Step S202b: acquiring the marker position information of the marker in a pre-stored map based on the identity information.
In this embodiment, each marker corresponds one-to-one to its identity information, and each piece of identity information likewise corresponds one-to-one to the position, in the pre-stored map, of the marker bound to it. Once the identity information of a marker has been obtained, the marker's position information in the pre-stored map can be retrieved from that identity information.
As one mode, the pre-stored map may be a virtual map that is established in advance according to a real environment, and each position in the pre-stored map corresponds to the real environment. It will be appreciated that the location of the marker in the pre-stored map is indicative of the actual location of the marker in the real environment. The marker position information may be spatial coordinates of the marker in a pre-stored map.
It is understood that the terminal device may look up the marker position information corresponding to a marker's identity information either locally or on a server; the identity information of multiple markers and the position information corresponding to each may be stored locally on the terminal device or on the server.
Step S202c: acquiring relative relationship information between the terminal device and the marker.
In this embodiment, the relative relationship information may include the relative position information and relative posture information between the terminal device and the marker. In one embodiment, the relative relationship information may be calculated from the change characteristics, such as displacement, inclination, and size, of each feature point in the captured marker image with respect to a reference image of the marker.
Step S202d: acquiring first position information of the terminal device in the pre-stored map based on the marker position information and the relative relationship information.
In this embodiment, the current position of the terminal device in the map can be located by combining the marker's position in the pre-stored map, obtained through identity recognition, with the relative relationship information calculated through image processing; this yields the first position information.
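Put together, steps S202a-S202d reduce to a map lookup followed by a transform composition. A hedged sketch, reusing the 4x4-transform convention from the earlier examples (the map structure and marker ID are invented for illustration):

```python
import numpy as np

# Hypothetical pre-stored map: marker identity -> 4x4 pose of the marker
# in the map frame (step S202b looks this up).
PRESTORED_MAP = {
    7: np.eye(4),  # e.g. marker "No. 7" placed at the map origin
}

def first_position_info(marker_id, device_in_marker):
    """Steps S202b-S202d: first position information of the device in the map.

    device_in_marker: 4x4 pose of the device expressed in the marker frame,
                      derived from the relative relationship information
                      (step S202c).
    """
    marker_in_map = PRESTORED_MAP[marker_id]
    return marker_in_map @ device_in_marker  # device pose in the map frame
```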
In this embodiment, as one mode, after step S202a, step S203 and step S204 may be performed.
Step S203: judging whether the marker is a target marker according to the identity information;
Step S204: when the marker is the target marker, constructing a virtual scene matched with the pre-stored map based on the target marker.
In this embodiment, after the camera of the terminal device acquires a certain marker and acquires the identity information of the marker, the identity information of the marker can be detected to determine whether the marker is a target marker. If the marker is the target marker, a virtual scene corresponding to a pre-stored map can be constructed based on the target marker, and the virtual scene is displayed to a user through a display module of the terminal device.
As one approach, different target markers may be placed at the boundaries of areas of the AR/VR environment. For example, a multi-theme AR/VR museum may have several exhibition themes, such as ocean, grassland, and starry sky, each corresponding to a different area of the museum, with a target marker for the area's theme placed at its entrance. After the terminal device captures the target marker at the entrance of the ocean-themed area, it can construct an ocean-related virtual scene based on that target marker and display it to the user through its display module. When the user then moves from the ocean-themed area to the starry-sky-themed area and the terminal device captures the target marker at that area's entrance, it can construct a starry-sky-related virtual scene based on the new target marker, replace the previous ocean scene, and display the starry-sky scene to the user through its display module.
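The theme-switching behavior described above amounts to a lookup from target-marker identity to a scene builder. The sketch below invents all names for illustration; the patent does not define such a table:

```python
def build_scene(theme):
    """Placeholder scene constructor; a real system would load assets here."""
    return {"theme": theme, "objects": []}

# Hypothetical table of target markers placed at the entrances of themed areas.
SCENE_BUILDERS = {
    "ocean_entrance": lambda: build_scene("ocean"),
    "starry_sky_entrance": lambda: build_scene("starry_sky"),
}

def on_marker_identified(identity, current_scene):
    """Steps S203-S204: if the marker is a target marker, replace the scene."""
    builder = SCENE_BUILDERS.get(identity)
    if builder is None:
        return current_scene  # an ordinary marker, not a target marker
    return builder()          # e.g. the ocean scene replaced by the starry-sky one
```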
In this embodiment, as one mode, once the virtual scene has been established through step S204, step S205 may also be performed after step S202a.
Step S205: virtual object data corresponding to the marker is acquired, and a virtual object corresponding to the marker is displayed according to the virtual object data.
In this embodiment, the identity information of each marker corresponds one-to-one to the virtual object bound to that marker. After the terminal device obtains the identity information of a marker, it can obtain the corresponding virtual object from that identity information and display it to the user through its display module. In one embodiment, the virtual object corresponding to the marker may be displayed alone or combined with the virtual scene. For example, when a user visits a museum, a marker may be placed beside an exhibit, and by capturing an image of the marker the terminal device can display virtual objects related to that exhibit, such as a text introduction or a related virtual animation; the virtual object may also be a virtual mini-game, among other possibilities, but is not limited thereto. Through the terminal device the user sees the virtual object superimposed on the real scene, which enhances the sense of interaction.
As one way, the virtual object may be a 3D image in the virtual scene, and the virtual object shown on the display module may change with the posture and position of the terminal device. For example, when the user moves toward a marker, the marker's virtual object in the virtual scene gradually becomes larger; when the user walks around the marker, the virtual object rotates accordingly, showing a different image from each angle.
Step S206: acquiring pose change information of the terminal device.
In this embodiment, step S206 may be further divided into steps S206a, S206b, and S206c.
Step S206a: acquiring a current image containing key points.
In this embodiment, a key point is a point with a distinctive feature in the current image; for example, an edge or corner of an object in the image can serve as a feature point representing where a particular point lies in the real environment. As one way, the current image may comprise multiple frames captured within a certain period of time, each frame containing multiple key points usable for positioning.
Step S206b: extracting the description vectors of the key points in the current image.
In this embodiment, the terminal device can use the VIO to extract the positions of the same key point in two adjacent frames, thereby obtaining the key point's description vector from its position in the previous frame to its position in the next frame.
Step S206c: obtaining pose change information of the terminal device based on the description vectors.
In this embodiment, once the terminal device has extracted the description vectors of the key points in the current image, and given the time interval between adjacent frames and the modulus and direction of each description vector, it can compute the spatial displacement of the key points relative to the camera over that interval and thus obtain the position change information of the terminal device. The terminal device may also be provided with an Inertial Measurement Unit (IMU), through which its attitude change information is obtained in real time. Together these yield the 6DOF pose change information of the terminal device in the pre-stored map.
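The per-keypoint description vectors of steps S206a-S206c can be obtained with sparse optical flow between adjacent frames. The sketch below assumes OpenCV and grayscale input frames; a real VIO would additionally fuse the IMU readings mentioned above, which is omitted here:

```python
import cv2
import numpy as np

def keypoint_description_vectors(prev_gray, curr_gray):
    """Track key points from the previous frame into the current frame and
    return their image-space displacement vectors (step S206b)."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return np.empty((0, 2))
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good = status.ravel() == 1
    # Description vector: previous position -> position in the adjacent frame.
    return (curr_pts[good] - prev_pts[good]).reshape(-1, 2)
```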
Step S207: acquiring second position information of the terminal device according to the pose change information.
Step S208: acquiring the current position information of the terminal device based on the first position information and/or the second position information.
In this embodiment, after step S208, step S209 may be further performed.
Step S209: displaying a virtual picture corresponding to the current position information in the virtual scene.
As one mode, after the current position information is acquired, a virtual screen corresponding to the current position information (current spatial position and posture) may also be displayed in the virtual scene. For example, when a user moves from near one marker to near the next marker, the terminal device may render and display a virtual image associated with current position information in advance in a virtual scene, so as to be displayed to the user through a display module of the terminal device, and when the terminal device does not detect the marker nearby, the virtual image associated with the position posture of the user may also be displayed. For example, in a VR/AR museum, a dynamic road sign indicator line may be displayed in a display module of a terminal device to guide a user to find a next marker (virtual exhibit).
As one mode, the current location information may be associated with the virtual picture in advance, and the association stored locally on the terminal device or in the cloud.
In the present embodiment, after step S208, step S210, step S211, step S212, and step S213 may be further performed.
Step S210: an image containing the new marker is acquired.
In this embodiment, the new marker is a marker captured after the marker acquired in step S201.
Step S211: identifying the new marker and acquiring the new marker's position information in the pre-stored map.
In this embodiment, the new marker position information is the marker position information of the new marker in the pre-stored map.
Step S212: recalculating the first position information of the terminal device in the pre-stored map according to the new marker position information.
In this embodiment, after the position information of the new marker is obtained, the first position information of the terminal device in the pre-stored map can be recalculated through a process similar to steps S202c to S202d. Note that this first position information is a fresh value, recalibrated by the terminal device based on the new marker.
Step S213: calibrating the pose change information of the terminal device based on the recalculated first position information.
In this embodiment, while capturing the new marker the terminal device can also obtain, through the VIO, its current pose change relative to the previous marker; in some cases the VIO accumulates a deviation while continuously measuring the terminal device's pose change. In other embodiments, the position information of the new marker may be obtained first, the relative position and posture relationship between the new marker and the initial marker (the marker acquired in step S201) then calculated, and the pose change information calibrated according to that relative relationship, so that the VIO's pose change information is again referenced to the initial marker.
In particular, when the new marker identified by the terminal device is a new target marker (not the initially identified one), the pose change information accumulated by the VIO can, as one mode, simply be cleared and recalculated with the new target marker as the reference. As another mode, the current position and posture of the terminal device relative to the new target marker can be obtained from the new target marker; given the relative position and posture between the new target marker and the initial target marker (the target marker first recognized by the terminal device, generally placed at the venue entrance), the position and posture of the terminal device relative to the initial target marker can be calculated, and the pose change information reported by the VIO calibrated accordingly. Compared with the direct clearing of the first mode, the second mode gives the VIO a response curve: its pose change information is calibrated gradually from the marker data, so the picture displayed by the terminal device does not change abruptly and the user has a better visual experience.
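The "response curve" behavior of the second calibration mode can be as simple as blending the VIO estimate toward the marker-derived pose over several frames rather than snapping to it. A hedged sketch; the blend factor and per-frame application are assumptions of this write-up:

```python
import numpy as np

def calibrate_position(vio_position, marker_position, alpha=0.1):
    """Gradually pull the VIO position toward the marker-derived position.

    Applied once per frame, the VIO estimate converges on the marker fix
    without the sudden jump a hard reset would cause in the displayed image.
    """
    vio_position = np.asarray(vio_position, dtype=np.float64)
    marker_position = np.asarray(marker_position, dtype=np.float64)
    return (1.0 - alpha) * vio_position + alpha * marker_position
```

Orientation would be corrected analogously, for example with a quaternion slerp, so that neither position nor attitude jumps in the displayed picture.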
With the positioning and tracking method above, dynamic virtual images can be displayed in association with the changing position of the terminal device, and the current pose change information can be calibrated whenever a new marker is captured, making the scheme more intelligent and user-friendly in practice and further improving positioning accuracy.
Referring to fig. 7, fig. 7 is a block diagram illustrating a positioning and tracking device 300 according to an embodiment of the present disclosure. As will be explained below with respect to the block diagram of the modules shown in fig. 7, the positioning and tracking device 300 includes: an acquisition module 310, an identification module 320, a first posture module 330, a second posture module 340, and a positioning module 350, wherein:
an acquisition module 310 for acquiring an image containing a marker.
An identification module 320 for identifying the marker in the image and obtaining first location information.
The first pose module 330 is configured to acquire pose change information of the terminal device, where the pose change information includes position change information and attitude change information of the terminal device.
And the second pose module 340 is configured to obtain second position information of the terminal device according to the pose change information.
A positioning module 350, configured to obtain current location information of the terminal device based on the first location information and/or the second location information.
The positioning and tracking apparatus provided by the embodiments of the present application can calculate the user's position from the pose change information acquired by the VIO when the camera of the terminal device cannot detect a marker, and is well suited to large virtual or augmented reality spaces in which markers are sparsely distributed.
Referring to fig. 8, fig. 8 is a block diagram illustrating another positioning and tracking device 400 according to an embodiment of the present application. As will be explained below with respect to the block diagram of the modules shown in fig. 8, the positioning and tracking device 400 includes:
an acquisition module 410 for acquiring an image containing a marker.
An identification module 420 for identifying the marker in the image and obtaining first position information. Further, the identification module 420 includes: an identification unit 421, a location unit 422, an opposite unit 423 and a positioning unit 424, wherein:
the identification unit 421 is configured to identify a marker in the image and acquire identity information of the marker;
a location unit 422, configured to obtain, based on the identity information, mark location information of the marker in a pre-stored map;
an opposite unit 423, configured to acquire relative relationship information between a terminal device and the marker, where the relative relationship information includes relative position information and relative posture information of the terminal device and the marker;
a positioning unit 424, configured to obtain first location information in a pre-stored map of the terminal device based on the marked location information and the relative relationship information.
The first pose module 430 is configured to acquire pose change information of the terminal device, where the pose change information includes position change information and attitude change information of the terminal device. Further, the first pose module 430 includes: an acquisition unit 431, an extraction unit 432, and a pose unit 433, wherein:
an acquisition unit 431 for acquiring a current image containing a key point;
an extracting unit 432, configured to extract description vectors of key points in the current image;
and the pose unit 433 is configured to obtain pose change information of the terminal device based on the description vector.
And the second pose module 440 is configured to obtain second position information of the terminal device according to the pose change information.
A positioning module 450, configured to obtain current location information of the terminal device based on the first location information and/or the second location information.
The judging module 461 is configured to judge whether the marker is a target marker according to the identity information.
A constructing module 462, configured to construct a virtual scene matched with a pre-stored map based on the target marker when the marker is the target marker.
The first display module 471 is configured to acquire virtual object data corresponding to the marker, and display a virtual object corresponding to the marker according to the virtual object data.
A second display module 472, configured to display a virtual picture corresponding to the current position information in the virtual scene.
A first update module 481 for acquiring an image containing a new marker;
a second updating module 482, configured to identify the new marker, and obtain new marker position information of the new marker in a pre-stored map;
a third updating module 483, configured to recalculate the first location information of the terminal device in the pre-stored map according to the new marked location information;
a fourth updating module 484, configured to calibrate the pose change information of the terminal device based on the recalculated first position information.
With the positioning and tracking apparatus provided by the embodiments of the present application, dynamic virtual images can be displayed in association with the changing position of the terminal device, and the current pose change information can be calibrated whenever a new marker is captured, making the scheme more intelligent and user-friendly in practice and further improving positioning accuracy.
An embodiment of the present application provides a terminal device, which includes a display, a memory, and a processor, where the display and the memory are coupled to the processor, and the memory stores instructions that, when executed by the processor, perform:
acquiring an image containing a marker;
identifying the marker in the image and obtaining first position information;
acquiring pose change information of the terminal device, the pose change information including position change information and attitude change information of the terminal device;
acquiring second position information of the terminal device according to the pose change information;
and acquiring current position information of the terminal device based on the first position information and/or the second position information.
An embodiment of the present application provides a computer-readable storage medium having program code executable by a processor, the program code causing the processor to execute:
acquiring an image containing a marker;
identifying the marker in the image and obtaining first position information;
acquiring pose change information of the terminal device, the pose change information including position change information and attitude change information of the terminal device;
acquiring second position information of the terminal device according to the pose change information;
and acquiring current position information of the terminal device based on the first position information and/or the second position information.
To sum up, with the positioning and tracking method and apparatus, the terminal device, and the computer-readable storage medium provided by the embodiments of the present application, an image containing a marker is first acquired; the marker in the image is then identified and first position information is obtained; pose change information of the terminal device, including its position change information and attitude change information, is acquired; second position information of the terminal device is obtained from the pose change information; and finally the current position information of the terminal device is obtained based on the first position information and/or the second position information. Compared with the prior art, the user's position can be determined through marker tracking and, when no marker can be detected, calculated from the position and posture information of the terminal device, improving the accuracy of indoor positioning and tracking.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. For any processing manner described in the method embodiment, all the processing manners may be implemented by corresponding processing modules in the apparatus embodiment, and details in the apparatus embodiment are not described again.
It should be understood that the above-mentioned terminal device is not limited to a head-mounted display, a smartphone, or a tablet computer; it refers to any computer device that can be used while mobile. Specifically, the terminal device is a mobile computer device equipped with an intelligent operating system, including, but not limited to, a head-mounted display, a smartphone, a smart watch, a tablet computer, and the like.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions are executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch and execute the instructions. For the purposes of this description, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection having one or more wires, a portable computer diskette, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium could even be paper or another suitable medium on which the program is printed, as the program can be captured electronically, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having appropriate combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those skilled in the art will understand that all or part of the steps of the above method embodiments may be carried out by hardware instructed by a program; the program may be stored in a computer-readable storage medium and, when executed, performs one of or a combination of the steps of the method embodiments. In addition, the functional units in the embodiments of the present application may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present application have been shown and described above, it should be understood that these embodiments are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present application.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A positioning and tracking method, the method comprising:
acquiring an image comprising a marker;
identifying a marker in the image and obtaining first position information;
acquiring pose change information of a terminal device, wherein the pose change information comprises position change information and attitude change information of the terminal device;
acquiring second position information of the terminal device according to the pose change information;
and acquiring the current position information of the terminal device based on the first position information and/or the second position information.
2. The method of claim 1, wherein identifying the marker in the image and obtaining the first position information comprises:
identifying a marker in the image and acquiring identity information of the marker;
acquiring marker position information of the marker in a pre-stored map based on the identity information;
acquiring relative relationship information between the terminal device and the marker, wherein the relative relationship information comprises relative position information and relative attitude information of the terminal device with respect to the marker;
and acquiring first position information of the terminal device in the pre-stored map based on the marker position information and the relative relationship information.
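Claim 2's chain of lookups and transforms can be visualized with homogeneous matrices. The sketch below is illustrative only: the numbers are placeholders, and identity rotations are used purely to keep the example short.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Marker pose in the pre-stored map, looked up via the marker's identity info.
T_map_marker = pose_matrix(np.eye(3), np.array([2.0, 0.0, 5.0]))
# Device pose relative to the marker (relative position + relative attitude).
T_marker_device = pose_matrix(np.eye(3), np.array([0.0, 0.0, -1.5]))

# Composing the two gives the device's pose, and hence its first position
# information, in the pre-stored map.
T_map_device = T_map_marker @ T_marker_device
first_position = T_map_device[:3, 3]   # -> [2.0, 0.0, 3.5]
```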
3. The method of claim 2, wherein after identifying the marker in the image and obtaining identity information of the marker, the method further comprises:
judging whether the marker is a target marker or not according to the identity information;
and when the marker is a target marker, constructing a virtual scene matched with a pre-stored map based on the target marker.
4. The method according to claim 3, wherein after obtaining the current location information of the terminal device based on the first location information and/or the second location information, the method further comprises:
and displaying a virtual picture corresponding to the current position information in the virtual scene.
5. The method of claim 1, wherein after identifying the marker in the image, the method further comprises:
and acquiring virtual object data corresponding to the marker, and displaying the virtual object corresponding to the marker according to the virtual object data.
6. The method according to claim 1, wherein acquiring pose change information of the terminal device comprises:
collecting a current image containing key points;
extracting description vectors of key points in the current image;
and obtaining pose change information of the terminal device based on the description vectors.
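One conventional realization of claim 6, offered only as a sketch: ORB keypoints supply the description vectors, and the essential matrix between consecutive frames is decomposed into the device's attitude change and the direction of its position change (translation is recovered only up to scale). The claim does not prescribe ORB or this specific pipeline.

```python
import cv2
import numpy as np

def pose_change(prev_gray, curr_gray, camera_matrix):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)  # key points + description vectors
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix from the matched key points, then decomposition into
    # rotation (attitude change) and unit translation (position change direction).
    E, mask = cv2.findEssentialMat(pts1, pts2, camera_matrix, cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, camera_matrix, mask=mask)
    return R, t
```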
7. The method of claim 1, further comprising:
acquiring an image comprising the new marker;
identifying the new marker, and acquiring new marker position information of the new marker in a pre-stored 3D map;
recalculating first position information of the terminal device in the pre-stored 3D map according to the new marker position information;
calibrating the pose change information of the terminal device based on the recalculated first position information.
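Claim 7's calibration step amounts to replacing the drifting dead-reckoned estimate with the marker-derived fix whenever a new marker is recognized. The toy tracker below is an assumption-laden sketch rather than the claimed implementation; it only makes that reset explicit.

```python
import numpy as np

class PoseTracker:
    def __init__(self, start_position):
        self.position = np.asarray(start_position, dtype=float)

    def propagate(self, delta_translation):
        # Between marker sightings only pose change information is available,
        # so integration error accumulates here.
        self.position = self.position + np.asarray(delta_translation)

    def calibrate(self, recalculated_first_position):
        # A new marker was identified: the first position information recomputed
        # from the pre-stored 3D map replaces the drifted estimate.
        self.position = np.asarray(recalculated_first_position, dtype=float)

tracker = PoseTracker([0.0, 0.0, 0.0])
tracker.propagate([0.1, 0.0, 0.0])    # dead reckoning, drift accumulates
tracker.calibrate([0.12, 0.0, 0.0])   # new marker resets the estimate
```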
8. A positioning and tracking apparatus, the apparatus comprising:
an acquisition module for acquiring an image containing a marker;
the identification module is used for identifying the marker in the image and acquiring first position information;
the first pose module is used for acquiring pose change information of the terminal device, wherein the pose change information comprises position change information and attitude change information of the terminal device;
the second pose module is used for acquiring second position information of the terminal device according to the pose change information;
and the positioning module is used for acquiring the current position information of the terminal device based on the first position information and/or the second position information.
9. A terminal device, comprising a display, a memory, and a processor, the display and the memory being coupled to the processor, the memory storing instructions that, when executed by the processor, cause the processor to perform the method of any one of claims 1-7.
10. A computer-readable storage medium having program code executable by a processor, the program code causing the processor to perform the method of any one of claims 1-7.
CN201810891134.5A 2018-08-02 2018-08-02 Positioning tracking method, device, terminal equipment and computer readable storage medium Active CN110794955B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201810891134.5A CN110794955B (en) 2018-08-02 2018-08-02 Positioning tracking method, device, terminal equipment and computer readable storage medium
PCT/CN2019/098200 WO2020024909A1 (en) 2018-08-02 2019-07-29 Positioning and tracking method, terminal device, and computer readable storage medium
US16/687,699 US11127156B2 (en) 2018-08-02 2019-11-19 Method of device tracking, terminal device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810891134.5A CN110794955B (en) 2018-08-02 2018-08-02 Positioning tracking method, device, terminal equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110794955A true CN110794955A (en) 2020-02-14
CN110794955B CN110794955B (en) 2021-06-08

Family

ID=69425746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810891134.5A Active CN110794955B (en) 2018-08-02 2018-08-02 Positioning tracking method, device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110794955B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050105772A1 (en) * 1998-08-10 2005-05-19 Nestor Voronka Optical body tracker
CN101164253A (en) * 2005-04-19 2008-04-16 Sk泰力康姆株式会社 Location-based service method and system using location data included in image data
CN106446815A (en) * 2016-09-14 2017-02-22 浙江大学 Simultaneous positioning and map building method
CN108022264A (en) * 2016-11-01 2018-05-11 狒特科技(北京)有限公司 Camera pose determines method and apparatus
CN106713773A (en) * 2017-03-31 2017-05-24 联想(北京)有限公司 Shooting control method and electronic device
CN107562189A (en) * 2017-07-21 2018-01-09 广州励丰文化科技股份有限公司 A kind of space-location method and service equipment based on binocular camera
CN107747941A (en) * 2017-09-29 2018-03-02 歌尔股份有限公司 A kind of binocular visual positioning method, apparatus and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AKRAM SALEM et al.: "A GPS/Wi-Fi/Marker Analysis Based Simultaneous and Hierarchical Multi-Positioning System", International Competition on Evaluating AAL Systems through Competitive Benchmarking *
JUNJIE ZHANG et al.: "An Improvement Algorithm for OctoMap Based on RGB-D SLAM", 2018 Chinese Control and Decision Conference *
WU Hongfei: "Research on Marker Design and Recognition Methods in Augmented Reality", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115039015A (en) * 2020-02-19 2022-09-09 Oppo广东移动通信有限公司 Pose tracking method, wearable device, mobile device and storage medium
CN113313966A (en) * 2020-02-27 2021-08-27 华为技术有限公司 Pose determination method and related equipment
CN111595342A (en) * 2020-04-02 2020-08-28 清华大学 Indoor positioning method and system capable of being deployed in large scale
CN111595342B (en) * 2020-04-02 2022-03-18 清华大学 Indoor positioning method and system capable of being deployed in large scale
CN111698646A (en) * 2020-06-08 2020-09-22 浙江商汤科技开发有限公司 Positioning method and device
CN111698646B (en) * 2020-06-08 2022-10-18 浙江商汤科技开发有限公司 Positioning method and device
CN112051596A (en) * 2020-07-29 2020-12-08 武汉威图传视科技有限公司 Indoor positioning method and device based on node coding
CN113628284A (en) * 2021-08-10 2021-11-09 深圳市人工智能与机器人研究院 Pose calibration data set generation method, device and system, electronic equipment and medium
CN113628284B (en) * 2021-08-10 2023-11-17 深圳市人工智能与机器人研究院 Pose calibration data set generation method, device and system, electronic equipment and medium
CN113630593A (en) * 2021-08-17 2021-11-09 宁波未知数字信息技术有限公司 Multi-mode high-precision full-space hybrid positioning system

Also Published As

Publication number Publication date
CN110794955B (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN110794955B (en) Positioning tracking method, device, terminal equipment and computer readable storage medium
US11080885B2 (en) Digitally encoded marker-based augmented reality (AR)
US10499002B2 (en) Information processing apparatus and information processing method
EP2915140B1 (en) Fast initialization for monocular visual slam
EP3151202B1 (en) Information processing device and information processing method
KR20210046592A (en) Augmented reality data presentation method, device, device and storage medium
EP3039655B1 (en) System and method for determining the extent of a plane in an augmented reality environment
EP2727332B1 (en) Mobile augmented reality system
US9721388B2 (en) Individual identification character display system, terminal device, individual identification character display method, and computer program
CN111638796A (en) Virtual object display method and device, computer equipment and storage medium
US20160123742A1 (en) Image processing device, image processing method, and program
CN104180814A (en) Navigation method in live-action function on mobile terminal, and electronic map client
EP2733675A1 (en) Object display device, object display method, and object display program
CN112287928A (en) Prompting method and device, electronic equipment and storage medium
CN110873963B (en) Content display method and device, terminal equipment and content display system
JP6625734B2 (en) Method and apparatus for superimposing a virtual image on a photograph of a real scene, and a portable device
CN111862205A (en) Visual positioning method, device, equipment and storage medium
CN111815781A (en) Augmented reality data presentation method, apparatus, device and computer storage medium
KR20180039013A (en) Feature data management for environment mapping on electronic devices
CN111833457A (en) Image processing method, apparatus and storage medium
WO2020024909A1 (en) Positioning and tracking method, terminal device, and computer readable storage medium
CN112215964A (en) Scene navigation method and device based on AR
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
EP3007136B1 (en) Apparatus and method for generating an augmented reality representation of an acquired image
CN110737326A (en) Virtual object display method and device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Positioning and tracking methods, devices, terminal devices, and computer readable storage media

Effective date of registration: 20230417

Granted publication date: 20210608

Pledgee: China Merchants Bank Co., Ltd., Guangzhou Branch

Pledgor: GUANGDONG VIRTUAL REALITY TECHNOLOGY Co.,Ltd.

Registration number: Y2023980038285
