CN110598605B - Positioning method, positioning device, terminal equipment and storage medium - Google Patents

Positioning method, positioning device, terminal equipment and storage medium

Info

Publication number
CN110598605B
CN110598605B
Authority
CN
China
Prior art keywords
image
marker
image elements
target
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910822716.2A
Other languages
Chinese (zh)
Other versions
CN110598605A (en)
Inventor
胡永涛
戴景文
贺杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN201910822716.2A
Publication of CN110598605A
Application granted
Publication of CN110598605B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/10 Recognition assisted with metadata

Abstract

The application discloses a positioning method, a positioning device, a terminal device, and a storage medium. The method includes: capturing an image of a target carrier provided with a marker to obtain a target image, the target image containing at least a preset number of the marker's image elements; identifying the image elements contained in the target image and obtaining the sub-coding sequence corresponding to those elements; determining, based on the sub-coding sequence, the position information of the image elements within the marker; and obtaining, based on the position information, the relative position and rotation information between the terminal device and the target carrier.

Description

Positioning method, positioning device, terminal equipment and storage medium
Technical Field
The present application relates to the field of tracking interaction technologies, and in particular, to a positioning method and apparatus, a terminal device, and a storage medium.
Background
In recent years, with advances in science and technology, technologies such as augmented reality (AR) and virtual reality (VR) have become active research topics worldwide. Taking augmented reality as an example, AR augments the user's perception of the real world with information supplied by a computer system: it overlays computer-generated virtual objects, scenes, or system cues onto a real scene to enhance or modify the perception of the real-world environment, or of data representing that environment.
In interactive systems such as virtual reality and augmented reality systems, a target object needs to be identified and tracked. Traditional identification and tracking methods typically rely on magnetic sensors, optical sensors, ultrasonic waves, and the like, but their identification and tracking performance is often unsatisfactory because such sensors are strongly affected by the environment.
Disclosure of Invention
The embodiment of the application provides a positioning method, a positioning device, a terminal device and a storage medium, which can improve the accuracy of positioning and tracking.
In a first aspect, an embodiment of the present application provides a positioning method, which is applied to a terminal device, and the method includes: acquiring an image of a target carrier provided with a marker to obtain a target image, wherein the target image at least comprises a preset number of image elements in the marker, the marker comprises a plurality of image elements, the plurality of image elements are arranged at intervals, and the arrangement of the preset number of image elements adjacently arranged in each group is different from the arrangement of the preset number of image elements adjacently arranged in other groups; identifying image elements contained in the target image, and acquiring a sub-coding sequence corresponding to the image elements contained in the target image; determining positional information of image elements contained in the target image in the marker based on the sub-coded sequence; obtaining physical coordinates of image elements contained in the target image on the target carrier based on the position information; and obtaining the relative position and rotation information of the terminal equipment and the target carrier according to the physical coordinates of the image elements contained in the target image and the pixel coordinates in the target image.
In a second aspect, an embodiment of the present application provides a positioning apparatus, including: the image acquisition module is used for acquiring an image of a target carrier provided with a marker to obtain a target image, wherein the target image at least comprises a preset number of image elements in the marker, the marker comprises a plurality of image elements, the plurality of image elements are arranged at intervals, and the arrangement of the preset number of image elements adjacently arranged in each group is different from the arrangement of the preset number of image elements adjacently arranged in other groups; the image identification module is used for identifying image elements contained in the target image and acquiring a sub-coding sequence corresponding to the image elements contained in the target image; a position information determination module for determining position information of image elements contained in the target image in the marker based on the sub-coding sequence; a physical coordinate obtaining module, configured to obtain physical coordinates of image elements included in the target image on the target carrier based on the position information; and the rotation information acquisition module is used for acquiring the relative position and the rotation information of the terminal equipment and the target carrier according to the physical coordinates of the image elements contained in the target image and the pixel coordinates in the target image.
In a third aspect, an embodiment of the present application provides a terminal device, including: one or more processors; and a memory; wherein the memory stores one or more application programs configured to be executed by the one or more processors, the one or more programs being configured to perform the positioning method provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code, where the program code can be called by a processor to execute the positioning method provided in the first aspect.
According to the solutions provided by the embodiments of the application, a target image is obtained by capturing an image of a target carrier provided with a marker, the target image containing at least a preset number of the marker's image elements; the image elements contained in the target image are identified and their corresponding sub-coding sequence is obtained; the position information of the image elements within the marker is determined based on the sub-coding sequence; and the relative position and rotation information between the terminal device and the target carrier are obtained based on that position information.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 shows a schematic diagram of a recognition and tracking system provided by an embodiment of the present application;
FIG. 2 shows another schematic diagram of a recognition and tracking system provided by an embodiment of the present application;
FIG. 3 shows a further schematic diagram of a recognition and tracking system provided by an embodiment of the present application;
FIG. 4a shows a partial schematic view of a marker provided by an embodiment of the present application;
FIG. 4b shows another partial schematic view of a marker provided by an embodiment of the present application;
FIG. 4c shows a further partial schematic view of a marker provided by an embodiment of the present application;
FIG. 4d shows a further partial schematic view of a marker provided by an embodiment of the present application;
FIG. 5 shows yet another partial schematic view of a marker provided by an embodiment of the present application;
FIG. 6 shows a schematic flowchart of a positioning method according to another embodiment of the present application;
FIG. 7 shows a schematic flowchart of a positioning method according to another embodiment of the present application;
FIG. 8 shows a schematic perspective view of an interaction apparatus according to another embodiment of the present application;
FIG. 9 shows a schematic view of a scenario provided by yet another embodiment of the present application;
FIG. 10 shows a schematic flowchart of a positioning method according to still another embodiment of the present application;
FIG. 11 shows a block diagram of a positioning apparatus provided by an embodiment of the present application;
FIG. 12 shows a block diagram of a terminal device for executing a positioning method according to an embodiment of the present application;
FIG. 13 shows a storage unit for storing or carrying program code implementing a positioning method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
An application scenario of the positioning method provided in the embodiment of the present application is described below.
Referring to fig. 1, a recognition and tracking system 10 provided by an embodiment of the present application is shown, which includes a terminal device 100 and a target carrier 300. The target carrier 300 may be an interaction apparatus (such as the interaction apparatus 400 shown in fig. 2) or a carrier fixed in position in real space (such as the wall 600 shown in fig. 3). The terminal device 100 may or may not be connected to the target carrier 300. Specifically, when the target carrier 300 is an interaction apparatus (for example, the interaction apparatus 400 in fig. 2), the terminal device 100 and the target carrier 300 may be connected wirelessly via Bluetooth, Wi-Fi, ZigBee, and the like, or through a wired connection such as a data line. The connection manner between the terminal device 100 and the target carrier 300 is not limited in the embodiments of the present application. The target carrier 300 may be provided with a marker 200 thereon.
In the embodiments of the present application, the terminal device 100 may be a head-mounted display device, or a mobile device such as a mobile phone or a tablet. When the terminal device 100 is a head-mounted display device, it may be an integrated (standalone) head-mounted display. The terminal device 100 may also be an external head-mounted display device connected to an intelligent terminal, such as a mobile phone, that serves as its processing and storage device; that is, the intelligent terminal may be inserted into or connected to the terminal device 100, which then displays the virtual content. In some embodiments, the terminal device 100 may itself be inserted into or connected to an external head-mounted display device, acting as that display's processing and storage device so that the virtual content is displayed in the head-mounted display.
When a user wearing the head-mounted display device enters a preset virtual scene and the target carrier 300 falls within the visual range of the image sensor on the terminal device 100, the terminal device 100 may treat the marker 200 arranged on the target carrier as the target marker, capture an image containing at least a preset number of the target marker's image elements, and identify the image elements contained in the captured image to obtain the relative position and rotation information between the terminal device and the target carrier, thereby positioning and tracking the target carrier.
Referring to fig. 4a to 4d, in some embodiments the marker 200 may include a plurality of image elements 210 and a background 230. The image elements 210 are disposed at intervals on the background 230, with the background 230 exposed between adjacent elements. Two adjacent image elements 210 may simply be spaced apart; elements other than image elements may or may not be placed in the gap between them, which is not limited here. The spacing between two adjacent image elements 210 may be uniform or may vary. Further, the arrangement of each group of n adjacently disposed image elements 210 differs from the arrangement of every other group of n adjacently disposed image elements. The difference may lie, for example, in the shape of at least one of the n elements, in the color of at least one of the n elements, or in the order of the n elements, where n is an integer greater than 0, for example n = 2, 3, or 5. As an embodiment, n adjacently disposed image elements may be regarded as one element group; the marker may include a plurality of element groups, each containing at least n adjacently disposed image elements, with the arrangement of each group's elements different from that of every other group, where n is the minimum number of image elements in a group.
For example, take the nine image elements 210 arranged in fig. 4a, and name them A1, A2, …, A9 from left to right along the arrangement direction. The labels A1, A2, …, A9 are merely exemplary names for the image elements 210 and are not limiting. Along the arrangement direction of the preset arrangement (for example, from left to right in the figure), any 3 adjacently disposed image elements are arranged differently from any other 3 adjacently disposed image elements; for example, the group A1, A2, A3 differs in arrangement from other adjacently disposed groups of 3 (such as A2, A3, A4 or A5, A6, A7, not exhaustively listed here).
In one embodiment, the marker 200 includes k different types of image elements 210, where k is an integer greater than or equal to 2; for example, fig. 4a shows three types of image elements 210: a triangular pattern, a square pattern, and a circular pattern. In some embodiments, the total number of image elements in the marker is related to the minimum group size n and the number of element types k, and may be at most k^n. As shown in fig. 4a, the marker 200 may include three types of image elements 210 (triangle, square, circle); with the four image elements 210 enclosed by the dotted line forming a group of adjacently disposed elements satisfying the arrangement above, the total number of image elements in the marker 200 may be at most 3^4, i.e., 81. As an embodiment, k and n may be set as required. The choice of n may depend on how many image elements the image sensor can capture at once; for example, if the sensor can capture at least 4 image elements at a time, n may be any positive integer less than or equal to 4. Further, k and n may be chosen according to the target carrier on which the marker 200 is placed. If the marker 200 is disposed on an interaction apparatus such as a controller, the required marker length is short, i.e., fewer image elements are needed in total, so k and n can be small; if the marker 200 is disposed on a fixed carrier in real space (e.g., a wall), the required marker length is long, i.e., more image elements are needed, so k and n can be larger to ensure the total number of image elements meets the requirement. Even when the required marker length is short, k and n may still be set to large values, with the marker containing only as many image elements as needed.
As an embodiment, an image element may include at least one of a pattern, a number, a letter, and a symbol. For example, the marker 200 in fig. 4a has k = 3 and n = 3 with patterns as image elements 210; the marker 200 in fig. 4b has k = 3 and n = 3 with letters; the marker 200 in fig. 4c has k = 2 and n = 3 with patterns; and the marker 200 in fig. 4d has k = 5 and n = 3 with image elements 210 that mix patterns and letters. It is understood that image elements may include patterns, numbers, letters, symbols, and the like.
As an embodiment, an element group may include n + a image elements, where a is an integer greater than 0, and the arrangement of each group of n + a adjacently disposed image elements in the marker differs from that of every other such group. For example, in fig. 4a the 3 image elements 210 adjacently disposed starting from the first image element 210 are arranged differently from any other group of 3 adjacently disposed elements, such as the 3 starting from the second image element 210. If a is 1, the 4 image elements 210 adjacently disposed starting from the first image element 210 are likewise arranged differently from any other group of 4 adjacently disposed elements, such as the 4 starting from the second image element 210.
As an embodiment, the first and last of the marker's arranged image elements may be regarded as adjacent, so that at least one group of n adjacently disposed image elements includes both the first-arranged and the last-arranged image element; that is, at least one element group contains the first and last image elements. For example, with the nine image elements 210 of fig. 4a named A1, A2, …, A9 along the arrangement direction (e.g., from left to right in the figure), the first element A1 and the last element A9 may be regarded as adjacent, so one group of 3 adjacently disposed image elements may be A8, A9, A1.
In some embodiments, image elements have corresponding encoded information, with different image elements corresponding to different codes. Taking the nine image elements 210 of fig. 4a as an example, there are three element types: the triangular pattern corresponds to code 0, the square pattern to code 1, and the circular pattern to code 2. Identifying the image elements in a fixed order yields the code of each element and hence the coding sequence of the marker; identifying the image elements 210 of fig. 4a from left to right gives the marker 200 the coding sequence 000100201. Codes may be represented by numbers, letters, symbols, and the like; the codes above are merely illustrative, not limiting.
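This uniqueness property can be checked mechanically. Below is a minimal Python sketch, assuming the code table of fig. 4a (triangle 0, square 1, circle 2); the function name is illustrative, and the cyclic option models the wraparound groups described above:

```python
def windows_are_unique(code_seq: str, n: int, cyclic: bool = True) -> bool:
    """Return True when every run of n adjacent codes occurs exactly once.

    With k distinct codes there are at most k**n possible windows, which
    is why a marker built this way can hold at most k**n image elements.
    """
    if cyclic:
        # Treat the first and last elements as adjacent, covering the
        # wraparound groups (such as A8, A9, A1) described above.
        extended = code_seq + code_seq[:n - 1]
        windows = [extended[i:i + n] for i in range(len(code_seq))]
    else:
        windows = [code_seq[i:i + n] for i in range(len(code_seq) - n + 1)]
    return len(windows) == len(set(windows))

# Coding sequence of the nine-element marker of fig. 4a
# (triangle = 0, square = 1, circle = 2):
print(windows_are_unique("000100201", n=3, cyclic=False))  # True
```

Note that the nine-element example passes the linear check; a marker that relies on wraparound groups would need a coding sequence that also passes the cyclic check.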
In some embodiments, as shown in fig. 5, the marker 200 further includes at least one identification element 220 that identifies the marker 200 so it can be distinguished from other markers. For example, two markers may contain image elements in the same arrangement but different identification elements, so that a marker's identity can be determined by recognizing its identification element. The identification element 220 may differ from the image elements 210 in element type; for example, in fig. 5 the image elements 210 include a triangular pattern, a square pattern, and a circular pattern, while the identification element 220 is a pattern containing two white dots, i.e., it differs from the image elements in type.
In some embodiments, n adjacently disposed image elements may be spaced between every two identification elements 220. Taking the three identification elements 220 of fig. 5 as an example, name them B1, B2, and B3 along the arrangement direction (e.g., from left to right in the figure); B1, B2, and B3 are merely exemplary names, not limitations. Three adjacently disposed image elements may then be spaced between each pair of B1, B2, and B3. In some embodiments, the number of adjacently disposed image elements 210 between two identification elements 220 may differ from n, and may also differ between pairs; for example, two image elements may lie between B1 and B2, and three between B2 and B3.
In some embodiments, the identification elements 220 are all the same; for example, each identification element 220 in fig. 5 is a pattern containing two white dots. An identification element 220 may be placed before the first image element 210, between image elements 210, and/or after the last image element 210; in fig. 5, the identification elements 220 are placed before the first image element 210 and between image elements 210.
Referring to fig. 6, fig. 6 is a schematic flowchart of a positioning method according to another embodiment of the present application. The embodiment shown in fig. 6 is described in detail below; the method may specifically include the following steps:
step S110: the method comprises the steps of carrying out image acquisition on a target carrier provided with a marker to obtain a target image, wherein the target image at least comprises a preset number of image elements in the marker, the marker comprises a plurality of image elements, the plurality of image elements are arranged at intervals, and the arrangement of the preset number of image elements which are adjacently arranged in each group is different from the arrangement of the preset number of image elements which are adjacently arranged in other groups.
In some embodiments, the object carrier refers to a carrier provided with a marker, wherein the object carrier may be an interaction device (e.g. the interaction device 400 shown in fig. 2) or a fixed carrier in real space (e.g. a wall, a door frame, etc.).
In some embodiments, when part or all of the target carrier is within the visual range of the terminal device's image capture device, an image containing at least part of the marker on the target carrier may be captured by the image sensor. The marker may be integrated into the target carrier or adhesively attached to it, and the image capture device captures an image containing some of the marker's image elements to obtain the target image, where the target image is an image containing information of the marker on the target carrier. In some embodiments, the marker includes a plurality of image elements disposed at intervals, with each group of adjacently disposed elements arranged differently from every other group. For example, naming the nine image elements 210 of fig. 4a as A1, A2, …, A9 along the arrangement direction: the 3 image elements 210 adjacently disposed starting from the first element (i.e., A1, A2, A3), taken along the arrangement direction (e.g., from left to right in the figure), are arranged differently from the 3 starting from the second element (i.e., A2, A3, A4); that is, A1, A2, A3 differ in arrangement from A2, A3, A4. Further, the target image should contain at least a preset number of the marker's image elements; for example, if the preset number is 3, the target image may contain 3 or more image elements, the preset number being the number of image elements that satisfies the arrangement requirement.
Step S120: and identifying the image elements contained in the target image, and acquiring the sub-coding sequences corresponding to the image elements contained in the target image.
In some embodiments, the marker corresponds to a coding sequence formed from the code of each of its image elements, taken in the order the elements are arranged. The image capture device of the terminal device may send the captured target image to the terminal device's processor, which identifies the image elements contained in the target image to obtain their shapes, contents, and so on. In some embodiments, identifying the target image may involve binarizing it, identifying each connected component it contains, and recognizing the marker's pattern elements from the connected components. Further, if an image element is a character, the characters contained in the target image may be obtained by methods such as character recognition.
After the image elements contained in the target image are identified, the sub-coding sequences corresponding to the corresponding image elements can be obtained. As one way, a corresponding code for each of the image elements contained in the target image may be obtained. The processor may identify each image element included in the target image, and respectively obtain a code corresponding to each image element, where a correspondence between the image element and the code may be pre-stored. For example, image acquisition is performed on the marker 200 to obtain a target image including partial image elements as shown in fig. 4a, a plurality of image elements 210 in the target image are identified, the image elements 210 are identified to include a triangle pattern, a square pattern and a circle pattern, and by searching the correspondence between the shapes and the codes of the image elements stored in advance, the corresponding code of the triangle pattern is 0, the corresponding code of the square pattern is 1, and the corresponding code of the circle pattern is 2.
In some embodiments, the corresponding sub-coding sequence may be obtained by taking the known code of each image element in order. Specifically, if the target image contains the image elements in the dashed box of fig. 4a, and the triangular pattern is known to correspond to code 0, the square pattern to code 1, and the circular pattern to code 2, then the sub-coding sequence corresponding to those image elements is 0100.
In some embodiments, the number of image elements contained in the image may be compared with the preset number; when they are equal, the corresponding sub-coding sequence is obtained by taking the code of each image element in order. For example, when the image contains 3 image elements and the preset number is also 3, the sub-coding sequence is formed from the codes of the 3 captured elements in capture order.
As an embodiment, the numbers may differ: for example, in fig. 4a the image contains the 4 elements enclosed by the dashed line while the preset number is 3. The sub-coding sequence may then be formed from the codes of all 4 elements in order, giving 0100; alternatively, 3 of the 4 elements may be selected (e.g., 3 consecutive elements starting from the first element in the dashed box) and the sub-coding sequence formed from them, giving 010. As an embodiment, when the image contains fewer image elements than the preset number, the position or focal length of the image capture device may be adjusted so that it captures at least the preset number of image elements.
As an implementation, image elements contained in the target image may be sampled at a preset interval, and the number sampled compared with the preset number; when they are equal, the sub-coding sequence is formed from the sampled elements' codes in sampling order. For example, in fig. 4b the image contains 7 image elements; sampling every other image element 210 from left to right yields 4 elements, and when the preset number is also 4, the sub-coding sequence is formed from their codes in sampling order.
In some embodiments, the numbers may differ here as well: for example, in fig. 4b the image contains the 7 elements enclosed by the dashed line; sampling every other image element 210 from left to right yields 4 elements, while the preset number is 3. The sub-coding sequence may then be formed from the 4 sampled elements' codes in sampling order, or 3 of the 4 sampled elements may be selected (e.g., 3 consecutive ones starting from the first element in the dashed box, in sampling order) and the sub-coding sequence formed from them. As an embodiment, when fewer elements are sampled than the preset number, the position or focal length of the image capture device may be adjusted so that at least the preset number of elements is sampled at the preset interval.
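The variants above can be summarized in a minimal sketch, assuming the fig. 4a shape-to-code table; the names, the interval parameter, and the choice to truncate surplus elements are illustrative assumptions:

```python
# Assumed shape-to-code table mirroring fig. 4a; a deployed system would
# store this correspondence in advance alongside the marker definition.
SHAPE_CODES = {"triangle": 0, "square": 1, "circle": 2}

def sub_coding_sequence(shapes, preset_number, interval=1):
    """Turn recognized element shapes (in capture order) into a
    sub-coding sequence.

    interval > 1 models the variant that samples elements at a preset
    interval; surplus elements beyond preset_number are truncated here
    (one of the two options described above).
    """
    codes = [SHAPE_CODES[s] for s in shapes[::interval]]
    if len(codes) < preset_number:
        # Mirrors the text: adjust the camera position or focal length.
        raise ValueError("fewer elements than the preset number")
    return "".join(str(c) for c in codes[:preset_number])

# The four elements in the dashed box of fig. 4a:
seen = ["triangle", "square", "triangle", "triangle"]
print(sub_coding_sequence(seen, preset_number=4))  # 0100
print(sub_coding_sequence(seen, preset_number=3))  # 010
```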
Step S130: the positional information of the image element contained in the target image in the marker is determined based on the sub-coded sequence.
In some embodiments, the position of the image elements contained in the target image within the marker may be determined from the resulting sub-coding sequence. Because each group of the preset number of adjacently disposed image elements in the marker is arranged differently from every other such group, the identified sub-coding sequence occurs exactly once in the marker's coding sequence: no other sub-coding sequence in the coding sequence repeats it, so its position in the coding sequence is fixed. The position information of the image elements within the marker can therefore be determined from the position of the sub-coding sequence in the marker's coding sequence, where the position information may be the rank or index of each image element within the marker.
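Since the sub-coding sequence occurs exactly once, determining the position information reduces to a substring search. A minimal sketch (the function name is illustrative):

```python
def locate_in_marker(coding_sequence: str, sub_sequence: str):
    """Return the 1-based positions within the marker of the image
    elements behind sub_sequence.

    The arrangement property guarantees the sub-coding sequence occurs
    exactly once, so a plain substring search pins down the positions.
    """
    start = coding_sequence.find(sub_sequence)
    if start < 0:
        raise ValueError("sub-coding sequence not found; likely a misread element")
    return [start + i + 1 for i in range(len(sub_sequence))]

# Fig. 4a: the elements coded 0100 are the 3rd to 6th elements of the marker.
print(locate_in_marker("000100201", "0100"))  # [3, 4, 5, 6]
```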
Step S140: based on the position information, physical coordinates of image elements contained in the target image on the target carrier are obtained.
In some embodiments, the physical coordinates are the coordinates of the image elements contained in the target image in the physical coordinate system of the target carrier; the physical coordinates of each image element are its actual physical position on the target carrier. The correspondence between each image element's position information within the marker and its physical coordinates on the target carrier may be stored in advance, and the physical coordinates matching the position information retrieved from that correspondence.
As an embodiment, the center point of the target carrier may be chosen as the origin to establish a physical coordinate system, in which each image element's physical coordinates are preset values. For example, with an XYZ coordinate system established at the carrier's center point, the distance of each image element to the origin and its rotation information can be measured, so the physical coordinates (X1, Y1, Z1) of each image element in the XYZ system can be determined. After the physical coordinate system is established, the physical coordinates of each of the marker's image elements on the target carrier may be stored. In other embodiments, the physical coordinate system may be established with another point on the target carrier as the origin.
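As a sketch of the pre-stored correspondence, assume a purely illustrative layout of nine elements spaced 2 cm apart along the X axis (these coordinates are placeholders, not values from the embodiments):

```python
import numpy as np

# Hypothetical pre-measured table: row i holds the (X, Y, Z) physical
# coordinates, in meters, of the (i + 1)-th image element in a frame
# whose origin is the carrier's center point.
ELEMENT_COORDS = np.array([[-0.08 + 0.02 * i, 0.0, 0.0] for i in range(9)])

def physical_coordinates(positions):
    """Look up stored physical coordinates for 1-based marker positions."""
    return ELEMENT_COORDS[[p - 1 for p in positions]]

print(physical_coordinates([3, 4, 5, 6]))  # coordinates of elements 3..6
```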
Step S150: and obtaining the relative position and rotation information of the terminal equipment and the target carrier according to the physical coordinates of the image elements contained in the target image and the pixel coordinates in the target image.
In some embodiments, the pixel coordinates are the coordinates of each image element within the image coordinate system corresponding to the target image, and the processor may obtain the pixel coordinates of all image elements.
After the pixel coordinates and physical coordinates of each image element in the target image are acquired, the position and rotation information between the image capture device and the target carrier is obtained from them. Specifically, the mapping parameters between the camera coordinate system and the physical coordinate system are computed from each element's pixel coordinates, its physical coordinates, and the pre-acquired internal parameters of the image capture device.
For example, by processing the acquired pixel and physical coordinates of the image elements together with the image capture device's pre-acquired internal parameters using a preset algorithm (such as an SVD-based algorithm), the rotation parameter and the translation parameter between the image capture device's camera coordinate system and the physical coordinate system can be obtained.
It should be noted that the rotation parameter and the translation parameter serve as the relative position and rotation information between the image capture device and the target carrier. The rotation parameter represents the relative rotation between the camera coordinate system and the target carrier's physical coordinate system, and the translation parameter represents the relative displacement between them; the two can be used directly as the relative position and rotation information between the terminal device and the target carrier.
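The computation above is the classic perspective-n-point (PnP) problem. The embodiments name an SVD-based algorithm; the sketch below substitutes OpenCV's solvePnP as one standard solver, assuming a calibrated pinhole camera with placeholder intrinsics and at least four non-collinear point correspondences:

```python
import numpy as np
import cv2

def relative_pose(physical_pts, pixel_pts, camera_matrix, dist_coeffs=None):
    """Estimate the rotation and translation parameters between the camera
    coordinate system and the carrier's physical coordinate system from
    matched physical/pixel coordinates and the camera intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(physical_pts, dtype=np.float64),  # (N, 3) on the carrier
        np.asarray(pixel_pts, dtype=np.float64),     # (N, 2) in the target image
        camera_matrix,
        dist_coeffs,
    )
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation parameter
    return rotation, tvec              # 3x1 translation parameter

# Placeholder pinhole intrinsics from a prior calibration (illustrative).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
```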
The positioning method provided by the embodiment comprises the steps of carrying out image acquisition on a target carrier provided with a marker to obtain a target image; identifying image elements contained in the target image, and acquiring a sub-coding sequence corresponding to the image elements contained in the target image; determining positional information of image elements contained in the target image in the marker based on the sub-coding sequence; obtaining physical coordinates of image elements contained in the target image on the target carrier based on the position information; and obtaining the relative position and rotation information of the terminal equipment and the target carrier according to the physical coordinates of the image elements contained in the target image and the pixel coordinates in the target image. Therefore, the position information of the image elements in the marker is determined based on the corresponding sub-coding sequence of the image elements, and the relative position and rotation information of the terminal equipment and the target carrier are obtained based on the position information, so that the positioning accuracy is improved.
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a positioning method according to another embodiment of the present application. As will be described in detail below with respect to the embodiment shown in fig. 7, the method may specifically include the following steps:
step S210: and carrying out image acquisition on the target carrier provided with the marker to obtain a target image.
Step S220: and identifying the image elements contained in the target image, and acquiring the sub-coding sequences corresponding to the image elements contained in the target image.
Step S230: the positional information of the image element contained in the target image in the marker is determined based on the sub-coded sequence.
Step S240: based on the position information, physical coordinates of image elements contained in the target image on the target carrier are obtained.
For the detailed description of steps S210 to S240, refer to steps S110 to S140, which are not described herein again.
Step S250: and obtaining the relative position and rotation information of the terminal equipment and the target carrier according to the physical coordinates of the image elements contained in the target image and the pixel coordinates in the target image.
In some embodiments, when the object carrier is an interactive device, after step S250, the following steps may be further included:
step S260A: and obtaining the six-degree-of-freedom information of the interactive device according to the relative position and the rotation information of the terminal equipment and the interactive device.
In some embodiments, the six-degree-of-freedom information of the interaction apparatus may be obtained from the relative position and rotation information between the terminal device and the interaction apparatus; the six degrees of freedom comprise the apparatus's translational and rotational freedoms along the three Cartesian axes in space. As an embodiment, a coordinate system may be established centered on the interaction apparatus, and the apparatus's translation and rotation information along the axes of that coordinate system, that is, its six-degree-of-freedom information, obtained from the relative position and rotation information between the terminal device and the apparatus. An inertial measurement unit (IMU) may also be arranged in the interaction apparatus to obtain its attitude changes in real time, and hence its six-degree-of-freedom information.
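As an illustration of decomposing the relative pose into six degrees of freedom, here is a sketch assuming a Z-Y-X Euler convention (one of several equivalent conventions; the convention and function name are our assumptions):

```python
import numpy as np

def six_degrees_of_freedom(rotation, tvec):
    """Decompose a relative pose into translation along X/Y/Z plus
    roll/pitch/yaw, reading Z-Y-X Euler angles off the rotation matrix."""
    tx, ty, tz = np.asarray(tvec).ravel()
    r = np.asarray(rotation)
    pitch = np.arcsin(-r[2, 0])          # rotation about Y
    roll = np.arctan2(r[2, 1], r[2, 2])  # rotation about X
    yaw = np.arctan2(r[1, 0], r[0, 0])   # rotation about Z
    return tx, ty, tz, roll, pitch, yaw
```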
In some embodiments, the interaction apparatus may be the interaction apparatus 400 shown in fig. 2. Referring to fig. 2 and 8, the interaction apparatus 400 includes a control portion 430 and a main portion 410. The main portion 410 includes a housing 411 connected to the control portion; the housing has a first wall 4111 and a second wall 4113 facing away from each other, and the markers may include a first marker disposed on the first wall 4111 and a second marker disposed on the second wall 4113. The control portion 430 is manipulated by the user, for example held and manipulated, or worn and manipulated, without limitation here; manipulation includes, but is not limited to, shaking the control portion 430, rotating it, and touching its keys. The control portion 430 includes a holding portion 431 and a connecting portion 433 that joins the holding portion 431 to the housing 411. In some embodiments, the holding portion 431 and the housing 411 can rotate relative to each other via the connecting portion 433, so that the housing 411 and the holding portion 431 can form any angle between them to suit the user's interaction habits.
In some embodiments, the first marker and the second marker may be different portions captured from one and the same marker, which satisfies the property that each group of the preset number of adjacently disposed image elements is arranged differently from every other such group. The terminal device can therefore identify the image elements contained in the target image, determine their corresponding sub-coding sequence and its position in the coding sequence, and thereby determine whether each image element in the target image belongs to the first marker or the second marker, obtaining a determination result. Based on the determination result and the position information, the physical coordinates of the image elements on the target carrier are obtained from the pre-stored correspondence. Determining which marker an element belongs to before looking up its physical coordinates improves the accuracy of the acquired coordinates. For example, for the interaction apparatus 400 shown in fig. 8, whether an image element is on the first wall or the second wall can be determined from the position of its sub-coding sequence within the coding sequence; based on that result and the position information, the physical coordinates of the image elements on the interaction apparatus 400 are obtained from the pre-stored correspondence.
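A sketch of that wall decision, assuming the length of the first wall's portion of the shared coding sequence is stored in advance (the constant and function name are hypothetical):

```python
def wall_of(start_index: int, first_wall_span: int) -> str:
    """Decide which wall an observed sub-coding sequence lies on.

    first_wall_span is a hypothetical pre-stored constant: the number of
    leading codes of the shared coding sequence printed on the first wall.
    start_index is the 0-based offset found by the substring search.
    """
    return "first wall" if start_index < first_wall_span else "second wall"

# E.g., if the first wall carries the first 40 codes of the sequence:
print(wall_of(2, 40))   # first wall
print(wall_of(55, 40))  # second wall
```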
In some embodiments, the target image acquired by the image sensor includes image elements of both the first marker and the second marker. The two markers can then be distinguished by the types of the identified image elements; two corresponding sub-coding sequences are obtained from the identified elements, the position information of the first sub-coding sequence within the first marker and of the second sub-coding sequence within the second marker are obtained respectively, and the interaction apparatus is positioned based on the two pieces of position information.
In some embodiments, when the first marker and the second marker belong to different markers, it may be determined whether the image element included in the target image belongs to the first marker or the second marker by identifying the type of the image element.
In some embodiments, when the first marker and the second marker belong to different markers and both include an identification element, whether the image elements contained in the target image belong to the first marker or the second marker can be determined by recognizing the identification element.
In some embodiments, when the target carrier is a fixed-position carrier in real space, after step S250, the following steps may be further included:
step S260B: and acquiring the spatial position of the carrier with fixed position in the real space.
The fixed carrier may take many forms, for example, but not limited to, a wall in a room or fixed furniture. In some embodiments, the spatial position of the fixed-position carrier in real space may be stored in advance so that it can be retrieved. Specifically, a spatial coordinate system of the real space may be established, and the carrier's spatial coordinates in that system determined from its fixed position in real space and stored; the spatial coordinate system may be established from a world-coordinate origin selected in the real space.
Step S270B: and obtaining the position and posture information of the terminal equipment in the real space according to the spatial position and the relative position and rotation information of the terminal equipment and the carrier with the fixed position.
In some embodiments, the position and posture information of the terminal device in real space may be obtained from the spatial position of the fixed carrier in real space together with the relative position and rotation information between the terminal device and the fixed carrier. As a specific embodiment, coordinates can be converted using the carrier's spatial coordinates in the real-space coordinate system and the relative position and rotation information between the terminal device and the carrier, yielding the terminal device's spatial coordinates in that system and hence its position and posture information in real space. For example, in the scene shown in fig. 9, a spatial coordinate system of the real space may be established with the center of the room as the origin, and the spatial coordinates of the wall 600 bearing the marker 200 acquired in that system; from the acquired relative angle and distance between the terminal device 100 and the wall 600, together with the wall's position and orientation in the spatial coordinate system, the spatial coordinates of the terminal device 100 in that system can be computed, giving the position and posture information of the terminal device 100 in the room. Indoor positioning and tracking of the terminal device can thus be achieved.
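The coordinate conversion chains two rigid transforms: the carrier's stored pose in the spatial coordinate system, and the inverse of the carrier-to-camera pose recovered from tracking. A sketch with homogeneous 4x4 matrices (frame naming is ours):

```python
import numpy as np

def to_homogeneous(rotation, tvec):
    """Pack a 3x3 rotation and a translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = np.asarray(tvec).ravel()
    return T

def device_pose_in_world(T_world_carrier, rotation, tvec):
    """Return the terminal device's pose in the real-space coordinate system.

    rotation/tvec form the carrier-to-camera transform from pose
    estimation, so the camera's world pose is the carrier's stored world
    pose composed with the inverse of that transform.
    """
    T_camera_carrier = to_homogeneous(rotation, tvec)
    return T_world_carrier @ np.linalg.inv(T_camera_carrier)
```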
It should be understood that the target carrier may be any object in the real environment, and is not limited to the above-mentioned interaction device and fixed carrier, but may also be other objects that need to be tracked, for example, a marker may be disposed on the medical instrument to track the medical instrument, or a marker may be disposed on the blackboard to track the blackboard, and is not limited herein.
According to the positioning method provided by this embodiment, when the target carrier is an interaction apparatus, the six-degree-of-freedom information of the apparatus can be obtained from the relative position and rotation information between the terminal device and the apparatus; when the target carrier is a carrier fixed in real space, the position and posture information of the terminal device in real space can be obtained from the carrier's spatial position in real space together with the relative position and rotation information between the terminal device and the target carrier, thereby positioning the interaction apparatus or achieving indoor positioning.
Referring to fig. 10, fig. 10 is a flow chart illustrating a positioning method according to still another embodiment. As will be explained in detail below with respect to the embodiment shown in fig. 10, the method may specifically include the following steps:
step S310: and carrying out image acquisition on the target carrier provided with the marker to obtain a target image.
For detailed description of step S310, please refer to step S110, which is not described herein again.
Step S320: and identifying the image elements contained in the target image, and acquiring the sub-coding sequences corresponding to the image elements contained in the target image.
In some embodiments, the code of each image element contained in the target image may be acquired in sequence, and the sub-coding sequence formed in the acquisition order. Specifically, if the target image contains the image elements shown in fig. 4b, and the character "A" corresponds to code 0, "B" to code 1, and "C" to code 2, the code of each image element 210 may be obtained from left to right, giving the sub-coding sequence 000100201101.
In some embodiments, the codes of some of the target image's image elements may be acquired at a preset interval, and the sub-coding sequence formed in the acquisition order. Specifically, if the target image contains the image elements shown in fig. 4a, with the triangular pattern corresponding to code 0, the square pattern to code 1, and the circular pattern to code 2, then sampling every other image element 210 from left to right and taking the codes in sampling order gives the sub-coding sequence 00021. Similarly, if the target image contains the image elements shown in fig. 4c, with the black-dot pattern corresponding to code 0 and the annular pattern to code 1, sampling every other image element 210 from left to right gives the sub-coding sequence 00010.
Step S330: and searching the position area of the sub-coding sequence in the coding sequence corresponding to the pre-stored marker.
In some embodiments, given the sub-coding sequence, its position area within the pre-stored coding sequence of the marker can be searched for. Specifically, after retrieving the marker's pre-stored coding sequence, the sub-coding sequence is compared against it to find the position of the sub-sequence's first code within the coding sequence, and hence the position of the sub-coding sequence in the pre-stored coding sequence. For example, referring again to fig. 4a, the four image elements 210 framed by the dotted line may be the image elements 210 contained in a target image acquired by the image sensor; converting them into codes gives the sub-coding sequence 0100. Comparing it with the pre-stored coding sequence 000100201 shows that the first bit of 0100 is the 3rd bit of the pre-stored sequence, the second bit is the 4th, the third bit is the 5th, and the fourth bit is the 6th, which gives the position area of the sub-coding sequence 0100 within the pre-stored coding sequence 000100201.
Step S340: position information of image elements contained in the target image in the marker is determined based on the position area.
In some embodiments, the position information of the image elements contained in the target image in the marker can be determined from the position area of the sub-coding sequence in the coding sequence corresponding to the pre-stored marker. In particular, the position in the marker of the first of the captured preset number of image elements may be located from the position of the first bit of the sub-coding sequence in the coding sequence. For example, continuing the example of step S330 above, since the first bit of the sub-coding sequence 0100 falls on the 3rd bit of the pre-stored coding sequence, the second on the 4th, the third on the 5th, and the fourth on the 6th, the four image elements in the dashed frame of fig. 4a can be located, from left to right, at the 3rd, 4th, 5th, and 6th element positions of the marker 200.
In some embodiments, the target image may further contain at least one identification element in the marker, where the identification element is distinct from the plurality of image elements and serves to identify the marker's identity. The processor may also identify the identification element contained in the target image, acquire the code corresponding to it, and determine the identity information of the marker from that code according to the correspondence between codes and marker identities.
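A minimal sketch of such a lookup; the code-to-identity table below is hypothetical, as the embodiment only states that a correspondence between codes and marker identities exists:

```python
# Hypothetical registry mapping identification-element codes to marker identities.
MARKER_IDENTITIES = {7: "controller-front", 8: "controller-back"}

def marker_identity(identification_code):
    """Resolve a marker's identity from its identification element's code."""
    return MARKER_IDENTITIES.get(identification_code, "unknown")

print(marker_identity(7))  # -> "controller-front"
```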
In the positioning method provided by the above embodiment, an image of the target carrier provided with the marker is acquired to obtain a target image; the image elements contained in the target image are identified and the sub-coding sequence corresponding to the preset number of image elements is obtained; the position area of the sub-coding sequence in the coding sequence corresponding to the pre-stored marker is searched; and the position information of the image elements in the marker is determined based on that position area. Because the position information is determined by locating the sub-coding sequence within the marker's coding sequence, the accuracy of determining the position of each image element in the marker is improved.
Referring to fig. 11, a block diagram of a positioning apparatus 500 according to an embodiment of the present application is shown. The positioning apparatus 500 includes an image acquisition module 510, an image recognition module 520, a position information determination module 530, a physical coordinate acquisition module 540, and a rotation information acquisition module 550, wherein:
the image acquisition module 510 is configured to acquire an image of a target carrier provided with a marker to obtain a target image, where the target image contains at least a preset number of the image elements in the marker; the marker comprises a plurality of image elements arranged at intervals, and the arrangement of each group of the preset number of adjacently arranged image elements differs from the arrangement of every other such group.
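The arrangement constraint just stated can be expressed as a property check; a minimal sketch assuming integer codes (the check itself is illustrative and not part of the disclosed apparatus):

```python
def windows_are_unique(code_sequence, preset_number):
    """True if every group of `preset_number` adjacent codes is unique,
    so that any captured window of that length identifies its own position."""
    windows = [tuple(code_sequence[i:i + preset_number])
               for i in range(len(code_sequence) - preset_number + 1)]
    return len(windows) == len(set(windows))

print(windows_are_unique([0, 0, 0, 1, 0, 0, 2, 0, 1], 4))  # -> True
```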
The image recognition module 520 is configured to recognize image elements included in the target image and obtain a sub-coding sequence corresponding to the image elements included in the target image.
In some embodiments, the image recognition module 520 may further include a per-element code acquisition submodule and a first sub-coding sequence forming submodule, wherein:
the per-element code acquisition submodule is configured to sequentially acquire the code corresponding to each image element contained in the target image; and
the first sub-coding sequence forming submodule is configured to form the sub-coding sequence according to the acquisition order of the codes corresponding to the image elements.
In some embodiments, the image recognition module 520 may further include a partial-element code acquisition submodule and a second sub-coding sequence forming submodule, wherein:
the partial-element code acquisition submodule is configured to sequentially acquire, at a preset interval, the codes corresponding to part of the image elements contained in the target image; and
the second sub-coding sequence forming submodule is configured to form the sub-coding sequence according to the acquisition order of those codes.
The position information determination module 530 is configured to determine, based on the sub-coding sequence, the position information in the marker of the image elements contained in the target image.
In some embodiments, the position information determination module 530 may further include a search submodule and a determination submodule, wherein:
the search submodule is configured to search for the position area of the sub-coding sequence in the coding sequence corresponding to the pre-stored marker; and
the determination submodule is configured to determine, based on the position area, the position information in the marker of the image elements contained in the target image.
The physical coordinate acquisition module 540 is configured to obtain, based on the position information, the physical coordinates on the target carrier of the image elements contained in the target image.
In some embodiments, the physical coordinate acquisition module 540 may further include an image element determination submodule and a physical coordinate acquisition submodule, wherein:
the image element determination submodule is configured to determine, based on the sub-coding sequence, whether the image elements contained in the target image belong to the first marker or the second marker, obtaining a determination result; and
the physical coordinate acquisition submodule is configured to obtain the physical coordinates of the image elements contained in the target image on the target carrier based on the determination result and the position information.
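A minimal sketch of the determination submodule's logic, under the assumption that each marker has its own pre-stored coding sequence and the sub-coding sequence is searched in both; the stored sequences below are hypothetical:

```python
FIRST_SEQ = "000100201"   # hypothetical stored sequence of the first marker
SECOND_SEQ = "112021120"  # hypothetical stored sequence of the second marker

def which_marker(sub_sequence):
    """Attribute a sub-coding sequence to the marker whose sequence contains it."""
    if sub_sequence in FIRST_SEQ:
        return "first"
    if sub_sequence in SECOND_SEQ:
        return "second"
    return None

print(which_marker("0100"))  # -> "first"
```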
The rotation information acquisition module 550 is configured to obtain the relative position and rotation information between the terminal device and the target carrier according to the physical coordinates of the image elements contained in the target image and their pixel coordinates in the target image.
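The embodiment does not prescribe a particular solver, but recovering relative position and rotation from physical-to-pixel correspondences matches the classic perspective-n-point (PnP) problem. A hedged sketch using OpenCV's solver, with hypothetical coordinates and camera intrinsics:

```python
import cv2
import numpy as np

# Physical coordinates of four identified image elements on the carrier (meters)
object_points = np.array([[0.00, 0.00, 0.0],
                          [0.06, 0.00, 0.0],
                          [0.06, 0.03, 0.0],
                          [0.00, 0.03, 0.0]], dtype=np.float64)
# Their pixel coordinates in the target image
image_points = np.array([[300.0, 220.0],
                         [380.0, 222.0],
                         [382.0, 262.0],
                         [302.0, 260.0]], dtype=np.float64)
# Assumed pinhole intrinsics of the terminal device's image sensor
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
# rvec (axis-angle rotation) and tvec (translation) describe the carrier's
# pose in the camera frame, i.e. the relative position and rotation sought.
```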
In some embodiments, the target carrier is an interaction device, and the positioning apparatus 500 may further include a six-degree-of-freedom information acquisition module, wherein:
the six-degree-of-freedom information acquisition module is configured to obtain the six-degree-of-freedom information of the interaction device according to the relative position and rotation information between the terminal device and the interaction device.
In some embodiments, the target carrier is a carrier with a fixed position in real space, and the positioning apparatus 500 may further include a spatial position acquisition module and an attitude information acquisition module, wherein:
the spatial position acquisition module is configured to acquire the spatial position of the fixed-position carrier in real space; and
the attitude information acquisition module is configured to obtain the position and attitude information of the terminal device in real space according to that spatial position and the relative position and rotation information between the terminal device and the fixed-position carrier.
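A minimal sketch of this composition using 4x4 homogeneous transforms; all poses are hypothetical, and identity rotations are used only to keep the example short:

```python
import numpy as np

def to_homogeneous(rotation_3x3, translation_3):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = rotation_3x3
    T[:3, 3] = translation_3
    return T

T_world_carrier = to_homogeneous(np.eye(3), [1.0, 0.0, 2.5])    # fixed carrier in world
T_carrier_device = to_homogeneous(np.eye(3), [0.0, 0.0, -1.2])  # device relative to carrier

T_world_device = T_world_carrier @ T_carrier_device
position = T_world_device[:3, 3]   # device position in real space
attitude = T_world_device[:3, :3]  # device attitude (rotation matrix)
print(position)  # -> [1.  0.  1.3]
```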
In some embodiments, the target image further contains at least one identification element in the marker, the identification element being different from the plurality of image elements, and the positioning apparatus 500 may further include an identification element recognition module and an identity information determination module, wherein:
the identification element recognition module is configured to identify the identification element contained in the target image and acquire the code corresponding to the identification element; and
the identity information determination module is configured to determine the identity information of the marker based on the code corresponding to the identification element.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided by the present application, the coupling between the modules may be electrical, mechanical, or of another type.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to fig. 12, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 100 may include one or more of the following components: a processor 110, a memory 120, and an image sensor 130. The memory 120 stores one or more application programs, which are configured to be executed by the one or more processors 110 so as to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects various parts of the terminal device 100 using various interfaces and lines, and performs the functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, and application programs; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 and instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described above, and the like. The data storage area may store data created by the terminal device 100 in use, and the like.
In the embodiments of the present application, the image sensor 130 is used to capture images of real objects and scene images of a target scene. The image sensor 130 may be an infrared camera or a visible light camera; its specific type is not limited in the embodiments of the present application.
In one embodiment, the terminal device is a head-mounted display device, which may further include, in addition to the processor, memory, and image sensor described above, one or more of the following components: a display module, an optical module, a communication module, and a power supply.
The display module may include a display control unit. The display control unit is configured to receive the display image of the virtual content rendered by the processor and project it onto the optical module, so that the user can view the virtual content through the optical module. The display device may be a display screen, a projection device, or the like, for displaying images.
The optical module may adopt an off-axis optical system or a waveguide optical system; a display image presented by the display device is projected into the user's eyes after passing through the optical module, so the user sees the displayed image through the optical module. In some embodiments, the user can also observe the real environment through the optical module and experience the augmented reality effect of virtual content superimposed on the real environment.
The communication module may be a Bluetooth, WiFi (Wireless Fidelity), ZigBee, or similar module, through which the head-mounted display device can establish a communication connection with the electronic device. A head-mounted display device communicatively connected with the electronic device can exchange information and instructions with it. For example, the head-mounted display device may receive image data transmitted by the electronic device through the communication module, and generate and display virtual content of a virtual world from the received image data.
The power supply powers the entire head-mounted display device and ensures the normal operation of each of its components.
Referring to fig. 13, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 700 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 700 comprises a non-volatile computer-readable medium. The computer-readable storage medium 700 has storage space for program code 710 for performing any of the method steps described above. The program code can be read from or written into one or more computer program products. The program code 710 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not necessarily cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application.

Claims (11)

1. A positioning method, applied to a terminal device, characterized in that the method comprises:
acquiring an image of a target carrier provided with a marker to obtain a target image, wherein the target image at least comprises a preset number of image elements in the marker, the marker comprises a plurality of image elements, the plurality of image elements are arranged at intervals, and the arrangement of the preset number of image elements adjacently arranged in each group is different from the arrangement of the preset number of image elements adjacently arranged in other groups; wherein, the marker corresponds to a coding sequence, and the coding sequence corresponding to the marker is formed by the coding corresponding to each image element in the marker according to the sequencing order of the image elements;
identifying image elements contained in the target image, arranging codes corresponding to the image elements contained in the target image according to the arrangement sequence of the image elements contained in the target image, and obtaining a sub-coding sequence corresponding to the image elements contained in the target image;
determining positional information of image elements contained in the target image in the marker based on the sub-coded sequence;
obtaining physical coordinates of image elements contained in the target image on the target carrier based on the position information;
and obtaining the relative position and rotation information of the terminal equipment and the target carrier according to the physical coordinates of the image elements contained in the target image and the pixel coordinates in the target image.
2. The method of claim 1, wherein the target carrier is an interactive device;
after the obtaining of the relative position and the rotation information of the terminal device and the target carrier, the method further includes:
and obtaining the six-degree-of-freedom information of the interaction device according to the relative position and the rotation information of the terminal equipment and the interaction device.
3. The method of claim 2, wherein the interaction device comprises a control portion and a body portion, the body portion comprising a housing, the housing being coupled to the control portion and having a first wall and a second wall facing away from each other, the marker comprising a first marker and a second marker, the first marker being disposed on the first wall and the second marker being disposed on the second wall;
the obtaining, based on the position information, physical coordinates of image elements contained in the target image on the target carrier includes:
determining whether an image element contained in the target image belongs to the first marker or the second marker based on the sub-coding sequence, and obtaining a determination result;
obtaining physical coordinates of image elements contained in the target image on the target carrier based on the determination result and the position information.
4. The method of claim 1, wherein the target carrier is a fixed-position carrier in real space;
after the obtaining of the relative position and the rotation information of the terminal device and the target carrier, the method further includes:
acquiring the spatial position of a carrier with a fixed position in the real space;
and obtaining the position and posture information of the terminal equipment in the real space according to the spatial position and the relative position and rotation information of the terminal equipment and the carrier with the fixed position.
5. The method according to any one of claims 1-4, wherein said determining positional information of image elements contained in said target image in said marker based on said sub-coded sequence comprises:
searching a position area of the sub-coding sequence in a pre-stored coding sequence corresponding to the marker;
determining position information of image elements contained in the target image in the marker based on the position region.
6. The method according to any one of claims 1-4, wherein said obtaining a sub-coded sequence corresponding to an image element contained in the target image comprises:
sequentially acquiring codes corresponding to each image element in the image elements contained in the target image;
and forming the sub-coding sequence according to the acquisition sequence of the code corresponding to each image element.
7. The method according to any one of claims 1 to 4, wherein the obtaining of the sub-coded sequence corresponding to the image element included in the target image comprises:
sequentially acquiring codes corresponding to partial image elements in image elements contained in the target image according to preset intervals;
and forming the sub-coding sequence according to the acquisition sequence of the codes corresponding to the partial image elements.
8. The method of any of claims 1-4, wherein the target image further comprises at least one identification element in the marker, the identification element being different from the plurality of image elements, the method further comprising:
identifying identification elements contained in the target image, and acquiring codes corresponding to the identification elements;
and determining the identity information of the marker and the target carrier to which the marker belongs based on the code corresponding to the identification element.
9. A positioning device, applied to a terminal device, the device comprising:
the image acquisition module is used for acquiring an image of a target carrier provided with a marker to obtain a target image, wherein the target image at least comprises a preset number of image elements in the marker, the marker comprises a plurality of image elements, the plurality of image elements are arranged at intervals, and the arrangement of the preset number of image elements adjacently arranged in each group is different from the arrangement of the preset number of image elements adjacently arranged in other groups; wherein, the marker corresponds to a coding sequence, and the coding sequence corresponding to the marker is formed by the coding corresponding to each image element in the marker according to the sequencing order of the image elements;
the image identification module is used for identifying the image elements contained in the target image, arranging the codes corresponding to the image elements contained in the target image according to the arrangement sequence of the image elements contained in the target image, and obtaining the sub-coding sequences corresponding to the image elements contained in the target image;
a position information determination module for determining position information of image elements contained in the target image in the marker based on the sub-coded sequence;
a physical coordinate obtaining module, configured to obtain physical coordinates of image elements included in the target image on the target carrier based on the position information;
and the rotation information acquisition module is used for acquiring the relative position and the rotation information of the terminal equipment and the target carrier according to the physical coordinates of the image elements contained in the target image and the pixel coordinates in the target image.
10. A terminal device, comprising:
one or more processors;
a memory, wherein the memory stores one or more application programs configured to be executed by the one or more processors and to perform the method of any one of claims 1-8.
11. A computer-readable storage medium, characterized in that a program code is stored in the computer-readable storage medium, which program code can be called by a processor to execute the method according to any one of claims 1-8.
CN201910822716.2A 2019-09-02 2019-09-02 Positioning method, positioning device, terminal equipment and storage medium Active CN110598605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910822716.2A CN110598605B (en) 2019-09-02 2019-09-02 Positioning method, positioning device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110598605A CN110598605A (en) 2019-12-20
CN110598605B (en) 2022-11-22

Family

ID=68856919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910822716.2A Active CN110598605B (en) 2019-09-02 2019-09-02 Positioning method, positioning device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110598605B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489082A (en) * 2020-12-03 2021-03-12 海宁奕斯伟集成电路设计有限公司 Position detection method, position detection device, electronic equipment and readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372702A (en) * 2016-09-06 2017-02-01 深圳市欢创科技有限公司 Positioning identification and positioning method thereof
CN106570549A (en) * 2016-10-28 2017-04-19 网易(杭州)网络有限公司 Coding pattern generation and identification methods and coding pattern generation and identification devices
CN110120099A (en) * 2018-02-06 2019-08-13 广东虚拟现实科技有限公司 Localization method, device, recognition and tracking system and computer-readable medium
CN109102527A (en) * 2018-08-01 2018-12-28 甘肃未来云数据科技有限公司 The acquisition methods and device of video actions based on identification point

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"增强现实中基于三维标志物的跟踪技术研究";刘经伟;《中国优秀硕士学位论文全文数据库 信息科技辑》;20111215;第I138-1071页 *

Also Published As

Publication number Publication date
CN110598605A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
JP4032776B2 (en) Mixed reality display apparatus and method, storage medium, and computer program
CN110443853B (en) Calibration method and device based on binocular camera, terminal equipment and storage medium
US11625841B2 (en) Localization and tracking method and platform, head-mounted display system, and computer-readable storage medium
CN111078003B (en) Data processing method and device, electronic equipment and storage medium
CN110737414B (en) Interactive display method, device, terminal equipment and storage medium
CN110569006B (en) Display method, display device, terminal equipment and storage medium
CN111198608A (en) Information prompting method and device, terminal equipment and computer readable storage medium
JP5812550B1 (en) Image display device, image display method, and program
CN111766936A (en) Virtual content control method and device, terminal equipment and storage medium
CN110659587B (en) Marker, marker identification method, marker identification device, terminal device and storage medium
CN110598605B (en) Positioning method, positioning device, terminal equipment and storage medium
CN110737326A (en) Virtual object display method and device, terminal equipment and storage medium
CN111427452A (en) Controller tracking method and VR system
CN111913674A (en) Virtual content display method, device, system, terminal equipment and storage medium
CN111913560A (en) Virtual content display method, device, system, terminal equipment and storage medium
CN111913639B (en) Virtual content interaction method, device, system, terminal equipment and storage medium
KR102176805B1 (en) System and method for providing virtual reality contents indicated view direction
CN110908508A (en) Control method of virtual picture, terminal device and storage medium
CN111198609A (en) Interactive display method and device, electronic equipment and storage medium
CN111913564B (en) Virtual content control method, device, system, terminal equipment and storage medium
CN209821887U (en) Marker substance
CN110473257A (en) Information scaling method, device, terminal device and storage medium
CN111399630B (en) Virtual content interaction method and device, terminal equipment and storage medium
CN110120060B (en) Identification method and device for marker and identification tracking system
CN111103969B (en) Information identification method, information identification device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant