CN110442235B - Positioning tracking method, device, terminal equipment and computer readable storage medium

Info

Publication number
CN110442235B
CN110442235B
Authority
CN
China
Prior art keywords
information
image acquisition
marker
moment
acquisition device
Prior art date
Legal status
Active
Application number
CN201910642093.0A
Other languages
Chinese (zh)
Other versions
CN110442235A (en)
Inventor
于国星
胡永涛
王国泰
戴景文
贺杰
Current Assignee
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd filed Critical Guangdong Virtual Reality Technology Co Ltd
Priority to CN201910642093.0A
Priority to PCT/CN2019/098200 (published as WO2020024909A1)
Publication of CN110442235A
Priority to US16/687,699 (granted as US11127156B2)
Application granted
Publication of CN110442235B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/20: Scenes; Scene-specific elements in augmented reality scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a positioning and tracking method, a positioning and tracking device, a terminal device, and a computer readable storage medium, relating to the field of display technology. The positioning and tracking method comprises the following steps: acquiring relative position and posture information between a first image acquisition device and a marker according to a first image that is acquired by the first image acquisition device and contains the marker, to obtain first information; acquiring position and posture information of a second image acquisition device in a target scene according to a second image that is acquired by the second image acquisition device and contains the target scene, to obtain second information, wherein the marker and the terminal device are located in the target scene; and acquiring position and posture information of the terminal device relative to the marker by using the first information and the second information, to obtain target information. By combining the first information and the second information, the method can acquire the position and posture information of the terminal device relative to the marker more accurately.

Description

Positioning tracking method, device, terminal equipment and computer readable storage medium
Technical Field
The present invention relates to the field of display technologies, and in particular, to a positioning and tracking method, a positioning and tracking device, a terminal device, and a computer readable storage medium.
Background
In recent years, with the advancement of technology, technologies such as Augmented Reality (AR) and Virtual Reality (VR) have gradually become active research topics in China and abroad. Augmented reality, for example, is a technique that augments the user's perception of the real world with information provided by a computer system: computer-generated virtual objects, scenes, or system cues are superimposed onto the real scene to augment or modify the perception of the real-world environment, or of data representing it. How to accurately and effectively position and track a display device (such as a head-mounted display device, smart glasses, or a smartphone) is therefore a problem to be solved.
Disclosure of Invention
The embodiment of the application provides a positioning tracking method, a positioning tracking device, terminal equipment and a computer readable storage medium, which can improve the positioning tracking accuracy of the terminal equipment.
In a first aspect, an embodiment of the present application provides a positioning and tracking method applied to a terminal device. The method comprises the following steps: acquiring relative position and posture information between a first image acquisition device and a marker according to a first image that is acquired by the first image acquisition device and contains the marker, to obtain first information; acquiring position and posture information of a second image acquisition device in a target scene according to a second image that is acquired by the second image acquisition device and contains the target scene, to obtain second information, wherein the marker and the terminal device are located in the target scene; and acquiring position and posture information of the terminal device relative to the marker by using the first information and the second information, to obtain target information.
In a second aspect, an embodiment of the present application provides a positioning and tracking device applied to a terminal device. The device includes a first information acquisition module, a second information acquisition module, and a target information acquisition module. The first information acquisition module is configured to acquire relative position and posture information between a first image acquisition device and a marker according to a first image that is acquired by the first image acquisition device and contains the marker, to obtain first information. The second information acquisition module is configured to acquire position and posture information of a second image acquisition device in a target scene according to a second image that is acquired by the second image acquisition device and contains the target scene, to obtain second information, wherein the marker and the terminal device are located in the target scene. The target information acquisition module is configured to acquire position and posture information of the terminal device relative to the marker by using the first information and the second information, to obtain target information.
In a third aspect, an embodiment of the present application provides a terminal device, including: one or more processors; a memory; an image acquisition device; an inertial measurement unit; one or more applications, wherein the one or more applications are stored in memory and configured to be executed by one or more processors, the one or more applications configured to perform the positioning tracking method provided in the first aspect described above.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having program code stored therein, the program code being executable by a processor to perform the positioning tracking method provided in the first aspect.
According to the solution provided by the embodiments of the present application, positioning and tracking are realized through the first image acquisition device and the second image acquisition device. The relative position and posture information between the first image acquisition device and the marker is acquired from the first image, which is acquired by the first image acquisition device and contains the marker, to obtain first information. The position and posture information of the second image acquisition device in the target scene is then acquired from the second image, which is acquired by the second image acquisition device and contains the target scene, to obtain second information. Finally, the terminal device acquires its position and posture information relative to the marker by using the first information and the second information, to obtain target information. By combining the first information and the second information, the finally acquired position and posture information of the terminal device relative to the marker is more accurate, so the accuracy of positioning and tracking of the terminal device is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a schematic diagram of an application scenario suitable for use in embodiments of the present application.
Fig. 2 shows a method flow diagram of a positioning tracking method according to an embodiment of the present application.
Fig. 3 illustrates a positional relationship between a marker, a terminal device and a target scene in the positioning tracking method according to an embodiment of the present application.
Fig. 4 shows a method flow diagram of a positioning tracking method according to another embodiment of the present application.
Fig. 5 shows a flowchart of step S220 in a positioning tracking method according to another embodiment of the present application.
Fig. 6 shows a flowchart of other steps in a position tracking method according to another embodiment of the present application.
Fig. 7 shows a method flow diagram of a positioning tracking method according to yet another embodiment of the present application.
Fig. 8 shows a flowchart of step S330 in a positioning tracking method according to still another embodiment of the present application.
Fig. 9 shows a flowchart of other steps in a position tracking method according to yet another embodiment of the present application.
Fig. 10 is a diagram showing a specific example of acquiring target information in a positioning and tracking method according to still another embodiment of the present application.
Fig. 11 shows a method flow diagram of a positioning tracking method according to yet another embodiment of the present application.
Fig. 12 shows a flowchart of step S440 in a positioning tracking method according to still another embodiment of the present application.
Fig. 13 is a diagram illustrating an example of data transmission performed by a terminal device in a positioning tracking method according to still another embodiment of the present application.
Fig. 14 shows a detailed flowchart of step S443 in step S440 in the positioning tracking method according to still another embodiment of the present application.
Fig. 15 illustrates a block diagram of a position tracking device according to one embodiment of the present application.
Fig. 16 illustrates a block diagram of the target information acquisition module 530 in the position tracking device according to one embodiment of the present application.
Fig. 17 is a block diagram of a terminal device for performing a positioning tracking method according to an embodiment of the present application.
Fig. 18 is a memory unit for storing or carrying program codes for implementing the positioning tracking method according to the embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Furthermore, the terms "first," "second," and the like, are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
The following describes an application scenario of the positioning tracking method provided in the embodiment of the present application.
Referring to fig. 1, a virtual content display system 10 provided in an embodiment of the present application is shown. The display system 10 includes: the terminal device 100 and the tag 200.
In the embodiment of the present application, the terminal device 100 may be a head-mounted display device, or may be a mobile device such as a mobile phone or a tablet. When the terminal device 100 is a head-mounted display device, the head-mounted display device may be an integrated head-mounted display device. Alternatively, the terminal device 100 may be an intelligent terminal, such as a mobile phone, connected to an external head-mounted display device; that is, the terminal device 100 may serve as the processing and storage device of the head-mounted display device and be inserted into or connected to the external head-mounted display device, so as to display virtual content in the head-mounted display device.
In some embodiments, the terminal device 100 may include two image capturing devices, a first image capturing device and a second image capturing device, each of which may be mounted on the terminal device 100. The first image capturing device and the second image capturing device may be an infrared camera, a color camera, or the like; the specific types of the first image capturing device and the second image capturing device are not limited in the embodiments of the present application. In addition, the first image capturing device and the second image capturing device may include image sensors, which may be CMOS (Complementary Metal-Oxide-Semiconductor) sensors, CCD (Charge-Coupled Device) sensors, or the like.
In the embodiment of the present application, the marker 200 may be a pattern having a topological structure, where the topological structure refers to the connection relationship between the sub-markers and the feature points in the marker. When the marker 200 is within the visual range of the first image capturing device of the terminal device 100, the terminal device 100 may take the marker 200 as a target marker and capture an image of it. By identifying this image, the processor of the terminal device 100 may obtain spatial position information, such as the position and orientation of the terminal device 100 relative to the target marker, as well as identification results such as the identity information of the target marker. The terminal device 100 may then display a corresponding virtual object based on the spatial position information of the target marker relative to the terminal device 100, such as the solar system 300 shown in Fig. 1, i.e., the virtual object displayed corresponding to the marker 200. The virtual object may be displayed superimposed at the position of the marker 200 or at a position away from the marker 200. The user can see the virtual object superimposed on the real world through the terminal device 100, achieving an augmented reality effect. It should be understood that the specific form of the marker 200 is not limited in this embodiment, as long as it can be identified and tracked by the terminal device 100.
A specific positioning tracking method is described below.
Referring to fig. 2, in one embodiment, the present application provides a positioning tracking method, which may be applied to a terminal device, and the method may include steps S110 to S130.
Step S110: and acquiring relative position and posture information between the first image acquisition device and the marker according to the first image which is acquired by the first image acquisition device and contains the marker, so as to obtain first information.
The terminal device is provided with a first image acquisition device that is mainly used for acquiring images of the marker; the marker can be any graphic or object with identifiable features. The marker can be placed within the visual range of the first image acquisition device, i.e., the first image acquisition device can acquire a first image containing the marker. After being acquired, the first image containing the marker can be used to determine the position and posture information of the first image acquisition device relative to the marker.
In some embodiments, the marker may include at least one sub-marker, which may be a pattern having a certain shape. As one way, each sub-marker may have one or more feature points, where the shape of the feature points is not limited and may be a dot, a ring, a triangle, or another shape. The distribution rules of the sub-markers differ between markers, so each marker can have different identity information. The terminal device can acquire the identity information corresponding to a marker by identifying the sub-markers contained in it, so as to distinguish the relative position and posture information of different markers, where the identity information may be, but is not limited to, information such as a code that can uniquely identify the marker.
The relative position and posture information between the first image acquisition device and the marker may also be referred to as the 6DoF (six degrees of freedom) information of the first image acquisition device relative to the marker. The 6DoF information may include three translational degrees of freedom and three rotational degrees of freedom: the three translational degrees of freedom describe the coordinate values of the object along the X, Y, and Z axes, and the three rotational degrees of freedom are the pitch angle (Pitch), roll angle (Roll), and yaw angle (Yaw). Specifically, the terminal device may identify and track the marker according to the first image containing the marker, so as to obtain the relative position and posture information between the first image acquisition device and the marker, i.e., the first information.
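To make the 6DoF representation concrete, the following minimal sketch (not part of the patent; Python with NumPy, with a Z-Y-X rotation order assumed since the patent does not prescribe one) builds a homogeneous transform from the three translational and three rotational degrees of freedom:

```python
import numpy as np

def pose_matrix(x, y, z, pitch, roll, yaw):
    """Build a 4x4 homogeneous transform from 6DoF values (angles in radians).

    Illustrative only: the patent does not prescribe a representation or a
    rotation order, so Z-Y-X (yaw, then pitch, then roll) is assumed here,
    with pitch about Y and roll about X.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about Z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about Y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about X
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # three rotational degrees of freedom
    T[:3, 3] = [x, y, z]       # three translational degrees of freedom
    return T
```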
Step S120: and acquiring the position and posture information of the second image acquisition device in the target scene according to the second image which is acquired by the second image acquisition device and contains the target scene, and obtaining second information, wherein the marker and the terminal equipment are positioned in the target scene.
The terminal device may also be provided with a second image acquisition device for acquiring a second image of the target scene, i.e. the second image is a scene image acquired by the second image acquisition device within the visual range. In some embodiments, the terminal device and the marker are both included in the target scene, in order to clearly illustrate the relationship between the terminal device, the marker and the target scene, the present embodiment gives a diagram as shown in fig. 3, where 101 in fig. 3 represents the target scene, 102 represents the marker, and 103 represents the terminal device, and it can be seen from fig. 3 that both the marker 102 and the terminal device 103 are located in the target scene 101. In addition, it is known from the above description that the terminal device 103 includes a first image capturing device and a second image capturing device, where the first image capturing device mainly captures an image of the marker 102, and the second image capturing device mainly captures an image of the target scene 101.
After acquiring the second image of the target scene, the second image acquisition device can store it in the terminal device, and the terminal device can use the second image to acquire the position and posture information of the second image acquisition device in the target scene, obtaining the second information. In this embodiment, the second information may be computed using VIO (Visual-Inertial Odometry). VIO may calculate the relative degree-of-freedom information of the terminal device through key points (or feature points) contained in the second image acquired by the second image acquisition device, so as to further calculate the current position and posture of the terminal device. In other words, VIO may acquire the second image in real time through the second image acquisition device, acquire the angular velocity and acceleration data of the terminal device through the inertial measurement unit, and combine these with the second image to obtain the position and posture information of the second image acquisition device in the target scene.
Step S130: and acquiring the position and posture information of the terminal equipment relative to the marker by using the first information and the second information to obtain target information.
After the terminal device acquires the first information and the second information by using the first image acquisition device and the second image acquisition device, the position and posture information of the terminal device relative to the marker, i.e., the target information, can be obtained by combining the first information and the second information. In one embodiment, since the first image acquisition device and the second image acquisition device are both mounted on the terminal device, the first information between the first image acquisition device and the marker may be used as the target information of the terminal device relative to the marker, and the second information of the second image acquisition device in the target scene may likewise be used as the target information of the terminal device relative to the marker.
In order to make the obtained target information more accurate and effective, this embodiment can obtain the target information by combining the first information and the second information, i.e., the first information and the second information can be fused to obtain the target information. There are various ways of fusing the first information and the second information: the average of the first information and the second information may be used as the target information, or the first information and the second information may be given different weights and combined by weighted calculation. In some embodiments, the terminal device may also obtain its position and posture information relative to the marker through an Inertial Measurement Unit (IMU), and then update that information using at least one of the first information and the second information, thereby obtaining the target information. The inertial measurement unit mainly measures the three-axis attitude angle (or angular rate) and acceleration of the terminal device. It generally comprises three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the three-axis acceleration signals of the terminal device, the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system, and the posture of the terminal device is calculated from the measured angular velocity and acceleration.
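As an illustration of the fusion options described above, the sketch below blends two pose estimates with a configurable weight. It is a hypothetical implementation, not the patent's formula: translations are combined by a weighted mean, and orientations, assumed to be unit quaternions, by normalized linear interpolation.

```python
import numpy as np

def fuse_poses(t1, q1, t2, q2, w=0.5):
    """Blend two pose estimates of the device relative to the marker.

    Hypothetical sketch of the weighting option above: translations are
    combined by a weighted mean; orientations, assumed to be unit
    quaternions (x, y, z, w), by normalized linear interpolation.
    """
    t = (1.0 - w) * np.asarray(t1, float) + w * np.asarray(t2, float)
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    if np.dot(q1, q2) < 0.0:      # keep both quaternions in the same hemisphere
        q2 = -q2
    q = (1.0 - w) * q1 + w * q2   # nlerp; renormalize to stay a unit quaternion
    return t, q / np.linalg.norm(q)
```

With w = 0.5 this reduces to the simple averaging option; any other weight realizes the weighted combination.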
In the positioning and tracking method described above, the target information of the terminal device relative to the marker is obtained from the first information acquired by the first image acquisition device and the second information acquired by the second image acquisition device. Because the target information combines the first information and the second information, the acquired position and posture information of the terminal device relative to the marker is more accurate and effective than that acquired by prior-art positioning and tracking methods, so the positioning and tracking of the terminal device is more accurate.
Another embodiment of the present application provides a positioning and tracking method in which the terminal device further includes an inertial measurement unit. As shown in Fig. 4, acquiring the position and posture information of the terminal device relative to the marker by using the first information and the second information to obtain the target information may include steps S210 to S240.
Step S210: and acquiring the predicted position and posture information of the terminal equipment relative to the marker at different moments by using an inertial measurement unit to obtain the predicted information at different moments.
Before using the inertial measurement unit to acquire the predicted position and posture information of the terminal device relative to the marker at different moments, the terminal device can acquire a first initial rigid-body relation between the first image acquisition device and the inertial measurement unit, and a second initial rigid-body relation between the second image acquisition device and the inertial measurement unit. The initial position and posture information of the terminal device relative to the marker can then be acquired by combining the first initial rigid-body relation with the position and posture information of the first image acquisition device relative to the marker.
In some embodiments, the terminal device may be provided with an inertial measurement unit, through which the predicted position and posture information of the terminal device relative to the marker at different moments can be obtained. As described above, the inertial measurement unit measures the inclination, declination, and rotation of the measured object: it may measure the angular changes of the three rotational degrees of freedom of the terminal device using gyroscopes, and the displacements of the three translational degrees of freedom using accelerometers. Because the position and posture information of the terminal device relative to the marker changes over time, the predicted position and posture information obtained by the inertial measurement unit at different moments may also differ.
The inertial measurement unit can accumulate the position and posture changes of the terminal device, so that the position and posture information of the terminal device relative to the marker at different moments can be predicted from the accumulated result. In other words, once the inertial measurement unit has the prediction information of the previous moment, the prediction information of the current moment can be obtained by integrating forward from that previous prediction, thereby giving the position and posture information of the terminal device relative to the marker at the current moment.
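A minimal dead-reckoning step of this kind might look as follows. This is an illustrative sketch only (no bias or noise modelling, small-angle rotation update); the patent does not specify the integrator.

```python
import numpy as np

def propagate(pos, vel, rot, accel, gyro, dt):
    """One IMU dead-reckoning step: push the previous prediction forward.

    Sketch under stated assumptions: 'accel' is acceleration in the marker
    frame with gravity already removed, 'gyro' is body angular rate (rad/s),
    and 'rot' is a 3x3 rotation matrix.
    """
    accel = np.asarray(accel, float)
    wx, wy, wz = np.asarray(gyro, float) * dt
    # Small-angle approximation of the incremental rotation over dt.
    dR = np.array([[1.0, -wz,  wy],
                   [ wz, 1.0, -wx],
                   [-wy,  wx, 1.0]])
    new_rot = rot @ dR
    new_pos = pos + vel * dt + 0.5 * accel * dt ** 2  # integrate twice for position
    new_vel = vel + accel * dt                        # integrate once for velocity
    return new_pos, new_vel, new_rot
```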
Step S220: when the first information of the first moment is acquired, the first information is utilized to update the prediction information of the first moment to obtain the first prediction information, and the prediction information after the first moment is acquired again based on the first prediction information.
When the terminal device acquires the first information at the first moment, it can use the first information to update the prediction information at the first moment. Since the inertial measurement unit acquires prediction information at different moments, the corresponding prediction information differs from moment to moment; the first information at the first moment refers to the relative position and posture information between the first image acquisition device and the marker obtained from the first image acquired at the first moment. The terminal device obtains the first information at the first moment by using the first image acquired by the first image acquisition device, and at the same time obtains the prediction information corresponding to the first moment by using the inertial measurement unit. After obtaining both, the terminal device may update the prediction information with the first information to obtain the first prediction information, and re-acquire the prediction information at each moment after the first moment based on the first prediction information, where the first prediction information refers to the prediction information updated at the first moment.
In some embodiments, when the inertial measurement unit is in an initial state, a first image containing the marker may be acquired by the first image acquisition device, and the relative position and posture information between the first image acquisition device and the marker may be acquired from it. Then, from the first initial rigid-body relation between the first image acquisition device and the inertial measurement unit and the relative position and posture information between the first image acquisition device and the marker, the relative position and posture information between the inertial measurement unit and the marker can be obtained. This may be used as the initial position and posture information of the terminal device relative to the marker, i.e., the initial prediction information of the inertial measurement unit, from which the position and posture information of the terminal device relative to the marker at different moments is predicted. If, while the inertial measurement unit is in the initial state, the first image acquisition device has not yet acquired the first image, the inertial measurement unit cannot obtain the initial position and posture information of the terminal device relative to the marker, and the terminal device remains in a waiting state.
In some embodiments, as shown in fig. 5, step S220 may include steps S221 to S223.
Step S221: a first rigid body relation between the first image acquisition device and the inertial measurement unit is acquired.
The first rigid-body relation between the first image acquisition device and the inertial measurement unit refers to the structural placement relation between the two. Specifically, the placement relation can include information such as the distance and orientation between the first image acquisition device and the inertial measurement unit, and it can be obtained through actual measurement, from the structural design values, or through calibration. The placement relation reflects the rotation and displacement of the first image acquisition device relative to the inertial measurement unit (or vice versa), i.e., the rotation and displacement required to bring the spatial coordinate system of the first image acquisition device into coincidence with that of the inertial measurement unit, where the spatial coordinate system of the first image acquisition device may be a three-dimensional coordinate system established at the center point of the first image acquisition device, and the spatial coordinate system of the inertial measurement unit may be a three-dimensional coordinate system established at the center point of the inertial measurement unit.
Alternatively, the first rigid body relationship between the first image capturing device and the inertial measurement unit may include a relative translational relationship of the first image capturing device and the inertial measurement unit, a rotational relationship of the first image capturing device and the inertial measurement unit, and the like.
Step S222: position and posture information of the inertial measurement unit relative to the marker is obtained according to the first information at the first moment and the first rigid body relation.
The first information is the relative position and posture information between the first image acquisition device and the marker, and the first rigid-body relation is the structural placement relation between the first image acquisition device and the inertial measurement unit, which can be obtained through actual measurement. After the first rigid-body relation is obtained, the relative relation between the inertial measurement unit and the marker can be derived from it: because the first image acquisition device and the inertial measurement unit are both mounted on the terminal device, their placement relation is measurable, and a definite mapping exists between the first image acquisition device and the marker. Having obtained the relative relation between the inertial measurement unit and the marker, the terminal device can combine it with the first information to obtain the position and posture information of the inertial measurement unit relative to the marker.
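Treating poses as 4x4 homogeneous transforms (an assumption of this sketch, not a requirement of the patent), the chaining described above reduces to one matrix product:

```python
import numpy as np

def invert(T):
    """Invert a 4x4 homogeneous transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def imu_pose_in_marker(T_marker_cam1, T_imu_cam1):
    """Chain the first information with the first rigid-body relation.

    T_marker_cam1: first information (pose of the first camera in the
    marker frame); T_imu_cam1: first rigid-body relation (pose of the first
    camera in the IMU frame). The camera frame cancels out of the chain,
    leaving the IMU pose expressed in the marker frame.
    """
    return T_marker_cam1 @ invert(T_imu_cam1)
```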
Step S223: the prediction information at the first moment is updated by using the position and posture information of the inertial measurement unit relative to the marker to obtain the first prediction information.
The terminal device obtains the position and posture information of the inertial measurement unit relative to the marker through the first rigid-body relation, and then updates the prediction information at the first moment with this position and posture information to obtain the first prediction information. As a specific embodiment, an information update parameter may be obtained from the position and posture information of the inertial measurement unit relative to the marker at the first moment and the prediction information at the first moment; the information update parameter may be the deviation between the two, and the prediction information at the first moment is updated based on this parameter. In another embodiment, the position and posture information of the inertial measurement unit relative to the marker at the first moment and the prediction information at the first moment may be weighted and combined to obtain the first prediction information, with the weights set according to actual requirements.
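The deviation-based variant of this update can be sketched as below; the names and the fixed weight are illustrative assumptions, and only the position part is shown (orientation would be blended analogously):

```python
import numpy as np

def update_prediction(pred, meas, weight=0.5):
    """Correct the moment-T1 prediction with the marker-based measurement.

    Sketch of the two options above: the information update parameter is the
    deviation (meas - pred), and 'weight' is the configurable weighting.
    """
    pred = np.asarray(pred, float)
    meas = np.asarray(meas, float)
    deviation = meas - pred               # information update parameter
    return pred + weight * deviation      # first prediction information
```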
In some embodiments, after the terminal device obtains the first rigid-body relation between the first image acquisition device and the inertial measurement unit, it may update and correct this relation to make it more accurate. As shown in Fig. 6, updating the first rigid-body relation may include steps S224 to S226.
Step S224: and predicting the relative position and the posture information between the first image acquisition device and the marker by using the first rigid body relation and the first prediction information to obtain first posture prediction information.
In one embodiment, the terminal device may perform coordinate transformation on the first prediction information by using a first rigid relation between the first image acquisition device and the inertia measurement unit, and recalculate the relative position and posture information between the first image acquisition device and the marker, to obtain the first posture prediction information.
Step S225: an error between the first information at the first time and the first pose prediction information is obtained.
The first information is the actually measured position and posture information between the first image acquisition device and the marker, and the first posture prediction information is the predicted position and posture information between the first image acquisition device and the marker; from these two, the error between them can be obtained. In some embodiments, the first posture prediction information may be subtracted from the first information, or the first information subtracted from the first posture prediction information, and the absolute value taken, giving the error between the first information and the first posture prediction information.
Step S226: and updating the first rigid body relation according to the error.
From the above description, the error between the first information and the first posture prediction information mainly refers to the error between the actually calculated value and the predicted value of the position and posture information between the first image acquisition device and the marker, and this error can be used to update the first rigid-body relation. The smaller the error between the first information and the first posture prediction information, the more accurate the acquired first rigid-body relation. The number of updates of the first rigid-body relation can also be tracked: whether the number of updates exceeds a preset number is judged, and if it does, updating of the first rigid-body relation can be ended.
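A hypothetical shape for this online correction, including the preset update count, is sketched below; the damped translation-only update rule is an assumption, since the patent specifies the stopping condition but not the update formula:

```python
import numpy as np

class RigidBodyCalibrator:
    """Refine the first rigid-body relation from the prediction error.

    Illustrative sketch: the relation is updated from the error and updating
    stops after a preset number of updates, as described above; the damped
    translation-only correction is an assumed rule, not the patent's.
    """
    def __init__(self, T_cam_imu, max_updates=50, step=0.1):
        self.T = T_cam_imu.copy()       # current estimate of the extrinsic
        self.max_updates = max_updates  # the preset number of updates
        self.count = 0
        self.step = step

    def update(self, measured_pose, predicted_pose):
        if self.count >= self.max_updates:  # stop once the preset count is hit
            return self.T
        error = measured_pose[:3, 3] - predicted_pose[:3, 3]
        self.T[:3, 3] += self.step * error  # nudge the extrinsic translation
        self.count += 1
        return self.T
```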
In some embodiments, the prediction information after the first time may be re-acquired according to the first prediction information, and specifically, the inertial measurement unit may integrate the position and the posture change of each time after the first time on the basis of the first prediction information, so as to re-acquire the prediction information of each time after the first time.
Step S230: when the second information of the second moment is acquired, the second information is utilized to update the prediction information of the second moment to obtain the second prediction information, and the prediction information after the second moment is acquired again based on the second prediction information.
The second information is the position and posture information of the second image acquisition device in the target scene, and the position and posture information can be acquired through a second image containing the target scene. The predicted information at the second moment refers to the position and posture information of the terminal device relative to the marker, which is predicted by the inertial measurement unit at the second moment. When the terminal equipment acquires the second information of the second image acquisition device in the target scene, the second information can be used for updating the predicted information of the second moment.
Step S240: the prediction information at the current time is used as target information.
The terminal device can take the predicted information of the current moment obtained by the inertial measurement unit as the position and posture information of the terminal device relative to the marker at the current moment, namely the target information, and the predicted information at different moments can be taken as the target information at the corresponding moment.
The positioning and tracking method provided by this embodiment uses the inertial measurement unit to acquire the position and posture information of the terminal device relative to the marker more accurately and effectively: the prediction information acquired by the inertial measurement unit is updated by combining the first information and the second information. In addition, continuously updating the first rigid-body relation from the prediction information further improves the positioning accuracy.
Referring to fig. 7, still another embodiment of the present application provides a positioning tracking method applicable to a terminal device, and specifically, the method may include steps S310 to S340.
Step S310: and acquiring the predicted position and posture information of the terminal equipment relative to the marker at different moments by using an inertial measurement unit to obtain the predicted information at different moments.
Step S320: when the first information of the first moment is acquired, the first information is utilized to update the prediction information of the first moment to obtain the first prediction information, and the prediction information after the first moment is acquired again based on the first prediction information.
Step S330: when the second information of the second moment is acquired, the second information is utilized to update the prediction information of the second moment to obtain the second prediction information, and the prediction information after the second moment is acquired again based on the second prediction information.
When the terminal equipment acquires second information of the second moment, the second information can be used for updating prediction information of the second moment, wherein the second information of the second moment refers to position and posture information of a second image acquisition device in a target scene, which is obtained according to a second image acquired at the second moment. The terminal device may update the prediction information with the second information to obtain the second prediction information, and re-acquire the prediction information at each time after the second time based on the second prediction information, where the second prediction information may refer to the prediction information updated at the second time.
In one embodiment, step S330 may include steps S331 to S333 as shown in fig. 8.
Step S331: and acquiring a second rigid body relation between the second image acquisition device and the inertia measurement unit.
The second rigid-body relation between the second image acquisition device and the inertial measurement unit refers to the structural placement relation between the two; specifically, the placement relation may include the rotation and displacement between the second image acquisition device and the inertial measurement unit. In one embodiment, the second rigid-body relation may be obtained through actual measurement or from the structural design values. The second rigid-body relation reflects the rotation and translation from the second image acquisition device to the inertial measurement unit (or vice versa), i.e., the rotation and displacement required to bring the spatial coordinate system of the second image acquisition device into coincidence with that of the inertial measurement unit, where the spatial coordinate system of the second image acquisition device may be a three-dimensional coordinate system established at the center point of the second image acquisition device, and the spatial coordinate system of the inertial measurement unit may be a three-dimensional coordinate system established at the center point of the inertial measurement unit.
Alternatively, the second rigid body relationship between the second image capturing device and the inertial measurement unit may include a relative translational relationship of the second image capturing device and the inertial measurement unit, a rotational relationship of the second image capturing device and the inertial measurement unit, and the like.
Step S332: and performing coordinate conversion on the second information at the second moment by using the first rigid body relation and the second rigid body relation of the first image acquisition device and the inertia measurement unit to obtain the intermediate position and the posture information of the terminal equipment relative to the marker.
When the first image acquisition device acquires a first image containing the marker, the relative position and posture information between the first image acquisition device and the marker can be acquired from that image. A third rigid-body relation, between the first image acquisition device and the second image acquisition device, can be obtained from the first rigid-body relation (between the first image acquisition device and the inertial measurement unit) and the second rigid-body relation (between the second image acquisition device and the inertial measurement unit). Using this third rigid-body relation, the relative position and posture information between the first image acquisition device and the marker can be coordinate-converted into the relative position and posture information between the second image acquisition device and the marker, which can serve as the initial position and posture information of the second image acquisition device relative to the marker. When the second image acquisition device acquires the second image, the position and posture information of the second image acquisition device in the target scene, i.e., the second information, can be acquired from it; based on the initial position and posture information of the second image acquisition device relative to the marker, the second information can be converted into the relative position and posture information between the second image acquisition device and the marker, and the relative relation between the inertial measurement unit and the marker is then obtained through the second rigid-body relation.
In addition, because the second image acquisition device and the inertial measurement unit are both mounted on the terminal device, the structural placement relation between them can be obtained through actual measurement, and the relative relation between the inertial measurement unit and the marker can be obtained by calculation. Combining the relative relation between the inertial measurement unit and the marker with the second rigid-body relation realizes the coordinate conversion of the second information, yielding the intermediate position and posture information of the terminal device relative to the marker.
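Under the same 4x4-transform assumption used earlier, the derivation of the third rigid-body relation and the initial pose of the second camera relative to the marker can be sketched as:

```python
import numpy as np

def invert(T):
    """Invert a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def third_rigid_relation(T_imu_cam1, T_imu_cam2):
    """Camera1-to-camera2 extrinsic derived from the two camera-IMU relations.

    Both cameras are expressed in the IMU frame, so the chain passes through
    the IMU: T_cam1_cam2 = inv(T_imu_cam1) @ T_imu_cam2.
    """
    return invert(T_imu_cam1) @ T_imu_cam2

def initial_marker_pose_of_cam2(T_marker_cam1, T_cam1_cam2):
    """Initial pose of the second camera relative to the marker."""
    return T_marker_cam1 @ T_cam1_cam2
```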
Step S333: and updating the prediction information at the second moment by using the intermediate position and the posture information of the terminal equipment relative to the marker to obtain second prediction information.
The terminal equipment can acquire the predicted information of the terminal equipment relative to the marker at different moments by using the inertial measurement unit, and after acquiring the intermediate position and the posture information of the terminal equipment relative to the marker through the first rigid body relation, the second rigid body relation and the second information, the terminal equipment can update the predicted information at the second moment by using the intermediate position and the posture information to acquire the second predicted information. The prediction information after the second time may be updated by using the intermediate position and orientation information.
In some embodiments, the terminal device may further include steps S334 to S336 shown in fig. 9 after acquiring the second rigid body relationship between the second image acquisition device and the inertial measurement unit.
Step S334: and predicting the position and the posture information of the second image acquisition device in the target scene by using the second rigid body relation and the second prediction information to obtain second posture prediction information.
In one embodiment, the terminal device may recalculate the position and posture information of the second image acquisition device in the target scene by using the second rigid relation between the second image acquisition device and the inertia measurement unit and the second prediction information, to obtain the second posture prediction information. The second predicted information is the predicted position and posture information of the terminal equipment relative to the marker, and the second rigid body relation is the structural arrangement relation between the second image acquisition device and the inertial measurement unit, so the second posture predicted information can be obtained through coordinate conversion and combination of the second rigid body relation and the second predicted information.
Step S335: and acquiring an error between the second information at the second moment and the second attitude prediction information.
The second information is the position and posture information of the second image acquisition device in the target scene, and the second posture prediction information is the predicted position and posture information of the second image acquisition device in the target scene; the error between the two can be obtained from them. In some embodiments, the second posture prediction information may be subtracted from the second information, or the second information subtracted from the second posture prediction information, and the absolute value taken, giving the error between the second information at the second moment and the second posture prediction information.
Step S336: and updating the second rigid body relation according to the error.
Through the above description, it can be known that the error between the second information and the second posture prediction information at the second moment mainly refers to the error between the position of the second image acquisition device in the target scene and the actual measurement value and the prediction value of the posture information, and the second rigid body relationship refers to the structural placement relationship between the second image acquisition device and the inertial measurement unit. Therefore, the second rigid body relation can be updated with an error between the second information and the second posture prediction information.
In some embodiments, the prediction information after the second moment may be re-acquired according to the second prediction information, where the second prediction information is obtained by updating the prediction information at the second moment with the second information at the second moment. After the second prediction information is obtained, the prediction information after the second moment may be re-acquired based on it, specifically by integrating forward from the second prediction information.
Step S340: the prediction information at the current time is used as target information.
To describe the specific flow of acquiring the target information in the positioning and tracking method more concretely, Fig. 10 gives an example diagram. In Fig. 10, IMU denotes the prediction information of the terminal device relative to the marker acquired by the inertial measurement unit, Tag denotes the marker image acquired by the first image acquisition device, and VIO denotes the visual images acquired by the second image acquisition device for the VIO algorithm. In Fig. 10, a1, a2, a3, a4 are the prediction information of the inertial measurement unit at different moments; the prediction information of the next moment can be obtained by integrating from the prediction information of the previous moment, where the integration refers to integrating the acceleration, attitude angle, and other quantities measured by the inertial measurement unit.
After the first information is acquired from the first image captured by the first image acquisition device at moment T1, it can be converted, according to the rigid-body relation between the first image acquisition device and the inertial measurement unit, into the pose b1 of the inertial measurement unit relative to the marker, and the prediction information of the inertial measurement unit at moment T1 is updated with b1 to obtain a1'. The inertial measurement unit may then integrate forward from the updated a1' to obtain the pose a2' at moment T2, the pose a3' at moment T3, the pose a4' at moment T4, and so on. In addition, the second information of the second image acquisition device in the target scene can be acquired from the second image; using the second rigid-body relation between the second image acquisition device and the inertial measurement unit, the position and posture information c1 of the terminal device relative to the marker at moment T2 can be acquired, and the prediction information a2' at moment T2 is updated according to c1 to obtain the pose a2 at moment T2, from which the pose a3 at moment T3, the pose a4 at moment T4, and so on, are obtained.
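The timeline of Fig. 10 can be replayed with a toy filter loop like the one below. It is a deliberately simplified sketch (1-D positions, a fixed 50/50 blend standing in for the update rule), meant only to show how integration steps and marker/VIO corrections interleave:

```python
def run_filter(imu_samples, marker_updates, vio_updates):
    """Replay Fig. 10's timeline: IMU integration plus spliced-in corrections.

    Illustrative sketch with 1-D positions. imu_samples: list of (t, accel);
    marker_updates / vio_updates: lists of (t, measured_position), standing
    in for the b1- and c1-style corrections derived in the text above.
    """
    pos, vel, prev_t = 0.0, 0.0, None
    corrections = sorted(marker_updates + vio_updates)  # merge both sources
    history = []
    for t, accel in imu_samples:
        if prev_t is not None:
            dt = t - prev_t
            pos += vel * dt + 0.5 * accel * dt ** 2  # a1 -> a2' style integration
            vel += accel * dt
        # Apply every correction whose timestamp has been reached (a2' -> a2).
        while corrections and corrections[0][0] <= t:
            _, measured = corrections.pop(0)
            pos = 0.5 * (pos + measured)  # fixed 50/50 blend as a stand-in update
        history.append((t, pos))
        prev_t = t
    return history
```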
With the above positioning and tracking method, the prediction information at the corresponding moments is updated by introducing the first and second rigid body relations of the first and second image acquisition devices relative to the inertial measurement unit, which further ensures the accuracy of the position and posture information of the terminal device relative to the marker.
Referring to fig. 11, still another embodiment of the present application provides a positioning and tracking method applicable to a terminal device, where the terminal device further includes a microprocessor and a processor, the first image acquisition device is connected to the microprocessor, and the second image acquisition device is connected to the processor. Specifically, the method may include steps S410 to S460.
Step S410: acquiring relative position and posture information between the first image acquisition device and the marker according to a first image containing the marker acquired by the first image acquisition device, to obtain first information.
Step S420: acquiring position and posture information of the second image acquisition device in the target scene according to a second image containing the target scene acquired by the second image acquisition device, to obtain second information, where the marker and the terminal device are located in the target scene.
Step S430: acquiring predicted position and posture information of the terminal device relative to the marker at different moments by using the inertial measurement unit, to obtain the prediction information at different moments.
Step S440: when the first information at a first moment is acquired, updating the prediction information at the first moment with the first information to obtain first prediction information, and re-acquiring the prediction information after the first moment based on the first prediction information.
Referring to fig. 12, step S440 may include steps S441 to S444.
Step S441: acquiring, by the processor, a plurality of interrupt moments, where an interrupt moment is a moment at which the first image acquisition device sends an interrupt signal to the processor.
In one embodiment, the terminal device may be provided not only with the first image acquisition device, the second image acquisition device, and the inertial measurement unit, but also with a processor and a microprocessor. The connection relationship between these components is shown in fig. 13, in which 401 denotes the first image acquisition device and 402 denotes the second image acquisition device. As can be seen from fig. 13, the first image acquisition device 401 is connected to the microprocessor, while the second image acquisition device 402 and the inertial measurement unit are connected to the processor.
When the first image acquisition device 401 captures a first image containing the marker, it may send an interrupt signal, for example a GPIO (general purpose input/output) interrupt signal, to the processor; the moment at which the processor receives the interrupt signal may be referred to as an interrupt moment and may be stored. Since acquiring the marker image is a process of continuously capturing multiple frames, and each frame exposure generates an interrupt, the processor can acquire multiple interrupt moments, denoted for example T11, T12, T13, T14, and so on.
Step S442: acquiring, by the processor, a receiving time, where the receiving time is the moment at which the processor receives the first image sent by the microprocessor.
After acquiring a first image, the first image acquisition device 401 may transmit it to the microprocessor, which in turn sends it to the processor. The processor may record the moment at which the first image is received, referred to as the receiving time, and the terminal device may store this receiving time for subsequent processing.
Step S443: determining the first moment by using the receiving time and the plurality of interrupt moments.
In some embodiments, as shown in fig. 14, determining the first moment by using the receiving time and the plurality of interrupt moments may include steps S4431 to S4434.
Step S4431: acquiring a delay duration from the moment the first image acquisition device captures the first image to the moment the processor receives it, where the delay duration is the sum of the processing duration and the transmission duration of the first image.
A delay duration Δt may exist from the time the first image acquisition device 401 captures the first image to the time the processor receives it; this delay duration may include the processing duration t1 and the transmission duration t2 of the first image. The processing duration t1 refers to the time consumed by the microprocessor to process the first image; in one embodiment it is related to the frame rate of the image sensor in the terminal device, and the higher the frame rate, the shorter the processing duration t1. The transmission duration t2 refers to the time required for the first image to be transmitted from the microprocessor to the processor. In this embodiment, the delay duration Δt may be the sum of the processing duration and the transmission duration, that is, Δt = t1 + t2.
Step S4432: acquiring the exposure time of the first image by using the receiving time and the delay duration.
The processor may use the acquired receiving time T2 and the delay duration Δt to obtain the exposure time T3 of the first image, which may also be referred to as the theoretical exposure time of the first image. In one embodiment, the theoretical exposure time is obtained by subtracting the delay duration from the receiving time, that is, T3 = T2 − Δt.
Step S4433: calculating the difference between the exposure time and each interrupt moment, and judging whether the difference is smaller than a preset threshold.
As described above, the first image acquisition device 401 captures the first image in a process of continuously acquiring multiple frames, so the processor may receive and store multiple interrupt moments, denoted T11, T12, T13, T14, and so on. Having obtained these interrupt moments, the processor can calculate the difference between the exposure time and each of them, namely Δt1 = T3 − T11, Δt2 = T3 − T12, Δt3 = T3 − T13, Δt4 = T3 − T14, and so on, and judge whether each difference is smaller than a preset threshold Th, that is, whether Δt1 < Th, Δt2 < Th, Δt3 < Th, Δt4 < Th, and so on. If a difference is smaller than the preset threshold Th, the method proceeds to step S4434.
Step S4434: if a difference is smaller than the preset threshold, taking the corresponding interrupt moment as the first moment.
When the difference between the exposure time T3 and an interrupt moment is smaller than the preset threshold Th, that interrupt moment is taken as the first moment; that is, whichever of Δt1, Δt2, Δt3, Δt4, and so on is smaller than Th, the corresponding interrupt moment is selected. In some embodiments, several interrupt moments may satisfy this condition; since the actual delay may be longer than the theoretical delay, it may then further be judged whether each such interrupt moment is not later than the exposure time T3. For example, suppose the receiving time T2 of the first image is 100 and the acquired delay duration Δt is 30, so that the exposure time of the first image is T3 = T2 − Δt = 70. If the interrupt moments T11, T12, T13, T14, and T15 recorded by the processor are 20, 40, 60, 80, and 100, respectively, the absolute differences from the exposure time T3 are 50, 30, 10, 10, and 30, respectively. With a threshold Th = 15, both T13 and T14 satisfy the condition; comparing them with the exposure time T3, the interrupt moment not later than T3 is selected, namely T13, so the first moment is 60.
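As a concrete illustration, the following sketch reproduces steps S4431 to S4434 on the worked example above. The function name and the tie-breaking fallback are assumptions made for this sketch rather than details fixed by the method.

```python
def find_first_moment(receive_time, delay, interrupt_times, threshold):
    """Match the theoretical exposure time against recorded interrupt
    moments, following steps S4431 to S4434."""
    exposure = receive_time - delay  # T3 = T2 - delta-t
    # Interrupt moments whose difference from the exposure time is below Th.
    candidates = [t for t in interrupt_times if abs(exposure - t) < threshold]
    # The actual delay may exceed the theoretical delay, so among the
    # candidates prefer interrupt moments not later than the exposure time.
    not_later = [t for t in candidates if t <= exposure]
    pool = not_later or candidates
    return min(pool, key=lambda t: abs(exposure - t)) if pool else None

# Worked example from the text: T2 = 100, delay = 30, Th = 15.
print(find_first_moment(100, 30, [20, 40, 60, 80, 100], 15))  # -> 60
```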
Step S444: acquiring the prediction information at the first moment, and updating it with the first information at the first moment.
Step S450: when the second information at a second moment is acquired, updating the prediction information at the second moment with the second information to obtain second prediction information, and re-acquiring the prediction information after the second moment based on the second prediction information.
Step S460: the prediction information at the current time is used as target information.
In some embodiments, the terminal device may further send, through the processor, a time synchronization instruction to the microprocessor, where the time synchronization instruction contains the clock time of the processor and instructs the microprocessor to adjust its own clock time according to the clock time of the processor.
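For illustration, a minimal sketch of how such an instruction might be handled on the microprocessor side is given below, assuming a simple offset-based clock adjustment; the class and method names are invented for this sketch.

```python
import time

class Microprocessor:
    """Keeps a local clock that can be re-based on the processor's clock."""

    def __init__(self):
        self.clock_offset = 0.0

    def on_time_sync(self, processor_clock_time):
        # Adjust the local clock according to the clock time carried in the
        # processor's time synchronization instruction.
        self.clock_offset = processor_clock_time - time.monotonic()

    def now(self):
        # Local timestamps, expressed in the processor's time base, so image
        # and IMU data can be aligned on one clock.
        return time.monotonic() + self.clock_offset

# Processor side: send the current processor clock time as the instruction.
mcu = Microprocessor()
mcu.on_time_sync(time.monotonic())
```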
By exploiting the data transmission relationships among the first image acquisition device, the second image acquisition device, the microprocessor, and the processor, the positioning and tracking method provided by this embodiment can synchronize the data of the inertial measurement unit, the microprocessor, and the processor, and can thereby reduce, to a certain extent, the errors that data transmission introduces into the positioning and tracking result.
Referring to fig. 15, which shows a block diagram of a positioning and tracking device 500 provided in an embodiment of the present application, the positioning and tracking device 500 is applied to a terminal device. The block diagram shown in fig. 15 is explained below; the positioning and tracking device 500 may include a first information acquisition module 510, a second information acquisition module 520, and a target information acquisition module 530.
The first information acquisition module 510 is configured to acquire, according to a first image containing the marker acquired by the first image acquisition device, relative position and posture information between the first image acquisition device and the marker, to obtain first information.
The second information acquisition module 520 is configured to acquire, according to a second image containing the target scene acquired by the second image acquisition device, position and posture information of the second image acquisition device in the target scene, to obtain second information, where the marker and the terminal device are located in the target scene.
The target information acquisition module 530 is configured to acquire, by using the first information and the second information, the position and posture information of the terminal device relative to the marker, to obtain the target information.
Further, the target information acquisition module 530 may include a prediction information acquisition unit 531, a first update unit 532, a second update unit 533, and an information acquisition unit 534 as shown in fig. 16.
The prediction information acquisition unit 531 is configured to acquire predicted position and posture information of the terminal device relative to the marker at different moments by using the inertial measurement unit, to obtain the prediction information at different moments.
The first updating unit 532 is configured to, when the first information at the first moment is acquired, update the prediction information at the first moment with the first information to obtain the first prediction information, and re-acquire the prediction information after the first moment based on the first prediction information.
Further, the first updating unit 532 may be configured to acquire a first rigid body relation between the first image acquisition device and the inertial measurement unit, acquire the position and posture information of the inertial measurement unit relative to the marker according to the first information at the first moment and the first rigid body relation, and update the prediction information at the first moment with this position and posture information to obtain the first prediction information.
Further, the first updating unit 532 may be further configured to predict the relative position and posture information between the first image acquisition device and the marker by using the first rigid body relation and the first prediction information to obtain the first posture prediction information, acquire the error between the first information at the first moment and the first posture prediction information, and update the first rigid body relation according to the error.
Further, the first updating unit 532 may be further configured to acquire, by the processor, a plurality of interrupt moments, where an interrupt moment is a moment at which the first image acquisition device sends an interrupt signal to the processor; acquire, by the processor, a receiving time, where the receiving time is the moment at which the processor receives the first image sent by the microprocessor; determine the first moment by using the receiving time and the plurality of interrupt moments; and acquire the prediction information at the first moment and update it with the first information at the first moment.
Further, the first updating unit 532 may be further configured to acquire a delay duration from the first image acquisition device capturing the first image to the processor receiving the first image, where the delay duration is the sum of the processing duration and the transmission duration of the first image; acquire the exposure time of the first image by using the receiving time and the delay duration; calculate the difference between the exposure time and each interrupt moment and judge whether the difference is smaller than a preset threshold; and, if a difference is smaller than the preset threshold, take the corresponding interrupt moment as the first moment.
Further, the first updating unit 532 may be further configured to send, through the processor, a time synchronization instruction to the microprocessor, where the time synchronization instruction contains the clock time of the processor and instructs the microprocessor to adjust its own clock time according to the clock time of the processor.
The second updating unit 533 is configured to, when the second information at the second moment is acquired, update the prediction information at the second moment with the second information to obtain the second prediction information, and re-acquire the prediction information after the second moment based on the second prediction information.
Further, the second updating unit 533 may be configured to acquire a second rigid body relation between the second image acquisition device and the inertial measurement unit, perform coordinate conversion on the second information at the second moment by using the first rigid body relation between the first image acquisition device and the inertial measurement unit together with the second rigid body relation, to obtain intermediate position and posture information of the terminal device relative to the marker, and update the prediction information at the second moment with this intermediate position and posture information to obtain the second prediction information.
Further, the second updating unit 533 may be further configured to predict the position and posture information of the second image acquisition device in the target scene by using the second rigid body relation and the second prediction information to obtain the second posture prediction information, acquire the error between the second information at the second moment and the second posture prediction information, and update the second rigid body relation according to the error.
The information acquisition unit 534 is configured to use the prediction information at the current moment as the target information.
In several embodiments provided herein, the coupling of the modules to each other may be electrical, mechanical, or other.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
Referring to fig. 17, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 600 may be any terminal device capable of running application programs, such as a smartphone, a tablet computer, or an e-book reader. The terminal device 600 in the present application may include one or more of the following components: a processor 610, a memory 620, an image acquisition device 630, an inertial measurement unit 640, and one or more application programs, where the one or more application programs may be stored in the memory 620 and configured to be executed by the one or more processors 610 to perform the methods described in the foregoing method embodiments.
Processor 610 may include one or more processing cores. The processor 610 connects the various parts of the terminal device 600 using various interfaces and lines, and performs the various functions of the terminal device 600 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 620 and by invoking the data stored in the memory 620. Optionally, the processor 610 may be implemented in hardware in at least one of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA) form. The processor 610 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 610 and may instead be implemented by a separate communication chip.
The memory 620 may include random access memory (RAM) or read-only memory (ROM), and may be used to store instructions, programs, code, code sets, or instruction sets. The memory 620 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and so on. The data storage area may store data created by the terminal device 600 in use, and the like.
In the embodiment of the present application, the image capturing device 630 is configured to capture an image of the marker and capture a scene image of the target scene. The image capturing device 630 may be an infrared camera or a color camera, and the specific camera type is not limited in the embodiment of the present application. The inertial measurement unit 640 is configured to obtain position and posture information of the terminal device in real time, so as to obtain six-degree-of-freedom information of the terminal device, that is, pose change information of the terminal device.
Referring to fig. 18, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable storage medium 700 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments described above.
The computer readable storage medium 700 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 700 comprises a non-transitory computer-readable storage medium. The computer readable storage medium 700 has storage space for program code 710 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products, and the program code 710 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A positioning and tracking method, characterized in that it is applied to a terminal device, the terminal device comprising a first image acquisition device, a second image acquisition device and an inertial measurement unit, the method comprising:
acquiring relative position and posture information between the first image acquisition device and the marker according to a first image which is acquired by the first image acquisition device and contains the marker, so as to obtain first information;
acquiring position and posture information of the second image acquisition device in the target scene according to a second image which is acquired by the second image acquisition device and contains the target scene, and obtaining second information, wherein the marker and the terminal equipment are positioned in the target scene;
acquiring the predicted position and posture information of the terminal equipment relative to the marker at different moments by using the inertial measurement unit to obtain predicted information at different moments;
when first information at a first moment is acquired, updating prediction information at the first moment by using the first information to acquire first prediction information, and re-acquiring the prediction information after the first moment based on the first prediction information;
when second information of a second moment is acquired, updating prediction information of the second moment by using the second information to acquire second prediction information, and re-acquiring prediction information after the second moment based on the second prediction information;
the prediction information at the current time is used as target information.
2. The method according to claim 1, wherein when the first information of the first time is obtained, updating the prediction information of the first time with the first information to obtain the first prediction information includes:
acquiring a first rigid body relation between the first image acquisition device and the inertial measurement unit;
acquiring position and posture information of the inertial measurement unit relative to the marker according to the first information at the first moment and the first rigid body relation;
and updating the prediction information at the first moment by utilizing the position and posture information of the inertial measurement unit relative to the marker to obtain first prediction information.
3. The method according to claim 2, wherein after obtaining the first prediction information, the method comprises:
predicting the relative position and posture information between the first image acquisition device and the marker by using the first rigid body relation and the first prediction information to obtain first posture prediction information;
acquiring an error between the first information at the first moment and the first posture prediction information;
and updating the first rigid body relation according to the error.
4. The method according to claim 1, wherein when the second information at the second time is obtained, updating the predicted information at the second time with the second information to obtain the second predicted information includes:
acquiring a second rigid body relation between the second image acquisition device and the inertial measurement unit;
performing coordinate conversion on the second information at the second moment by using the first rigid body relation between the first image acquisition device and the inertial measurement unit and the second rigid body relation, to obtain intermediate position and posture information of the terminal device relative to the marker;
and updating the prediction information at the second moment by using the intermediate position and the posture information of the terminal equipment relative to the marker to obtain second prediction information.
5. The method of claim 4, further comprising, after the obtaining the second prediction information:
predicting the position and posture information of the second image acquisition device in the target scene by using the second rigid body relation and the second prediction information to obtain second posture prediction information;
acquiring an error between the second information at the second moment and the second posture prediction information;
and updating the second rigid body relation according to the error.
6. The method of claim 1, wherein the terminal device further comprises a microprocessor and a processor, the first image acquisition device is connected to the microprocessor, and the second image acquisition device is connected to the processor;
when the first information of the first moment is obtained, updating the prediction information of the first moment by using the first information includes:
acquiring a plurality of interrupt moments by the processor, wherein the interrupt moments are moments when the first image acquisition device sends interrupt signals to the processor;
acquiring a receiving moment by the processor, wherein the receiving moment is the moment when the processor receives the first image sent by the microprocessor;
determining a first moment by using the receiving moment and the plurality of interrupt moments;
and acquiring the prediction information of the first moment, and updating the prediction information of the first moment by utilizing the first information of the first moment.
7. The method of claim 6, wherein said determining a first moment by using said receiving moment and said plurality of interrupt moments comprises:
acquiring a delay duration from the first image acquisition device capturing the first image to the processor receiving the first image, wherein the delay duration is the sum of the processing duration and the transmission duration of the first image;
acquiring the exposure time of the first image by using the receiving moment and the delay duration;
calculating a difference value between the exposure time and each interrupt moment, and judging whether the difference value is smaller than a preset threshold value;
if the difference value is smaller than the preset threshold value, taking the corresponding interrupt moment as the first moment.
8. The method of claim 6, wherein the method further comprises:
and sending a time synchronization instruction to the microprocessor through the processor, wherein the time synchronization instruction comprises the clock time of the processor, and the time synchronization instruction is used for instructing the microprocessor to adjust its clock time according to the clock time of the processor.
9. A positioning and tracking device, characterized in that it is applied to a terminal device, said terminal device comprising a first image acquisition device, a second image acquisition device and an inertial measurement unit, said device comprising:
the first information acquisition module is used for acquiring relative position and posture information between the first image acquisition device and the marker according to the first image which is acquired by the first image acquisition device and contains the marker, so as to obtain first information;
the second information acquisition module is used for acquiring the position and posture information of the second image acquisition device in the target scene according to the second image which is acquired by the second image acquisition device and contains the target scene, so as to obtain second information, wherein the marker and the terminal equipment are positioned in the target scene;
the target information acquisition module is used for acquiring the predicted position and posture information of the terminal equipment relative to the marker at different moments by utilizing the inertial measurement unit to obtain the predicted information at different moments; when first information at a first moment is acquired, updating prediction information at the first moment by using the first information to acquire first prediction information, and re-acquiring the prediction information after the first moment based on the first prediction information; when second information of a second moment is acquired, updating prediction information of the second moment by using the second information to acquire second prediction information, and re-acquiring prediction information after the second moment based on the second prediction information; the prediction information at the current time is used as target information.
10. A terminal device, comprising:
one or more processors;
a memory;
an image acquisition device;
an inertial measurement unit;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-8.
11. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program code, which is callable by a processor for executing the method according to any one of claims 1-8.
CN201910642093.0A 2018-08-02 2019-07-16 Positioning tracking method, device, terminal equipment and computer readable storage medium Active CN110442235B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201910642093.0A CN110442235B (en) 2019-07-16 2019-07-16 Positioning tracking method, device, terminal equipment and computer readable storage medium
PCT/CN2019/098200 WO2020024909A1 (en) 2018-08-02 2019-07-29 Positioning and tracking method, terminal device, and computer readable storage medium
US16/687,699 US11127156B2 (en) 2018-08-02 2019-11-19 Method of device tracking, terminal device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910642093.0A CN110442235B (en) 2019-07-16 2019-07-16 Positioning tracking method, device, terminal equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110442235A CN110442235A (en) 2019-11-12
CN110442235B true CN110442235B (en) 2023-05-23

Family

ID=68430545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910642093.0A Active CN110442235B (en) 2018-08-02 2019-07-16 Positioning tracking method, device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110442235B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021164712A1 (en) * 2020-02-19 2021-08-26 Oppo广东移动通信有限公司 Pose tracking method, wearable device, mobile device, and storage medium
CN111935644B (en) * 2020-08-10 2021-08-24 腾讯科技(深圳)有限公司 Positioning method and device based on fusion information and terminal equipment
CN112788583B (en) * 2020-12-25 2024-01-05 深圳酷派技术有限公司 Equipment searching method and device, storage medium and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105445937A (en) * 2015-12-27 2016-03-30 深圳游视虚拟现实技术有限公司 Mark point-based multi-target real-time positioning and tracking device, method and system
CN105892638A (en) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Virtual reality interaction method, device and system
CN107113415A (en) * 2015-01-20 2017-08-29 高通股份有限公司 The method and apparatus for obtaining and merging for many technology depth maps

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI357582B (en) * 2008-04-18 2012-02-01 Univ Nat Taiwan Image tracking system and method thereof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107113415A (en) * 2015-01-20 2017-08-29 高通股份有限公司 The method and apparatus for obtaining and merging for many technology depth maps
CN105892638A (en) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Virtual reality interaction method, device and system
CN105445937A (en) * 2015-12-27 2016-03-30 深圳游视虚拟现实技术有限公司 Mark point-based multi-target real-time positioning and tracking device, method and system

Also Published As

Publication number Publication date
CN110442235A (en) 2019-11-12

Similar Documents

Publication Publication Date Title
CN110442235B (en) Positioning tracking method, device, terminal equipment and computer readable storage medium
CN111198608B (en) Information prompting method and device, terminal equipment and computer readable storage medium
EP1437645A2 (en) Position/orientation measurement method, and position/orientation measurement apparatus
KR20160122709A (en) Methods and systems for determining elstimation of motion of a device
US20050256391A1 (en) Information processing method and apparatus for finding position and orientation of targeted object
CN111091587B (en) Low-cost motion capture method based on visual markers
US11127156B2 (en) Method of device tracking, terminal device, and storage medium
WO2016031105A1 (en) Information-processing device, information processing method, and program
CN108427479B (en) Wearable device, environment image data processing system, method and readable medium
WO2015093130A1 (en) Information processing device, information processing method, and program
JP2007098555A (en) Position indicating method, indicator and program for achieving the method
JP2023502635A (en) CALIBRATION METHOD AND APPARATUS, PROCESSOR, ELECTRONICS, STORAGE MEDIUM
WO2015040119A1 (en) 3d reconstruction
CN103900473A (en) Intelligent mobile device six-degree-of-freedom fused pose estimation method based on camera and gravity inductor
WO2020014864A1 (en) Pose determination method and device, and computer readable storage medium
CN110688002B (en) Virtual content adjusting method, device, terminal equipment and storage medium
CN111862150A (en) Image tracking method and device, AR device and computer device
US11436818B2 (en) Interactive method and interactive system
US11481920B2 (en) Information processing apparatus, server, movable object device, and information processing method
CN111489376B (en) Method, device, terminal equipment and storage medium for tracking interaction equipment
US11232291B2 (en) Posture estimation system, posture estimation apparatus, error correction method, and error correction program
CN112037261A (en) Method and device for removing dynamic features of image
CN116576866B (en) Navigation method and device
CN116592876B (en) Positioning device and positioning method thereof
CN116295327A (en) Positioning method, positioning device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant