CN110442235A - Positioning and tracking method, device, terminal device and computer-readable storage medium - Google Patents
Positioning and tracking method, device, terminal device and computer-readable storage medium
- Publication number
- CN110442235A CN110442235A CN201910642093.0A CN201910642093A CN110442235A CN 110442235 A CN110442235 A CN 110442235A CN 201910642093 A CN201910642093 A CN 201910642093A CN 110442235 A CN110442235 A CN 110442235A
- Authority
- CN
- China
- Prior art keywords
- information
- moment
- image
- terminal device
- marker
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The embodiments of the present application disclose a positioning and tracking method, an apparatus, a terminal device, and a computer-readable storage medium, relating to the field of display technology. The positioning and tracking method includes: obtaining, from a first image that contains a marker and is captured by a first image acquisition device, the relative position and attitude information between the first image acquisition device and the marker, to obtain first information; obtaining, from a second image that contains a target scene and is captured by a second image acquisition device, the position and attitude information of the second image acquisition device within the target scene, to obtain second information, where the marker and the terminal device are located in the target scene; and using the first information and the second information to obtain the position and attitude information of the terminal device relative to the marker, to obtain target information. By combining the first information and the second information, the method obtains the position and attitude information of the terminal device relative to the marker more accurately.
Description
Technical field
This application relates to the field of display technology, and more particularly to a positioning and tracking method, an apparatus, a terminal device, and a computer-readable storage medium.
Background
In recent years, with the development of science and technology, augmented reality (AR) and virtual reality (VR) have increasingly become research hotspots at home and abroad. Taking augmented reality as an example, AR is a technology that augments the user's perception of the real world with information supplied by a computer system: computer-generated virtual objects, scenes, or system prompts are superimposed onto the real scene to enhance or modify the perception of the real-world environment, or of data representing it. How to accurately and effectively position and track a display device (such as a head-mounted display device, smart glasses, or a smartphone) is therefore a problem to be solved.
Summary of the invention
The embodiments of the present application propose a positioning and tracking method, an apparatus, a terminal device, and a computer-readable storage medium, which can improve the accuracy with which a terminal device is positioned and tracked.
In a first aspect, an embodiment of the present application provides a positioning and tracking method applied to a terminal device. The method includes: obtaining, from a first image that contains a marker and is captured by a first image acquisition device, the relative position and attitude information between the first image acquisition device and the marker, to obtain first information; obtaining, from a second image that contains a target scene and is captured by a second image acquisition device, the position and attitude information of the second image acquisition device within the target scene, to obtain second information, where the marker and the terminal device are located in the target scene; and using the first information and the second information to obtain the position and attitude information of the terminal device relative to the marker, to obtain target information.
In a second aspect, an embodiment of the present application provides a positioning and tracking apparatus applied to a terminal device. The apparatus includes a first-information obtaining module, a second-information obtaining module, and a target-information obtaining module. The first-information obtaining module obtains, from a first image that contains a marker and is captured by a first image acquisition device, the relative position and attitude information between the first image acquisition device and the marker, to obtain first information. The second-information obtaining module obtains, from a second image that contains a target scene and is captured by a second image acquisition device, the position and attitude information of the second image acquisition device within the target scene, to obtain second information, where the marker and the terminal device are located in the target scene. The target-information obtaining module uses the first information and the second information to obtain the position and attitude information of the terminal device relative to the marker, to obtain target information.
In a third aspect, an embodiment of the present application provides a terminal device, including: one or more processors; a memory; an image acquisition device; an inertial measurement unit; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to carry out the positioning and tracking method provided by the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code, the program code being callable by a processor to execute the positioning and tracking method provided by the first aspect.
The scheme provided by the embodiments of the present application realizes the positioning and tracking of virtual objects through a first image acquisition device and a second image acquisition device. First, the relative position and attitude information between the first image acquisition device and the marker is obtained from a first image, captured by the first image acquisition device, that contains the marker, yielding first information. Then the position and attitude information of the second image acquisition device within the target scene is obtained from a second image, captured by the second image acquisition device, that contains the target scene, yielding second information. Finally, the terminal device can use the first information and the second information to obtain the position and attitude information of the terminal device relative to the marker, yielding target information. By combining the first information and the second information, the finally obtained position and attitude information of the terminal device relative to the marker is made more accurate, which in turn improves the accuracy with which the terminal device is positioned and tracked.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an application scenario suitable for the embodiments of the present application.
Fig. 2 shows a method flowchart of the positioning and tracking method of one embodiment of the present application.
Fig. 3 shows the positional relationship among the marker, the terminal device, and the target scene in the positioning and tracking method of one embodiment of the present application.
Fig. 4 shows a method flowchart of the positioning and tracking method of another embodiment of the present application.
Fig. 5 shows a flowchart of step S220 in the positioning and tracking method of another embodiment of the present application.
Fig. 6 shows a flowchart of further steps in the positioning and tracking method of another embodiment of the present application.
Fig. 7 shows a method flowchart of the positioning and tracking method of yet another embodiment of the present application.
Fig. 8 shows a flowchart of step S330 in the positioning and tracking method of yet another embodiment of the present application.
Fig. 9 shows a flowchart of further steps in the positioning and tracking method of yet another embodiment of the present application.
Fig. 10 shows a specific example of obtaining target information in the positioning and tracking method of yet another embodiment of the present application.
Fig. 11 shows a method flowchart of the positioning and tracking method of a further embodiment of the present application.
Fig. 12 shows a flowchart of step S440 in the positioning and tracking method of a further embodiment of the present application.
Fig. 13 shows an example diagram of the data transmission performed by the terminal device in the positioning and tracking method of a further embodiment of the present application.
Fig. 14 shows a detailed flowchart of step S443 within step S440 in the positioning and tracking method of a further embodiment of the present application.
Fig. 15 shows a block diagram of the positioning and tracking apparatus of one embodiment of the present application.
Fig. 16 shows a block diagram of the target-information obtaining module 530 in the positioning and tracking apparatus of one embodiment of the present application.
Fig. 17 shows a block diagram of a terminal device of an embodiment of the present application for executing the positioning and tracking method according to the embodiments of the present application.
Fig. 18 shows a storage unit of an embodiment of the present application for saving or carrying the program code that implements the positioning and tracking method according to the embodiments of the present application.
Specific embodiments
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present application, not all of them. The components of the embodiments of the present application, as generally described and illustrated in the drawings here, may be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the claimed scope of the present application, but merely represents selected embodiments of the application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative effort fall within the scope of protection of this application.
It should also be noted that similar labels and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings. In addition, the terms "first", "second", and the like are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
The application scenarios of the positioning and tracking method provided by the embodiments of the present application are introduced below.
Referring to Fig. 1, a virtual content display system 10 provided by an embodiment of the present application is shown. The display system 10 includes a terminal device 100 and a marker 200.
In the embodiments of the present application, the terminal device 100 may be a head-mounted display device, or a mobile device such as a mobile phone or a tablet. When the terminal device 100 is a head-mounted display device, it may be an integrated head-mounted display device. The terminal device 100 may also be a smart terminal, such as a mobile phone, connected to an external head-mounted display device; that is, the terminal device 100 may serve as the processing and storage device of the head-mounted display device, be inserted into or connected to the external head-mounted display device, and display virtual content on the head-mounted display device.
In some embodiments, the terminal device 100 may include two image acquisition devices, namely a first image acquisition device and a second image acquisition device, both of which may be mounted on the terminal device 100. The first and second image acquisition devices may be infrared cameras, color cameras, or the like; their specific types are not limited in the embodiments of the present application. In addition, the first and second image acquisition devices may each include an image sensor, which may be a CMOS (Complementary Metal-Oxide-Semiconductor) sensor, a CCD (Charge-Coupled Device) sensor, or the like.
In the embodiments of the present application, the marker 200 may be a pattern with a topological structure, where the topological structure refers to the connectivity among the sub-markers, feature points, and the like within the marker. When the marker 200 is within the visual range of the first image acquisition device of the terminal device 100, the terminal device 100 can treat the marker 200 within that visual range as a target marker and capture an image of it. By recognizing the captured image of the target marker, the processor of the terminal device 100 can obtain recognition results such as the spatial position information — position, orientation, and the like — of the terminal device 100 relative to the target marker, as well as the identity information of the target marker. The terminal device 100 can display corresponding virtual objects based on the spatial position information of the target marker relative to the terminal device 100, such as the solar system 300 shown in Fig. 1, which is the virtual object displayed for the marker 200. The virtual object may be displayed overlaid at the position of the marker 200, or overlaid at a position away from the marker 200. Through the terminal device 100, the user can see the virtual object overlaid on the real world, achieving an augmented reality effect. It should be understood that the specific marker 200 is not limited in the embodiments of the present application; it only needs to be recognizable and trackable by the terminal device 100.
The specific positioning and tracking method is introduced below.
Referring to Fig. 2, an embodiment of the present application provides a positioning and tracking method that can be applied to a terminal device. The method may include steps S110 to S130.
Step S110: obtain, from a first image that contains a marker and is captured by a first image acquisition device, the relative position and attitude information between the first image acquisition device and the marker, to obtain first information.
A first image acquisition device is provided on the terminal device and is used to capture images of the marker. The marker can be any figure or object with recognizable features. The marker can be placed within the visual range of the first image acquisition device, so that the first image acquisition device can collect a first image containing the marker. After the first image acquisition device captures it, this first image containing the marker can be used to determine the position and attitude information of the first image acquisition device relative to the marker.
In some embodiments, the marker may include at least one sub-marker, and a sub-marker can be a pattern with a certain shape. In one approach, each sub-marker may have one or more feature points, where the shape of a feature point is not limited: it may be a dot, a ring, a triangle, or another shape. The distribution rules of the sub-markers differ between markers, so each marker can have different identity information. By recognizing the sub-markers included in a marker, the terminal device can obtain the identity information corresponding to that marker, and thereby distinguish the relative positions and attitude information of different markers. The identity information may be a code or any other information that can uniquely identify the marker, but is not limited to this.
The relative position and attitude information between the first image acquisition device and the marker can also be called the 6DoF (six degrees of freedom) information of the first image acquisition device relative to the marker. The 6DoF information may include three translational degrees of freedom and three rotational degrees of freedom: the three translational degrees of freedom describe the X, Y, Z coordinate values of a three-dimensional object, and the three rotational degrees of freedom include pitch, roll, and yaw. Specifically, the terminal device can recognize and track the marker from the first image containing it, thereby obtaining the relative position and attitude information between the first image acquisition device and the marker, i.e., the first information.
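As an illustration of how such a marker-relative 6DoF pose can be recovered, the sketch below uses OpenCV's perspective-n-point solver on known marker feature points. It is a minimal sketch, not the patent's implementation; the marker corner layout and the camera intrinsics are assumed inputs.

```python
# Minimal sketch: recover the first information (6DoF pose of the first
# image acquisition device relative to a marker) with OpenCV's solvePnP.
import cv2
import numpy as np

# Assumed 3D layout of four marker feature points in the marker frame (metres).
MARKER_POINTS = np.array([[-0.05, -0.05, 0.0],
                          [ 0.05, -0.05, 0.0],
                          [ 0.05,  0.05, 0.0],
                          [-0.05,  0.05, 0.0]], dtype=np.float64)

def first_information(image_points, camera_matrix, dist_coeffs):
    """image_points: 4x2 pixel coordinates of the detected feature points.
    Returns (R, t): rotation matrix and translation of the marker in the
    first camera's frame, i.e. the relative position and attitude."""
    ok, rvec, tvec = cv2.solvePnP(MARKER_POINTS, image_points,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec
```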
Step S120: obtain, from a second image that contains the target scene and is captured by a second image acquisition device, the position and attitude information of the second image acquisition device within the target scene, to obtain second information, where the marker and the terminal device are located in the target scene.
The terminal device is also provided with a second image acquisition device, which is used to capture a second image of the target scene; that is, the second image is the scene image within the visual range of the second image acquisition device. In some embodiments, the terminal device and the marker are both contained in the target scene. To clearly state the relationship among the terminal device, the marker, and the target scene, a diagram is given in Fig. 3, where 101 denotes the target scene, 102 denotes the marker, and 103 denotes the terminal device. As can be seen from Fig. 3, the marker 102 and the terminal device 103 are both located in the target scene 101. In addition, as introduced above, the terminal device 103 includes the first image acquisition device and the second image acquisition device, where the main function of the first image acquisition device is to capture images of the marker 102, and the main function of the second image acquisition device is to capture images of the target scene 101.
After the second image acquisition device captures the second image of the target scene, the image can likewise be stored in the terminal device, and the terminal device can then use this second image to obtain the position and attitude information of the second image acquisition device within the target scene, i.e., the second information. In this embodiment, the second information can be computed using VIO (visual-inertial odometry). VIO can compute the relative degree-of-freedom information of the terminal device from the key points (or feature points) contained in the second image captured by the second image acquisition device, and from that deduce the current position and attitude of the terminal device. In other words, VIO can use the second image acquisition device to obtain second images in real time and obtain the angular velocity and acceleration information of the terminal device through the inertial measurement unit; combining these with the second image captured by the second image acquisition device yields the position and attitude information of the second image acquisition device within the target scene.
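The visual half of such a VIO pipeline can be sketched as follows: feature points are tracked between consecutive second images and the relative camera motion is recovered from the essential matrix. This is a hedged illustration of the general technique, not the patent's algorithm; in a full VIO system the IMU supplies metric scale and is fused separately.

```python
# Sketch: frame-to-frame visual odometry on two consecutive second images.
import cv2
import numpy as np

def relative_motion(prev_gray, cur_gray, camera_matrix):
    """Estimate rotation R and unit-scale translation t between two frames."""
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=8)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts0, None)
    good0 = pts0[status.ravel() == 1]   # keep only successfully tracked points
    good1 = pts1[status.ravel() == 1]
    E, mask = cv2.findEssentialMat(good0, good1, camera_matrix,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, camera_matrix, mask=mask)
    return R, t  # translation direction only; metric scale comes from the IMU
```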
Step S130: use the first information and the second information to obtain the position and attitude information of the terminal device relative to the marker, to obtain target information.
After the terminal device has obtained the first information and the second information using the first and second image acquisition devices, the position and attitude information of the terminal device relative to the marker — the target information — can be obtained by combining the first information and the second information. In one embodiment, because the first and second image acquisition devices are both mounted on the terminal device, the first information between the first image acquisition device and the marker can serve directly as the target information of the terminal device relative to the marker; likewise, the second information of the second image acquisition device within the target scene can serve as that target information.
To make the obtained target information more accurate and effective, this embodiment can combine the first information and the second information to obtain the target information; that is, the first information and the second information can be fused. There are many ways to fuse them: the average of the first information and the second information can be taken as the target information, or the first information and the second information can be assigned different weights and combined by weighted calculation. In some embodiments, the terminal device can also obtain the position and attitude information of the terminal device relative to the marker through an inertial measurement unit (IMU), and then update that IMU-derived position and attitude information using at least one of the first information and the second information, thereby obtaining the target information. The main function of the IMU is to measure the three-axis attitude angles (or angular rates) and the acceleration of the terminal device. An IMU usually includes three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the three-axis acceleration signals of the terminal device, and the gyroscopes detect the angular velocity signals of the carrier relative to the navigation coordinate system. From the measured angular velocity and acceleration of the terminal device, its attitude can be calculated.
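A minimal sketch of such a fusion is given below, assuming both pieces of information have already been expressed as a position vector plus an attitude quaternion in the same marker-relative frame; the weight is configurable, and w = 0.5 reproduces the plain average mentioned above.

```python
# Sketch: weighted fusion of two pose estimates (first and second information).
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def fuse_poses(pos1, quat1, pos2, quat2, w=0.5):
    """w is the weight given to the second estimate."""
    pos = (1.0 - w) * np.asarray(pos1) + w * np.asarray(pos2)
    rots = Rotation.from_quat([quat1, quat2])
    quat = Slerp([0.0, 1.0], rots)(w).as_quat()  # spherical blend of attitudes
    return pos, quat
```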
The positioning and tracking method provided by the embodiments of the present application obtains the target information of the terminal device relative to the marker from the first information obtained through the first image acquisition device and the second information obtained through the second image acquisition device. Because the target information combines the first information and the second information, the position and attitude information of the terminal device relative to the marker obtained by this application is more accurate and effective than in the prior art, which in turn makes the positioning and tracking of the terminal device more accurate.
Another embodiment of the present application provides a positioning and tracking method in which the terminal device further includes an inertial measurement unit. As shown in Fig. 4, using the first information and the second information to obtain the position and attitude information of the terminal device relative to the marker, to obtain the target information, may include steps S210 to S240.
Step S210: use the inertial measurement unit to obtain the predicted position and attitude information of the terminal device relative to the marker at different moments, to obtain the predictive information for the different moments.
Before using the inertial measurement unit to obtain the predicted position and attitude information of the terminal device relative to the marker at different moments, the terminal device can obtain the first initial rigid-body relationship between the first image acquisition device and the inertial measurement unit, and at the same time the second initial rigid-body relationship between the second image acquisition device and the inertial measurement unit. The initial position and attitude information of the terminal device relative to the marker can be obtained by combining the first initial rigid-body relationship with the position and attitude information of the first image acquisition device relative to the marker.
In some embodiments, the inertial measurement unit can be provided on the terminal device, and through it the predicted position and attitude information of the terminal device relative to the marker at different moments can be obtained. As introduced above, an inertial measurement unit uses miniature gyroscopes to measure the inclination, deflection, and rotation of the object being measured. The inertial measurement unit can measure the angular changes of the terminal device's three rotational degrees of freedom with the gyroscopes, and the displacements of its three translational degrees of freedom with the accelerometers. Because the position and attitude information of the terminal device relative to the marker can change over time, the predicted position and attitude information that the inertial measurement unit obtains at different moments may also differ.
The inertial measurement unit can accumulate the position and attitude changes of the terminal device, so the position and attitude information of the terminal device relative to the marker at different moments can be predicted from the accumulated results. In other words, after the inertial measurement unit obtains the predictive information of the previous moment, the predictive information of the current moment can be obtained by integrating from the previous moment's predictive information, i.e., the position and attitude information of the terminal device relative to the marker at the current moment is obtained.
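The integration just described can be sketched as follows, assuming bias-free gyroscope rates (rad/s) and accelerometer specific force (m/s²) in the body frame; the gravity constant and variable names are assumptions of the sketch.

```python
# Sketch: one IMU dead-reckoning step, propagating the previous prediction.
import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (m/s^2)

def propagate(pos, vel, rot, gyro, accel, dt):
    """rot is a scipy Rotation mapping body -> world coordinates."""
    rot_new = rot * Rotation.from_rotvec(gyro * dt)  # attitude update
    acc_world = rot.apply(accel) + GRAVITY           # gravity-compensated accel
    vel_new = vel + acc_world * dt
    pos_new = pos + vel * dt + 0.5 * acc_world * dt ** 2
    return pos_new, vel_new, rot_new
```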
Step S220: when the first information is obtained at a first moment, update the predictive information of the first moment using the first information to obtain first predictive information, and reacquire the predictive information after the first moment based on the first predictive information.
When the terminal device obtains the first information at the first moment, it can use the first information to update the predictive information of the first moment. Because the inertial measurement unit obtains predictive information at different moments, the predictive information corresponding to different moments differs. Here, the first information of the first moment refers to the relative position and attitude information between the first image acquisition device and the marker obtained from the first image captured at the first moment. While the terminal device obtains the first information of the first moment from the first image captured by the first image acquisition device, it can use the inertial measurement unit to obtain the predictive information corresponding to that first moment. After obtaining the first information and the predictive information, the terminal device can update the predictive information with the first information to obtain the first predictive information, and then reacquire the predictive information of each moment after the first moment based on the first predictive information. The first predictive information can refer to the updated predictive information of the first moment.
In some embodiments, when the inertial measurement unit is in its initial state, the first image acquisition device can capture a first image containing the marker, and the relative position and attitude information between the first image acquisition device and the marker can be obtained from the first image. Then, from the first initial rigid-body relationship between the first image acquisition device and the inertial measurement unit together with that relative position and attitude information, the relative position and attitude information between the inertial measurement unit and the marker can be obtained. This relative position and attitude information between the inertial measurement unit and the marker can serve as the initial position and attitude information of the terminal device relative to the marker, i.e., the initial predictive information of the inertial measurement unit. Based on this initial predictive information, the inertial measurement unit can predict the position and attitude information of the terminal device relative to the marker at different moments. If, while the inertial measurement unit is in its initial state, the first image acquisition device has not collected a first image and the inertial measurement unit has not obtained the initial position and attitude information of the terminal device relative to the marker, it can remain in a waiting state.
In some embodiments, as shown in Fig. 5, step S220 may include steps S221 to S223.
Step S221: obtain the first rigid-body relationship between the first image acquisition device and the inertial measurement unit.
The first rigid-body relationship between the first image acquisition device and the inertial measurement unit refers to the structural placement relationship between them. Specifically, it may include information such as the distance and orientation between the first image acquisition device and the inertial measurement unit. This placement relationship can be obtained by actual measurement, from structural design values, or by calibration. It reflects the rotation and translation of the first image acquisition device relative to the inertial measurement unit (or of the inertial measurement unit relative to the first image acquisition device); this rotation and translation can express the rotation and displacement required to align the spatial coordinate system of the first image acquisition device with that of the inertial measurement unit. The spatial coordinate system of the first image acquisition device can be a three-dimensional coordinate system established at its center point, and the spatial coordinate system of the inertial measurement unit can be a three-dimensional coordinate system established at the center point of the inertial measurement unit; it should be understood that spatial coordinate systems are not limited to being established at center points.
Optionally, the first rigid-body relationship between the first image acquisition device and the inertial measurement unit may include the relative translation relationship and the rotation relationship between the first image acquisition device and the inertial measurement unit.
Step S222: obtain the position and attitude information of the inertial measurement unit relative to the marker from the first information of the first moment and the first rigid-body relationship.
The first information is the relative position and attitude information between the first image acquisition device and the marker, while the first rigid-body relationship is the structural placement relationship between the first image acquisition device and the inertial measurement unit, which can be obtained by actual measurement. Once the first rigid-body relationship is obtained, the relative relationship between the inertial measurement unit and the marker can be derived from it. This is possible mainly because the first image acquisition device and the inertial measurement unit are both arranged on the terminal device, so their placement relationship can be measured directly, and a definite mapping exists between the first image acquisition device and the marker. After obtaining the relative relationship between the inertial measurement unit and the marker, the terminal device can combine it with the first information to obtain the position and attitude information of the inertial measurement unit relative to the marker.
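Under the common convention that a rigid-body relationship is a 4x4 homogeneous transform, step S222 reduces to a matrix chain. The frame-naming convention below (T_ab means "frame b expressed in frame a") and the helper names are assumptions of this sketch.

```python
# Sketch: chain the first rigid-body relationship with the first information
# to express the inertial measurement unit relative to the marker.
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

def imu_relative_to_marker(T_cam1_marker, T_imu_cam1):
    T_imu_marker = T_imu_cam1 @ T_cam1_marker  # marker expressed in IMU frame
    return np.linalg.inv(T_imu_marker)         # IMU expressed in marker frame
```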
Step S223: update the predictive information of the first moment using the position and attitude information of the inertial measurement unit relative to the marker, to obtain the first predictive information.
Using the inertial measurement unit, the predictive information of the terminal device relative to the marker at different moments can be obtained. After the terminal device has obtained the position and attitude information of the inertial measurement unit relative to the marker through the first rigid-body relationship, it can use that position and attitude information to update the predictive information of the first moment, obtaining the first predictive information. As one specific implementation, an information update parameter can be derived from the position and attitude information of the inertial measurement unit relative to the marker at the first moment and the predictive information of the first moment; the information update parameter can be the deviation between that position and attitude information and the predictive information, and the predictive information of the first moment is updated based on this parameter. As another implementation, the position and attitude information of the inertial measurement unit relative to the marker at the first moment and the predictive information of the first moment can be combined by weighted calculation to obtain the first predictive information, where the weights can be set according to actual needs.
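The deviation-based variant can be written in a few lines; the constant gain is illustrative (a filter such as a Kalman filter would compute it adaptively), and a position vector stands in for the full pose.

```python
# Sketch: update the first moment's prediction with the IMU-relative pose.
import numpy as np

def update_prediction(predicted_pos, measured_pos, gain=0.8):
    deviation = measured_pos - predicted_pos  # the information update parameter
    return predicted_pos + gain * deviation   # the first predictive information
```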
In some embodiments, after the terminal device has obtained the first rigid-body relationship between the first image acquisition device and the inertial measurement unit, it can also update and calibrate this first rigid-body relationship to make it more accurate. The detailed steps for updating the first rigid-body relationship are shown in Fig. 6; as can be seen from Fig. 6, they may include steps S224 to S226.
Step S224: predict the relative position and attitude information between the first image acquisition device and the marker using the first rigid-body relationship and the first predictive information, to obtain first attitude prediction information.
In one embodiment, the terminal device can use the first rigid-body relationship between the first image acquisition device and the inertial measurement unit to perform a coordinate conversion on the first predictive information, recomputing the relative position and attitude information between the first image acquisition device and the marker to obtain the first attitude prediction information.
Step S225: obtain the error between the first information of the first moment and the first attitude prediction information.
The first information is the actually measured position and attitude information between the first image acquisition device and the marker, while the first attitude prediction information is the predicted position and attitude information between the first image acquisition device and the marker; the error between these two pieces of information can therefore be obtained. In some embodiments, the first attitude prediction information is subtracted from the first information and the absolute value is taken, or the first information is subtracted from the first attitude prediction information and the absolute value is taken; either yields the error between the first information and the first attitude prediction information.
Step S226: update the first rigid-body relationship according to the error.
As introduced above, the error between the first information and the first attitude prediction information generally refers to the error between the actually computed value and the predicted value of the position and attitude information between the first image acquisition device and the marker; this error can be used to update the first rigid-body relationship. The smaller the error between the first information and the first attitude prediction information, the more accurate the obtained first rigid-body relationship. The number of updates of the first rigid-body relationship can also be tracked, and if it exceeds a preset number of times, the updating of the first rigid-body relationship can be terminated.
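Schematically, the S224–S226 loop can be pictured as below, with the rigid-body relationship reduced to a parameter vector and a proportional correction standing in for the actual calibration update; both simplifications are assumptions of the sketch.

```python
# Schematic refinement loop for the first rigid-body relationship.
import numpy as np

def refine_first_rigid_body(params, predict_pose, first_info,
                            max_updates=50, tol=1e-4, gain=0.1):
    """params: rigid-body relationship as a 6-vector (rotation + translation);
    predict_pose(params): first attitude prediction information (6-vector);
    first_info: measured camera-marker pose at the first moment (6-vector)."""
    for _ in range(max_updates):         # stop after the preset update count
        error = first_info - predict_pose(params)
        if np.linalg.norm(error) < tol:  # small error: relationship is accurate
            break
        params = params + gain * error   # schematic proportional correction
    return params
```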
In some embodiments, the predictive information after the first moment can be reacquired based on the first predictive information. Specifically, the inertial measurement unit can, starting from the first predictive information, integrate the position and attitude changes at each moment after the first moment, and thereby reacquire the predictive information of each of those moments.
Step S230: when the second information is obtained at a second moment, update the predictive information of the second moment using the second information to obtain second predictive information, and reacquire the predictive information after the second moment based on the second predictive information.
The second information is the position and attitude information of the second image acquisition device within the target scene, which can be obtained from the second image containing the target scene. The predictive information of the second moment refers to the position and attitude information of the terminal device relative to the marker predicted by the inertial measurement unit at the second moment. When the terminal device obtains the second information of the second image acquisition device within the target scene, it can use that second information to update the predictive information of the second moment.
Step S240: take the predictive information of the current moment as the target information.
The terminal device can take the predictive information of the current moment, obtained through the inertial measurement unit, as the position and attitude information of the terminal device relative to the marker at the current moment, i.e., the target information; the predictive information of each moment can serve as the target information of the corresponding moment.
The positioning and tracking method provided by this embodiment uses the inertial measurement unit to make the acquisition of the position and attitude information of the terminal device relative to the marker more accurate and effective: by combining the first information and the second information, the predictive information obtained by the inertial measurement unit can be updated. In addition, by constantly updating the first rigid-body relationship from the predictive information, this embodiment can improve positioning accuracy.
Referring to Fig. 7, yet another embodiment of the present application provides a positioning and tracking method that can be applied to a terminal device. Specifically, the method may include steps S310 to S340.
Step S310: use the inertial measurement unit to obtain the predicted position and attitude information of the terminal device relative to the marker at different moments, to obtain the predictive information for the different moments.
Step S320: when the first information is obtained at a first moment, update the predictive information of the first moment using the first information to obtain first predictive information, and reacquire the predictive information after the first moment based on the first predictive information.
Step S330: when the second information is obtained at a second moment, update the predictive information of the second moment using the second information to obtain second predictive information, and reacquire the predictive information after the second moment based on the second predictive information.
When the terminal device obtains the second information at the second moment, it can use the second information to update the predictive information of the second moment, where the second information of the second moment refers to the position and attitude information of the second image acquisition device within the target scene obtained from the second image captured at the second moment. The terminal device can update the predictive information with the second information to obtain the second predictive information, and reacquire the predictive information of each moment after the second moment based on the second predictive information. The second predictive information can refer to the updated predictive information of the second moment.
In one embodiment, as shown in Fig. 8, step S330 may include steps S331 to S333.
Step S331: obtain the second rigid-body relationship between the second image acquisition device and the inertial measurement unit.
The second rigid-body relationship between the second image acquisition device and the inertial measurement unit refers to the structural placement relationship between them; specifically, it may include the rotation and displacement between the second image acquisition device and the inertial measurement unit. In one embodiment, the second rigid-body relationship can be obtained by actual measurement or from structural design values. It reflects the rotation and translation required of the second image acquisition device relative to the inertial measurement unit (or of the inertial measurement unit relative to the second image acquisition device); this rotation and translation can express the rotation and displacement required to align the spatial coordinate system of the second image acquisition device with that of the inertial measurement unit. The spatial coordinate system of the second image acquisition device can be a three-dimensional coordinate system established at its center point, and the spatial coordinate system of the inertial measurement unit can be a three-dimensional coordinate system established at the center point of the inertial measurement unit; it should be understood that spatial coordinate systems are not limited to being established at center points.
Optionally, the second rigid-body relationship between the second image acquisition device and the inertial measurement unit may include the relative translation relationship and the rotation relationship between the second image acquisition device and the inertial measurement unit.
Step S332: perform a coordinate conversion on the second information of the second moment using the first rigid-body relationship between the first image acquisition device and the inertial measurement unit and the second rigid-body relationship, to obtain the intermediate position and attitude information of the terminal device relative to the marker.
When the first image acquisition device captures a first image containing the marker, the relative position and attitude information between the first image acquisition device and the marker can be obtained from the first image. From the first rigid-body relationship between the first image acquisition device and the inertial measurement unit and the second rigid-body relationship between the second image acquisition device and the inertial measurement unit, the third rigid-body relationship between the first image acquisition device and the second image acquisition device can be obtained. Using the third rigid-body relationship, a coordinate conversion is applied to the relative position and attitude information between the first image acquisition device and the marker, giving the relative position and attitude information between the second image acquisition device and the marker, which can serve as the initial position and attitude information of the second image acquisition device relative to the marker. When the second image acquisition device captures a second image, the position and attitude information of the second image acquisition device within the target scene — the second information — can be obtained from it. Based on the initial position and attitude information of the second image acquisition device relative to the marker, the second information can be converted into the relative position and attitude information between the second image acquisition device and the marker, and the relative relationship between the inertial measurement unit and the marker is then obtained from the second rigid-body relationship.
In addition, because the second image acquisition device and the inertial measurement unit are both arranged on the terminal device, their structural placement relationship can be obtained by actual measurement. Through the above computation, the relative relationship between the inertial measurement unit and the marker is available; combining that relationship with the second rigid-body relationship implements the coordinate conversion of the second information, which yields the intermediate position and attitude information of the terminal device relative to the marker.
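Using the same homogeneous-transform convention as the earlier sketch (T_ab means "frame b expressed in frame a"; the names are assumptions), the third rigid-body relationship and the initial camera-to-marker pose of step S332 follow by composition:

```python
# Sketch: derive the third rigid-body relationship and the second camera's
# initial pose relative to the marker.
import numpy as np

def third_rigid_body(T_imu_cam1, T_imu_cam2):
    """First camera expressed in the second camera's frame, via the IMU."""
    return np.linalg.inv(T_imu_cam2) @ T_imu_cam1        # T_cam2_cam1

def initial_cam2_marker(T_cam1_marker, T_imu_cam1, T_imu_cam2):
    """Initial position and attitude of the second camera w.r.t. the marker."""
    return third_rigid_body(T_imu_cam1, T_imu_cam2) @ T_cam1_marker
```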
Step S333: update the predictive information of the second moment using the intermediate position and attitude information of the terminal device relative to the marker, to obtain the second predictive information.
Using the inertial measurement unit, the predictive information of the terminal device relative to the marker at different moments can be obtained. After the terminal device has obtained the intermediate position and attitude information of the terminal device relative to the marker through the first rigid-body relationship, the second rigid-body relationship, and the second information, it can use that intermediate position and attitude information to update the predictive information of the second moment, obtaining the second predictive information. The intermediate position and attitude information can also be used to update the predictive information after the second moment.
In some embodiments, after the terminal device has obtained the second rigid-body relationship between the second image acquisition device and the inertial measurement unit, steps S334 to S336 shown in Fig. 9 may also be included.
Step S334: predict the position and attitude information of the second image acquisition device within the target scene using the second rigid-body relationship and the second predictive information, to obtain second attitude prediction information.
In one embodiment, the terminal device can use the second rigid-body relationship between the second image acquisition device and the inertial measurement unit together with the second predictive information to recompute the position and attitude information of the second image acquisition device within the target scene, obtaining the second attitude prediction information. Because the second predictive information is the predicted position and attitude information of the terminal device relative to the marker, and the second rigid-body relationship is the structural placement relationship between the second image acquisition device and the inertial measurement unit, the second attitude prediction information can be obtained through a coordinate conversion combining the second rigid-body relationship and the second predictive information.
Step S335: obtain the error between the second information of the second moment and the second attitude prediction information.
The second information is the position and attitude information of the second image acquisition device within the target scene, while the second attitude prediction information is the predicted position and attitude information of the second image acquisition device within the target scene; the error between these two pieces of information can therefore be obtained. In some embodiments, the second attitude prediction information is subtracted from the second information and the absolute value is taken, or the second information is subtracted from the second attitude prediction information and the absolute value is taken; either yields the error between the second information of the second moment and the second attitude prediction information.
Step S336: update the second rigid-body relationship according to the error.
As introduced above, the error between the second information of the second moment and the second attitude prediction information mainly refers to the error between the actually measured value and the predicted value of the position and attitude information of the second image acquisition device within the target scene, while the second rigid-body relationship refers to the structural placement relationship between the second image acquisition device and the inertial measurement unit. Therefore, the error between the second information and the second attitude prediction information can be used to update the second rigid-body relationship.
In some embodiments, the predictive information after the second moment can be reacquired based on the second predictive information. The second predictive information is obtained by updating the predictive information of the second moment with the second information of the second moment; after the second predictive information is obtained, the predictive information after the second moment can be reacquired based on it. Specifically, the second predictive information is integrated forward to obtain the predictive information after the second moment.
Step S340: take the predictive information of the current moment as the target information.
To further detail the process of obtaining the target information in the positioning and tracking method, an example diagram is given in Fig. 10. In Fig. 10, IMU refers to obtaining the predictive information of the terminal device relative to the marker using the inertial measurement unit, tag refers to capturing marker images with the first image acquisition device, and VIO refers to capturing, with the second image acquisition device, the visual images used by the VIO algorithm. In Fig. 10, a1, a2, a3, a4 are the predictive information of the inertial measurement unit at different moments; the predictive information of a later moment can be obtained by integrating from the predictive information of the previous moment, where the integral can refer to the integration of acceleration, attitude angle, and so on in the inertial measurement unit.
After the first information is obtained from the first image captured by the first image acquisition device, at moment T1 the first information can be converted, via the rigid-body relationship between the first image acquisition device and the inertial measurement unit, into the position b1 of the inertial measurement unit relative to the marker, and b1 is used to update the predictive information of the inertial measurement unit collected at moment T1, obtaining a1'. The inertial measurement unit can then integrate forward from the updated a1' to obtain the pose a2' at moment T2, the pose a3' at moment T3, the pose a4' at moment T4, and so on. In addition, from the second image captured by the second image acquisition device, the second information of the second image acquisition device in the target scene is available; using the second rigid-body relationship between the second image acquisition device and the inertial measurement unit, the position and attitude information c1 of the terminal device relative to the marker at moment T2 is obtained, and the predictive information a2' of moment T2 is updated with c1 to obtain the pose a2^ at moment T2, the pose a3^ at moment T3, the pose a4^ at moment T4, and so on.
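To make this timeline concrete, the toy replay below collapses poses to single numbers, purely as an illustration of the schedule (the values, gain, and tick size are made up): the IMU integrates forward each tick, the tag observation b1 corrects the prediction at T1, and the VIO observation c1 corrects it again at T2.

```python
# Toy replay of the Fig. 10 timeline with scalar stand-ins for poses.
def integrate(prev_pose, imu_delta):
    return prev_pose + imu_delta                      # stands in for the integral

def correct(predicted, measured, gain=0.7):
    return predicted + gain * (measured - predicted)  # stands in for the update

b1, c1 = 0.14, 0.22            # tag-based and VIO-based measurements
a1 = integrate(0.0, 0.10)      # prediction at T1
a1_u = correct(a1, b1)         # a1': tag update at T1
a2_u = integrate(a1_u, 0.10)   # a2': re-predicted pose at T2
a2_c = correct(a2_u, c1)       # a2^: VIO update at T2
a3_c = integrate(a2_c, 0.10)   # a3^: propagated to T3
print(a1, a1_u, a2_u, a2_c, a3_c)
```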
The positioning and tracking method provided by this embodiment updates the predictive information at the target time points through the first and second rigid-body relationships of the first and second image acquisition devices relative to the inertial measurement unit, which further ensures the accuracy of the position and attitude information of the terminal device relative to the marker.
Referring to Fig. 11, a further embodiment of the present application provides a positioning and tracking method that can be applied to a terminal device. The terminal device further includes a microprocessor and a processor; the first image acquisition device is connected to the microprocessor, and the second image acquisition device is connected to the processor. Specifically, the method may include steps S410 to S460.
Step S410: obtain, from a first image that contains a marker and is captured by the first image acquisition device, the relative position and attitude information between the first image acquisition device and the marker, to obtain first information.
Step S420: obtain, from a second image that contains the target scene and is captured by the second image acquisition device, the position and attitude information of the second image acquisition device within the target scene, to obtain second information, where the marker and the terminal device are located in the target scene.
Step S430: use the inertial measurement unit to obtain the predicted position and attitude information of the terminal device relative to the marker at different moments, to obtain the predictive information for the different moments.
Step S440: when the first information is obtained at a first moment, update the predictive information of the first moment using the first information to obtain first predictive information, and reacquire the predictive information after the first moment based on the first predictive information.
Referring to Fig. 12, step S440 may include steps S441 to S444.
Step S441: obtain multiple interrupt moments through the processor, an interrupt moment being the moment at which the first image acquisition device sends an interrupt signal to the processor.
In one embodiment, the terminal device can be provided not only with the first image acquisition device, the second image acquisition device, and the inertial measurement unit, but also with a processor and a microprocessor, where the connection relationships among the first image acquisition device, the second image acquisition device, the microprocessor, the processor, and other components are shown in Fig. 13. In Fig. 13, 401 denotes the first image acquisition device and 402 denotes the second image acquisition device. As can be seen from Fig. 13, the first image acquisition device 401 is connected to the microprocessor, the second image acquisition device 402 is connected to the processor, and the inertial measurement unit is also connected to the processor.
When the first image collecting device 401 collects a first image including the marker, it can send an interrupt signal to the processor, for example a GPIO (General-Purpose Input/Output) interrupt signal. The moment at which the processor receives the interrupt signal can be called an interruption moment T1, and the interruption moment T1 can be stored. The first image collecting device 401 may expose multiple times when collecting the first image, and each exposure corresponds to one interruption; because obtaining the marker image is a process of continuously collecting multiple frames, and collecting each frame generates an interruption, the processor can obtain multiple interruption moments T1. For example, the interruption moments at which the processor receives the interrupt signals are T11, T12, T13, T14, ... respectively.
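As a small illustration of this bookkeeping, the sketch below records one interruption moment per exposure; the interrupt registration call is hypothetical, standing in for whatever GPIO/interrupt API the actual platform provides.

```python
import time

interrupt_moments = []  # stores the interruption moments T11, T12, T13, ...

def on_camera_interrupt():
    """Invoked once per exposure of the first image collecting device,
    i.e. once per GPIO interrupt received by the processor."""
    interrupt_moments.append(time.monotonic())

# register_gpio_irq(pin=..., callback=on_camera_interrupt)  # platform-specific
```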
Step S442: obtain a time of reception through the processor, where the time of reception is the moment at which the processor receives the first image sent by the microprocessor.
After collecting the first image, the first image collecting device 401 can transmit the first image to the microprocessor; the microprocessor, upon receiving the first image, can send it to the processor, and the processor can then record the moment at which it receives the first image, which can be called the time of reception. The terminal device can use the processor to obtain the time of reception, and to store and process it.
Step S443: determine the first moment using the time of reception and the multiple interruption moments.
In some embodiments, as shown in Figure 14, determining the first moment using the time of reception and the multiple interruption moments may include steps S4431 to S4434.
Step S4431: obtain the delay duration from the moment the first image collecting device collects the first image to the moment the processor receives the first image, where the delay duration is the sum of the handling duration and the transmission duration of the first image.
There is a certain delay duration ΔT in the process of transmitting the first image from the first image collecting device 401 to the processor, and this delay duration may include the handling duration t1 and the transmission duration t2 of the first image. The handling duration t1 refers to the time consumed by the microprocessor to process the first image; in one embodiment, the handling duration t1 is related to the frame rate of the image sensor in the terminal device: the higher the frame rate of the image sensor, the shorter the handling duration t1 of the first image. The transmission duration t2 refers to the time required to transmit the first image from the microprocessor to the processor. In this embodiment, the delay duration ΔT can be the sum of the handling duration t1 and the transmission duration t2, i.e. ΔT = t1 + t2.
Step S4432: obtain the time of exposure of the first image using the time of reception and the delay duration.
The processor can use the time of reception T2 and the delay duration ΔT it has obtained to compute the time of exposure T3 of the first image; the time of exposure T3 can also be called the theoretical time of exposure of the first image. In one embodiment, the theoretical time of exposure of the first image can be obtained by subtracting the delay duration ΔT from the time of reception T2, i.e. the theoretical time of exposure T3 of the first image can be the difference between the time of reception T2 and the delay duration ΔT: T3 = T2 − ΔT.
Step S4433: calculate the difference between the time of exposure and each interruption moment, and judge whether the difference is less than a preset threshold.
As introduced above, the first image collecting device 401 collects the first image through a process of continuously collecting multiple frames, so the processor can receive and store multiple interruption moments, which can be denoted T11, T12, T13, T14, .... After obtaining these interruption moments, the processor can calculate the difference Δt between the time of exposure and each interruption moment; these differences are Δt1, Δt2, Δt3, Δt4, ... respectively, where Δt1 = T3 − T11, Δt2 = T3 − T12, Δt3 = T3 − T13, Δt4 = T3 − T14, and so on. After obtaining the difference between each interruption moment and the time of exposure, the processor can judge whether each difference Δt is less than the preset threshold Th, that is, judge whether Δt1 is less than Th, whether Δt2 is less than Th, whether Δt3 is less than Th, whether Δt4 is less than Th, and so on; if a difference Δt is less than the preset threshold Th, proceed to step S4434.
Step S4434: if it is less, take the interruption moment whose difference is less than the preset threshold as the first moment.
When the difference between the time of exposure T3 and an interruption moment T1 is less than the preset threshold Th, the interruption moment T1 whose difference is less than Th is taken as the first moment; that is, if one of Δt1, Δt2, Δt3, Δt4, ... is less than the preset threshold Th, the corresponding interruption moment is taken as the first moment. In some embodiments, when more than one interruption moment satisfies the condition that its difference is less than the preset threshold, since the actual delay duration may be longer than the theoretical delay duration, it can further be judged whether each interruption moment whose difference from the time of exposure is less than the preset threshold is less than the time of exposure T3. For example, suppose the time of reception T2 at which the processor receives the first image is 100 and the obtained delay duration ΔT is 30; then the time of exposure of the first image is T3 = T2 − ΔT = 70. The interruption moments T11, T12, T13, T14 and T15 recorded by the processor are 20, 40, 60, 80 and 100 respectively, and the differences between these interruption moments and the time of exposure T3 can be calculated as 50, 30, 10, 10 and 30 respectively. With the threshold set to Th = 15, the interruption moments that satisfy the condition are T13 and T14; the time of exposure T3 can then be further compared with T13 and T14, and the interruption moment T13, which is less than or equal to T3, is chosen. T13 can therefore be taken as the first moment, i.e. the first moment is 60.
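Steps S4431 to S4434 can be summarized in the short Python sketch below, checked against the worked example above. The tie-breaking rule when several candidate interruption moments are not greater than T3 (taking the latest of them) is an assumption of this sketch; the text only exercises the case of a single such candidate.

```python
def find_first_moment(t_reception, delay, interrupt_moments, threshold):
    """Match the first image to the interruption moment of its exposure.

    t_reception: moment T2 at which the processor received the first image
    delay:       delay duration dT (handling duration + transmission duration)
    """
    t_exposure = t_reception - delay  # theoretical time of exposure T3
    # Interruption moments whose difference from T3 is below the threshold.
    candidates = [t for t in interrupt_moments
                  if abs(t_exposure - t) < threshold]
    if not candidates:
        return None
    # The actual delay may exceed the theoretical one, so the true exposure
    # happened no later than T3: prefer candidates not greater than T3.
    earlier = [t for t in candidates if t <= t_exposure]
    return max(earlier) if earlier else min(candidates)

# Worked example from the text: T2 = 100, dT = 30, so T3 = 70; interruption
# moments 20, 40, 60, 80, 100 give differences 50, 30, 10, 10, 30; with
# Th = 15 the candidates are 60 and 80, and 60 (not greater than T3) wins.
assert find_first_moment(100, 30, [20, 40, 60, 80, 100], 15) == 60
```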
Step S444: obtain the prediction information at the first moment, and update the prediction information at the first moment using the first information at the first moment.
Step S450: when the second information is obtained at a second moment, update the prediction information at the second moment using the second information to obtain second prediction information, and reacquire the prediction information after the second moment based on the second prediction information.
Step S460: take the prediction information at the current moment as the target information.
In some embodiments, the terminal device can also send a timing synchronization instruction from the processor to the microprocessor. The timing synchronization instruction includes the clock time of the processor, and instructs the microprocessor to adjust the clock time of the microprocessor according to the clock time of the processor.
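A minimal sketch of the microprocessor's side of this synchronization is shown below; the message transport is an assumption, and a real scheme would additionally compensate for the transmission latency of the instruction itself, as NTP-style protocols do.

```python
import time

class Microprocessor:
    """Sketch: adjust the local clock to the processor's clock time carried
    in a timing synchronization instruction."""

    def __init__(self):
        self.clock_offset = 0.0  # correction applied to the local clock

    def local_clock(self):
        return time.monotonic() + self.clock_offset

    def on_time_sync(self, processor_clock):
        # Shift the local clock so that it reads the processor's time.
        self.clock_offset += processor_clock - self.local_clock()
```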
The positioning and tracking method provided by the embodiments of the present application can use the data transmission relationships among the first image collecting device, the second image collecting device, the Inertial Measurement Unit, the microprocessor and the processor to realize data synchronization between the microprocessor and the processor, which can reduce, to a certain extent, the error introduced into the positioning and tracking method by data transmission.
Referring to Figure 15, which illustrates a module block diagram of a positioning and tracking device 500 provided by the embodiments of the present application, the positioning and tracking device 500 is applied to a terminal device. Described with reference to the module block diagram shown in Figure 15, the positioning and tracking device 500 may include: a first information obtaining module 510, a second information obtaining module 520 and a target information obtaining module 530.
The first information obtaining module 510 is configured to, according to the first image including the marker collected by the first image collecting device, obtain the relative position and posture information between the first image collecting device and the marker, to obtain the first information.
The second information obtaining module 520 is configured to, according to the second image including the target scene collected by the second image collecting device, obtain the position and posture information of the second image collecting device in the target scene, to obtain the second information, wherein the marker and the terminal device are located in the target scene.
The target information obtaining module 530 is configured to obtain the position and posture information of the terminal device relative to the marker using the first information and the second information, to obtain the target information.
Further, as shown in Figure 16, the target information obtaining module 530 may include a prediction information acquiring unit 531, a first updating unit 532, a second updating unit 533 and an information acquiring unit 534.
The prediction information acquiring unit 531 is configured to obtain, using the Inertial Measurement Unit, the predicted position and posture information of the terminal device relative to the marker at different moments, to obtain the prediction information at different moments.
The first updating unit 532 is configured to, when the first information is obtained at the first moment, update the prediction information at the first moment using the first information to obtain the first prediction information, and reacquire the prediction information after the first moment based on the first prediction information.
Further, the first updating unit 532 can be configured to obtain the first rigid body relationship between the first image collecting device and the Inertial Measurement Unit, obtain the position and posture information of the Inertial Measurement Unit relative to the marker according to the first information at the first moment and the first rigid body relationship, and update the prediction information at the first moment using the position and posture information of the Inertial Measurement Unit relative to the marker, to obtain the first prediction information.
Further, the first updating unit 532 can also be configured to predict the relative position and posture information between the first image collecting device and the marker using the first rigid body relationship and the first prediction information, to obtain first attitude prediction information, obtain the error between the first information at the first moment and the first attitude prediction information, and update the first rigid body relationship according to the error.
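The error-driven refinement of the first rigid body relationship can be sketched in the same scalar style as before; the damping gain and the sign convention (IMU pose = camera pose + offset) are assumptions of this sketch, not prescribed by the embodiment.

```python
RIGID_GAIN = 0.1  # illustrative step size for the rigid body correction

def refine_rigid_body(rigid_offset, prediction_imu, observed_cam_marker):
    """Update the camera-IMU rigid body offset from the pose error between
    the observed first information and the first attitude prediction."""
    predicted_cam_marker = prediction_imu - rigid_offset  # attitude prediction
    error = observed_cam_marker - predicted_cam_marker
    # Move the offset so the predicted camera-marker pose tracks the
    # observation; the gain damps noise in any single observation.
    return rigid_offset - RIGID_GAIN * error
```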
Further, the first updating unit 532 can also be configured to obtain multiple interruption moments through the processor, where an interruption moment is a moment at which the first image collecting device sends an interrupt signal to the processor; obtain the time of reception through the processor, where the time of reception is the moment at which the processor receives the first image sent by the microprocessor; determine the first moment using the time of reception and the multiple interruption moments; obtain the prediction information at the first moment; and update the prediction information at the first moment using the first information at the first moment.
Further, the first updating unit 532 can also be configured to obtain the delay duration from the moment the first image collecting device collects the first image to the moment the processor receives the first image, where the delay duration is the sum of the handling duration and the transmission duration of the first image; obtain the time of exposure of the first image using the time of reception and the delay duration; calculate the difference between the time of exposure and each interruption moment, and judge whether the difference is less than the preset threshold; and if it is less, take the interruption moment whose difference is less than the preset threshold as the first moment.
Further, the first updating unit 532 can also send a timing synchronization instruction from the processor to the microprocessor, where the timing synchronization instruction includes the clock time of the processor and instructs the microprocessor to adjust the clock time of the microprocessor according to the clock time of the processor.
The second updating unit 533 is configured to, when the second information is obtained at the second moment, update the prediction information at the second moment using the second information to obtain the second prediction information, and reacquire the prediction information after the second moment based on the second prediction information.
Further, the second updating unit 533 can be configured to obtain the second rigid body relationship between the second image collecting device and the Inertial Measurement Unit, perform coordinate conversion on the second information at the second moment using the first rigid body relationship between the first image collecting device and the Inertial Measurement Unit together with the second rigid body relationship, to obtain the intermediate position and posture information of the terminal device relative to the marker, and update the prediction information at the second moment using the intermediate position and posture information of the terminal device relative to the marker, to obtain the second prediction information.
Further, the second updating unit 533 can also be configured to predict the position and posture information of the second image collecting device in the target scene using the second rigid body relationship and the second prediction information, to obtain second attitude prediction information, obtain the error between the second information at the second moment and the second attitude prediction information, and update the second rigid body relationship according to the error.
The information acquiring unit 534 is configured to take the prediction information at the current moment as the target information.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or of another form.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The above integrated module can be implemented either in the form of hardware or in the form of a software function module.
Referring to Figure 17, which illustrates a structural block diagram of a terminal device provided by the embodiments of the present application, the terminal device 600 can be a smart phone, a tablet computer, an e-book reader or another terminal device capable of running application programs. The terminal device 600 in the present application may include one or more of the following components: a processor 610, a memory 620, an image collecting device 630, an Inertial Measurement Unit 640, and one or more application programs, where the one or more application programs can be stored in the memory 620 and configured to be executed by the one or more processors 610, and the one or more programs are configured to perform the methods described in the foregoing method embodiments.
The processor 610 may include one or more processing cores. The processor 610 uses various interfaces and lines to connect the various parts of the entire terminal device 600, and performs the various functions of the terminal device 600 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 620 and by calling the data stored in the memory 620. Optionally, the processor 610 can be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 610 can integrate one of, or a combination of, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem and the like, where the CPU mainly handles the operating system, the user interface, the application programs and the like; the GPU is responsible for rendering and drawing the display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 610, and may instead be implemented separately through a communication chip.
The memory 620 may include Random Access Memory (RAM), and may also include Read-Only Memory (ROM). The memory 620 can be used to store instructions, programs, code, code sets or instruction sets. The memory 620 may include a program storage area and a data storage area, where the program storage area can store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the following method embodiments, and the like, and the data storage area can store data created by the terminal device 600 in use, and the like.
In the embodiments of the present application, the image collecting device 630 is used to collect the image of the marker and the scene image of the target scene. The image collecting device 630 can be an infrared camera or a color camera; the specific camera type is not limited in the embodiments of the present application. The Inertial Measurement Unit 640 is used to obtain the position and posture information of the terminal device in real time, so as to obtain the six-degree-of-freedom information, that is, the pose change information, of the terminal device.
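For orientation, the following is a bare-bones sketch of how such a unit yields six-degree-of-freedom pose changes by integration: gyroscope rates accumulate into the three rotational degrees of freedom, and gravity-compensated accelerometer readings integrate twice into the three translational ones. Bias estimation and proper quaternion handling are omitted; the Euler-angle orientation and world-frame acceleration are simplifying assumptions of this sketch.

```python
import numpy as np

def integrate_imu(position, velocity, orientation, accel, gyro, dt):
    """One dead-reckoning step over a sample interval dt."""
    orientation = orientation + gyro * dt  # 3 rotational DoF
    velocity = velocity + accel * dt       # accel assumed world-frame, no g
    position = position + velocity * dt    # 3 translational DoF
    return position, velocity, orientation

pos, vel, ori = np.zeros(3), np.zeros(3), np.zeros(3)
pos, vel, ori = integrate_imu(pos, vel, ori,
                              accel=np.array([0.0, 0.0, 0.1]),
                              gyro=np.array([0.0, 0.01, 0.0]),
                              dt=0.005)
```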
Referring to Figure 18, which illustrates a structural block diagram of a computer-readable storage medium provided by the embodiments of the present application, the computer-readable storage medium 700 stores program code, and the program code can be called by a processor to execute the methods described in the above method embodiments.
The computer-readable storage medium 700 can be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk or a ROM. Optionally, the computer-readable storage medium 700 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 700 has storage space for the program code 710 that performs any of the method steps in the above methods. The program code can be read from, or written into, one or more computer program products. The program code 710 can, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (12)
1. A positioning and tracking method, applied to a terminal device, the terminal device comprising a first image collecting device and a second image collecting device, the method comprising:
according to a first image including a marker collected by the first image collecting device, obtaining relative position and posture information between the first image collecting device and the marker, to obtain first information;
according to a second image including a target scene collected by the second image collecting device, obtaining position and posture information of the second image collecting device in the target scene, to obtain second information, wherein the marker and the terminal device are located in the target scene; and
obtaining position and posture information of the terminal device relative to the marker using the first information and the second information, to obtain target information.
2. The method according to claim 1, wherein the terminal device further comprises an Inertial Measurement Unit, and the obtaining position and posture information of the terminal device relative to the marker using the first information and the second information, to obtain target information, comprises:
obtaining, using the Inertial Measurement Unit, predicted position and posture information of the terminal device relative to the marker at different moments, to obtain prediction information at the different moments;
when the first information is obtained at a first moment, updating the prediction information at the first moment using the first information, to obtain first prediction information, and reacquiring the prediction information after the first moment based on the first prediction information;
when the second information is obtained at a second moment, updating the prediction information at the second moment using the second information, to obtain second prediction information, and reacquiring the prediction information after the second moment based on the second prediction information; and
taking the prediction information at a current moment as the target information.
3. The method according to claim 2, wherein the updating the prediction information at the first moment using the first information when the first information is obtained at the first moment, to obtain the first prediction information, comprises:
obtaining a first rigid body relationship between the first image collecting device and the Inertial Measurement Unit;
obtaining position and posture information of the Inertial Measurement Unit relative to the marker according to the first information at the first moment and the first rigid body relationship; and
updating the prediction information at the first moment using the position and posture information of the Inertial Measurement Unit relative to the marker, to obtain the first prediction information.
4. The method according to claim 3, wherein after the obtaining the first prediction information, the method comprises:
predicting relative position and posture information between the first image collecting device and the marker using the first rigid body relationship and the first prediction information, to obtain first attitude prediction information;
obtaining an error between the first information at the first moment and the first attitude prediction information; and
updating the first rigid body relationship according to the error.
5. The method according to claim 2, wherein the updating the prediction information at the second moment using the second information when the second information is obtained at the second moment, to obtain the second prediction information, comprises:
obtaining a second rigid body relationship between the second image collecting device and the Inertial Measurement Unit;
performing coordinate conversion on the second information at the second moment using a first rigid body relationship between the first image collecting device and the Inertial Measurement Unit and the second rigid body relationship, to obtain intermediate position and posture information of the terminal device relative to the marker; and
updating the prediction information at the second moment using the intermediate position and posture information of the terminal device relative to the marker, to obtain the second prediction information.
6. The method according to claim 5, wherein after the obtaining the second prediction information, the method further comprises:
predicting position and posture information of the second image collecting device in the target scene using the second rigid body relationship and the second prediction information, to obtain second attitude prediction information;
obtaining an error between the second information at the second moment and the second attitude prediction information; and
updating the second rigid body relationship according to the error.
7. The method according to claim 2, wherein the terminal device further comprises a microprocessor and a processor, the first image collecting device is connected to the microprocessor, and the second image collecting device is connected to the processor; and
the updating the prediction information at the first moment using the first information when the first information is obtained at the first moment comprises:
obtaining multiple interruption moments through the processor, wherein an interruption moment is a moment at which the first image collecting device sends an interrupt signal to the processor;
obtaining a time of reception through the processor, wherein the time of reception is a moment at which the processor receives the first image sent by the microprocessor;
determining the first moment using the time of reception and the multiple interruption moments; and
obtaining the prediction information at the first moment, and updating the prediction information at the first moment using the first information at the first moment.
8. The method according to claim 7, wherein the determining the first moment using the time of reception and the multiple interruption moments comprises:
obtaining a delay duration from a moment at which the first image collecting device collects the first image to a moment at which the processor receives the first image, wherein the delay duration is a sum of a handling duration and a transmission duration of the first image;
obtaining a time of exposure of the first image using the time of reception and the delay duration;
calculating a difference between the time of exposure and each interruption moment, and judging whether the difference is less than a preset threshold; and
if it is less, taking the interruption moment whose difference is less than the preset threshold as the first moment.
9. The method according to claim 7, further comprising:
sending a timing synchronization instruction from the processor to the microprocessor, wherein the timing synchronization instruction includes a clock time of the processor, and the timing synchronization instruction instructs the microprocessor to adjust a clock time of the microprocessor according to the clock time of the processor.
10. A positioning and tracking device, applied to a terminal device, the terminal device comprising a first image collecting device and a second image collecting device, the device comprising:
a first information obtaining module, configured to, according to a first image including a marker collected by the first image collecting device, obtain relative position and posture information between the first image collecting device and the marker, to obtain first information;
a second information obtaining module, configured to, according to a second image including a target scene collected by the second image collecting device, obtain position and posture information of the second image collecting device in the target scene, to obtain second information, wherein the marker and the terminal device are located in the target scene; and
a target information obtaining module, configured to obtain position and posture information of the terminal device relative to the marker using the first information and the second information, to obtain target information.
11. A terminal device, comprising:
one or more processors;
a memory;
an image collecting device;
an Inertial Measurement Unit; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the method according to any one of claims 1-9.
12. A computer-readable storage medium, wherein program code is stored in the computer-readable storage medium, and the program code can be called by a processor to execute the method according to any one of claims 1-9.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910642093.0A CN110442235B (en) | 2019-07-16 | 2019-07-16 | Positioning tracking method, device, terminal equipment and computer readable storage medium |
PCT/CN2019/098200 WO2020024909A1 (en) | 2018-08-02 | 2019-07-29 | Positioning and tracking method, terminal device, and computer readable storage medium |
US16/687,699 US11127156B2 (en) | 2018-08-02 | 2019-11-19 | Method of device tracking, terminal device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910642093.0A CN110442235B (en) | 2019-07-16 | 2019-07-16 | Positioning tracking method, device, terminal equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110442235A true CN110442235A (en) | 2019-11-12 |
CN110442235B CN110442235B (en) | 2023-05-23 |
Family
ID=68430545
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---|
CN201910642093.0A Active CN110442235B (en) | 2018-08-02 | 2019-07-16 | Positioning tracking method, device, terminal equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110442235B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090262977A1 (en) * | 2008-04-18 | 2009-10-22 | Cheng-Ming Huang | Visual tracking system and method thereof |
CN107113415A (en) * | 2015-01-20 | 2017-08-29 | 高通股份有限公司 | The method and apparatus for obtaining and merging for many technology depth maps |
CN105892638A (en) * | 2015-12-01 | 2016-08-24 | 乐视致新电子科技(天津)有限公司 | Virtual reality interaction method, device and system |
CN105445937A (en) * | 2015-12-27 | 2016-03-30 | 深圳游视虚拟现实技术有限公司 | Mark point-based multi-target real-time positioning and tracking device, method and system |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115039015A (en) * | 2020-02-19 | 2022-09-09 | Oppo广东移动通信有限公司 | Pose tracking method, wearable device, mobile device and storage medium |
CN111935644A (en) * | 2020-08-10 | 2020-11-13 | 腾讯科技(深圳)有限公司 | Positioning method and device based on fusion information and terminal equipment |
CN112788583A (en) * | 2020-12-25 | 2021-05-11 | 深圳酷派技术有限公司 | Equipment searching method and device, storage medium and electronic equipment |
CN112788583B (en) * | 2020-12-25 | 2024-01-05 | 深圳酷派技术有限公司 | Equipment searching method and device, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110442235B (en) | 2023-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10225506B2 (en) | Information processing apparatus and information processing method | |
CN111091587B (en) | Low-cost motion capture method based on visual markers | |
JP6635037B2 (en) | Information processing apparatus, information processing method, and program | |
JP5920352B2 (en) | Information processing apparatus, information processing method, and program | |
CN110442235A (en) | Positioning and tracing method, device, terminal device and computer-readable storage medium | |
Bostanci et al. | User tracking methods for augmented reality | |
KR101768958B1 (en) | Hybird motion capture system for manufacturing high quality contents | |
KR102198851B1 (en) | Method for generating three dimensional model data of an object | |
EP1437645A2 (en) | Position/orientation measurement method, and position/orientation measurement apparatus | |
CN206961066U (en) | A kind of virtual reality interactive device | |
WO2016131279A1 (en) | Movement track recording method and user equipment | |
CN111198608A (en) | Information prompting method and device, terminal equipment and computer readable storage medium | |
CN108154533A (en) | A kind of position and attitude determines method, apparatus and electronic equipment | |
US11127156B2 (en) | Method of device tracking, terminal device, and storage medium | |
CN110456905A (en) | Positioning and tracing method, device, system and electronic equipment | |
CN110327048A (en) | A kind of human upper limb posture reconstruction system based on wearable inertial sensor | |
CN111609868A (en) | Visual inertial odometer method based on improved optical flow method | |
WO2015093130A1 (en) | Information processing device, information processing method, and program | |
WO2020149270A1 (en) | Method for generating 3d object arranged in augmented reality space | |
JP2007098555A (en) | Position indicating method, indicator and program for achieving the method | |
CN108427479A (en) | Wearable device, the processing system of ambient image data, method and readable medium | |
KR102190743B1 (en) | AUGMENTED REALITY SERVICE PROVIDING APPARATUS INTERACTING WITH ROBOT and METHOD OF THEREOF | |
CN108882156A (en) | A kind of method and device for calibrating locating base station coordinate system | |
CN111862150A (en) | Image tracking method and device, AR device and computer device | |
CN110688002B (en) | Virtual content adjusting method, device, terminal equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||