CN115328299A - Pose determination method and device, computer equipment and storage medium - Google Patents

Pose determination method and device, computer equipment and storage medium

Info

Publication number
CN115328299A
Authority
CN
China
Prior art keywords
pose
preset
model
target object
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210803195.8A
Other languages
Chinese (zh)
Inventor
胡永涛
戴景文
贺杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd filed Critical Guangdong Virtual Reality Technology Co Ltd
Priority to CN202210803195.8A priority Critical patent/CN115328299A/en
Publication of CN115328299A publication Critical patent/CN115328299A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/18 - Stabilised platforms, e.g. by gyroscope
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 - Indexing scheme relating to G06F3/01
    • G06F2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The application discloses a pose determination method and apparatus, a computer device, and a storage medium, relating to the technical field of positioning. The method comprises: acquiring pose data detected by a plurality of IMUs on a target object; acquiring, from a plurality of preset pose calculation models, the preset pose calculation model matched with the target object as a target model; and calculating the pose data using the target model to obtain pose information of the target object. Because the pose data detected by the plurality of IMUs are calculated with a preset pose calculation model matched with the target object, the obtained pose information of the target object is more accurate and of higher precision. Meanwhile, using the preset pose calculation model to calculate the pose data greatly increases the pose calculation speed, reduces the waiting time of pose calculation processing, and ensures the timeliness and accuracy of positioning the target object.

Description

Pose determination method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of positioning technologies, and in particular, to a pose determination method, a pose determination apparatus, a computer device, and a storage medium.
Background
With the continuous development of science and technology, more and more fields involve the motion tracking of objects; for example, position tracking technology is applied in Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). Generally, an Inertial Measurement Unit (IMU) is mounted on an object, and changes in the object's position and attitude are then calculated from the data detected by that IMU to implement motion tracking.
In the related art, to improve the reliability of IMU-based motion tracking of an object, drift is compensated by weighted statistical calculation over a plurality of sensors. However, such a multi-sensor system requires a very large number of sensors to significantly reduce overall drift, which increases the cost of tracking an object and also significantly lengthens the time needed to process the sensor data, in turn reducing the timeliness and accuracy of motion tracking.
Disclosure of Invention
In view of this, the present application provides a pose determination method, an apparatus, a computer device, and a storage medium.
In a first aspect, an embodiment of the present application provides a pose determination method. The method includes: acquiring pose data detected by a plurality of Inertial Measurement Units (IMUs) on a target object; acquiring, from a plurality of preset pose calculation models, the preset pose calculation model corresponding to a preset object matched with the target object as a target model, where each preset pose calculation model is obtained by pre-training an initial model on a pose training sample set of a preset object, the pose training sample set including preset pose data detected in advance by a plurality of IMUs on the preset object, the relative position of each IMU with respect to the center position of the preset object, and preset pose information of the center position of the preset object; and calculating the pose data using the target model to obtain pose information of the target object.
In a second aspect, an embodiment of the present application provides a pose determination apparatus. The apparatus includes a data acquisition module, a model acquisition module, and a pose determination module. The data acquisition module is configured to acquire pose data detected by a plurality of IMUs on a target object. The model acquisition module is configured to acquire, from a plurality of preset pose calculation models, the preset pose calculation model corresponding to a preset object matched with the target object as a target model, where each preset pose calculation model is obtained by pre-training an initial model on a pose training sample set of a preset object, the pose training sample set including preset pose data detected in advance by a plurality of IMUs on the preset object, the relative position of each IMU with respect to the center position of the preset object, and preset pose information of the center position of the preset object. The pose determination module is configured to calculate the pose data using the target model to obtain pose information of the target object.
In a third aspect, an embodiment of the present application provides a computer device, including: one or more processors; a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the pose determination method provided by the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code that can be called by a processor to execute the pose determination method provided in the first aspect.
In the scheme provided by the application, pose data detected by a plurality of IMUs on a target object are acquired; a preset pose calculation model matched with the target object is acquired from a plurality of preset pose calculation models as a target model; and the pose data are calculated using the target model to obtain the pose information of the target object. Because the pose data detected by the plurality of IMUs are calculated with the preset pose calculation model matched with the target object, the obtained pose information of the target object is more accurate and of higher precision. Meanwhile, using the preset pose calculation model to calculate the pose data greatly increases the pose calculation speed, reduces the waiting time of pose calculation processing, and ensures the timeliness and accuracy of positioning the target object.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 shows a schematic diagram of an application scenario in which an embodiment of the present application provides a pose determination method.
Fig. 2 shows a schematic flowchart of a pose determination method provided in an embodiment of the present application.
Fig. 3 shows a schematic flowchart of a pose determination method according to another embodiment of the present application.
Fig. 4 shows a flow diagram of the substeps of step S330 in fig. 3.
Fig. 5 shows a schematic flowchart of a pose determination method according to still another embodiment of the present application.
Fig. 6 is a block diagram of a pose determination apparatus provided according to an embodiment of the present application.
Fig. 7 is a block diagram of a computer device for executing a pose determination method according to an embodiment of the present application.
Fig. 8 is a storage unit according to an embodiment of the present application, configured to store or carry program code for implementing a pose determination method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
With the continuous development of science and technology, more and more fields involve the motion tracking of objects; for example, position tracking technology is applied in Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). Generally, an Inertial Measurement Unit (IMU) is mounted on an object, and changes in the object's position and attitude are then calculated from the data detected by that IMU to implement motion tracking.
In the related art, to improve the reliability of IMU-based motion tracking of an object, drift is compensated by weighted statistical calculation over a plurality of sensors. However, such a multi-sensor system requires a very large number of sensors to significantly reduce overall drift, which increases the cost of tracking an object and also significantly lengthens the time needed to process the sensor data, in turn reducing the timeliness and accuracy of motion tracking.
In view of the above problems, the inventors propose a pose determination method, a pose determination apparatus, a computer device, and a storage medium, which can calculate pose data detected by a plurality of IMUs by using a preset pose calculation model matched with a target object to obtain pose information of the target object. This is described in detail below.
An application environment of the pose determination method provided by the embodiment of the present application is described below.
Referring to fig. 1, fig. 1 shows a schematic diagram of an application scenario of the pose determination method provided by an embodiment of the present application. The application scenario includes a pose determination system 10, which comprises a target object 101 and a plurality of IMUs 102. The target object 101 may be a VR head-mounted display device, an AR head-mounted display device, an MR head-mounted display device, a VR controller, an AR controller, an MR controller, a smart phone, a smart watch, a smart bracelet, a vehicle, or the like, or any other object to be tracked, for example a moving object of any shape such as a sphere, a triangle, or a rectangle; this embodiment is not limited thereto. The plurality of IMUs 102 are respectively disposed on the surface of the target object 101; in other embodiments, the plurality of IMUs 102 may be disposed at any position of the target object 101.
In some embodiments, the target object 101 may acquire the pose data detected by the plurality of IMUs 102 on the target object 101, acquire, from a plurality of preset pose calculation models, the preset pose calculation model matched with the target object 101 as a target model, and calculate the pose data using the target model to obtain pose information of the target object 101.
In other embodiments, the pose determination system 10 may further include a server. The server is configured to obtain the pose data detected by the plurality of IMUs 102 on the target object 101, obtain, from the plurality of preset pose calculation models, the preset pose calculation model matched with the target object 101 as the target model, and calculate the pose data using the target model to obtain the pose information of the target object 101. The pose data may be sent directly to the server by the plurality of IMUs 102, or may be sent to the target object 101 by the plurality of IMUs 102 and then forwarded to the server by the target object 101; this embodiment is not limited thereto. The server includes, but is not limited to, an individual server, a server cluster, a local server, a cloud server, and the like.
Referring to fig. 2, fig. 2 shows a schematic flowchart of a pose determination method provided by an embodiment of the present application. The pose determination method provided by the embodiment of the present application will be described in detail below with reference to fig. 2. The pose determination method may include the following steps:
step S210: pose data detected by a plurality of IMUs on the target object are acquired.
In this embodiment, the target object is the object to be tracked and located. An IMU generally includes three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect acceleration signals of the object along three independent axes of the carrier coordinate system, and the gyroscopes detect angular velocity signals of the carrier relative to the navigation coordinate system. That is, the IMU can measure the acceleration and angular velocity of the object in three-dimensional space, thereby providing a basis for calculating the pose information of the object in three-dimensional space. The above pose data can thus be understood as the data used to calculate the pose information of the target object; since an IMU is composed of accelerometers and gyroscopes, the pose data correspondingly includes the acceleration and angular velocity detected by each of the plurality of IMUs at the position where it is deployed on the target object.
Optionally, a magnetometer may further be added to the IMU to provide a geomagnetic reference for the heading angle calculated from the angular velocity, thereby reducing drift.
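Purely as an illustration of how such pose data might be represented in software, the sketch below defines one possible per-IMU sample structure; the ImuReading fields, the units, and the read_accel/read_gyro driver calls are assumptions made for this sketch and are not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class ImuReading:
    """One sample from a single IMU; field names and units are illustrative."""
    accel: np.ndarray                  # (3,) acceleration along the carrier axes, m/s^2
    gyro: np.ndarray                   # (3,) angular velocity relative to the navigation frame, rad/s
    mag: Optional[np.ndarray] = None   # (3,) optional magnetometer reading for heading reference

def collect_pose_data(imus) -> List[ImuReading]:
    """Gather one synchronized set of readings from all IMUs deployed on the target object.

    imu.read_accel() / imu.read_gyro() stand in for whatever driver API the IMUs expose.
    """
    return [ImuReading(accel=np.asarray(imu.read_accel()),
                       gyro=np.asarray(imu.read_gyro())) for imu in imus]
```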
Step S220: acquiring, from a plurality of preset pose calculation models, the preset pose calculation model corresponding to a preset object matched with the target object as the target model, where each preset pose calculation model is obtained by pre-training an initial model on a pose training sample set of a preset object, the pose training sample set including preset pose data detected in advance by a plurality of IMUs on the preset object, the relative position of each IMU with respect to the center position of the preset object, and preset pose information of the center position of the preset object.
It will be appreciated that the target object to be tracked and located may be of any type, and different types of objects generally differ in appearance attributes, motion patterns, and the distribution of IMUs on their surfaces. Therefore, a plurality of preset pose calculation models for different types of preset objects can be stored in advance, so that different types of objects calculate their pose information with different preset pose calculation models, i.e., a dedicated pose calculation model is provided for each type of preset object. This improves the accuracy of pose calculation and thus the accuracy of tracking and positioning the target object. Each preset pose calculation model may be obtained by pre-training a neural network, which may be of a feed-forward or feedback type; this embodiment is not limited thereto.
Based on this, the initial model may be pre-trained in advance on the pose training sample set of each preset object, where the pose training sample set includes preset pose data detected in advance by a plurality of IMUs on the preset object, the relative position of each IMU with respect to the center position of the preset object, and preset pose information of the center position of the preset object. Specifically, the preset pose data detected in advance by the plurality of IMUs on the preset object and the relative position of each IMU with respect to the center position of the preset object are input into the initial model for pose calculation, obtaining the designated pose information of the center position of the preset object output by the initial model; the initial model is then iteratively trained according to the degree of difference between the designated pose information and the preset pose information until a preset training condition is met, and the trained initial model is taken as the preset pose calculation model corresponding to the preset object. The degree of difference can be calculated through a loss function to obtain a corresponding loss value; accordingly, the preset training condition may be that the loss value is smaller than a preset value, that the loss value no longer changes, or that the number of training iterations reaches a preset number, and so on. It can be understood that, after the initial model is iteratively trained over a number of training cycles on the pose training sample set, each cycle comprising multiple training iterations, the parameters of the initial model are continuously optimized so that the loss value becomes smaller and smaller and finally converges to a fixed value or falls below the preset value, at which point the initial model has converged. Of course, the initial model may also be deemed converged once the number of training iterations reaches the preset number; at that point, the initial model can be used as the preset pose calculation model corresponding to the preset object. The preset value and the preset number are set in advance, and their values can be adjusted for different application scenarios. The parameters of the initial model may be optimized by gradient descent, for example batch gradient descent, stochastic gradient descent, or mini-batch gradient descent; of course, the parameters may also be optimized by optimization algorithms such as Newton's method, quasi-Newton methods, DFP (Davidon-Fletcher-Powell), or the improved iterative scaling method, which this embodiment does not limit. The training process is thus fully automatic: given real or simulated pose training samples, the model learns how to determine the center pose of an object from the input data, so that the pose information of the target object can be calculated more quickly and accurately.
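As a rough illustration of the training procedure described above, the following sketch pre-trains a small feed-forward network in PyTorch on samples that pair per-IMU preset pose data and relative positions with the preset pose information of the center position. The network size, the use of mean squared error as the loss, the SGD optimizer, and the number of IMUs are illustrative assumptions rather than details disclosed by this application.

```python
import torch
import torch.nn as nn

NUM_IMUS = 4  # assumed number of IMUs on the preset object

# Input per sample: for each IMU, 6-D preset pose data (acceleration + angular velocity)
# plus its 3-D position relative to the center of the preset object.
# Output: 6-D pose of the center position (3-D position + 3-D orientation).
initial_model = nn.Sequential(
    nn.Linear(NUM_IMUS * (6 + 3), 128),
    nn.ReLU(),
    nn.Linear(128, 6),
)

optimizer = torch.optim.SGD(initial_model.parameters(), lr=1e-3)  # mini-batch gradient descent
loss_fn = nn.MSELoss()  # one possible measure of the "degree of difference"

def train_one_epoch(loader):
    """One pass over the pose training sample set (loader yields batched tensors)."""
    for imu_data, rel_pos, preset_pose in loader:
        # imu_data: (B, NUM_IMUS, 6), rel_pos: (B, NUM_IMUS, 3), preset_pose: (B, 6)
        inputs = torch.cat([imu_data, rel_pos], dim=-1).flatten(start_dim=1)
        designated_pose = initial_model(inputs)       # pose output by the initial model
        loss = loss_fn(designated_pose, preset_pose)  # difference from the preset pose information
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```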
Specifically, a mapping relationship between each type of preset object and its corresponding preset pose calculation model may be stored in advance, and the preset pose calculation model corresponding to the preset object matching the type of the target object is then obtained according to this mapping relationship and used as the target model.
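One simple way to realize such a mapping relationship is a lookup table keyed by object type, as in the sketch below; the type names and model file paths are hypothetical and used only for illustration.

```python
# Hypothetical mapping from preset object type to its preset pose calculation model.
MODEL_REGISTRY = {
    "vr_headset": "models/pose_vr_headset.pt",
    "controller": "models/pose_controller.pt",
    "sphere":     "models/pose_sphere.pt",
}

def get_target_model_path(object_type: str) -> str:
    """Return the path of the preset pose calculation model matching the target object's type."""
    return MODEL_REGISTRY[object_type]
```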
In some embodiments, if the target object is an electronic device such as an AR head-mounted display device, an MR head-mounted display device, a smart phone, a smart watch, a smart bracelet, or a vehicle, both the mapping relationship and the plurality of preset pose calculation models may be stored on the target object. The target object can then directly obtain, from the plurality of preset pose calculation models and according to the locally stored mapping relationship, the preset pose calculation model corresponding to the preset object matching itself. Selecting the target model from local storage in this way improves selection efficiency and further increases the pose calculation speed.
In other embodiments, if the target object is an electronic device such as an AR head-mounted display device, an MR head-mounted display device, a smart phone, a smart watch, a smart bracelet, or a vehicle, the mapping relationship may be stored on the target object and may include a model identifier for each preset pose calculation model, while the plurality of preset pose calculation models are stored on a target server communicatively connected to the target object. Based on this, the target object obtains, according to the locally stored mapping relationship, the model identifier of the preset pose calculation model corresponding to the preset object matched with the target object, and sends a model download request carrying that model identifier to the target server. Correspondingly, the target server, in response to the download request, takes the preset pose calculation model corresponding to the model identifier in the request as the target model and transmits it to the target object, so that the target object can calculate its pose information using the target model. Storing the plurality of preset pose calculation models on the server saves local storage space on the target object, avoids problems such as slow calculation caused by insufficient local storage, and effectively guarantees the smooth progress of the pose calculation process.
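The request/response exchange between the target object and the target server could, for example, be carried over plain HTTP, roughly as sketched below; the endpoint path and JSON field name are hypothetical.

```python
import requests

def download_target_model(server_url: str, model_id: str) -> bytes:
    """Ask the target server for the preset pose calculation model identified by model_id."""
    response = requests.post(
        f"{server_url}/models/download",   # hypothetical endpoint on the target server
        json={"model_id": model_id},       # model identifier taken from the local mapping
    )
    response.raise_for_status()
    return response.content               # serialized target model, loaded on the target object
```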
In still other embodiments, the target object may not be an electronic device but merely an object or an animal to be tracked that cannot execute a program. In this case, the target object may be pose-positioned by another electronic device or a server according to the pose data detected by the IMUs on the target object. Accordingly, the mapping relationship and the plurality of preset pose calculation models mentioned in the foregoing embodiments may be stored on that electronic device or server, which selects, from the plurality of preset pose calculation models and according to the target object and the mapping relationship, the preset pose calculation model corresponding to the preset object matched with the target object as the target model.
Step S230: calculating the pose data by using the target model to obtain the pose information of the target object.
In this embodiment, the pose information includes position information and attitude information, and the pose data includes the data detected by each of the plurality of IMUs. The target model may therefore calculate the pose of each IMU from the data that IMU detected, and then calculate the pose information of the center position of the target object, i.e., the pose information of the target object, from the relative positional relationship between each IMU and the center position of the target object shown in fig. 1. Calculating the pose information of the target object from the pose data of multiple IMUs jointly improves the accuracy of the pose information and avoids problems such as large data jitter and pose drift during use that arise when the pose data is detected by only one IMU; meanwhile, using the target model matched with the target object improves both calculation efficiency and calculation accuracy.
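As a rough fixed-formula stand-in for the final combination step, the sketch below averages per-IMU position estimates after removing each IMU's offset from the object center; in the disclosed scheme the trained target model performs this mapping rather than a hand-written formula.

```python
import numpy as np

def center_position_from_imus(imu_positions, relative_offsets):
    """Estimate the object's center position from per-IMU position estimates.

    imu_positions:    (N, 3) estimated position of each IMU in the world frame.
    relative_offsets: (N, 3) position of each IMU relative to the object center,
                      assumed here to be already expressed in the world frame.
    """
    center_estimates = np.asarray(imu_positions) - np.asarray(relative_offsets)
    return center_estimates.mean(axis=0)  # simple average; the target model replaces this step
```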
In other embodiments, considering that the motion information of an object may contain noise at any time, in order to improve the accuracy of the pose information calculated from the pose data, the pose data may first be filtered by a preset filtering algorithm, and the filtered data is then calculated with the target model to obtain the pose information of the target object. The preset filtering algorithm includes, but is not limited to, a mean filtering algorithm, a median filtering algorithm, a Kalman filtering algorithm, and the like.
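For instance, a minimal sliding-window mean filter applied to each IMU channel before the data is fed to the target model could look like the sketch below; the window length is an assumed tuning parameter.

```python
from collections import deque
import numpy as np

class MeanFilter:
    """Sliding-window mean filter for one stream of IMU pose data."""

    def __init__(self, window: int = 5):  # window length is illustrative
        self.buffer = deque(maxlen=window)

    def update(self, sample) -> np.ndarray:
        """Add a new raw sample and return the filtered value."""
        self.buffer.append(np.asarray(sample, dtype=float))
        return np.mean(self.buffer, axis=0)
```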
In this embodiment, using the preset pose calculation model matched with the target object to calculate the pose data allows the pose information of the target object to be calculated more quickly and accurately. Because the pose data is detected by a plurality of IMUs, the pose information determined from it is more accurate and of higher precision. Meanwhile, using the preset pose calculation model greatly increases the pose calculation speed, reduces the waiting time of pose calculation processing, and ensures the timeliness and accuracy of positioning the target object. Moreover, performing pose calculation through the preset pose calculation model does not require deploying a very large number of IMUs on the target object, so accurate positioning and tracking can be achieved at low cost. Using multiple IMUs also solves a series of problems of a single IMU, such as large data jitter and drift during use; compared with traditional optical positioning methods, true 360-degree omnidirectional tracking can be achieved without viewing-angle limitations; and compared with traditional positioning methods, the IMU is smaller, so the tracking hardware tends toward miniaturization and is conveniently applied to various application scenarios.
Referring to fig. 3, fig. 3 shows a schematic flowchart of a pose determination method according to another embodiment of the present application. The pose determination method provided by this embodiment will be described in detail below with reference to fig. 3. The pose determination method may include the following steps:
step S310: pose data detected by a plurality of IMUs on the target object are acquired.
For the detailed implementation of step S310, reference may be made to the foregoing embodiment, which is not repeated here.
Step S320: and acquiring the appearance attribute of the target object.
In this embodiment, the appearance attribute may include attribute information such as shape and size, and the type of the target object can be determined from its appearance attribute. For example, objects with the same shape, size and/or weight are treated as the same type of object, and objects of the same type may accordingly use the same preset pose calculation model. Therefore, the appearance attribute of the target object is acquired.
In some embodiments, if the execution subject is the target object, the target object may obtain its own appearance attribute according to the attribute information pre-stored locally.
In other embodiments, if the execution subject is another electronic device or a server, the appearance attribute of the target object may be obtained from attribute information corresponding to the target object stored in advance, so that the appearance attribute can be acquired quickly. Alternatively, a target image containing the target object may be acquired by an image acquisition device and subjected to image recognition to identify the appearance attribute of the target object, so that the acquired appearance attribute is closer to the appearance attribute of the target object at the current moment.
Step S330: and acquiring a preset pose calculation model corresponding to a preset object matched with the appearance attribute of the target object from the plurality of preset pose calculation models to serve as the target model.
In some embodiments, referring to fig. 4, step S330 may include the following steps:
step S331: and determining a preset pose calculation model corresponding to a preset object matched with the appearance attribute of the target object from the plurality of preset pose calculation models as a model to be selected.
In this embodiment, objects with the same appearance attribute exhibit similar changes in pose information during motion, so the same preset pose calculation model can be used for them. This avoids storing too many models in advance and occupying excessive storage resources. Based on this, a mapping relationship between each preset pose calculation model and the appearance attribute of the preset object it applies to can be stored in advance, and the preset pose calculation model corresponding to the preset object matching the appearance attribute of the target object is then determined from the plurality of preset pose calculation models according to this mapping relationship and used as a candidate model.
Step S332: if there are multiple candidate models, acquiring distribution information of the plurality of IMUs on the target object.
Step S333: acquiring, from the multiple candidate models, the candidate model matched with the distribution information as the target model.
In some embodiments, objects with the same appearance attribute may require different positioning accuracy, which in turn results in different distribution information for the plurality of IMUs deployed on the object. Objects with the same appearance attribute and the same IMU distribution information are therefore treated as the same type of object. Consequently, the number of candidate models determined by the appearance attribute in step S331 may be one or more; when there is only one candidate model, it is directly used as the target model of the target object.
Optionally, if multiple candidate models are determined, the distribution information of the plurality of IMUs on the target object is acquired, and the candidate model matching this distribution information is selected from the candidate models as the target model. The distribution information may include the number of IMUs and the relative position of each IMU on the target object. The distribution information may be determined from a target image containing the target object acquired by an image acquisition device, or obtained from pre-stored distribution information related to the target object; this embodiment is not limited thereto.
For example, suppose there are two candidate models: for candidate model A, the preset distribution information places IMUs directly above and directly below the surface of a cube-shaped object, while for candidate model B, the preset distribution information places IMUs directly above, directly in front of, and to the right of the surface of the cube-shaped object. If the distribution information of the target object places its IMUs directly above and directly below, candidate model A is determined to be the target model.
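Matching the target object's IMU distribution against the preset distributions of the candidate models might be realized along the lines of the sketch below; describing a distribution as a set of surface-placement labels is an assumption made purely for illustration.

```python
# Hypothetical preset distribution information for each candidate model.
CANDIDATE_DISTRIBUTIONS = {
    "candidate_A": frozenset({"top", "bottom"}),
    "candidate_B": frozenset({"top", "front", "right"}),
}

def select_target_model(target_distribution) -> str:
    """Pick the candidate model whose preset IMU distribution matches the target object's."""
    target_distribution = frozenset(target_distribution)
    for name, distribution in CANDIDATE_DISTRIBUTIONS.items():
        if distribution == target_distribution:
            return name
    raise LookupError("no candidate model matches this IMU distribution")

# With IMUs deployed directly above and below the target object:
# select_target_model({"top", "bottom"}) returns "candidate_A".
```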
Step S340: calculating the pose data by using the target model to obtain the pose information of the target object.
For the detailed implementation of step S340, reference may be made to the foregoing embodiments, which is not repeated here.
In this embodiment, the target model is selected from the plurality of preset pose calculation models by combining two conditions: the appearance attribute of the target object and the distribution information of the plurality of IMUs deployed on it. A pose calculation model better adapted to the target object is thereby obtained, i.e., the target model better fits the pose data acquired by the target object's IMUs, and the pose information calculated from that data with the target model is more accurate, of higher precision, and obtained faster. In addition, because objects with the same appearance attribute and the same IMU distribution information use the same preset pose calculation model, the pose calculation cost is reduced, and objects can be tracked and positioned with high precision and high accuracy at limited cost.
Referring to fig. 5, fig. 5 shows a schematic flowchart of a pose determination method according to still another embodiment of the present application. The pose determination method provided by this embodiment will be described in detail below with reference to fig. 5. The pose determination method may include the following steps:
step S410: pose data detected by a plurality of IMUs on the target object is acquired.
Step S420: and acquiring a preset pose calculation model corresponding to a preset object matched with the target object from the plurality of preset pose calculation models as a target model.
For the detailed implementation of steps S410 to S420, reference may be made to the foregoing embodiments, which is not repeated here.
Step S430: acquiring the relative positions of the plurality of IMUs with respect to the center position of the target object, and historical pose information of the target object.
Step S440: calculating, using the target model and according to the pose data, the relative positions, and the historical pose information, the pose information of the center position of the target object as the pose information of the target object.
In this embodiment, the pose information of the target object is calculated from the current pose data in combination with the historical pose information; that is, the pose information calculated by the target model is fine-tuned and optimized using the historical pose information, so that the obtained pose information of the target object is more accurate, i.e., the accuracy of pose positioning is improved.
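One lightweight way to fold historical pose information into the current estimate is exponential smoothing, sketched below for the position component; the blending weight is an assumed tuning parameter, and in the disclosed scheme the history is instead supplied to the target model itself (orientation would additionally require proper rotation interpolation such as slerp rather than a linear blend).

```python
import numpy as np

def refine_with_history(current_position, historical_position, alpha: float = 0.8):
    """Blend the newly calculated center position with historical pose information.

    alpha close to 1 trusts the current estimate; smaller values lean on the history.
    """
    current_position = np.asarray(current_position, dtype=float)
    historical_position = np.asarray(historical_position, dtype=float)
    return alpha * current_position + (1.0 - alpha) * historical_position
```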
In other embodiments, after acquiring the relative positions of the plurality of IMUs with respect to the center position of the target object, the pose information of the center position may be calculated directly by the target model from the pose data and the relative positions and used as the pose information of the target object. In this way, the pose information of the center position of the target object can be calculated online with the target model even more quickly.
In some embodiments, after the pose information of the target object is obtained, the relative position of each IMU with respect to the center position of the target object may also be updated based on that pose information. Meanwhile, the model parameters of the target model can be continuously updated from the pose information calculated against the actual pose each time. In other words, the target model is updated by iterative training while it performs calculation on real-time pose data, i.e., the target model is updated in real time, ensuring its calculation accuracy and efficiency.
In this embodiment, the pose information calculated by the target model is fine-tuned and optimized in combination with the historical pose information, so that the obtained pose information of the target object is more accurate, i.e., the pose positioning accuracy is improved.
Referring to fig. 6, a block diagram of a pose determination apparatus 600 according to an embodiment of the present application is shown. The apparatus 600 may include: a data acquisition module 610, a model acquisition module 620, and a pose determination module 630.
The data acquisition module 610 is configured to acquire pose data detected by a plurality of IMUs on a target object.
The model acquisition module 620 is configured to acquire, from a plurality of preset pose calculation models, the preset pose calculation model corresponding to a preset object matched with the target object as a target model, where each preset pose calculation model is obtained by pre-training an initial model on a pose training sample set of a preset object, the pose training sample set including preset pose data detected in advance by a plurality of IMUs on the preset object, the relative position of each IMU with respect to the center position of the preset object, and preset pose information of the center position of the preset object.
The pose determination module 630 is configured to calculate the pose data by using the target model to obtain pose information of the target object.
In some embodiments, the model acquisition module 620 may include an attribute acquisition unit and a model acquisition unit. The attribute acquisition unit may be configured to acquire the appearance attribute of the target object. The model acquisition unit may be configured to acquire, from the plurality of preset pose calculation models, the preset pose calculation model matched with the appearance attribute of the target object as the target model.
In this manner, the model acquisition unit may be specifically configured to: determine, from the plurality of preset pose calculation models, the preset pose calculation model matched with the appearance attribute of the target object as a candidate model; if there are multiple candidate models, acquire distribution information of the plurality of IMUs on the target object; and acquire, from the multiple candidate models, the candidate model matched with the distribution information as the target model.
In some embodiments, the pose determination module 630 may be specifically configured to: acquire the relative positions of the plurality of IMUs with respect to the center position of the target object and historical pose information of the target object; and calculate, using the target model and according to the pose data, the relative positions, and the historical pose information, the pose information of the center position of the target object as the pose information of the target object.
In other embodiments, the pose determination module 630 may be specifically configured to: acquire the relative positions of the plurality of IMUs with respect to the center position of the target object; and calculate, using the target model and according to the pose data and the relative positions, the pose information of the center position of the target object as the pose information of the target object.
In some embodiments, the pose determination apparatus 600 may further include an update module. The update module may be configured to, after the pose data is calculated using the target model to obtain the pose information of the target object, update the relative positions based on the pose information of the target object to obtain updated relative positions.
In some embodiments, the pose determination apparatus 600 may further include a model training module. The model training module may be configured to, before the preset pose calculation model corresponding to the preset object matched with the target object is acquired from the plurality of preset pose calculation models as the target model, input the preset pose data detected in advance by the plurality of IMUs on the preset object and the relative position of each IMU with respect to the center position of the preset object into the initial model for pose calculation, to obtain the designated pose information of the center position of the preset object output by the initial model; and iteratively train the initial model according to the degree of difference between the designated pose information and the preset pose information until a preset training condition is met, to obtain the trained initial model as the preset pose calculation model corresponding to the preset object.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described devices and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
In summary, in the scheme provided by the embodiments of the present application, pose data detected by a plurality of IMUs on a target object are acquired; a preset pose calculation model matched with the target object is acquired from a plurality of preset pose calculation models as a target model; and the pose data are calculated using the target model to obtain the pose information of the target object. Because the pose data detected by the plurality of IMUs are calculated with the preset pose calculation model matched with the target object, the obtained pose information of the target object is more accurate and of higher precision. Meanwhile, using the preset pose calculation model greatly increases the pose calculation speed, reduces the waiting time of pose calculation processing, and ensures the timeliness and accuracy of positioning the target object. Moreover, performing pose calculation through the preset pose calculation model does not require deploying a very large number of IMUs on the target object, so accurate positioning and tracking can be achieved at low cost. Using multiple IMUs also solves a series of problems of a single IMU, such as large data jitter and drift during use; compared with traditional optical positioning methods, true 360-degree omnidirectional tracking can be achieved without viewing-angle limitations; and compared with traditional positioning methods, the IMU is smaller, so the tracking hardware tends toward miniaturization and is conveniently applied to various application scenarios.
A computer device provided by the present application will be described with reference to fig. 7.
Referring to fig. 7, fig. 7 shows a block diagram of a computer device 700 according to an embodiment of the present application. The pose determination method according to the embodiments of the present application may be executed by the computer device 700. The computer device 700 may be, among other things, a device capable of running applications.
The computer device 700 in the embodiments of the present application may include one or more of the following components: a processor 701, a memory 702, and one or more applications, wherein the one or more applications may be stored in the memory 702 and configured to be executed by the one or more processors 701, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
The processor 701 may include one or more processing cores. The processor 701 connects to the various components of the computer device 700 through various interfaces and lines, and performs the various functions of the computer device 700 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 702 and by invoking data stored in the memory 702. Optionally, the processor 701 may be implemented in hardware in the form of at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 701 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also be implemented separately by a communication chip instead of being integrated into the processor 701.
The memory 702 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 702 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 702 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created by the computer device 700 during use (such as the various correspondences described above), and so on.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be in an electrical, mechanical or other form.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 8, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable medium 800 has stored therein a program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 800 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 800 has storage space for program code 810 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 810 may, for example, be compressed in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A pose determination method, characterized in that the method comprises:
acquiring pose data detected by a plurality of Inertial Measurement Units (IMUs) on a target object;
acquiring a preset pose calculation model corresponding to a preset object matched with the target object from a plurality of preset pose calculation models as a target model, wherein the preset pose calculation model is obtained by pre-training an initial model according to a pose training sample set of the preset object, and the pose training sample set comprises preset pose data detected in advance by a plurality of IMUs on the preset object, a relative position of each IMU with respect to a central position of the preset object, and preset pose information of the central position of the preset object;
and calculating the pose data by using the target model to obtain the pose information of the target object.
2. The method according to claim 1, wherein the acquiring, as the target model, a preset pose estimation model corresponding to a preset object that matches the target object from among a plurality of preset pose estimation models comprises:
acquiring the appearance attribute of the target object;
and acquiring a preset pose calculation model corresponding to a preset object matched with the appearance attribute of the target object from the plurality of preset pose calculation models to serve as the target model.
3. The method according to claim 2, wherein the obtaining, as the target model, a preset pose estimation model corresponding to a preset object that matches the appearance attribute of the target object from among the plurality of preset pose estimation models includes:
determining a preset pose calculation model corresponding to a preset object matched with the appearance attribute of the target object from the plurality of preset pose calculation models as a candidate model;
if the number of candidate models is multiple, acquiring distribution information of the plurality of IMUs on the target object;
and acquiring a candidate model matched with the distribution information from the plurality of candidate models to be used as the target model.
4. The method according to claim 1, wherein the calculating the pose data by using the target model to obtain the pose information of the target object comprises:
acquiring relative positions of the plurality of IMUs with respect to the central position of the target object and historical pose information of the target object;
and calculating the pose information of the central position of the target object by using the target model according to the pose data, the relative positions and the historical pose information, as the pose information of the target object.
5. The method according to claim 1, wherein the calculating the pose data by using the target model to obtain the pose information of the target object comprises:
acquiring relative positions of the plurality of IMUs with respect to the central position of the target object;
and calculating the pose information of the central position of the target object by using the target model according to the pose data and the relative positions, as the pose information of the target object.
6. The method according to claim 4 or 5, wherein after the calculating the pose data by using the target model to obtain the pose information of the target object, the method further comprises:
and updating the relative position based on the pose information of the target object to obtain an updated relative position.
7. The method according to any one of claims 1 to 5, wherein before the obtaining, as the target model, a preset pose estimation model corresponding to a preset object matching the target object from among the plurality of preset pose estimation models, the method further comprises:
inputting preset pose data detected in advance by a plurality of IMUs on a preset object and the relative position of each IMU compared with the central position of the preset object into the initial model for pose calculation to obtain appointed pose information of the central position of the preset object output by the initial model;
and performing iterative training on the initial model according to the difference degree between the designated pose information and the preset pose information until a preset training condition is met, so as to obtain the trained initial model which is used as a preset pose calculation model corresponding to the preset object.
8. A pose determination apparatus, characterized in that the apparatus comprises:
the data acquisition module is used for acquiring pose data detected by a plurality of IMUs on the target object;
a model obtaining module, configured to obtain, as a target model, a preset pose calculation model corresponding to a preset object that matches the target object from among a plurality of preset pose calculation models, wherein the preset pose calculation model is obtained by pre-training an initial model according to a pose training sample set of the preset object, and the pose training sample set comprises preset pose data detected in advance by a plurality of IMUs on the preset object, a relative position of each IMU with respect to a central position of the preset object, and preset pose information of the central position of the preset object;
and the pose determining module is used for calculating the pose data by using the target model to obtain the pose information of the target object.
9. A computer device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that a program code is stored in the computer-readable storage medium, which program code can be called by a processor to execute the method according to any of claims 1 to 7.
CN202210803195.8A 2022-07-07 2022-07-07 Pose determination method and device, computer equipment and storage medium Pending CN115328299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210803195.8A CN115328299A (en) 2022-07-07 2022-07-07 Pose determination method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210803195.8A CN115328299A (en) 2022-07-07 2022-07-07 Pose determination method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115328299A true CN115328299A (en) 2022-11-11

Family

ID=83917786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210803195.8A Pending CN115328299A (en) 2022-07-07 2022-07-07 Pose determination method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115328299A (en)

Similar Documents

Publication Publication Date Title
CN109297510B (en) Relative pose calibration method, device, equipment and medium
CN110556012B (en) Lane positioning method and vehicle positioning system
CN110133582B (en) Compensating for distortion in electromagnetic tracking systems
CN105359054B (en) Equipment is positioned and is orientated in space
CN116051640A (en) System and method for simultaneous localization and mapping
US11494987B2 (en) Providing augmented reality in a web browser
KR102212825B1 (en) Method and system for updating map for pose estimation based on images
CN103907139A (en) Information processing device, information processing method, and program
CN111638528B (en) Positioning method, positioning device, electronic equipment and storage medium
CN109211277A (en) The state of vision inertia odometer determines method, apparatus and electronic equipment
CN105103089B (en) System and method for generating accurate sensor corrections based on video input
CN108897836A (en) A kind of method and apparatus of the robot based on semantic progress map structuring
CN113034594A (en) Pose optimization method and device, electronic equipment and storage medium
CN111680596B (en) Positioning true value verification method, device, equipment and medium based on deep learning
CN105910593B (en) A kind of method and device of the geomagnetic sensor of calibrating terminal
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN108236782B (en) External equipment positioning method and device, virtual reality equipment and system
CN111382701B (en) Motion capture method, motion capture device, electronic equipment and computer readable storage medium
US11692829B2 (en) System and method for determining a trajectory of a subject using motion data
CN115328299A (en) Pose determination method and device, computer equipment and storage medium
US11620846B2 (en) Data processing method for multi-sensor fusion, positioning apparatus and virtual reality device
CN112597174B (en) Map updating method and device, electronic equipment and computer readable medium
CN114882587A (en) Method, apparatus, electronic device, and medium for generating countermeasure sample
WO2022269985A1 (en) Information processing device, information processing method, and program
CN112325905B (en) Method, device and medium for identifying measurement error of IMU

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination