CN111382701B - Motion capture method, motion capture device, electronic equipment and computer readable storage medium - Google Patents

Info

Publication number
CN111382701B
Authority
CN
China
Prior art keywords: target, position information, limb, target limb, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010158698.5A
Other languages
Chinese (zh)
Other versions
CN111382701A (en)
Inventor
Wang Guangwei (王光伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Original Assignee
Douyin Vision Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Douyin Vision Co Ltd filed Critical Douyin Vision Co Ltd
Priority to CN202010158698.5A priority Critical patent/CN111382701B/en
Publication of CN111382701A publication Critical patent/CN111382701A/en
Application granted granted Critical
Publication of CN111382701B publication Critical patent/CN111382701B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Abstract

The embodiments of the present disclosure relate to the technical field of gesture detection, and disclose a motion capture method, a motion capture device, an electronic device and a computer-readable storage medium. The motion capture method includes the following steps: acquiring, by an inertial motion capture device, the angular velocity of a first target limb of a first action object at the current moment, and predicting first position information of the first target limb at a target moment according to the angular velocity; determining, by an optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb; and determining the target position of the first target limb at the target moment according to the first position information and the second position information, so as to capture the motion of the first target limb. The method of the embodiments of the present disclosure can minimize the errors and drift introduced by the inertial motion capture device and greatly improve the accuracy of motion capture.

Description

Motion capture method, motion capture device, electronic equipment and computer readable storage medium
Technical Field
The embodiments of the present disclosure relate to the technical field of gesture detection, and in particular to a motion capture method, a motion capture device, an electronic device and a computer-readable storage medium.
Background
Motion capture is a technique for recording the movements of a moving object: trackers are placed on key parts of the moving object, a motion capture system captures the real-time positions of the trackers, and the data are processed by a computer to obtain the three-dimensional space coordinates of each tracker. Once the data are recognized by the computer, they can be applied in fields such as animation production, gait analysis, biomechanics, ergonomics and game production.
For example, in interactive games, a player's movements can drive the movements of a virtual character in the game environment, bringing the player a brand-new participation experience while enhancing the realism and interactivity of the game. As another example, in animation production, motion capture technology greatly improves the efficiency and quality of animation and game development and reduces development costs. As yet another example, in sports training, motion capture technology can record quantitative information such as the displacement, velocity, acceleration and electromyographic signals of an athlete during movement; combined with machine learning techniques and the principles of human biomechanics, the athlete's movements can then be analyzed quantitatively and scientific improvements proposed.
Currently, inertial motion capture devices are the most widely used motion capture devices by virtue of their small size, low cost, optional wireless transmission and the like. However, the inventors of the present disclosure found in the course of implementation that, although an inertial motion capture device can capture continuous data, it is prone to errors and drift, which affect the accuracy of motion capture.
Disclosure of Invention
The purpose of the embodiments of the present disclosure is to address at least one of the above technical shortcomings. This summary is provided to introduce, in simplified form, concepts that are further described in the detailed description below; it is not intended to identify key features or essential features of the claimed subject matter, nor to limit the scope of the claimed subject matter.
In one aspect, a motion capture method is provided, including:
acquiring the angular velocity of a first target limb of a first action object at the current moment through an inertial action capturing device, and predicting first position information of the first target limb at the target moment according to the angular velocity;
determining, by the optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb;
and determining the target position of the first target limb at the target moment according to the first position information and the second position information, so as to capture the action of the first target limb.
In one aspect, there is provided a motion capture device comprising:
the processing module is used for acquiring the angular velocity of a first target limb of a first action object at the current moment through the inertial action capturing device and predicting first position information of the first target limb at the target moment according to the angular velocity;
the first determining module is used for determining second position information of the first target limb at the target moment based on a first preset optical mark point of the first target limb through the optical motion capturing device;
and the second determining module is used for determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the action of the first target limb.
In one aspect, an electronic device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the motion capture method described above when executing the program.
In one aspect, a computer readable storage medium is provided, on which a computer program is stored, which when executed by a processor, implements the motion capture method described above.
According to the motion capture method provided by the embodiments of the present disclosure, first position information of the first target limb at the target moment is captured by the inertial motion capture device, second position information of the first target limb at the same target moment is captured by the optical motion capture device, and the target position of the first target limb at the target moment is determined according to the first position information and the second position information. In this way, the first position information captured by the inertial motion capture device can be corrected and updated by means of the second position information captured by a very small number of optical motion capture devices, so that the errors and drift introduced by the inertial motion capture device, and their influence on the determination of the position of the first target limb, are reduced as much as possible. A more accurate target position of the first target limb is thereby obtained, and the accuracy of motion capture is greatly improved.
Additional aspects and advantages of embodiments of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of a motion capture method according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a basic structure of a motion capture device according to an embodiment of the disclosure;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are used merely to distinguish one device, module, or unit from another device, module, or unit, and are not intended to limit the order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "an" in this disclosure are illustrative rather than limiting, and those skilled in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the embodiments of the present disclosure will be further described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure provide a motion capture method, a motion capture device, an electronic device and a computer-readable storage medium, which aim to solve the above technical problems of the prior art.
The following describes in detail, with specific embodiments, a technical solution of an embodiment of the present disclosure and how the technical solution of the embodiment of the present disclosure solves the foregoing technical problems. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
One embodiment of the present disclosure provides a motion capture method performed by a computer device, which may be a terminal or a server. The terminal may be a desktop device or a mobile terminal. The server may be an independent physical server, a cluster of physical servers, or a virtual server. As shown in FIG. 1, the method includes:
step S110, obtaining the angular velocity of a first target limb of a first action object at the current moment through an inertial action capturing device, and predicting first position information of the first target limb at the target moment according to the angular velocity.
Specifically, the inertial motion capture device may capture, via inertial sensors, the position information of an action object (i.e., the first action object described above), where the action object includes but is not limited to a person, an animal, and the like. In practical applications, the inertial motion capture device may be bound to a target limb (i.e., the first target limb) of the action object to capture the position information of that target limb at each moment, so that the motion of the target limb is obtained from its position information, and the motion of the action object is in turn obtained from the motion of the target limb.
If the motions of a plurality of target limbs are to be captured, an inertial motion capture device can be bound to each target limb to capture the position information of each target limb at various time points, so that the motions of each target limb can be obtained according to the position information of each target limb. Each inertial motion capture device captures position information under the same coordinate system.
In one example, if the action object is a human and the target limbs are the left arm and the right arm, an inertial motion capture device A may be bound to the left arm of the human body to obtain position information of the left arm so as to capture the motion of the left arm, and an inertial motion capture device B may be bound to the right arm of the human body to obtain position information of the right arm so as to capture the motion of the right arm.
In another example, if the action object is a human and the target limbs are the left leg, the right leg and the waist, an inertial motion capture device C may be bound to the left leg of the human body to obtain position information of the left leg so as to capture the motion of the left leg; an inertial motion capture device D may be bound to the right leg to obtain position information of the right leg so as to capture the motion of the right leg; and an inertial motion capture device E may be bound to the waist to obtain position information of the waist so as to capture the motion of the waist.
Specifically, in the process of capturing the position information of the target limb through the inertial motion capturing device, the inertial motion capturing device actually obtains the angular velocity of the target limb at the current moment, and after obtaining the angular velocity, the position information (recorded as the first position information) of the target limb at the target moment is predicted according to the angular velocity.
Step S120, determining, by the optical motion capturing device, second position information of the first target limb at the target moment based on the first preset optical mark point of the first target limb.
Specifically, since the inertial motion capture device is prone to errors and drift when capturing position information, after the position information of the target limb is captured by the inertial motion capture device, that position information is preferably not used directly as the final position information of the target limb; rather, the corrected and updated position information is used as the final position information. In the process of correcting and updating the position information captured by the inertial motion capture device, the position information of the target limb at the target moment can be corrected and updated by means of second position information of the target limb captured at the target moment by a very small number of optical motion capture devices. The optical motion capture device includes, but is not limited to, an optical sensor or an HTC virtual-reality head-mounted device.
Specifically, one or several optical marker points (i.e., the first preset optical marker points) may be attached in advance to predetermined positions (such as bone joint points) of the target limb, and these marker points can reflect light (for example, infrared light) emitted by the optical motion capture device. The target limb carrying the marker points and the target limb bound to the inertial motion capture device are the same limb: if the inertial motion capture device is bound to the left arm, the optical marker point is attached to a predetermined position on the left arm; if the inertial motion capture device is bound to the right arm, the optical marker point is attached to a predetermined position on the right arm.
Specifically, in the process of capturing the position information of the target limb at the target moment by the optical motion capture device, the optical motion capture device emits infrared light; the optical marker points on the target limb reflect this infrared light, and the optical motion capture device receives the reflected light accordingly, so that the position information of the target limb at the target moment (denoted as the second position information) is determined from the optical marker points.
Step S130, determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the action of the first target limb.
Specifically, after the first position information of the target limb at the target moment is obtained by the inertial motion capture device and the second position information of the same target limb at the same target moment is obtained by the optical motion capture device, the first position information and the second position information can be considered together, and the target position of the target limb at the target moment is determined jointly from both. In this way, the errors and drift introduced by the inertial motion capture device, and their influence on the determination of the position of the first target limb, are reduced as much as possible, and the accuracy of motion capture is greatly improved.
According to the motion capture method provided by the embodiments of the present disclosure, first position information of the first target limb at the target moment is captured by the inertial motion capture device, second position information of the first target limb at the same target moment is captured by the optical motion capture device, and the target position of the first target limb at the target moment is determined according to the first position information and the second position information. In this way, the first position information captured by the inertial motion capture device can be corrected and updated by means of the second position information captured by a very small number of optical motion capture devices, so that the errors and drift introduced by the inertial motion capture device, and their influence on the determination of the position of the first target limb, are reduced as much as possible. A more accurate target position of the first target limb is thereby obtained, and the accuracy of motion capture is greatly improved.
The method of the embodiments of the present disclosure is described in detail below through specific embodiments, taking the case where the action object is a person as an example:
in one possible implementation, the third position information of the second target limb at the target moment may be determined by the optical motion capture device based on a second preset optical marker point of the second target limb of the second motion object; then, a distance between the first action object and the second action object is determined according to the second position information and the third position information.
Specifically, in the process of determining the distance between the first action object and the second action object, the first target limb and the second target limb are limbs of the same body part, and the position of the first preset optical marker point corresponds to the position of the second preset optical marker point. Here, the position of the first preset optical marker point corresponding to the position of the second preset optical marker point includes but is not limited to the following cases: the two positions are identical, or the deviation between them is within a certain error range.
Specifically, in the process of capturing position information, the inertial motion capture device cannot identify the relative position between two action objects, whereas the optical motion capture device can. The optical motion capture device can therefore be used to determine the distance between the two action objects, and the relative position between them can be determined from this distance.
Specifically, if the first action object is Zhang San and the second action object is Li Si, then in the process of determining the relative position between Zhang San and Li Si, the same optical motion capture device can be used to acquire the position information of the first preset optical marker point on Zhang San's first target limb at the target moment and the position information of the second preset optical marker point on Li Si's second target limb at the target moment, and the distance between the two pieces of position information is then calculated, so that the relative position between Zhang San and Li Si is determined from the calculated distance.
Specifically, in determining the relative position between Zhang San and Li Si, an optical marker point (e.g., point L1) may first be placed at a predetermined position on Zhang San's target limb, such as an optical marker point L1 at the elbow of Zhang San's left arm, while an optical marker point (e.g., point L2) is placed at a predetermined position on Li Si's target limb, such as an optical marker point L2 at the elbow of Li Si's left arm. The optical motion capture device then emits infrared light; the optical marker points L1 and L2 reflect this infrared light, and the optical motion capture device receives the reflected light accordingly, so as to determine the position information of the optical marker point L1 (denoted as the second position information) and the position information of the optical marker point L2 (denoted as the third position information). The distance between Zhang San and Li Si is then determined according to the second position information and the third position information: the distance can be obtained by taking the difference between the second position information and the third position information, and from this distance the relative position between Zhang San and Li Si is obtained.
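As a minimal sketch of this distance computation, assuming the optical system reports each marker's 3D coordinates in a shared coordinate frame (the coordinate values and variable names below are illustrative, not from the patent):

```python
import numpy as np

# Second position information: marker L1 at the elbow of Zhang San's left arm.
pos_l1 = np.array([0.42, 1.35, 2.10])  # metres, shared capture frame
# Third position information: marker L2 at the elbow of Li Si's left arm.
pos_l2 = np.array([1.18, 1.33, 2.05])

# The difference of the two measurements gives the relative position of the
# two action objects; its Euclidean norm is the distance between them.
relative_position = pos_l2 - pos_l1
distance = np.linalg.norm(relative_position)
print(f"relative position: {relative_position}, distance: {distance:.2f} m")
```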
It should be noted that Zhang San's first target limb and Li Si's second target limb should be limbs of the same body part. For example, if Zhang San's first target limb is the left arm, Li Si's second target limb should also be the left arm, so that the relative position between the two can be calculated accurately and has a high reference value. If, instead, Zhang San's first target limb is the left arm while Li Si's second target limb is the left leg, the first target limb and the second target limb belong to different body parts, so the distance between them is not comparable; even if such a distance is calculated, its accuracy is extremely low and it has no reference value.
In addition, even when the first target limb and the second target limb are limbs of the same body part, if the position of the first preset optical marker point does not correspond to the position of the second preset optical marker point, the calculated relative position between Zhang San and Li Si will deviate to some extent. For example, if the first preset optical marker point of Zhang San is at the elbow of the left arm while the second preset optical marker point of Li Si is at the wrist of the left arm, the distance calculated from the two marker points still has some reference value; however, because the first preset optical marker point is far from the second preset optical marker point, the accuracy of the calculated relative position between Zhang San and Li Si is low.
In one possible implementation, in predicting the first position information of the first target limb at the target moment according to the angular velocity, the direction of the first target limb at the target moment may be predicted according to the angular velocity; and then, determining the first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
Specifically, in the process of capturing the position information of the first target limb through the inertial motion capturing device, the inertial motion capturing device actually obtains the angular velocity of the first target limb at the current moment, and after obtaining the angular velocity, predicts the position information (recorded as the first position information) of the target limb at the target moment according to the angular velocity.
Specifically, after the angular velocity is obtained, the orientation of the first target limb at the target moment may be obtained by integrating the angular velocity over the time interval between the current moment and the target moment. Because the limb length of the first target limb is fixed (for example, the length of Zhang San's left arm is fixed, and the length of Zhang San's right leg is fixed), the first position information of the first target limb at the target moment can then be determined according to the limb length and the orientation of the first target limb. The limb length of the first target limb is measured in advance.
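A minimal sketch of this prediction step, assuming a constant angular velocity over the interval and, for brevity, a single-axis (planar) rotation; a real inertial pipeline would integrate a 3D angular-velocity vector into a rotation such as a quaternion. All names below are illustrative:

```python
import numpy as np

def predict_first_position(theta_now, omega, dt, limb_length, joint_anchor):
    """Predict the limb endpoint at the target moment (planar sketch).

    theta_now    -- limb angle at the current moment (rad), from the inertial device
    omega        -- angular velocity at the current moment (rad/s)
    dt           -- interval from the current moment to the target moment (s)
    limb_length  -- pre-measured, fixed length of the target limb (m)
    joint_anchor -- 2D position of the joint the limb rotates about
    """
    # Integrate the angular velocity over the interval to obtain the
    # limb's orientation at the target moment.
    theta_target = theta_now + omega * dt
    # With the orientation known and the limb length fixed, the endpoint
    # position (the first position information) follows directly.
    direction = np.array([np.cos(theta_target), np.sin(theta_target)])
    return np.asarray(joint_anchor) + limb_length * direction
```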
Specifically, in the process of determining, by the optical motion capturing device, second position information of the first target limb at the target moment based on the first preset optical mark point of the first target limb, fourth position information of the first preset optical mark point at the target moment may be obtained by the optical motion capturing device; next, second position information of the first target limb is determined based on the predetermined action object model according to the fourth position information.
Specifically, if the first action object is Zhang San, Zhang San's first target limb is the left arm, and the first preset optical marker point of the first target limb is at the elbow of the left arm, the position information of the elbow at the target moment (denoted as the fourth position information) can be acquired by the optical motion capture device. At this point, although the position information of the elbow of the left arm has been acquired, the position information of the left arm as a whole cannot be obtained accurately from it alone. By means of the predetermined action object model (i.e., a predetermined human body model), however, the position information of the left arm at the target moment can be obtained relatively accurately from the position information of the elbow at the target moment.
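A minimal sketch of this model-based lookup, under the simplifying assumption that the predetermined human body model is reduced to a fixed offset from each calibrated marker to a reference point of its limb; both the offset table and the function name are illustrative assumptions, not the patent's model:

```python
import numpy as np

# Predetermined action-object model, reduced here to one offset per marker:
# the vector from the marker (e.g., the left-elbow point) to the limb's
# reference point, expressed in the capture frame (values are illustrative).
BODY_MODEL_OFFSETS = {
    "left_elbow": np.array([0.0, -0.12, 0.02]),
    "right_elbow": np.array([0.0, -0.12, -0.02]),
}

def second_position_from_marker(marker_name, fourth_position):
    """Map a marker's fourth position information to the limb's second
    position information via the predetermined body model."""
    return np.asarray(fourth_position) + BODY_MODEL_OFFSETS[marker_name]
```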
In one possible implementation manner, in the process of determining the target position of the first target limb at the target moment according to the first position information and the second position information, a first weight value of the first position information and a second weight value of the second position information can be determined; then, according to the first weight value and the second weight value, fusing the first position information and the second position information to obtain a target position of the first target limb at a target moment; wherein the sum of the first weight value and the second weight value is a predetermined value.
Specifically, after the first position information and the second position information are obtained, a weighted-sum operation may be performed on them to determine the target position of the first target limb at the target moment: each piece of position information is weighted by its corresponding weight value and the results are summed.
Specifically, in the process of carrying out weighted summation on the first position information and the second position information, a weight value (marked as a first weight value) corresponding to the first position information and a weight value (marked as a second weight value) corresponding to the second position information are determined first, and then the first position information and the second position information are fused according to the first weight value and the second weight value, so that the target position of the first target limb at the target moment is obtained.
Specifically, in determining the first weight value of the first location information and the second weight value of the second location information, the first weight value of the first location information and the second weight value of the second location information may be determined according to the accuracy of the first location information (denoted as first accuracy) and the accuracy of the second location information (denoted as second accuracy). When the first accuracy is greater than the second accuracy, the first position information is more accurate, and a larger weight value can be given to the first position information at the moment, namely, the first weight value is greater than the second weight value; when the first accuracy is smaller than the second accuracy, the second position information is more accurate, and a larger weight value can be given to the second position information, namely the first weight value is smaller than the second weight value; when the first accuracy is equal to the second accuracy, it is indicated that the accuracy of the first position information is the same as that of the second position information, and at this time, the second position information may be given the same weight value as that of the first position information, that is, the first weight value is equal to the second weight value.
It should be noted that, the sum of the first weight value and the second weight value is a predetermined value, such as 1, 2, 3, and so on. In one example, the predetermined value is 1, i.e., the sum of the first weight value and the second weight value is 1.
Specifically, after determining the first weight value and the second weight value, a first product of the first weight value and the first position information may be calculated, a second product of the second weight value and the second position information may be calculated, and a product sum between the first product and the second product may be calculated, and the product sum may be used as a target position of the first target limb at the target time.
In one example, if the first position information is W_1 with weight Q_1, and the second position information is W_2 with weight Q_2, the weighted sum may be expressed as: W_1 × Q_1 + W_2 × Q_2, i.e., the value of this expression is the target position of the first target limb at the target moment.
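Putting the weight selection and the weighted sum together, the following is a minimal sketch; mapping accuracies to weights by simple normalization is an illustrative choice (the description above only requires that the more accurate source receive the larger weight and that the weights sum to a predetermined value such as 1):

```python
import numpy as np

def fuse_positions(first_pos, second_pos, first_accuracy, second_accuracy):
    """Fuse the inertial and optical position estimates into the target position.

    The weights are made proportional to each estimate's accuracy and sum to 1,
    so the more accurate source contributes more, as described above.
    """
    q_1 = first_accuracy / (first_accuracy + second_accuracy)  # first weight value
    q_2 = 1.0 - q_1                                            # second weight value
    # Target position = W_1 * Q_1 + W_2 * Q_2 (first product plus second product)
    return q_1 * np.asarray(first_pos) + q_2 * np.asarray(second_pos)

# Example: the optical estimate is judged twice as accurate as the inertial one.
target = fuse_positions([0.40, 1.30, 2.00], [0.46, 1.36, 2.06], 1.0, 2.0)
```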
FIG. 2 is a schematic structural diagram of a motion capture device according to another embodiment of the disclosure. As shown in FIG. 2, the device 20 may include a processing module 201, a first determining module 202, and a second determining module 203; wherein:
the processing module 201 is configured to obtain, by using an inertial motion capture device, an angular velocity of a first target limb of a first motion object at a current moment, and predict, according to the angular velocity, first position information of the first target limb at the target moment;
A first determining module 202, configured to determine, by the optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical mark point of the first target limb;
the second determining module 203 is configured to determine a target position of the first target limb at a target moment according to the first position information and the second position information, so as to capture a motion of the first target limb.
In one possible implementation manner, the apparatus further includes a third determining module and a fourth determining module;
the third determining module is used for determining third position information of the second target limb at the target moment based on a second preset optical mark point of the second target limb of the second action object through the optical action capturing device;
a fourth determining module, configured to determine a distance between the first moving object and the second moving object according to the second position information and the third position information;
the first target limb and the second target limb are limbs of the same part; the position of the first preset optical mark point is consistent with the position of the second preset optical mark point.
In one possible implementation, the processing module is configured to, when predicting the first position information of the first target limb at the target time according to the angular velocity:
Predicting the direction of the first target limb at the target moment according to the angular velocity;
and determining first position information of the first target limb at the target moment according to the limb length and the limb direction of the first target limb.
In one possible implementation manner, the first determining module is specifically configured to obtain, by using the optical motion capturing device, fourth position information of the first preset optical mark point at the target moment; and determining second position information of the first target limb according to the fourth position information based on the predetermined action object model.
In one possible implementation manner, the second determining module is configured to determine a first weight value of the first location information and a second weight value of the second location information, where a sum of the first weight value and the second weight value is a predetermined value; and the first position information and the second position information are fused according to the first weight value and the second weight value, so that the target position of the first target limb at the target moment is obtained.
In one possible implementation manner, the second determining module is specifically configured to, when fusing the first position information and the second position information according to the first weight value and the second weight value to obtain the target position of the first target limb at the target moment:
Calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating a product sum between the first product and the second product, and taking the product sum as a target position of the first target limb at the target moment.
In one possible implementation manner, the second determining module is specifically configured to, when determining the first weight value of the first location information and the second weight value of the second location information:
determining a first accuracy of the first location information and a second accuracy of the second location information;
if the first accuracy is greater than the second accuracy, determining that the first weight value is greater than the second weight value;
if the first accuracy is smaller than the second accuracy, determining that the first weight value is smaller than the second weight value;
if the first accuracy is equal to the second accuracy, it is determined that the first weight value is equal to the second weight value.
According to the device provided by the embodiments of the present disclosure, first position information of the first target limb at the target moment is captured by the inertial motion capture device, second position information of the first target limb at the same target moment is captured by the optical motion capture device, and the target position of the first target limb at the target moment is determined according to the first position information and the second position information. In this way, the first position information captured by the inertial motion capture device can be corrected and updated by means of the second position information captured by a very small number of optical motion capture devices, so that the errors and drift introduced by the inertial motion capture device, and their influence on the determination of the position of the first target limb, are reduced as much as possible. A more accurate target position of the first target limb is thereby obtained, and the accuracy of motion capture is greatly improved.
It should be noted that this embodiment is a device embodiment corresponding to the above method embodiment, and this embodiment may be implemented in cooperation with the above method embodiment. The related technical details mentioned in the above method embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here. Accordingly, the related technical details mentioned in this embodiment may also be applied in the above method embodiment.
Referring now to fig. 3, a schematic diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 3 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
The electronic device comprises a memory and a processor, wherein the processor may be referred to as the processing means 301 described below, the memory comprising at least one of a Read Only Memory (ROM) 302, a Random Access Memory (RAM) 303 and a storage means 308, as specifically shown below:
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various suitable actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via a communication device 309, or installed from a storage device 308, or installed from a ROM 302. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring the angular velocity of a first target limb of a first action object at the current moment through an inertial action capturing device, and predicting first position information of the first target limb at the target moment according to the angular velocity; then, determining second position information of the first target limb at the target moment based on a first preset optical mark point of the first target limb through the optical motion capturing device; and then, determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the action of the first target limb.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the module or the unit is not limited to the unit itself in some cases, for example, the acquiring module may also be described as "a module for acquiring at least one event processing mode corresponding to a predetermined live event when the occurrence of the predetermined live event is detected".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the preceding. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a motion capture method, including:
acquiring the angular velocity of a first target limb of a first action object at the current moment through an inertial action capturing device, and predicting first position information of the first target limb at the target moment according to the angular velocity;
determining, by the optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb;
and determining the target position of the first target limb at the target moment according to the first position information and the second position information, so as to capture the action of the first target limb.
In one possible implementation, the method further includes:
determining, by the optical motion capture device, third position information of the second target limb at the target moment based on a second preset optical marker point of the second target limb of the second motion object;
determining a distance between the first action object and the second action object according to the second position information and the third position information;
the first target limb and the second target limb are limbs of the same body part; the position of the first preset optical marker point corresponds to the position of the second preset optical marker point.
In one possible implementation, predicting first position information of the first target limb at the target time according to the angular velocity includes:
predicting the direction of the first target limb at the target moment according to the angular velocity;
and determining first position information of the first target limb at the target moment according to the limb length and the limb direction of the first target limb.
In one possible implementation, determining, by the optical motion capture device, second position information of the first target limb at the target moment based on the first preset optical marker point of the first target limb includes:
acquiring fourth position information of a first preset optical mark point at a target moment through an optical motion capturing device;
and determining second position information of the first target limb according to the fourth position information based on the predetermined action object model.
In one possible implementation manner, determining the target position of the first target limb at the target moment according to the first position information and the second position information includes:
determining a first weight value of the first position information and a second weight value of the second position information, wherein the sum of the first weight value and the second weight value is a preset value;
and fusing the first position information and the second position information according to the first weight value and the second weight value to obtain the target position of the first target limb at the target moment.
In one possible implementation manner, according to the first weight value and the second weight value, the first position information and the second position information are fused to obtain a target position of the first target limb at the target moment, including:
calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating a product sum between the first product and the second product, and taking the product sum as a target position of the first target limb at the target moment.
In one possible implementation, determining a first weight value of the first location information and a second weight value of the second location information includes:
determining a first accuracy of the first location information and a second accuracy of the second location information;
if the first accuracy is greater than the second accuracy, determining that the first weight value is greater than the second weight value;
if the first accuracy is smaller than the second accuracy, determining that the first weight value is smaller than the second weight value;
if the first accuracy is equal to the second accuracy, it is determined that the first weight value is equal to the second weight value.
According to one or more embodiments of the present disclosure, there is provided a motion capture device, comprising:
The processing module is used for acquiring the angular velocity of a first target limb of a first action object at the current moment through the inertial action capturing device and predicting first position information of the first target limb at the target moment according to the angular velocity;
the first determining module is used for determining second position information of the first target limb at the target moment based on a first preset optical mark point of the first target limb through the optical motion capturing device;
and the second determining module is used for determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the action of the first target limb.
In one possible implementation manner, the apparatus further includes a third determining module and a fourth determining module;
the third determining module is used for determining, through the optical motion capture device, third position information of a second target limb of a second action object at the target moment based on a second preset optical marker point of the second target limb;
the fourth determining module is used for determining the distance between the first action object and the second action object according to the second position information and the third position information;
wherein the first target limb and the second target limb are limbs of the same body part, and the position of the first preset optical marker point is consistent with the position of the second preset optical marker point.
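A minimal sketch of this distance determination follows; approximating the inter-object distance by the Euclidean distance between corresponding limb positions is an assumption made for the example.

    import numpy as np

    def object_distance(second_position, third_position):
        """Distance between the two action objects, approximated by the
        Euclidean distance between their corresponding target limb positions."""
        return float(np.linalg.norm(np.asarray(second_position) -
                                    np.asarray(third_position)))

    # Example: two performers' corresponding limbs at the target moment.
    d = object_distance([0.52, 1.10, 0.87], [1.45, 1.02, 0.91])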
In one possible implementation, the processing module, when predicting the first position information of the first target limb at the target moment according to the angular velocity, is used for:
predicting the orientation of the first target limb at the target moment according to the angular velocity;
and determining the first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
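The sketch below illustrates one way this prediction could be realized: the angular velocity is integrated over the prediction interval to update the limb orientation, and the limb end is then placed at the limb length along that orientation. The single rotation axis, the explicit Euler step, and all numeric values are simplifying assumptions; a full implementation would use quaternions or rotation matrices driven by a 3-axis gyroscope.

    import numpy as np

    def predict_first_position(omega_z, orientation_now, limb_length,
                               joint_origin, dt):
        """Integrate the angular velocity (rad/s, about the z axis) over dt to
        predict the limb orientation at the target moment, then place the limb
        end at limb_length along that orientation from the joint origin."""
        angle = omega_z * dt
        c, s = np.cos(angle), np.sin(angle)
        rot_z = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
        orientation_next = rot_z @ np.asarray(orientation_now, dtype=float)
        return np.asarray(joint_origin) + limb_length * orientation_next

    # Example: a 0.3 m forearm pointing along x, rotating at 1 rad/s,
    # predicted 0.01 s ahead of the current moment.
    first_position = predict_first_position(1.0, [1.0, 0.0, 0.0], 0.3,
                                            [0.0, 1.2, 0.9], 0.01)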
In one possible implementation manner, the first determining module is specifically configured to acquire, through the optical motion capture device, fourth position information of the first preset optical marker point at the target moment, and to determine the second position information of the first target limb according to the fourth position information based on the predetermined action object model.
In one possible implementation manner, the second determining module is configured to determine a first weight value of the first position information and a second weight value of the second position information, where the sum of the first weight value and the second weight value is a preset value, and to fuse the first position information and the second position information according to the first weight value and the second weight value to obtain the target position of the first target limb at the target moment.
In one possible implementation manner, the second determining module is specifically configured to, when fusing the first position information and the second position information according to the first weight value and the second weight value to obtain the target position of the first target limb at the target moment:
calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating the sum of the first product and the second product, and taking that sum as the target position of the first target limb at the target moment.
In one possible implementation manner, the second determining module is specifically configured to, when determining the first weight value of the first position information and the second weight value of the second position information:
determine a first accuracy of the first position information and a second accuracy of the second position information;
if the first accuracy is greater than the second accuracy, determine that the first weight value is greater than the second weight value;
if the first accuracy is less than the second accuracy, determine that the first weight value is less than the second weight value;
and if the first accuracy is equal to the second accuracy, determine that the first weight value is equal to the second weight value.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other technical solutions formed by any combination of the features described above or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by substituting the features described above with technical features of similar functions disclosed in (but not limited to) the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (8)

1. A motion capture method, comprising:
acquiring, through an inertial motion capture device, the angular velocity of a first target limb of a first action object at the current moment, and predicting first position information of the first target limb at a target moment according to the angular velocity; wherein predicting the first position information of the first target limb at the target moment according to the angular velocity includes: predicting the orientation of the first target limb at the target moment according to the angular velocity; and determining the first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb; determining, through an optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb; wherein the target limb bound by the inertial motion capture device and the target limb bound by the optical motion capture device are the same target limb; and wherein determining the second position information of the first target limb at the target moment based on the first preset optical marker point of the first target limb includes: acquiring, through the optical motion capture device, fourth position information of the first preset optical marker point at the target moment; and determining the second position information of the first target limb according to the fourth position information based on a predetermined action object model;
and determining the target position of the first target limb at the target moment according to the first position information and the second position information, so as to capture the action of the first target limb.
2. The method according to claim 1, characterized in that the method further comprises:
determining, through the optical motion capture device, third position information of a second target limb of a second action object at the target moment based on a second preset optical marker point of the second target limb;
and determining the distance between the first action object and the second action object according to the second position information and the third position information;
wherein the first target limb and the second target limb are limbs of the same body part, and the position of the first preset optical marker point is consistent with the position of the second preset optical marker point.
3. The method according to any one of claims 1-2, wherein determining the target position of the first target limb at the target moment based on the first position information and the second position information comprises:
determining a first weight value of the first position information and a second weight value of the second position information, wherein the sum of the first weight value and the second weight value is a preset value;
and fusing the first position information and the second position information according to the first weight value and the second weight value to obtain the target position of the first target limb at the target moment.
4. The method of claim 3, wherein fusing the first position information and the second position information according to the first weight value and the second weight value to obtain the target position of the first target limb at the target time comprises:
calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating the sum of the first product and the second product, and taking that sum as the target position of the first target limb at the target moment.
5. The method of claim 3, wherein determining the first weight value of the first position information and the second weight value of the second position information comprises:
determining a first accuracy of the first position information and a second accuracy of the second position information;
if the first accuracy is greater than the second accuracy, determining that the first weight value is greater than the second weight value;
if the first accuracy is less than the second accuracy, determining that the first weight value is less than the second weight value;
and if the first accuracy is equal to the second accuracy, determining that the first weight value is equal to the second weight value.
6. A motion capture device, comprising:
the processing module is used for acquiring, through an inertial motion capture device, the angular velocity of a first target limb of a first action object at the current moment, and predicting first position information of the first target limb at a target moment according to the angular velocity; wherein predicting the first position information of the first target limb at the target moment according to the angular velocity includes: predicting the orientation of the first target limb at the target moment according to the angular velocity; and determining the first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb;
the first determining module is used for determining, through an optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb; wherein the target limb bound by the inertial motion capture device and the target limb bound by the optical motion capture device are the same target limb; and wherein determining the second position information of the first target limb at the target moment based on the first preset optical marker point of the first target limb includes: acquiring, through the optical motion capture device, fourth position information of the first preset optical marker point at the target moment; and determining the second position information of the first target limb according to the fourth position information based on a predetermined action object model;
and the second determining module is used for determining the target position of the first target limb at the target moment according to the first position information and the second position information, so as to capture the action of the first target limb.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1-5 when executing the program.
8. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the method of any of claims 1-5.
CN202010158698.5A 2020-03-09 2020-03-09 Motion capture method, motion capture device, electronic equipment and computer readable storage medium Active CN111382701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158698.5A CN111382701B (en) 2020-03-09 2020-03-09 Motion capture method, motion capture device, electronic equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN111382701A CN111382701A (en) 2020-07-07
CN111382701B (en) 2023-09-22

Family

ID=71218661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158698.5A Active CN111382701B (en) 2020-03-09 2020-03-09 Motion capture method, motion capture device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111382701B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113325950B (en) * 2021-05-27 2023-08-25 百度在线网络技术(北京)有限公司 Function control method, device, equipment and storage medium
CN114562993A (en) * 2022-02-28 2022-05-31 联想(北京)有限公司 Track processing method and device and electronic equipment


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030134410A1 (en) * 2002-11-14 2003-07-17 Silva Robin M. Compositions and methods for performing biological reactions
US11388788B2 (en) * 2015-09-10 2022-07-12 Brava Home, Inc. In-oven camera and computer vision systems and methods

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102323854A (en) * 2011-03-11 2012-01-18 中国科学院研究生院 Human motion capture device
CN102905007A (en) * 2011-07-25 2013-01-30 上海博路信息技术有限公司 Terminal data exchange method based on action sensing
CN103279186A (en) * 2013-05-07 2013-09-04 兰州交通大学 Multiple-target motion capturing system integrating optical localization and inertia sensing
KR20150058882A (en) * 2013-11-21 2015-05-29 한국 한의학 연구원 Apparatus and method for motion capture using inertial sensor and optical sensor
JP2016006415A (en) * 2014-05-29 2016-01-14 アニマ株式会社 Method and apparatus for estimating position of optical marker in optical motion capture
CN104834917A (en) * 2015-05-20 2015-08-12 北京诺亦腾科技有限公司 Mixed motion capturing system and mixed motion capturing method
CN105912117A (en) * 2016-04-12 2016-08-31 北京锤子数码科技有限公司 Motion state capture method and system
CN108253954A (en) * 2016-12-27 2018-07-06 大连理工大学 A kind of human body attitude captures system
CN107122048A (en) * 2017-04-21 2017-09-01 甘肃省歌舞剧院有限责任公司 One kind action assessment system
KR101840832B1 (en) * 2017-08-29 2018-03-21 엘아이지넥스원 주식회사 Method for controlling wearable robot based on motion information
EP3588325A1 (en) * 2018-06-27 2020-01-01 Baidu Online Network Technology (Beijing) Co., Ltd. Method, device and system for processing image tagging information
CN109242887A (en) * 2018-07-27 2019-01-18 浙江工业大学 A kind of real-time body's upper limks movements method for catching based on multiple-camera and IMU
WO2020029728A1 (en) * 2018-08-06 2020-02-13 腾讯科技(深圳)有限公司 Movement track reconstruction method and device, storage medium, and electronic device
CN109669533A (en) * 2018-11-02 2019-04-23 北京盈迪曼德科技有限公司 A kind of motion capture method, the apparatus and system of view-based access control model and inertia
CN109781104A (en) * 2019-01-31 2019-05-21 深圳创维数字技术有限公司 Athletic posture determination and localization method, device, computer equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Self-contained optical-inertial motion capturing for assembly planning in digital factory; Wei Fang et al.; The International Journal of Advanced Manufacturing Technology; full text *
Research on an image-based human motion tracking system; Lei Xiaoyong et al.; Journal of System Simulation (Issue S2); full text *


Similar Documents

Publication Publication Date Title
CN103907139B (en) Information processor, information processing method and program
CN111382701B (en) Motion capture method, motion capture device, electronic equipment and computer readable storage medium
Choi et al. Development of a low-cost wearable sensing glove with multiple inertial sensors and a light and fast orientation estimation algorithm
CN110969159B (en) Image recognition method and device and electronic equipment
CN112818898B (en) Model training method and device and electronic equipment
CN116348916A (en) Azimuth tracking for rolling shutter camera
US11035948B2 (en) Virtual reality feedback device, and positioning method, feedback method and positioning system thereof
US10204420B2 (en) Low latency simulation apparatus and method using direction prediction, and computer program therefor
CN108595095B (en) Method and device for simulating movement locus of target body based on gesture control
CN116079697A (en) Monocular vision servo method, device, equipment and medium based on image
CN110715654A (en) Motion track determination method and device of terminal equipment and electronic equipment
CN113407045B (en) Cursor control method and device, electronic equipment and storage medium
CN111445499B (en) Method and device for identifying target information
CN114116081B (en) Interactive dynamic fluid effect processing method and device and electronic equipment
CN113223012B (en) Video processing method and device and electronic device
CN112784622B (en) Image processing method and device, electronic equipment and storage medium
CN114663553A (en) Special effect video generation method, device and equipment and storage medium
CN113065572B (en) Multi-sensor fusion data processing method, positioning device and virtual reality equipment
CN113741750A (en) Cursor position updating method and device and electronic equipment
US20230418072A1 (en) Positioning method, apparatus, electronic device, head-mounted display device, and storage medium
CN112880675B (en) Pose smoothing method and device for visual positioning, terminal and mobile robot
CN114911564B (en) Page movement processing method, device, equipment and storage medium
CN109255095B (en) IMU data integration method and device, computer readable medium and electronic equipment
CN114115536A (en) Interaction method, interaction device, electronic equipment and storage medium
CN117372512A (en) Method, device, equipment and medium for determining camera pose

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

GR01 Patent grant