CN111382701A - Motion capture method, motion capture device, electronic equipment and computer-readable storage medium - Google Patents


Info

Publication number
CN111382701A
CN111382701A
Authority
CN
China
Prior art keywords
target
position information
limb
weight value
motion capture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010158698.5A
Other languages
Chinese (zh)
Other versions
CN111382701B (en)
Inventor
王光伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202010158698.5A
Publication of CN111382701A
Application granted
Publication of CN111382701B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 — Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present disclosure relate to the technical field of pose detection, and disclose a motion capture method, a motion capture device, electronic equipment and a computer-readable storage medium. The motion capture method includes the following steps: acquiring the angular velocity of a first target limb of a first motion object at the current moment through an inertial motion capture device, and predicting first position information of the first target limb at a target moment according to the angular velocity; determining, through an optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb; and determining the target position of the first target limb at the target moment according to the first position information and the second position information, so as to capture the motion of the first target limb. The method of the embodiments of the present disclosure minimizes the errors and offsets introduced by the inertial motion capture device, greatly improving the accuracy of motion capture.

Description

Motion capture method, motion capture device, electronic equipment and computer-readable storage medium
Technical Field
The embodiments of the present disclosure relate to the technical field of pose detection, and in particular to a motion capture method and device, electronic equipment and a computer-readable storage medium.
Background
Motion capture is a technique for recording the movement of a moving object: trackers are attached to key parts of the moving object, a motion capture system records the real-time position of each tracker, and a computer processes these positions into the trackers' three-dimensional space coordinates. Once recognized by the computer, the data can be applied in fields such as animation production, gait analysis, biomechanics, ergonomics and game development.
For example, in an interactive game, a player's movements can drive the movements of virtual characters in the game environment, giving the player a brand-new participatory experience and enhancing the realism and interactivity of the game. In animation production, motion capture technology greatly improves the efficiency and quality of animation and game development while reducing development cost. In sports training, motion capture can record quantitative information about an athlete's movement, such as displacement, velocity, acceleration and electromyographic signals; combined with machine learning techniques and the principles of human biomechanics, the athlete's movements can then be analyzed quantitatively and improved scientifically.
Currently, inertial motion capture devices are widely used because they are compact, low-cost and, where needed, capable of wireless transmission. However, during implementation the inventor of the present disclosure found that although an inertial motion capture device can capture continuous data, it is prone to errors and offsets that degrade the accuracy of motion capture.
Disclosure of Invention
The purpose of the disclosed embodiments is to address at least one of the above-mentioned deficiencies. This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description; it is not intended to identify key features or essential features of the claimed subject matter, nor to limit the scope of the claimed subject matter.
In one aspect, a motion capture method is provided, including:
acquiring the angular velocity of a first target limb of a first motion object at the current moment through inertial motion capture equipment, and predicting first position information of the first target limb at a target moment according to the angular velocity;
determining second position information of the first target limb at the target moment based on a first preset optical mark point of the first target limb through an optical motion capture device;
and determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the motion of the first target limb.
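The three steps above can be sketched as a minimal fusion loop. This is a hypothetical illustration, not the patented implementation: the function names, the planar single-segment model, the simple Euler integration, and the concrete numbers are all assumptions introduced for clarity.

```python
import math

def predict_position_inertial(angular_velocity, orientation, limb_length, dt):
    """Predict the limb-end position at the target moment from an IMU angular-velocity
    reading: integrate the angular velocity over dt to update the orientation angle,
    then apply the fixed limb length (forward kinematics for a single 2-D segment)."""
    new_orientation = orientation + angular_velocity * dt  # simple Euler integration
    return (limb_length * math.cos(new_orientation),
            limb_length * math.sin(new_orientation))

def fuse(p_inertial, p_optical, w_inertial, w_optical):
    """Weighted fusion of the two position estimates; the weights must sum
    to the preset value (here 1)."""
    assert abs(w_inertial + w_optical - 1.0) < 1e-9
    return tuple(w_inertial * a + w_optical * b
                 for a, b in zip(p_inertial, p_optical))

# Step S110: predict first position information from the inertial reading
p1 = predict_position_inertial(angular_velocity=0.5, orientation=0.0,
                               limb_length=0.6, dt=0.02)
# Step S120: second position information reported by the optical system
p2 = (0.599, 0.007)
# Step S130: fuse the two estimates into the target position
target = fuse(p1, p2, w_inertial=0.4, w_optical=0.6)
```

Because the optical estimate does not drift, even a small optical weight is enough to keep the inertial prediction from accumulating error over time.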
In one aspect, a motion capture device is provided, comprising:
the processing module is used for acquiring the angular velocity of a first target limb of a first motion object at the current moment through the inertial motion capture equipment and predicting first position information of the first target limb at the target moment according to the angular velocity;
the first determining module is used for determining second position information of the first target limb at the target moment based on a first preset optical marking point of the first target limb through the optical motion capture device;
and the second determining module is used for determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the action of the first target limb.
In one aspect, an electronic device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the program, the motion capture method is implemented.
In one aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the motion capture method described above.
The motion capture method provided by the embodiments of the present disclosure captures first position information of a first target limb at a target moment through an inertial motion capture device, captures second position information of the same limb at the same moment through an optical motion capture device, and determines the target position of the first target limb at the target moment from both. In this way, a very small amount of optically captured second position information can be used to correct and update the inertially captured first position information, minimizing the effect of the errors and offsets introduced by the inertial motion capture device on the determined limb position. A more accurate target position of the first target limb is thus obtained, greatly improving the accuracy of motion capture.
Additional aspects and advantages of embodiments of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow chart illustrating a motion capture method according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a basic structure of a motion capture device according to an embodiment of the disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second" and the like in the present disclosure are used only to distinguish devices, modules or units from one another; they neither require those devices, modules or units to be different, nor impose any order on, or interdependence between, the functions they perform.
It is noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting, and should be understood by those skilled in the art as meaning "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the embodiments of the present disclosure will be described in further detail below with reference to the accompanying drawings.
The embodiment of the disclosure provides a motion capture method, a motion capture device, an electronic device and a computer storage medium, which aim to solve the above technical problems in the prior art.
The following describes in detail the technical solutions of the embodiments of the present disclosure and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
One embodiment of the present disclosure provides a motion capture method, which is performed by a computer device, which may be a terminal or a server. The terminal may be a desktop device or a mobile terminal. The servers may be individual physical servers, clusters of physical servers, or virtual servers. As shown in fig. 1, the method includes:
step S110, acquiring the angular velocity of the first target limb of the first motion object at the current moment through the inertial motion capture device, and predicting first position information of the first target limb at the target moment according to the angular velocity.
Specifically, the inertial motion capture device may capture position information of a motion object through an inertial sensor, wherein the motion object (i.e., the first motion object described above) includes, but is not limited to, a person, an animal, and the like. In practical applications, the inertial motion capture device may be bound to a target limb (i.e., the first target limb) of the motion object to capture position information of the target limb at each time point, so as to obtain a motion of the target limb according to the position information of the target limb, and further obtain a motion of the motion object according to the motion of the target limb.
If the motion of a plurality of target limbs is to be captured, an inertial motion capture device may be bound to each target limb to capture the position information of each target limb at various time points, so as to obtain the motion of each target limb according to the position information of each target limb. Wherein each inertial motion capture device captures position information in the same coordinate system.
In one example, if the motion object is a human body and the target limbs are a left arm and a right arm, an inertial motion capture device a may be bound to the left arm of the human body to obtain position information of the left arm to capture the motion of the left arm, and an inertial motion capture device B may be bound to the right arm of the human body to obtain position information of the right arm to capture the motion of the right arm.
In another example, if the motion object is a human body and the target limbs are a left leg, a right leg and a waist, an inertial motion capture device C may be bound to the left leg of the human body to obtain position information of the left leg to capture the motion of the left leg; moreover, an inertial motion capture device D is bound on the right leg of the human body to obtain the position information of the right leg so as to capture the motion of the right leg; meanwhile, an inertial motion capture device E is bound on the waist of the human body to obtain the position information of the waist so as to capture the motion of the waist.
Specifically, in the process of capturing the position information of the target limb by the inertial motion capture device, the inertial motion capture device actually acquires the angular velocity of the target limb at the current time, and after obtaining the angular velocity, predicts the position information (denoted as first position information) of the target limb at the target time according to the angular velocity.
Step S120, determining second position information of the first target limb at the target moment through the optical motion capture device based on the first preset optical mark point of the first target limb.
Specifically, since the inertial motion capture device is prone to errors and offsets while capturing position information, the position information it captures for the target limb should not be used directly as the limb's final position; instead, it is first corrected and updated. To correct and update the position information captured by the inertial motion capture device at the target time, a small amount of second position information captured for the same limb at the same target time by an optical motion capture device can be used. Optical motion capture devices include, but are not limited to, optical sensors, HTC Vive head-mounted devices, and the like.
Specifically, one or more optical marker points (i.e., the first preset optical marker points) may be affixed in advance at predetermined positions on the target limb (such as skeletal joint points); these marker points reflect the light (e.g., infrared light) emitted by the optical motion capture device. This target limb is the same limb to which the inertial motion capture device is bound: if the inertial motion capture device is bound to the left arm, the optical marker point is affixed at a predetermined position on the left arm; if it is bound to the right arm, the optical marker point is affixed at a predetermined position on the right arm.
Specifically, in the process of capturing the position information of the target limb at the target time by the optical motion capture device, infrared light may be emitted by the optical motion capture device, at this time, the optical mark point on the target limb reflects the infrared light emitted by the optical motion capture device, and correspondingly, the optical motion capture device receives the infrared light reflected by the optical mark point, so as to determine the position information (marked as the second position information) of the target limb at the target time according to the optical mark point.
Step S130, determining a target position of the first target limb at the target time according to the first position information and the second position information, so as to capture the motion of the first target limb.
Specifically, after the first position information of the target limb at the target moment is acquired through the inertial motion capture device and the second position information of the same limb at the same moment is acquired through the optical motion capture device, the two can be considered together to jointly determine the target position of the target limb at the target moment. This minimizes the effect of the errors and offsets introduced by the inertial motion capture device on the determined limb position, greatly improving the accuracy of motion capture.
The motion capture method provided by the embodiments of the present disclosure captures first position information of a first target limb at a target moment through an inertial motion capture device, captures second position information of the same limb at the same moment through an optical motion capture device, and determines the target position of the first target limb at the target moment from both. In this way, a very small amount of optically captured second position information can be used to correct and update the inertially captured first position information, minimizing the effect of the errors and offsets introduced by the inertial motion capture device on the determined limb position. A more accurate target position of the first target limb is thus obtained, greatly improving the accuracy of motion capture.
The following describes the method of the embodiments of the present disclosure in detail, taking a person as the action object as an example:
in one possible implementation manner, third position information of a second target limb of the second action object at the target moment may be determined by the optical motion capture device based on a second preset optical marker point of the second target limb; then, the distance between the first action object and the second action object is determined according to the second position information and the third position information.
Specifically, in the process of determining the distance between the first action object and the second action object, the first target limb and the second target limb should be limbs at the same position, and the position of the first preset optical mark point is consistent with the position of the second preset optical mark point. The position of the first preset optical mark point is consistent with the position of the second preset optical mark point, which includes but is not limited to the following cases: the position of the first preset optical mark point is completely the same as that of the second preset optical mark point, and the position deviation between the position of the first preset optical mark point and that of the second preset optical mark point is within a certain error range.
Specifically, an inertial motion capture device cannot identify the relative position between two action objects while capturing position information, whereas an optical motion capture device can. The distance between the two action objects can therefore be determined through the optical motion capture device, and their relative position determined from that distance.
Specifically, if the first action object is Zhang San and the second action object is Li Si, then in determining the relative position between Zhang San and Li Si, the same optical motion capture device may be used to obtain the position information of the first preset optical marker point of Zhang San's first target limb at the target moment and the position information of the second preset optical marker point of Li Si's second target limb at the same moment; the distance between the two positions is then calculated, and the relative position between Zhang San and Li Si is determined from the calculated distance.
Specifically, in determining the relative position between Zhang San and Li Si, an optical marker point (e.g., point L1) may first be affixed at a predetermined position on Zhang San's target limb, for example at the elbow of Zhang San's left arm, and an optical marker point (e.g., point L2) affixed at a predetermined position on Li Si's target limb, for example at the elbow of Li Si's left arm. The optical motion capture device then emits infrared light; marker points L1 and L2 reflect it, and the device receives the reflected light, thereby determining the position information of L1 (recorded as the second position information) and of L2 (recorded as the third position information). The distance between Zhang San and Li Si is then determined from the second and third position information — for example, by taking the difference between them — thereby obtaining the relative position between Zhang San and Li Si.
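The distance computation just described is simply the length of the difference between the two marker positions in the optical system's coordinate frame. A minimal sketch (the coordinates shown are hypothetical):

```python
import math

def marker_distance(pos_l1, pos_l2):
    """Euclidean distance between the optical marker on Zhang San's limb (L1)
    and the one on Li Si's limb (L2), both expressed in the optical motion
    capture system's coordinate frame."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pos_l1, pos_l2)))

# Hypothetical marker positions returned by the optical motion capture device
l1 = (0.2, 1.1, 0.9)   # second position information (Zhang San's left elbow)
l2 = (1.4, 1.0, 0.9)   # third position information (Li Si's left elbow)
print(marker_distance(l1, l2))  # prints approximately 1.204
```

Note that this only works because both markers are measured by the same optical system in one shared coordinate frame — which is exactly what a purely inertial setup lacks.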
It should be noted that Zhang San's first target limb and Li Si's second target limb should be limbs of the same body part. For example, if Zhang San's first target limb is the left arm, Li Si's second target limb should also be the left arm; the relative position between the two can then be calculated accurately and has a high reference value. If instead Zhang San's first target limb is the left arm while Li Si's second target limb is the left leg, then because the two limbs belong to different body parts, the distance between them is not comparable; even if that distance were calculated, its accuracy would be extremely low and it would have no reference value.
In addition, even when the first target limb and the second target limb are limbs of the same body part, a certain deviation exists in the calculated relative position between Zhang San and Li Si if the position of the first preset optical marker point is not consistent with that of the second. For example, if Zhang San's first preset optical marker point is the elbow of the left arm while Li Si's second preset optical marker point is the wrist of the left arm, the distance calculated from the two marker points still has some reference value, but because the two marker points are far apart, the accuracy of the calculated relative position between Zhang San and Li Si is low.
In one possible implementation manner, in the process of predicting the first position information of the first target limb at the target time according to the angular velocity, the orientation of the first target limb at the target time may be predicted according to the angular velocity; and then, determining first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
Specifically, in the process of capturing the position information of the first target limb by the inertial motion capture device, the inertial motion capture device actually acquires the angular velocity of the first target limb at the current time, and after the angular velocity is obtained, the position information (referred to as first position information) of the target limb at the target time is predicted according to the angular velocity.
Specifically, after the angular velocity is obtained, the orientation of the first target limb at the target moment may be obtained by integrating the angular velocity over the time interval between the current moment and the target moment. Since the limb length of the first target limb is fixed — for example, the length of Zhang San's left arm is fixed, as is the length of Zhang San's right leg — the first position information of the first target limb at the target moment can be determined from the limb's length and its orientation. The limb length of the first target limb is measured in advance.
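Concretely, the integration step can be sketched as follows. This is a hypothetical one-joint, planar illustration with made-up sample values; a real system would integrate 3-D angular velocity (e.g., as quaternions) rather than a single angle.

```python
import math

def integrate_orientation(theta_now, omega_samples, dt):
    """Integrate sampled angular velocities (rad/s), taken between the current
    moment and the target moment, to obtain the orientation angle at the
    target moment (rectangle-rule integration)."""
    return theta_now + sum(w * dt for w in omega_samples)

def limb_end_position(theta, limb_length):
    """First position information: with a pre-measured, fixed limb length and
    the predicted orientation, the limb end lies at radius limb_length."""
    return (limb_length * math.cos(theta), limb_length * math.sin(theta))

theta = integrate_orientation(theta_now=0.0,
                              omega_samples=[0.5, 0.5, 0.4],  # three IMU readings
                              dt=0.01)                        # sampled 10 ms apart
pos = limb_end_position(theta, limb_length=0.6)               # e.g., a 0.6 m arm
```

This also makes the drift problem visible: any bias in the `omega_samples` is accumulated by the sum, which is why the optical correction in the next step is needed.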
Specifically, in the process of determining the second position information of the first target limb at the target moment based on the first preset optical mark point of the first target limb through the optical motion capture device, the fourth position information of the first preset optical mark point at the target moment may be acquired through the optical motion capture device; then, second position information of the first target limb is determined according to the fourth position information based on the predetermined motion object model.
Specifically, if the first action object is Zhang San, Zhang San's first target limb is the left arm, and the first preset optical marker point of that limb is the elbow of the left arm, the position information of the elbow at the target moment (recorded as the fourth position information) may be acquired through the optical motion capture device. The position of the left arm cannot be obtained accurately from the position of the left elbow alone; however, with the help of a preset action object model (i.e., a preset human body model), the position information of the left arm at the target moment can be obtained relatively accurately from the position information of the left elbow at that moment.
In a possible implementation manner, in the process of determining the target position of the first target limb at the target moment according to the first position information and the second position information, a first weight value of the first position information and a second weight value of the second position information may be determined; then, according to the first weight value and the second weight value, fusing the first position information and the second position information to obtain a target position of the first target limb at a target moment; and the sum of the first weight value and the second weight value is a preset value.
Specifically, after the first position information and the second position information are obtained, the target position of the first target limb at the target moment may be determined by a weighted summation: a weight value corresponding to the first position information (recorded as the first weight value) and a weight value corresponding to the second position information (recorded as the second weight value) are determined, and the two pieces of position information are then fused according to those weight values to obtain the target position of the first target limb at the target moment.
Specifically, in the process of determining the first weight value of the first position information and the second weight value of the second position information, the two weight values may be determined according to the accuracy of the first position information (denoted as the first accuracy) and the accuracy of the second position information (denoted as the second accuracy). When the first accuracy is greater than the second accuracy, the first position information is more accurate, and a larger weight value may be given to the first position information, i.e., the first weight value is greater than the second weight value; when the first accuracy is less than the second accuracy, the second position information is more accurate, and a larger weight value may be given to the second position information, i.e., the first weight value is less than the second weight value; when the first accuracy is equal to the second accuracy, the two pieces of position information are equally accurate, and the second position information may be given the same weight value as the first position information, i.e., the first weight value is equal to the second weight value.
Note that the sum of the first weight value and the second weight value is a predetermined value, such as 1, 2, 3, or the like. In one example, the predetermined value is 1, i.e., the sum of the first weight value and the second weight value is 1.
Specifically, after the first weight value and the second weight value are determined, a first product of the first weight value and the first position information and a second product of the second weight value and the second position information may be calculated; the sum of the first product and the second product is then taken as the target position of the first target limb at the target time.
In one example, if the first position information is W_1 with weight Q_1, and the second position information is W_2 with weight Q_2, the weighted sum may be expressed as W_1 × Q_1 + W_2 × Q_2; the value of this expression is the target position of the first target limb at the target time.
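As an illustrative sketch only (not part of the patent text), the accuracy-based weight selection and the weighted summation described above can be combined as follows. The function name, the use of NumPy, and normalizing positive accuracy scores into weights are all assumptions, since the text does not specify how accuracy is quantified:

```python
import numpy as np

def fuse_positions(p_first, p_second, first_accuracy, second_accuracy):
    """Fuse two position estimates by weighted summation.

    Weights are derived from (hypothetical) accuracy scores so that the
    more accurate source receives the larger weight and the two weights
    sum to the predetermined value 1, matching W_1*Q_1 + W_2*Q_2 above.
    """
    q1 = first_accuracy / (first_accuracy + second_accuracy)
    q2 = 1.0 - q1  # the two weights sum to the predetermined value 1
    return q1 * np.asarray(p_first, dtype=float) + q2 * np.asarray(p_second, dtype=float)
```

For equal accuracies the result is the midpoint of the two estimates; a higher first accuracy pulls the fused position toward the inertial estimate, consistent with the weight ordering described above.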
Fig. 2 is a schematic structural diagram of a motion capture apparatus according to another embodiment of the present disclosure, as shown in fig. 2, the apparatus 20 may include a processing module 201, a first determining module 202, and a second determining module 203; wherein:
the processing module 201 is configured to acquire an angular velocity of a first target limb of a first motion object at a current time through an inertial motion capture device, and predict first position information of the first target limb at a target time according to the angular velocity;
a first determining module 202, configured to determine, by an optical motion capture device, second position information of a first target limb at a target moment based on a first preset optical marker of the first target limb;
the second determining module 203 is configured to determine a target position of the first target limb at the target time according to the first position information and the second position information, so as to capture the motion of the first target limb.
In a possible implementation manner, the apparatus further includes a third determining module and a fourth determining module;
the third determining module is used for determining third position information of a second target limb of the second motion object at the target moment based on a second preset optical marking point of the second target limb through the optical motion capturing device;
the fourth determining module is used for determining the distance between the first action object and the second action object according to the second position information and the third position information;
the first target limb and the second target limb are limbs of the same part; the position of the first preset optical mark point is consistent with the position of the second preset optical mark point.
In one possible implementation, when predicting the first position information of the first target limb at the target moment according to the angular velocity, the processing module is configured to:
predicting the orientation of the first target limb at the target moment according to the angular velocity;
and determining first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
In a possible implementation manner, the first determining module is specifically configured to acquire, by the optical motion capture device, fourth position information of the first preset optical marker point at the target time, and to determine the second position information of the first target limb according to the fourth position information based on the predetermined action object model.
In a possible implementation manner, the second determining module is configured to determine a first weight value of the first position information and a second weight value of the second position information, where the sum of the first weight value and the second weight value is a predetermined value, and to fuse the first position information and the second position information according to the first weight value and the second weight value to obtain the target position of the first target limb at the target moment.
In a possible implementation manner, the second determining module is specifically configured to, when the first position information and the second position information are fused according to the first weight value and the second weight value to obtain the target position of the first target limb at the target time:
calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating a product sum between the first product and the second product, and using the product sum as the target position of the first target limb at the target moment.
In a possible implementation manner, the second determining module is specifically configured to, when determining the first weight value of the first location information and the second weight value of the second location information:
determining a first accuracy of the first location information and a second accuracy of the second location information;
if the first accuracy is greater than the second accuracy, determining that the first weight value is greater than the second weight value;
if the first accuracy is less than the second accuracy, determining that the first weight value is less than the second weight value;
if the first accuracy is equal to the second accuracy, determining that the first weight value is equal to the second weight value.
The apparatus provided by the embodiment of the disclosure captures first position information of a first target limb at a target moment through an inertial motion capture device, captures second position information of the first target limb at the target moment through an optical motion capture device, and determines a target position of the first target limb at the target moment according to the first position information and the second position information. In this way, the first position information captured by the inertial motion capture device can be corrected and updated with only a very small amount of second position information captured by the optical motion capture device, thereby reducing, as far as possible, the errors and offsets introduced by the inertial motion capture device that would otherwise affect the determination of the position of the first target limb. A more accurate target position of the first target limb is thus obtained, and the accuracy of motion capture is greatly improved.
It should be noted that the present embodiment is an apparatus embodiment corresponding to the method embodiment described above, and the present embodiment can be implemented in cooperation with the method embodiment described above. The related technical details mentioned in the above method embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related art details mentioned in the present embodiment can also be applied to the above-described method embodiment.
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device comprises a memory and a processor, wherein the processor may be referred to as the processing device 301 described below, and the memory comprises at least one of a Read Only Memory (ROM)302, a Random Access Memory (RAM)303, and a storage device 308, which are described below:
as shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)302 or a program loaded from a storage device 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the process described above with reference to the flow diagrams may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 309, or installed from the storage means 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring the angular velocity of a first target limb of a first motion object at the current moment through inertial motion capture equipment, and predicting first position information of the first target limb at the target moment according to the angular velocity; then, determining second position information of the first target limb at the target moment through the optical motion capture equipment based on a first preset optical mark point of the first target limb; and then, determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the motion of the first target limb.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of the module or unit does not form a limitation on the unit itself under certain conditions, for example, the acquiring module may be further described as a module for acquiring at least one event processing mode corresponding to a predetermined live event when the occurrence of the predetermined live event is detected.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a motion capture method including:
acquiring the angular velocity of a first target limb of a first motion object at the current moment through inertial motion capture equipment, and predicting first position information of the first target limb at a target moment according to the angular velocity;
determining second position information of the first target limb at the target moment based on a first preset optical mark point of the first target limb through an optical motion capture device;
and determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the motion of the first target limb.
In one possible implementation, the method further includes:
determining, by the optical motion capture device, third position information of a second target limb of a second motion object at the target moment based on a second preset optical marker of the second target limb;
determining the distance between the first action object and the second action object according to the second position information and the third position information;
the first target limb and the second target limb are limbs of the same part; the position of the first preset optical mark point is consistent with the position of the second preset optical mark point.
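As a hedged illustration (not prescribed by the patent), the distance between the two action objects can be taken as the Euclidean norm of the difference between the two marker-based limb positions, since the two preset optical marker points are at matching body locations (e.g., both left elbows). The function name and the use of NumPy are assumptions:

```python
import numpy as np

def action_object_distance(second_position, third_position):
    """Distance between the first and second action objects, computed as
    the Euclidean norm of the difference between the second position
    information (first object's limb) and the third position information
    (second object's corresponding limb)."""
    delta = np.asarray(second_position, dtype=float) - np.asarray(third_position, dtype=float)
    return float(np.linalg.norm(delta))
```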
In one possible implementation, predicting first position information of the first target limb at the target time according to the angular velocity includes:
predicting the orientation of the first target limb at the target moment according to the angular velocity;
and determining first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
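The two steps above can be sketched as follows, assuming a simple axis-angle integration over one time step (Rodrigues' rotation formula) and a limb modeled as a segment of fixed length from a joint origin. The patent does not specify the integration scheme; this, together with every name below, is an assumption:

```python
import numpy as np

def predict_limb_position(origin, direction, angular_velocity, limb_length, dt):
    """Predict a limb endpoint position one time step ahead.

    First the limb orientation is predicted by rotating the current unit
    direction through the axis-angle increment omega * dt (Rodrigues'
    formula); then the position is the joint origin plus the limb length
    along the predicted direction.
    """
    omega = np.asarray(angular_velocity, dtype=float)
    angle = np.linalg.norm(omega) * dt
    d = np.asarray(direction, dtype=float)
    if angle > 1e-12:
        axis = omega / np.linalg.norm(omega)
        # Rodrigues' rotation formula: rotate d by `angle` about `axis`
        d = (d * np.cos(angle)
             + np.cross(axis, d) * np.sin(angle)
             + axis * np.dot(axis, d) * (1.0 - np.cos(angle)))
    return np.asarray(origin, dtype=float) + limb_length * d
```

For example, a limb pointing along the x-axis rotating about the z-axis at π/2 rad/s ends up pointing along the y-axis after one second.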
In one possible implementation, determining, by the optical motion capture device, second position information of the first target limb at the target moment based on the first preset optical marker point of the first target limb includes:
acquiring fourth position information of the first preset optical mark point at the target moment through optical motion capture equipment;
and determining second position information of the first target limb according to the fourth position information based on the predetermined action object model.
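Purely as an illustration of the model-based step above: if the predetermined action object model supplies, for each limb, an offset from its preset marker point to a limb reference point, the second position information can be derived from the fourth position information by applying that offset. The patent leaves the model unspecified, so representing it as a fixed offset vector is entirely an assumption:

```python
import numpy as np

def limb_position_from_marker(marker_position, model_offset):
    """Estimate a limb reference point (e.g. a forearm midpoint) from a
    single optical marker position (e.g. the left elbow, the fourth
    position information) plus an offset taken from a predetermined
    body model. Both arguments are hypothetical stand-ins."""
    return np.asarray(marker_position, dtype=float) + np.asarray(model_offset, dtype=float)
```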
In one possible implementation manner, determining a target position of the first target limb at the target time according to the first position information and the second position information includes:
determining a first weight value of the first position information and a second weight value of the second position information, wherein the sum of the first weight value and the second weight value is a preset value;
and according to the first weight value and the second weight value, fusing the first position information and the second position information to obtain the target position of the first target limb at the target moment.
In a possible implementation manner, the fusing the first position information and the second position information according to the first weight value and the second weight value to obtain a target position of the first target limb at the target time includes:
calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating a product sum between the first product and the second product, and using the product sum as the target position of the first target limb at the target moment.
In one possible implementation manner, determining a first weight value of the first location information and a second weight value of the second location information includes:
determining a first accuracy of the first location information and a second accuracy of the second location information;
if the first accuracy is greater than the second accuracy, determining that the first weight value is greater than the second weight value;
if the first accuracy is less than the second accuracy, determining that the first weight value is less than the second weight value;
if the first accuracy is equal to the second accuracy, determining that the first weight value is equal to the second weight value.
According to one or more embodiments of the present disclosure, there is provided a motion capture apparatus including:
the processing module is used for acquiring the angular velocity of a first target limb of a first motion object at the current moment through the inertial motion capture equipment and predicting first position information of the first target limb at the target moment according to the angular velocity;
the first determining module is used for determining second position information of the first target limb at the target moment based on a first preset optical marking point of the first target limb through the optical motion capture device;
and the second determining module is used for determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the action of the first target limb.
In a possible implementation manner, the apparatus further includes a third determining module and a fourth determining module;
the third determining module is used for determining third position information of a second target limb of the second motion object at the target moment based on a second preset optical marking point of the second target limb through the optical motion capturing device;
the fourth determining module is used for determining the distance between the first action object and the second action object according to the second position information and the third position information;
the first target limb and the second target limb are limbs of the same part; the position of the first preset optical mark point is consistent with the position of the second preset optical mark point.
In one possible implementation, when predicting the first position information of the first target limb at the target moment according to the angular velocity, the processing module is configured to:
predicting the orientation of the first target limb at the target moment according to the angular velocity;
and determining first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
In a possible implementation manner, the first determining module is specifically configured to acquire, by the optical motion capture device, fourth position information of the first preset optical marker point at the target time, and to determine the second position information of the first target limb according to the fourth position information based on the predetermined action object model.
In a possible implementation manner, the second determining module is configured to determine a first weight value of the first position information and a second weight value of the second position information, where the sum of the first weight value and the second weight value is a predetermined value, and to fuse the first position information and the second position information according to the first weight value and the second weight value to obtain the target position of the first target limb at the target moment.
In a possible implementation manner, the second determining module is specifically configured to, when the first position information and the second position information are fused according to the first weight value and the second weight value to obtain the target position of the first target limb at the target time:
calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating a product sum between the first product and the second product, and using the product sum as the target position of the first target limb at the target moment.
In a possible implementation manner, the second determining module is specifically configured to, when determining the first weight value of the first location information and the second weight value of the second location information:
determining a first accuracy of the first location information and a second accuracy of the second location information;
if the first accuracy is greater than the second accuracy, determining that the first weight value is greater than the second weight value;
if the first accuracy is less than the second accuracy, determining that the first weight value is less than the second weight value;
if the first accuracy is equal to the second accuracy, determining that the first weight value is equal to the second weight value.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be understood by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features may be replaced with (but not limited to) technical features disclosed in the present disclosure having similar functions.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A motion capture method, comprising:
acquiring the angular velocity of a first target limb of a first motion object at the current moment through inertial motion capture equipment, and predicting first position information of the first target limb at a target moment according to the angular velocity;
determining, by an optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb;
and determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the action of the first target limb.
2. The method of claim 1, further comprising:
determining, by an optical motion capture device, third position information of a second target limb of a second motion object at the target time based on a second preset optical marker of the second target limb;
determining the distance between the first action object and the second action object according to the second position information and the third position information;
the first target limb and the second target limb are limbs of the same part; the position of the first preset optical mark point is consistent with the position of the second preset optical mark point.
3. The method of claim 1, wherein predicting first position information of the first target limb at a target moment in time based on the angular velocity comprises:
predicting the orientation of the first target limb at a target moment according to the angular velocity;
and determining first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
4. The method of claim 1, wherein determining, by an optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb comprises:
acquiring fourth position information of the first preset optical marker point at the target moment through the optical motion capture device;
and determining the second position information of the first target limb according to the fourth position information based on a preset motion object model.
5. The method according to any one of claims 1-4, wherein determining the target position of the first target limb at the target time based on the first position information and the second position information comprises:
determining a first weight value of first position information and a second weight value of second position information, wherein the sum of the first weight value and the second weight value is a preset value;
and according to the first weight value and the second weight value, fusing the first position information and the second position information to obtain the target position of the first target limb at the target moment.
6. The method according to claim 5, wherein fusing the first position information and the second position information according to the first weight value and the second weight value to obtain the target position of the first target limb at the target time comprises:
calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating the sum of the first product and the second product, and using the sum as the target position of the first target limb at the target time.
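The fusion in claims 5 and 6 can be sketched as a per-coordinate weighted sum. The preset value the weights sum to is assumed to be 1 here, and all names are illustrative:

```python
def fuse_positions(p_inertial, p_optical, w_first, w_second):
    # Claim 5: the two weight values sum to a preset value (assumed 1 here).
    assert abs(w_first + w_second - 1.0) < 1e-9
    # Claim 6: first product plus second product, taken per coordinate.
    return tuple(w_first * a + w_second * b
                 for a, b in zip(p_inertial, p_optical))

target = fuse_positions((1.0, 2.0, 4.0), (3.0, 6.0, 4.0), 0.5, 0.5)  # → (2.0, 4.0, 4.0)
```

With equal weights this is a plain average of the inertial prediction and the optical measurement; shifting weight toward the more trustworthy source biases the target position toward its estimate.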
7. The method of claim 5, wherein determining the first weight value of the first position information and the second weight value of the second position information comprises:
determining a first accuracy of the first location information and a second accuracy of the second location information;
determining that the first weight value is greater than the second weight value if the first accuracy is greater than the second accuracy;
determining that the first weight value is less than the second weight value if the first accuracy is less than the second accuracy;
determining that the first weight value is equal to the second weight value if the first accuracy is equal to the second accuracy.
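Claim 7 fixes only the ordering of the weights relative to the accuracies. The proportional split below is one plausible realization of that rule, with hypothetical names and an assumed weight sum of 1:

```python
def accuracy_weights(first_accuracy, second_accuracy, preset_sum=1.0):
    # Equal accuracies -> equal weights (third branch of claim 7).
    if first_accuracy == second_accuracy:
        return preset_sum / 2, preset_sum / 2
    # Otherwise split proportionally, so the more accurate position
    # information receives the larger weight (first two branches).
    total = first_accuracy + second_accuracy
    return (preset_sum * first_accuracy / total,
            preset_sum * second_accuracy / total)

w1, w2 = accuracy_weights(0.9, 0.6)  # inertial estimate more accurate, so w1 > w2
```

Any monotone mapping from accuracies to weights that preserves this ordering would satisfy the claim equally well.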
8. A motion capture device, comprising:
a processing module, configured to acquire, through an inertial motion capture device, the angular velocity of a first target limb of a first motion object at the current moment, and to predict first position information of the first target limb at a target moment according to the angular velocity;
a first determining module, configured to determine, by an optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb;
and a second determining module, configured to determine the target position of the first target limb at the target moment according to the first position information and the second position information, so as to capture the motion of the first target limb.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1-7 when executing the program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the method of any one of claims 1 to 7.
CN202010158698.5A 2020-03-09 2020-03-09 Motion capture method, motion capture device, electronic equipment and computer readable storage medium Active CN111382701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010158698.5A CN111382701B (en) 2020-03-09 2020-03-09 Motion capture method, motion capture device, electronic equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN111382701A true CN111382701A (en) 2020-07-07
CN111382701B CN111382701B (en) 2023-09-22

Family

ID=71218661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010158698.5A Active CN111382701B (en) 2020-03-09 2020-03-09 Motion capture method, motion capture device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111382701B (en)


Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030134410A1 (en) * 2002-11-14 2003-07-17 Silva Robin M. Compositions and methods for performing biological reactions
CN102323854A (en) * 2011-03-11 2012-01-18 中国科学院研究生院 Human motion capture device
CN102905007A (en) * 2011-07-25 2013-01-30 上海博路信息技术有限公司 Terminal data exchange method based on action sensing
CN103279186A (en) * 2013-05-07 2013-09-04 兰州交通大学 Multiple-target motion capturing system integrating optical localization and inertia sensing
KR20150058882A (en) * 2013-11-21 2015-05-29 한국 한의학 연구원 Apparatus and method for motion capture using inertial sensor and optical sensor
CN104834917A (en) * 2015-05-20 2015-08-12 北京诺亦腾科技有限公司 Hybrid motion capture system and method
JP2016006415A (en) * 2014-05-29 2016-01-14 アニマ株式会社 Method and apparatus for estimating position of optical marker in optical motion capture
CN105912117A (en) * 2016-04-12 2016-08-31 北京锤子数码科技有限公司 Motion state capture method and system
CN107122048A (en) * 2017-04-21 2017-09-01 甘肃省歌舞剧院有限责任公司 Motion assessment system
KR101840832B1 (en) * 2017-08-29 2018-03-21 엘아이지넥스원 주식회사 Method for controlling wearable robot based on motion information
CN108253954A (en) * 2016-12-27 2018-07-06 大连理工大学 Human body posture capture system
US20180324908A1 (en) * 2015-09-10 2018-11-08 Brava Home, Inc. In-oven camera and computer vision systems and methods
CN109242887A (en) * 2018-07-27 2019-01-18 浙江工业大学 Real-time human upper-limb motion capture method based on multiple cameras and an IMU
CN109669533A (en) * 2018-11-02 2019-04-23 北京盈迪曼德科技有限公司 Vision- and inertia-based motion capture method, apparatus and system
CN109781104A (en) * 2019-01-31 2019-05-21 深圳创维数字技术有限公司 Motion posture determination and positioning method, device, computer equipment and medium
EP3588325A1 (en) * 2018-06-27 2020-01-01 Baidu Online Network Technology (Beijing) Co., Ltd. Method, device and system for processing image tagging information
WO2020029728A1 (en) * 2018-08-06 2020-02-13 腾讯科技(深圳)有限公司 Movement track reconstruction method and device, storage medium, and electronic device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wei Fang et al.: "Self-contained optical-inertial motion capturing for assembly planning in digital factory", The International Journal of Advanced Manufacturing Technology *
Lei Xiaoyong et al.: "Research on an image-based human motion tracking system", Journal of System Simulation, no. 2 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113325950A (en) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 Function control method, device, equipment and storage medium
CN113325950B (en) * 2021-05-27 2023-08-25 百度在线网络技术(北京)有限公司 Function control method, device, equipment and storage medium
CN113902054A (en) * 2021-09-23 2022-01-07 深圳市瑞立视多媒体科技有限公司 Motion capture method based on mark points, related device and storage medium
CN114562993A (en) * 2022-02-28 2022-05-31 联想(北京)有限公司 Track processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN111382701B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN111382701B (en) Motion capture method, motion capture device, electronic equipment and computer readable storage medium
CN105359054B (en) Equipment is positioned and is orientated in space
CN115616937B (en) Automatic driving simulation test method, device, equipment and computer readable medium
CN116079697B (en) Monocular vision servo method, device, equipment and medium based on image
CN112528957A (en) Human motion basic information detection method and system and electronic equipment
CN112818898B (en) Model training method and device and electronic equipment
CN111445499B (en) Method and device for identifying target information
CN110487264B (en) Map correction method, map correction device, electronic equipment and storage medium
CN113741750B (en) Cursor position updating method and device and electronic equipment
CN113407045B (en) Cursor control method and device, electronic equipment and storage medium
CN115844381A (en) Human body action recognition method and device, electronic equipment and storage medium
CN115900713A (en) Auxiliary voice navigation method and device, electronic equipment and storage medium
CN112880675B (en) Pose smoothing method and device for visual positioning, terminal and mobile robot
CN108595095A (en) Method and apparatus based on gesture control simulated target body movement locus
CN114116081B (en) Interactive dynamic fluid effect processing method and device and electronic equipment
CN112784622B (en) Image processing method and device, electronic equipment and storage medium
CN114663553A (en) Special effect video generation method, device and equipment and storage medium
CN113536552A (en) Human body posture visual tracking system
CN113873637A (en) Positioning method, positioning device, terminal and storage medium
CN111912528A (en) Body temperature measuring system, method, device and equipment storage medium
US20230418072A1 (en) Positioning method, apparatus, electronic device, head-mounted display device, and storage medium
CN114967942B (en) Motion attitude analysis method, terminal and storage medium
US20240265641A1 (en) Augmented reality device for obtaining position information of joints of user's hand and operating method thereof
CN113672137B (en) Cursor position updating method and device and electronic equipment
CN117631844A (en) Action data correction method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

GR01 Patent grant