Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing between devices, modules or units, and are not intended to limit these devices, modules or units to being necessarily different, nor to limit the sequence or interdependence of the functions executed by them.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will understand them to mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the embodiments of the present disclosure will be described in further detail below with reference to the accompanying drawings.
Embodiments of the present disclosure provide a motion capture method, a motion capture apparatus, an electronic device, and a computer storage medium, which aim to solve the above technical problems in the prior art.
The following describes in detail the technical solutions of the embodiments of the present disclosure and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
One embodiment of the present disclosure provides a motion capture method, which is performed by a computer device; the computer device may be a terminal or a server. The terminal may be a desktop device or a mobile terminal. The server may be an individual physical server, a cluster of physical servers, or a virtual server. As shown in fig. 1, the method includes:
step S110, acquiring the angular velocity of the first target limb of the first motion object at the current moment through the inertial motion capture device, and predicting first position information of the first target limb at the target moment according to the angular velocity.
Specifically, the inertial motion capture device may capture position information of a motion object through an inertial sensor, wherein the motion object (i.e., the first motion object described above) includes, but is not limited to, a person, an animal, and the like. In practical applications, the inertial motion capture device may be bound to a target limb (i.e., the first target limb) of the motion object to capture position information of the target limb at each time point, so as to obtain a motion of the target limb according to the position information of the target limb, and further obtain a motion of the motion object according to the motion of the target limb.
If the motion of a plurality of target limbs is to be captured, an inertial motion capture device may be bound to each target limb to capture the position information of each target limb at various time points, so as to obtain the motion of each target limb according to the position information of each target limb. Wherein each inertial motion capture device captures position information in the same coordinate system.
In one example, if the motion object is a human body and the target limbs are a left arm and a right arm, an inertial motion capture device a may be bound to the left arm of the human body to obtain position information of the left arm to capture the motion of the left arm, and an inertial motion capture device B may be bound to the right arm of the human body to obtain position information of the right arm to capture the motion of the right arm.
In another example, if the motion object is a human body and the target limbs are a left leg, a right leg and a waist, an inertial motion capture device C may be bound to the left leg of the human body to obtain position information of the left leg to capture the motion of the left leg; moreover, an inertial motion capture device D is bound on the right leg of the human body to obtain the position information of the right leg so as to capture the motion of the right leg; meanwhile, an inertial motion capture device E is bound on the waist of the human body to obtain the position information of the waist so as to capture the motion of the waist.
Specifically, in the process of capturing the position information of the target limb by the inertial motion capture device, the inertial motion capture device actually acquires the angular velocity of the target limb at the current time, and after obtaining the angular velocity, predicts the position information (denoted as first position information) of the target limb at the target time according to the angular velocity.
Step S120, determining second position information of the first target limb at the target moment through the optical motion capture device based on the first preset optical mark point of the first target limb.
Specifically, since the inertial motion capture device is prone to errors and offsets during the process of capturing position information, after the position information of the target limb is captured by the inertial motion capture device, it is not advisable to directly use this position information as the final position information of the target limb; instead, the position information is first corrected and updated, and the corrected and updated position information is used as the final position information of the target limb. In this correction and update process, the position information of the target limb captured by the inertial motion capture device at the target time can be corrected and updated by using a small amount of second position information of the target limb captured by the optical motion capture device at the target time. Optical motion capture devices include, but are not limited to, optical sensors, HTC Vive head-mounted devices, and the like.
Specifically, one or several optical marker points (i.e., the first preset optical marker points) may be affixed in advance at predetermined positions of the target limb (such as skeletal joint points), and the optical marker points can reflect light (e.g., infrared light) emitted by the optical motion capture device. The target limb to which the optical marker points are affixed is the same target limb to which the inertial motion capture device is bound: if the inertial motion capture device is bound to the left arm, the optical marker points are affixed to predetermined positions on the left arm; if the inertial motion capture device is bound to the right arm, the optical marker points are affixed to predetermined positions on the right arm.
Specifically, in the process of capturing the position information of the target limb at the target time by the optical motion capture device, infrared light may be emitted by the optical motion capture device, at this time, the optical mark point on the target limb reflects the infrared light emitted by the optical motion capture device, and correspondingly, the optical motion capture device receives the infrared light reflected by the optical mark point, so as to determine the position information (marked as the second position information) of the target limb at the target time according to the optical mark point.
Step S130, determining a target position of the first target limb at the target time according to the first position information and the second position information, so as to capture the motion of the first target limb.
Specifically, after the first position information of the target limb at the target moment is acquired through the inertial motion capture device and the second position information of the same target limb at the same target moment is acquired through the optical motion capture device, the first position information and the second position information can be considered together, and the target position of the target limb at the target moment can be determined jointly from both. In this way, the errors and offsets introduced by the inertial motion capture device, and their influence on the determination of the position of the first target limb, are reduced as much as possible, and the accuracy of motion capture is greatly improved.
The motion capture method provided by the embodiment of the disclosure captures first position information of a first target limb at a target moment through an inertial motion capture device, captures second position information of the first target limb at the target moment through an optical motion capture device, and determines a target position of the first target limb at the target moment according to the first position information and the second position information. In this way, the first position information captured by the inertial motion capture device can be corrected and updated by means of a very small amount of second position information captured by the optical motion capture device, so that the errors and offsets introduced by the inertial motion capture device, and their influence on the determination of the position of the first target limb, are reduced as much as possible; a more accurate target position of the first target limb is thus obtained, and the accuracy of motion capture is greatly improved.
The following specifically introduces the method of the embodiment of the present disclosure, taking the case where the motion objects are persons as an example:
in one possible implementation manner, third position information of a second target limb of the second action object at the target moment may be determined by the optical motion capture device based on a second preset optical marker point of the second target limb; then, the distance between the first action object and the second action object is determined according to the second position information and the third position information.
Specifically, in the process of determining the distance between the first action object and the second action object, the first target limb and the second target limb should be limbs at the same position, and the position of the first preset optical mark point is consistent with the position of the second preset optical mark point. The position of the first preset optical mark point is consistent with the position of the second preset optical mark point, which includes but is not limited to the following cases: the position of the first preset optical mark point is completely the same as that of the second preset optical mark point, and the position deviation between the position of the first preset optical mark point and that of the second preset optical mark point is within a certain error range.
Specifically, the inertial motion capture device cannot identify the relative position between two motion objects in the process of capturing the position information, and the optical motion capture device can acquire the relative position between two motion objects, and at this time, the distance between two motion objects can be determined by the optical motion capture device, and the relative position between two motion objects can be determined according to the distance.
Specifically, if the first action object is Zhang San and the second action object is Li Si, in the process of determining the relative position between Zhang San and Li Si, the same optical motion capture device may be used to obtain the position information of the first preset optical marker point of the first target limb of Zhang San at the target time and the position information of the second preset optical marker point of the second target limb of Li Si at the target time, and then calculate the distance between the two pieces of position information, so as to determine the relative position between Zhang San and Li Si according to the calculated distance.
Specifically, in the process of determining the relative position between Zhang San and Li Si, an optical marker point (e.g., point L1) may first be affixed at a predetermined position of the target limb of Zhang San, for example, an optical marker point L1 may be affixed at the elbow of the left arm of Zhang San, and an optical marker point (e.g., point L2) may be affixed at a predetermined position of the target limb of Li Si, for example, an optical marker point L2 may be affixed at the elbow of the left arm of Li Si. Then, infrared light is emitted by the optical motion capture device; at this time, the optical marker point L1 and the optical marker point L2 reflect the infrared light emitted by the optical motion capture device, and correspondingly, the optical motion capture device receives the infrared light reflected by the optical marker point L1 and the optical marker point L2, so as to determine the position information of the optical marker point L1 (marked as the second position information) and the position information of the optical marker point L2 (marked as the third position information). Then, the distance between Zhang San and Li Si is determined according to the second position information and the third position information; for example, the distance between Zhang San and Li Si can be obtained by taking the difference between the second position information and the third position information, so that the relative position between Zhang San and Li Si is obtained.
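The distance computation described above can be sketched as follows (a minimal illustration only; the function name and the use of the Euclidean norm are assumptions, since the disclosure only specifies taking the difference between the two pieces of position information):

```python
import numpy as np

def relative_distance(second_position, third_position):
    """Distance between the two action objects, computed from the positions of
    their optical marker points captured in the same coordinate system."""
    diff = np.asarray(second_position, dtype=float) - np.asarray(third_position, dtype=float)
    return float(np.linalg.norm(diff))
```

For example, marker points at (0, 0, 0) and (3, 4, 0) yield a distance of 5.0.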
It should be noted that the first target limb of Zhang San and the second target limb of Li Si should be limbs of the same body part. For example, if the first target limb of Zhang San is a left arm, then the second target limb of Li Si should also be a left arm, so that the relative position between the two can be accurately calculated and has a very high reference value. If, on the other hand, the first target limb of Zhang San is a left arm while the second target limb of Li Si is a left leg, then, since the first target limb and the second target limb belong to different body parts, the distance between the two is not comparable; even if the distance between the two is calculated, its accuracy is extremely low and it has no reference value.
In addition, even if the first target limb and the second target limb are limbs of the same body part, if the position of the first preset optical marker point is not consistent with the position of the second preset optical marker point, a certain deviation exists in the calculated relative position between Zhang San and Li Si. For example, if the first preset optical marker point of Zhang San is at the elbow of the left arm and the second preset optical marker point of Li Si is at the wrist of the left arm, then, although the distance calculated according to the first preset optical marker point and the second preset optical marker point has a certain reference value, the accuracy of the calculated relative position between Zhang San and Li Si is low because the two marker points are far apart.
In one possible implementation manner, in the process of predicting the first position information of the first target limb at the target time according to the angular velocity, the orientation of the first target limb at the target time may be predicted according to the angular velocity; and then, determining first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
Specifically, in the process of capturing the position information of the first target limb by the inertial motion capture device, the inertial motion capture device actually acquires the angular velocity of the first target limb at the current time, and after the angular velocity is obtained, the position information (referred to as first position information) of the target limb at the target time is predicted according to the angular velocity.
Specifically, after the angular velocity is obtained, the orientation of the first target limb at the target time may be obtained by integrating the angular velocity over the time interval between the current time and the target time. Since the limb length of the first target limb is fixed (for example, the limb length of Zhang San's left arm is fixed, and likewise the limb length of Zhang San's right leg is fixed), the first position information of the first target limb at the target time can be determined according to the limb length and the orientation of the first target limb. The limb length of the first target limb is measured in advance.
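This prediction step can be sketched roughly as follows (an illustration only, assuming a single first-order integration step and representing the limb orientation as a unit direction vector; all function and parameter names are hypothetical, not part of the disclosure):

```python
import numpy as np

def predict_limb_position(joint_origin, limb_length, orientation_prev, angular_velocity, dt):
    """Predict the limb endpoint position at the target time from the angular
    velocity captured by the inertial device at the current time.

    joint_origin:     position of the limb's fixed joint (3-vector)
    limb_length:      limb length, measured in advance (scalar)
    orientation_prev: unit direction vector of the limb at the current time
    angular_velocity: angular velocity in rad/s (3-vector)
    dt:               interval between the current time and the target time
    """
    angle = np.linalg.norm(angular_velocity) * dt
    if angle < 1e-12:
        orientation_next = orientation_prev
    else:
        axis = angular_velocity / np.linalg.norm(angular_velocity)
        # Rodrigues' rotation formula: rotate the orientation by `angle`
        # about `axis` (the integral of the angular velocity over dt).
        orientation_next = (orientation_prev * np.cos(angle)
                            + np.cross(axis, orientation_prev) * np.sin(angle)
                            + axis * np.dot(axis, orientation_prev) * (1 - np.cos(angle)))
    # Fixed limb length plus predicted orientation give the first position information.
    return joint_origin + limb_length * orientation_next
```

Real pipelines would integrate quaternion orientation over many small steps; the single-step form above only illustrates the angular-velocity-to-position chain.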
Specifically, in the process of determining the second position information of the first target limb at the target moment based on the first preset optical mark point of the first target limb through the optical motion capture device, the fourth position information of the first preset optical mark point at the target moment may be acquired through the optical motion capture device; then, second position information of the first target limb is determined according to the fourth position information based on the predetermined motion object model.
Specifically, if the first action object is Zhang San, the first target limb of Zhang San is the left arm, and the first preset optical marker point of the first target limb is at the elbow of the left arm, then the position information of the elbow at the target time (denoted as the fourth position information) may be acquired by the optical motion capture device. Although the position information of the left elbow is thus acquired, the position information of the left arm cannot be accurately obtained from the position information of the left elbow alone. At this time, with the help of a predetermined motion object model (i.e., a preset human body model), the position information of the left arm at the target time can be obtained relatively accurately on the basis of the position information of the left elbow at the target time.
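The model-based step might be sketched as follows. This is a deliberately simplified illustration: the disclosure does not specify the form of the body model, so a fixed marker-to-limb offset taken from the model is assumed here, and all names are hypothetical:

```python
import numpy as np

def limb_position_from_marker(marker_position, model_offset):
    """Estimate the limb's reference position (second position information)
    from one marker's position (fourth position information).

    model_offset is an assumed body-model parameter: the vector from the
    marker point (e.g. the left elbow) to the limb's reference point,
    expressed in the capture coordinate system.
    """
    return np.asarray(marker_position, dtype=float) + np.asarray(model_offset, dtype=float)
```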
In a possible implementation manner, in the process of determining the target position of the first target limb at the target moment according to the first position information and the second position information, a first weight value of the first position information and a second weight value of the second position information may be determined; then, according to the first weight value and the second weight value, fusing the first position information and the second position information to obtain a target position of the first target limb at a target moment; and the sum of the first weight value and the second weight value is a preset value.
Specifically, after the first position information and the second position information are obtained, a similar weighted summation operation may be performed on the first position information and the second position information to determine the target position of the first target limb at the target time. In the process of performing weighted summation on the first position information and the second position information, the target position of the first target limb at the target moment may be obtained by performing weighted summation on the weight values corresponding to the first position information and the second position information respectively.
Specifically, in the process of weighting and summing the first position information and the second position information, a weight value (denoted as a first weight value) corresponding to the first position information and a weight value (denoted as a second weight value) corresponding to the second position information are determined, and then the first position information and the second position information are fused according to the first weight value and the second weight value to obtain a target position of the first target limb at the target moment.
Specifically, in the process of determining the first weight value of the first position information and the second weight value of the second position information, the two weight values may be determined according to the accuracy of the first position information (denoted as the first accuracy) and the accuracy of the second position information (denoted as the second accuracy). When the first accuracy is greater than the second accuracy, the first position information is more accurate, and a larger weight value can be given to the first position information, i.e., the first weight value is greater than the second weight value. When the first accuracy is less than the second accuracy, the second position information is more accurate, and a larger weight value can be given to the second position information, i.e., the first weight value is less than the second weight value. When the first accuracy is equal to the second accuracy, the first position information and the second position information are equally accurate, and the same weight value can be given to both, i.e., the first weight value is equal to the second weight value.
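One illustrative policy satisfying these rules is to split the predetermined total in proportion to the two accuracies. The proportional split itself is an assumption; the disclosure only requires that the more accurate source receive the larger weight and that the two weights sum to the predetermined value:

```python
def fusion_weights(first_accuracy, second_accuracy, total=1.0):
    """Split a fixed total weight between the inertial and optical sources
    in proportion to their accuracies; equal accuracies give equal weights,
    and the two weights always sum to `total` (the predetermined value)."""
    s = first_accuracy + second_accuracy
    q1 = total * first_accuracy / s
    q2 = total - q1  # guarantees q1 + q2 == total
    return q1, q2
```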
Note that the sum of the first weight value and the second weight value is a predetermined value, such as 1, 2, 3, or the like. In one example, the predetermined value is 1, i.e., the sum of the first weight value and the second weight value is 1.
Specifically, after the first weight value and the second weight value are determined, a first product of the first weight value and the first position information and a second product of the second weight value and the second position information may be calculated, and the sum of the first product and the second product may be used as the target position of the first target limb at the target time.
In one example, if the first position information is W_1, the weight of the first position information is Q_1, the second position information is W_2, and the weight of the second position information is Q_2, the weighted-sum expression used may be W_1 × Q_1 + W_2 × Q_2, and the value of this expression is the target position of the first target limb at the target time.
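The weighted sum W_1 × Q_1 + W_2 × Q_2 can be written directly (illustrative only; position information is treated here as a 3-D coordinate):

```python
import numpy as np

def fuse_positions(w1, q1, w2, q2):
    """Target position as the weighted sum W_1 x Q_1 + W_2 x Q_2,
    where w1/w2 are the two position estimates and q1 + q2 equals
    the predetermined value (1 in this example)."""
    return np.asarray(w1, dtype=float) * q1 + np.asarray(w2, dtype=float) * q2
```

For instance, fusing an inertial estimate (1, 0, 0) with weight 0.6 and an optical estimate (0, 1, 0) with weight 0.4 gives (0.6, 0.4, 0).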
Fig. 2 is a schematic structural diagram of a motion capture apparatus according to another embodiment of the present disclosure, as shown in fig. 2, the apparatus 20 may include a processing module 201, a first determining module 202, and a second determining module 203; wherein:
the processing module 201 is configured to acquire an angular velocity of a first target limb of a first motion object at a current time through an inertial motion capture device, and predict first position information of the first target limb at a target time according to the angular velocity;
a first determining module 202, configured to determine, by an optical motion capture device, second position information of a first target limb at a target moment based on a first preset optical marker of the first target limb;
the second determining module 203 is configured to determine a target position of the first target limb at the target time according to the first position information and the second position information, so as to capture the motion of the first target limb.
In a possible implementation manner, the apparatus further includes a third determining module and a fourth determining module;
the third determining module is used for determining, by the optical motion capture device, third position information of a second target limb of the second motion object at the target moment based on a second preset optical marker point of the second target limb;
the fourth determining module is used for determining the distance between the first action object and the second action object according to the second position information and the third position information;
the first target limb and the second target limb are limbs of the same part; the position of the first preset optical mark point is consistent with the position of the second preset optical mark point.
In one possible implementation, the processing module, in predicting first position information of the first target limb at the target moment in time from the angular velocity, is configured to:
according to the angular speed, predicting the orientation of the first target limb at the target moment;
and determining first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
In a possible implementation manner, the first determining module is specifically configured to acquire, by the optical motion capture device, fourth position information of the first preset optical marker point at the target time, and to determine the second position information of the first target limb according to the fourth position information based on the predetermined motion object model.
In a possible implementation manner, the second determining module is configured to determine a first weight value of the first position information and a second weight value of the second position information, where the sum of the first weight value and the second weight value is a predetermined value, and to fuse the first position information and the second position information according to the first weight value and the second weight value, so as to obtain the target position of the first target limb at the target moment.
In a possible implementation manner, the second determining module is specifically configured to, when the first position information and the second position information are fused according to the first weight value and the second weight value to obtain the target position of the first target limb at the target time:
calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating a product sum between the first product and the second product, and using the product sum as the target position of the first target limb at the target moment.
In a possible implementation manner, the second determining module is specifically configured to, when determining the first weight value of the first position information and the second weight value of the second position information:
determining a first accuracy of the first position information and a second accuracy of the second position information;
if the first accuracy is greater than the second accuracy, determining that the first weight value is greater than the second weight value;
if the first accuracy is less than the second accuracy, determining that the first weight value is less than the second weight value;
if the first accuracy is equal to the second accuracy, determining that the first weight value is equal to the second weight value.
The apparatus provided by the embodiment of the disclosure captures first position information of a first target limb at a target moment through an inertial motion capture device, captures second position information of the first target limb at the target moment through an optical motion capture device, and determines a target position of the first target limb at the target moment according to the first position information and the second position information. In this way, the first position information captured by the inertial motion capture device can be corrected and updated by means of a very small amount of second position information captured by the optical motion capture device, so that the errors and offsets introduced by the inertial motion capture device, and their influence on the determination of the position of the first target limb, are reduced as much as possible; a more accurate target position of the first target limb is thus obtained, and the accuracy of motion capture is greatly improved.
It should be noted that the present embodiment is an apparatus embodiment corresponding to the method embodiment described above, and the present embodiment can be implemented in cooperation with the method embodiment described above. The related technical details mentioned in the above method embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related art details mentioned in the present embodiment can also be applied to the above-described method embodiment.
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device comprises a memory and a processor, wherein the processor may be referred to as the processing device 301 described below, and the memory comprises at least one of a Read Only Memory (ROM)302, a Random Access Memory (RAM)303, and a storage device 308, which are described below:
as shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)302 or a program loaded from a storage device 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: an input device 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, or the like; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, or the like; a storage device 308 including, for example, a magnetic tape, a hard disk, or the like; and a communication device 309. The communication device 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 3 illustrates an electronic device 300 having various devices, it is to be understood that not all of the illustrated devices are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing device 301, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, by contrast, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), or the like, or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire the angular velocity of a first target limb of a first motion object at the current moment through an inertial motion capture device, and predict first position information of the first target limb at a target moment according to the angular velocity; then determine, through an optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb; and then determine the target position of the first target limb at the target moment according to the first position information and the second position information, so as to capture the motion of the first target limb.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a module or unit does not constitute a limitation on the unit itself; for example, the acquiring module may also be described as "a module for acquiring, when the occurrence of a predetermined live event is detected, at least one event processing mode corresponding to the predetermined live event".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a motion capture method including:
acquiring, through an inertial motion capture device, the angular velocity of a first target limb of a first motion object at the current moment, and predicting first position information of the first target limb at a target moment according to the angular velocity;
determining, through an optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb;
and determining the target position of the first target limb at the target moment according to the first position information and the second position information so as to capture the motion of the first target limb.
In one possible implementation, the method further includes:
determining, through the optical motion capture device, third position information of a second target limb of a second motion object at the target moment based on a second preset optical marker point of the second target limb;
determining the distance between the first motion object and the second motion object according to the second position information and the third position information;
wherein the first target limb and the second target limb are limbs of the same body part, and the position of the first preset optical marker point is consistent with the position of the second preset optical marker point.
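Because the two marker points are placed consistently on limbs of the same body part, the distance determination above reduces to a Euclidean distance between the two measured positions. A minimal sketch in Python, under the assumption that the second and third position information are 3-D coordinates in a shared capture volume (the function name is illustrative, not part of the disclosure):

```python
import math

def object_distance(second_position, third_position):
    """Distance between the first and second motion objects, taken between
    the corresponding limb positions measured by the optical motion capture
    device (same body part, consistent marker-point placement)."""
    return math.dist(second_position, third_position)

# Hypothetical 3-D limb positions (metres) of the two motion objects:
d = object_distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))  # → 5.0
```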
In one possible implementation, predicting first position information of the first target limb at the target time according to the angular velocity includes:
predicting, according to the angular velocity, the orientation of the first target limb at the target moment;
and determining first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
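This two-step prediction can be sketched as follows, assuming a short prediction interval so that the angular velocity can be integrated with a single axis-angle (Rodrigues) update, and modelling the limb as a vector of known length from its joint origin. The rest direction, helper names, and first-order integration are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def axis_angle_matrix(axis, angle):
    """Rotation matrix for a rotation of `angle` radians about `axis`
    (Rodrigues' rotation formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def predict_first_position(orientation, angular_velocity, dt,
                           joint_origin, limb_length):
    """Step 1: integrate the measured angular velocity (rad/s) over dt to
    predict the limb orientation at the target moment.
    Step 2: place the limb end point at joint_origin plus limb_length along
    the predicted limb direction (rest direction assumed to be +z)."""
    angle = np.linalg.norm(angular_velocity) * dt
    if angle > 0.0:
        orientation = axis_angle_matrix(angular_velocity, angle) @ orientation
    limb_direction = orientation @ np.array([0.0, 0.0, 1.0])
    return np.asarray(joint_origin, dtype=float) + limb_length * limb_direction
```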
In one possible implementation, determining, by the optical motion capture device, second position information of the first target limb at the target moment based on the first preset optical marker point of the first target limb includes:
acquiring, through the optical motion capture device, fourth position information of the first preset optical marker point at the target moment;
and determining the second position information of the first target limb according to the fourth position information based on a predetermined motion object model.
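A minimal sketch of this mapping, under the simplifying assumption that the predetermined motion object model stores, for each limb, a fixed offset from its preset marker point to the limb's reference point (in practice the model may encode a full skeleton; the offset form here is purely illustrative):

```python
import numpy as np

def limb_position_from_marker(fourth_position, model_offset):
    """Second position information of the first target limb: the measured
    marker-point position (fourth position information) translated by the
    model's marker-to-limb offset (an assumed, simplified model)."""
    return (np.asarray(fourth_position, dtype=float)
            + np.asarray(model_offset, dtype=float))
```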
In one possible implementation manner, determining a target position of the first target limb at the target time according to the first position information and the second position information includes:
determining a first weight value of the first position information and a second weight value of the second position information, wherein the sum of the first weight value and the second weight value is a preset value;
and according to the first weight value and the second weight value, fusing the first position information and the second position information to obtain the target position of the first target limb at the target moment.
In a possible implementation manner, the fusing the first position information and the second position information according to the first weight value and the second weight value to obtain a target position of the first target limb at the target time includes:
calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating a product sum between the first product and the second product, and using the product sum as the target position of the first target limb at the target moment.
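The product-sum fusion above can be written directly; a minimal sketch, assuming the first and second position information are 3-D coordinate vectors and the preset value of the weight sum is 1:

```python
import numpy as np

def fuse_positions(first_position, second_position, first_weight, second_weight):
    """Target position as the sum of the products first_weight * first_position
    and second_weight * second_position (the weights sum to a preset value)."""
    first_position = np.asarray(first_position, dtype=float)
    second_position = np.asarray(second_position, dtype=float)
    return first_weight * first_position + second_weight * second_position
```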
In one possible implementation manner, determining a first weight value of the first location information and a second weight value of the second location information includes:
determining a first accuracy of the first location information and a second accuracy of the second location information;
if the first accuracy is greater than the second accuracy, determining that the first weight value is greater than the second weight value;
if the first accuracy is less than the second accuracy, determining that the first weight value is less than the second weight value;
if the first accuracy is equal to the second accuracy, determining that the first weight value is equal to the second weight value.
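The accuracy comparison above only constrains the ordering of the two weight values (and that they sum to the preset value). One illustrative realization, chosen here purely as an example, is a proportional split, which satisfies all three branches:

```python
def determine_weights(first_accuracy, second_accuracy, preset_sum=1.0):
    """Assign the larger weight to the more accurate position source; equal
    weights when the accuracies are equal. Weights always sum to preset_sum.
    The proportional split is an illustrative choice; any split preserving
    the required ordering would do."""
    if first_accuracy == second_accuracy:
        first_weight = preset_sum / 2
    else:
        total = first_accuracy + second_accuracy
        first_weight = preset_sum * first_accuracy / total
    return first_weight, preset_sum - first_weight
```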
According to one or more embodiments of the present disclosure, there is provided a motion capture apparatus including:
the processing module is used for acquiring, through an inertial motion capture device, the angular velocity of a first target limb of a first motion object at the current moment, and predicting first position information of the first target limb at a target moment according to the angular velocity;
the first determining module is used for determining, through the optical motion capture device, second position information of the first target limb at the target moment based on a first preset optical marker point of the first target limb;
and the second determining module is used for determining the target position of the first target limb at the target moment according to the first position information and the second position information, so as to capture the motion of the first target limb.
In a possible implementation manner, the apparatus further includes a third determining module and a fourth determining module;
the third determining module is used for determining, through the optical motion capture device, third position information of a second target limb of a second motion object at the target moment based on a second preset optical marker point of the second target limb;
the fourth determining module is used for determining the distance between the first motion object and the second motion object according to the second position information and the third position information;
wherein the first target limb and the second target limb are limbs of the same body part, and the position of the first preset optical marker point is consistent with the position of the second preset optical marker point.
In one possible implementation, when predicting the first position information of the first target limb at the target moment according to the angular velocity, the processing module is configured for:
predicting, according to the angular velocity, the orientation of the first target limb at the target moment;
and determining first position information of the first target limb at the target moment according to the limb length and the orientation of the first target limb.
In a possible implementation manner, the first determining module is specifically configured to acquire, through the optical motion capture device, fourth position information of the first preset optical marker point at the target moment, and to determine the second position information of the first target limb according to the fourth position information based on the predetermined motion object model.
In a possible implementation manner, the second determining module is configured to determine a first weight value of the first position information and a second weight value of the second position information, a sum of the first weight value and the second weight value being a predetermined value, and to fuse the first position information and the second position information according to the first weight value and the second weight value to obtain the target position of the first target limb at the target moment.
In a possible implementation manner, the second determining module is specifically configured to, when the first position information and the second position information are fused according to the first weight value and the second weight value to obtain the target position of the first target limb at the target time:
calculating a first product of the first weight value and the first position information, and calculating a second product of the second weight value and the second position information;
and calculating a product sum between the first product and the second product, and using the product sum as the target position of the first target limb at the target moment.
In a possible implementation manner, the second determining module is specifically configured to, when determining the first weight value of the first location information and the second weight value of the second location information:
determining a first accuracy of the first location information and a second accuracy of the second location information;
if the first accuracy is greater than the second accuracy, determining that the first weight value is greater than the second weight value;
if the first accuracy is less than the second accuracy, determining that the first weight value is less than the second weight value;
if the first accuracy is equal to the second accuracy, determining that the first weight value is equal to the second weight value.
The foregoing description presents only preferred embodiments of the disclosure and illustrates the principles of the technology employed. It will be understood by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, but also encompasses other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure. For example, the above features may be interchanged with (but are not limited to) technical features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.