CN115953843A - High-efficiency motion capture method for virtual production - Google Patents


Info

Publication number
CN115953843A
Authority
CN
China
Prior art keywords
acceleration
angular velocity
whole body
data processing
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310033898.1A
Other languages
Chinese (zh)
Inventor
谢琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baolin Dev International Cultural Technology Development Beijing Co ltd
Original Assignee
Baolin Dev International Cultural Technology Development Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baolin Dev International Cultural Technology Development Beijing Co ltd filed Critical Baolin Dev International Cultural Technology Development Beijing Co ltd
Priority to CN202310033898.1A priority Critical patent/CN115953843A/en
Publication of CN115953843A publication Critical patent/CN115953843A/en
Pending legal-status Critical Current

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a high-efficiency motion capture method for virtual production, comprising the following steps: S1, collecting the coordinates of the measured object in the geomagnetic coordinate system together with the angular velocities and accelerations of the whole-body joints; S2, sending the collected coordinates and the whole-body joint angular-velocity and acceleration information through a wireless transmission module to an abnormal data processing model, which removes abnormal data from the angular-velocity and acceleration information; and S3, transmitting the coordinates and the processed angular velocities and accelerations to a data processing center for corresponding processing to obtain processed motion data. The invention is reasonably designed and ingeniously conceived: by designing an abnormal data processing model that pre-processes the motion data captured by the sensors and removes the abnormal data from it, both data processing efficiency and prediction precision are improved.

Description

High-efficiency motion capture method for virtual production
Technical Field
The invention relates to the technical field of motion recognition, in particular to a high-efficiency motion capture method for virtual production.
Background
Motion recognition is a very active field, and the recognition of human motion postures in particular draws extensive research interest. Current research is mainly applied to scenarios such as medical monitoring, human fall detection, and gesture recognition, where the key is to recognize common actions such as running, jumping, and falling; research on recognizing the motions of individual sports, however, is still at an early stage. If human posture recognition technology could be applied to various sports disciplines to help coaches judge students' levels objectively and efficiently, it would be highly valuable. Taking football as an example, skilled and accurate mastery of basic actions such as passing and shooting is a precondition for excellent performance, so these basic actions are also the focus of daily teaching and training.
At present, motion capture and recognition often rely on posture analysis with inertial sensors: an athlete wears dedicated sensors to obtain data, and a model is built to process the data for recognition. Because of individual differences among human bodies and the diversity and complexity of actions, fully and effectively representing motion postures is a major difficulty in vision-based posture analysis, and computational cost must be balanced against accuracy in concrete applications. In studies using inertial-sensor posture analysis, the athlete is often required to wear many sensors, and the resulting models can only recognize relatively simple actions such as walking, running, and falling.
Most existing schemes use posture sensors for recognition, but under conditions such as unstable signals or unstable voltage during sensor data transmission, the sensor data easily contains abnormal values, which reduce both data processing efficiency and prediction precision in the final processing stage.
Disclosure of Invention
The present invention provides a high-efficiency motion capture method for virtual production to solve the above problems in the background art.
In order to achieve the purpose, the invention adopts the following technical scheme:
a high performance motion capture method for virtual fab includes the following steps:
s1, firstly, collecting coordinates in a geomagnetic coordinate system of a measured object, and angular velocity and acceleration of joints of the whole body;
s2, sending the collected coordinates in the geomagnetic coordinate system and the angular velocity and acceleration information of the whole body joint to an abnormal data processing model through a wireless transmission module, and removing abnormal data in the angular velocity and acceleration information of the whole body joint;
s3, sending the coordinates in the geomagnetic coordinate system and the processed angular velocity and acceleration of the joint of the whole body to a data processing center for corresponding processing to obtain processed action data;
and S4, establishing a skeleton model and applying to it the motion data processed by the data processing module.
As a further improvement scheme of the technical scheme: s1, firstly, collecting coordinates in a geomagnetic coordinate system of a measured object, and angular velocity and acceleration of a whole body joint, specifically:
a plurality of position sensing units, acceleration sensing units and angular velocity sensing units are arranged on the whole body and joint points of the object to be detected.
As a further improvement scheme of the technical scheme: the position sensing unit is used for measuring the coordinates of the object to be measured in the geomagnetic coordinate system;
the action acquisition module comprises: the acceleration sensing unit is used for measuring an acceleration signal of the joint node;
and the angular velocity sensing unit is used for measuring the angular velocity signal of the joint node.
As a further improvement scheme of the technical scheme: the specific establishing steps of the abnormal data processing model are as follows:
firstly, collecting a plurality of normal samples of the coordinates in the geomagnetic coordinate system and of the angular velocities and accelerations of the whole-body joints, which together with a plurality of abnormal samples of the same quantities form a data set;
and secondly, randomly dividing the data set into a training set and a test set in proportion, training a plurality of machine learning models with the training set, testing the effect of each machine learning model with the test set, and taking the model with the best test result, namely the highest accuracy, as the abnormal data processing model and storing it.
As a further improvement scheme of the technical scheme: the data set was divided into training and test sets by 8.
As a further improvement scheme of the technical scheme: in the second step, the selected machine learning model is at least one of logistic regression, linear discriminant analysis, K-nearest neighbor, naive Bayes, support vector machine, random forest, and neural network.
As a further improvement scheme of the technical scheme: the selected machine learning model is a random forest model.
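The two-step model selection described above can be sketched with scikit-learn. Everything concrete below — the nine-dimensional feature layout, the synthetic normal/abnormal samples, and the exact candidate list (the neural-network candidate is omitted for brevity) — is an illustrative assumption, not part of the disclosure:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the collected samples: each row holds the
# geomagnetic coordinates plus joint angular-velocity/acceleration
# features; the label marks the sample as normal (0) or abnormal (1).
rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(400, 9))
X_abnormal = rng.normal(0.0, 5.0, size=(100, 9))  # abnormal: inflated variance
X = np.vstack([X_normal, X_abnormal])
y = np.array([0] * 400 + [1] * 100)

# Step two: random 8:2 split into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "lda": LinearDiscriminantAnalysis(),
    "knn": KNeighborsClassifier(),
    "naive_bayes": GaussianNB(),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(random_state=42),
}

# Train every candidate and keep the one with the highest test accuracy
# as the abnormal data processing model.
scores = {name: model.fit(X_train, y_train).score(X_test, y_test)
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
best_model = candidates[best_name]
```

On data like this, where abnormal samples differ mainly in spread, the distance- and tree-based candidates tend to score highest, which is at least consistent with the disclosure's eventual choice of a random forest.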
As a further improvement scheme of the technical scheme: a face capture module is also included for capturing actor facial movements using an infrared high-sensitivity camera.
As a further improvement scheme of the technical scheme: the data processing center includes:
the angular velocity processing unit is used for performing a single integration of the angular velocity signal to obtain the angular posture;
and the acceleration processing unit is used for estimating the roll angle and the pitch angle of the joint point from the gravity component of the acceleration signal.
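A short numerical sketch of the two processing units: a single integration of the angular-velocity signal, and roll/pitch estimation from the gravity component of a quasi-static accelerometer reading. The function names and sampling parameters are our assumptions, not the disclosure's:

```python
import numpy as np

def integrate_angular_velocity(omega, dt):
    """Single integration of an angular-velocity signal (rad/s) sampled
    every dt seconds, yielding the angular posture (rad) over time."""
    return np.cumsum(omega * dt, axis=0)

def roll_pitch_from_gravity(ax, ay, az):
    """Estimate roll and pitch (rad) of a joint node from the gravity
    component of an accelerometer reading (m/s^2); valid only when the
    joint is not accelerating strongly."""
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

# Example: a joint rotating at a constant 10 deg/s, sampled at 100 Hz.
omega = np.full((100, 1), np.radians(10.0))  # rad/s
posture = integrate_angular_velocity(omega, dt=0.01)
```

In practice the integrated gyroscope posture drifts over time while the accelerometer estimate is noisy but drift-free, so real systems usually fuse the two (e.g. with a complementary or Kalman filter); the disclosure leaves that combination to the data processing center.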
As a further improvement scheme of the technical scheme: in S4, a skeleton model is established and the motion data processed by the data processing module is applied to it, specifically: the skeleton model is established through a skeleton model establishing module, and the processed motion data is applied to the limb actions of the skeleton model.
Compared with the prior art, the invention has the beneficial effects that:
the method has reasonable design and ingenious conception, and through designing the abnormal data processing model, the action data captured by the sensor is preprocessed in advance, and the abnormal data in the action data is removed, so that the data processing efficiency is improved, and the prediction precision is improved.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical solutions of the present invention more clearly understood and to implement them in accordance with the contents of the description, the following detailed description is given with reference to the preferred embodiments of the present invention and the accompanying drawings. The detailed description of the present invention is given in detail by the following examples and the accompanying drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a system diagram of the high-efficiency motion capture method for virtual production according to the present invention;
FIG. 2 is a flow chart of the high-efficiency motion capture method for virtual production according to the present invention;
fig. 3 is a schematic diagram of a specific building step of the abnormal data processing model according to the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention. The invention is described in more detail in the following paragraphs by way of example with reference to the accompanying drawings. Advantages and features of the present invention will become apparent from the following description and from the claims. It is to be noted that the drawings are in a very simplified form and are not to precise scale, which is merely for the purpose of facilitating and distinctly claiming the embodiments of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1 to 3, in an embodiment of the present invention, a high-efficiency motion capture method for virtual production includes the following steps:
s1, mounting a plurality of position sensing units, acceleration sensing units and angular velocity sensing units on the whole body and joint points of an object to be measured, and collecting coordinates of the object to be measured in a geomagnetic coordinate system and angular velocities and accelerations of joints of the whole body, wherein the position sensing units are used for measuring the coordinates of the object to be measured in the geomagnetic coordinate system;
the action acquisition module comprises: the acceleration sensing unit is used for measuring an acceleration signal of the joint node;
the angular velocity sensing unit is used for measuring an angular velocity signal of the joint node;
s2, sending the collected coordinates in the geomagnetic coordinate system and the angular velocity and acceleration information of the whole body joint to an abnormal data processing model through a wireless transmission module, and removing abnormal data in the angular velocity and acceleration information of the whole body joint;
s3, sending the coordinates in the geomagnetic coordinate system and the processed angular velocity and acceleration of the joint of the whole body to a data processing center for corresponding processing to obtain processed action data, wherein the data processing center comprises an angular velocity processing unit and an acceleration processing unit, and the angular velocity processing unit is used for carrying out primary integration on an angular velocity signal to obtain an angular posture; the acceleration processing unit is used for estimating the roll angle and the pitch angle of the joint point according to the acceleration signal by gravity component;
and S4, establishing a skeleton model through a skeleton model establishing module, and applying the processed action data to limb actions of the skeleton model.
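A minimal sketch of S4, applying processed motion data to a skeleton model. The two-dimensional three-joint chain, the offsets, and every name here are hypothetical illustrations of the forward-kinematics idea, not the patented skeleton model establishing module:

```python
import numpy as np

# Hypothetical minimal skeleton: each joint stores its parent and a
# local offset (metres); processed motion data supplies one rotation
# angle per joint. Parents are defined before their children.
SKELETON = {
    "hip":   {"parent": None,   "offset": np.array([0.0, 0.0])},
    "knee":  {"parent": "hip",  "offset": np.array([0.0, -0.45])},
    "ankle": {"parent": "knee", "offset": np.array([0.0, -0.40])},
}

def rot2d(theta):
    """2-D rotation matrix for an angle in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def apply_motion(angles):
    """Propagate per-joint rotations down the chain (forward kinematics)
    to obtain world-space joint positions; missing joints default to 0."""
    world, rotations = {}, {}
    for name, joint in SKELETON.items():
        parent = joint["parent"]
        local_r = rot2d(angles.get(name, 0.0))
        if parent is None:
            rotations[name] = local_r
            world[name] = joint["offset"].copy()
        else:
            # Child position uses the parent's accumulated rotation.
            rotations[name] = rotations[parent] @ local_r
            world[name] = world[parent] + rotations[parent] @ joint["offset"]
    return world
```

For example, `apply_motion({"hip": np.pi / 2})` swings the whole leg 90 degrees about the hip while joints whose angle is not supplied stay at zero.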
Specifically, in S2 the abnormal data processing model is established as follows:
firstly, collecting a plurality of normal samples of the coordinates in the geomagnetic coordinate system and of the angular velocities and accelerations of the whole-body joints, which together with a plurality of abnormal samples of the same quantities form a data set;
and secondly, randomly dividing the data set into a training set and a test set in an 8:2 ratio, training a plurality of machine learning models (including logistic regression, linear discriminant analysis, K-nearest neighbor, naive Bayes, support vector machine, random forest, neural network, and the like) with the training set, testing the effect of each model with the test set, and taking the model with the best test result, namely the highest accuracy, as the abnormal data processing model and storing it; the machine learning model selected by the invention is the random forest model.
It is noted that the device further comprises a face acquisition module, which uses an infrared high-sensitivity camera to capture the facial movements of the person: the module captures the facial-area information of the object to be measured and establishes a three-dimensional model of the face region, and the obtained facial image data is then integrated with the limb motion data to obtain the complete motion data of the captured object.
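The final integration of facial and limb data can be sketched as a timestamp-alignment step; the nearest-neighbour pairing and every name below are our assumption of one plausible implementation, since the disclosure does not specify the fusion method:

```python
import numpy as np

def fuse_streams(body_ts, body_data, face_ts, face_data):
    """For every body-motion frame, attach the facial-capture frame
    closest in time (hypothetical nearest-neighbour fusion)."""
    fused = []
    for t, body in zip(body_ts, body_data):
        i = int(np.argmin(np.abs(face_ts - t)))  # nearest face frame
        fused.append({"t": t, "body": body, "face": face_data[i]})
    return fused
```

In a real pipeline the two streams would also need clock synchronization and interpolation when the cameras and inertial sensors run at different rates; this sketch assumes both are already on a common clock.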
It should be recognized that embodiments of the present invention can be realized and implemented in computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable connection, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, or the like. Aspects of the invention may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated onto a computing platform, such as a hard disk, optically read and/or write storage media, RAM, ROM, etc., so that it is readable by a programmable computer, which when read by the computer can be used to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The present invention is not limited to the above embodiments, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention as long as the technical effects of the present invention are achieved by the same means. The technical solution and/or the embodiments thereof may be variously modified and varied within the scope of the present invention.

Claims (10)

1. A high-efficiency motion capture method for virtual production, comprising the following steps:
s1, firstly, collecting coordinates in a geomagnetic coordinate system of a measured object, and angular velocity and acceleration of joints of the whole body;
s2, sending the collected coordinates in the geomagnetic coordinate system and the angular velocity and acceleration information of the whole body joint to an abnormal data processing model through a wireless transmission module, and removing abnormal data in the angular velocity and acceleration information of the whole body joint;
s3, sending the coordinates in the geomagnetic coordinate system and the processed angular velocity and acceleration of the joint of the whole body to a data processing center for corresponding processing to obtain processed action data;
and S4, establishing a skeleton model and applying to it the motion data processed by the data processing module.
2. The high-efficiency motion capture method for virtual production as claimed in claim 1, wherein in S1 the coordinates of the measured object in the geomagnetic coordinate system and the angular velocities and accelerations of the whole-body joints are first collected, specifically:
a plurality of position sensing units, acceleration sensing units and angular velocity sensing units are arranged on the whole body and joint points of the object to be detected.
3. The high-efficiency motion capture method for virtual production of claim 1, wherein the position sensing unit is used to measure the coordinates of the measured object in the geomagnetic coordinate system;
the action acquisition module comprises: the acceleration sensing unit is used for measuring an acceleration signal of the joint node;
and the angular velocity sensing unit is used for measuring an angular velocity signal of the joint node.
4. The method of claim 1, wherein the abnormal data processing model is created by the steps of:
firstly, collecting a plurality of normal samples of the coordinates in the geomagnetic coordinate system and of the angular velocities and accelerations of the whole-body joints, which together with a plurality of abnormal samples of the same quantities form a data set;
and secondly, randomly dividing the data set into a training set and a test set in proportion, training a plurality of machine learning models with the training set, testing the effect of each machine learning model with the test set, and taking the model with the best test result, namely the highest accuracy, as the abnormal data processing model and storing it.
5. The method of claim 1, wherein the data set is divided into the training set and the test set in an 8:2 ratio.
6. The method according to claim 1, wherein in the second step the selected machine learning model is at least one of logistic regression, linear discriminant analysis, K-nearest neighbor, naive Bayes, support vector machine, random forest, and neural network.
7. The method of claim 1, wherein the selected machine learning model is a random forest model.
8. The high-efficiency motion capture method for virtual production of claim 1, further comprising a face capture module for capturing the actor's facial movements using an infrared high-sensitivity camera.
9. The high-efficiency motion capture method for virtual production of claim 1, wherein the data processing center comprises:
the angular velocity processing unit, used for performing a single integration of the angular velocity signal to obtain the angular posture;
and the acceleration processing unit, used for estimating the roll angle and the pitch angle of the joint point from the gravity component of the acceleration signal.
10. The high-efficiency motion capture method for virtual production as recited in claim 1, wherein in S4 a skeleton model is established and the motion data processed by the data processing module is applied to it, specifically: the skeleton model is established through the skeleton model establishing module, and the processed motion data is applied to the limb actions of the skeleton model.
CN202310033898.1A 2023-01-10 2023-01-10 High-efficiency motion capture method for virtual production Pending CN115953843A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310033898.1A CN115953843A (en) 2023-01-10 2023-01-10 High-efficiency motion capture method for virtual production

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310033898.1A CN115953843A (en) 2023-01-10 2023-01-10 High-efficiency motion capture method for virtual production

Publications (1)

Publication Number Publication Date
CN115953843A (en) 2023-04-11

Family

ID=87289109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310033898.1A Pending CN115953843A (en) 2023-01-10 2023-01-10 High-efficiency motion capture method for virtual production

Country Status (1)

Country Link
CN (1) CN115953843A (en)

Similar Documents

Publication Publication Date Title
Johnston et al. Smartwatch-based biometric gait recognition
Liu et al. Fusion of inertial and depth sensor data for robust hand gesture recognition
KR101872907B1 (en) Motion analysis appratus and method using dual smart band
CN108279773B (en) Data glove based on MARG sensor and magnetic field positioning technology
Surer et al. Methods and technologies for gait analysis
CN111744156B (en) Football action recognition and evaluation system and method based on wearable equipment and machine learning
CN111552383A (en) Finger identification method and system of virtual augmented reality interaction equipment and interaction equipment
Hsu et al. Random drift modeling and compensation for mems-based gyroscopes and its application in handwriting trajectory reconstruction
Beily et al. A sensor based on recognition activities using smartphone
CN106970705A (en) Motion capture method, device and electronic equipment
Liu et al. Automatic fall risk detection based on imbalanced data
Yuan et al. Adaptive recognition of motion posture in sports video based on evolution equation
CN116226727A (en) Motion recognition system based on AI
TWI812053B (en) Positioning method, electronic equipment and computer-readable storage medium
CN115953843A (en) High-efficiency motion capture method for virtual production
Sung et al. Motion quaternion-based motion estimation method of MYO using K-means algorithm and Bayesian probability
CN111861275B (en) Household work mode identification method and device
Zhang Track and field training state analysis based on acceleration sensor and deep learning
Xia et al. Real-time recognition of human daily motion with smartphone sensor
İsmail et al. Human activity recognition based on smartphone sensor data using cnn
Chen et al. An integrated sensor network method for safety management of construction workers
Nana et al. The elderly’s falling motion recognition based on kinect and wearable sensors
Jia et al. Condor: Mobile Golf Swing Tracking via Sensor Fusion using Conditional Generative Adversarial Networks.
Htoo et al. Privacy preserving human fall recognition using human skeleton data
Lin et al. Toward detection of driver drowsiness with commercial smartwatch and smartphone

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination