CN115775347A - Taijiquan identification method based on fusion information, terminal device and storage medium - Google Patents

Taijiquan identification method based on fusion information, terminal device and storage medium

Info

Publication number
CN115775347A
CN115775347A
Authority
CN
China
Prior art keywords
information
skeleton joint
joint information
depth image
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111301208.3A
Other languages
Chinese (zh)
Inventor
王浩宇 (Wang Haoyu)
杨珊莉 (Yang Shanli)
吴剑煌 (Wu Jianhuang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian University Of Traditional Chinese Medicine Subsidiary Rehabilitation Hospital
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Fujian University Of Traditional Chinese Medicine Subsidiary Rehabilitation Hospital
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian University Of Traditional Chinese Medicine Subsidiary Rehabilitation Hospital, Shenzhen Institute of Advanced Technology of CAS filed Critical Fujian University Of Traditional Chinese Medicine Subsidiary Rehabilitation Hospital
Priority to CN202111301208.3A priority Critical patent/CN115775347A/en
Priority to PCT/CN2021/143893 priority patent/WO2023077659A1/en
Publication of CN115775347A publication Critical patent/CN115775347A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776 - Validation; Performance evaluation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a Taijiquan recognition method, a terminal device, and a computer storage medium. The Taijiquan recognition method comprises the following steps: acquiring a first depth image and sensor data; inputting the first depth image into a preset deep learning model and acquiring the output first skeleton joint information; acquiring second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and the sensor data; performing cross validation on the first skeleton joint information using the second skeleton joint information; and outputting the first skeleton joint information after the validation succeeds. In this way, the Taijiquan recognition method uses the sensor data to cross-validate the output of the deep learning model, effectively improving the accuracy of Taijiquan action recognition.

Description

Taijiquan identification method based on fusion information, terminal device and storage medium
Technical Field
The present application relates to the field of motion recognition technologies, and in particular, to a Taijiquan recognition method based on fusion information, a terminal device, and a storage medium.
Background
Human motion posture recognition and joint motion data are widely used in many aspects of daily life, such as the sports industry, the rehabilitation industry, and the security field. For example, Taijiquan is an important activity for evaluating and exercising the motor and cardiopulmonary capabilities of the human body. High-precision, real-time human motion posture recognition is essential for raising the level of development of these industries.
With rising social and economic development, people pay increasing attention to physical health. At the same time, as the population ages, more and more elderly people suffer from deteriorating physical function and reduced motor ability and need external assistance for rehabilitation training. Evaluation is the first step of rehabilitation. Because of the underdeveloped rehabilitation education level and the shortage of rehabilitation medical resources in China, rehabilitation of the elderly has long been a pain point; in particular, evaluation of elderly motor ability has always depended on the subjective judgment of a rehabilitation therapist, which is inefficient. To free up medical resources and improve evaluation efficiency, automatic recognition of human motion postures and accurate acquisition of joint motion data are of great importance.
One of the main technical difficulties in recognizing Taijiquan and other human motion postures is how to recognize complex human motion accurately and in real time. This is especially true in fields such as motion evaluation, which place high precision requirements on joint motion data; accurate and fast recognition results and motion data are the basis of all subsequent work.
Disclosure of Invention
The application provides a Taijiquan identification method based on fusion information, a terminal device and a storage medium.
The application provides a Taiji fist identification method based on fusion information, which comprises the following steps:
acquiring a first depth image and sensor data;
inputting the first depth image into a preset deep learning model, and acquiring output first skeleton joint information;
acquiring second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and sensor data;
performing cross validation on the first skeleton joint information by using the second skeleton joint information;
and outputting the first skeleton joint information after the verification is successful.
The Taijiquan identification method further comprises the following steps:
initializing a depth camera and an inertial sensor;
acquiring a second depth image by using the depth camera;
inputting the second depth image into the preset deep learning model, and acquiring output third skeleton joint information, wherein the third skeleton joint information comprises a plurality of skeleton joint information;
and establishing a mapping relation between each piece of skeleton joint information and the inertial sensor at the corresponding position, and recording the initial position of each inertial sensor in a world coordinate system.
Wherein the acquiring of second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and the sensor data comprises:
acquiring a time interval between the first depth image and an adjacent depth image;
acquiring speed information and acceleration information of the inertial sensor at acquisition time corresponding to the first depth image based on the sensor data;
and acquiring the second skeleton joint information by using the speed information and the acceleration information of the inertial sensor and the time interval.
Wherein the cross-validating the first skeleton joint information using the second skeleton joint information comprises:
extracting first human body orientation information from the first skeleton joint information;
extracting second human body orientation information from the second skeleton joint information;
acquiring orientation matching degree based on the first human body orientation information and the second human body orientation information;
and performing cross validation on the first skeleton joint information through the matching degree.
Wherein the extracting of first human body orientation information from the first skeleton joint information comprises:
acquiring a first joint point position and a second joint point position from the first skeleton joint information;
determining the first human orientation information based on the first joint point position and the second joint point position;
the extracting second human body orientation information from the second skeleton joint information includes:
acquiring a third joint position and a fourth joint position from the second skeleton joint information;
determining the second human body orientation information based on the third joint point position and the fourth joint point position.
The Taijiquan identification method further comprises the following steps:
when the matching degree is greater than 0, the verification succeeds, and the first skeleton joint information is output;
and when the matching degree is less than 0, the verification fails, and the second skeleton joint information is output.
The Taijiquan identification method further comprises the following steps:
and when the matching degree is less than 0, the verification fails, the first skeleton joint information is discarded, and the preset deep learning model is trained using the second skeleton joint information.
The Taijiquan identification method further comprises the following steps:
when the matching degree is greater than 0, the verification succeeds, and a correctness probability value output by the preset deep learning model is acquired;
judging whether the correctness probability value is greater than a preset probability value;
and if so, repositioning the inertial sensors based on the first skeleton joint information.
The application also provides a terminal device, which comprises an acquisition module, an image module, a sensor module, and an action recognition module; wherein:
the acquisition module is used for acquiring a first depth image and sensor data;
the image module is used for inputting the first depth image into a preset deep learning model and acquiring output first skeleton joint information;
the sensor module is used for acquiring second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and sensor data;
the action recognition module is used for performing cross validation on the first skeleton joint information by adopting the second skeleton joint information and outputting the first skeleton joint information after the validation is successful.
The present application further provides another terminal device, which includes a memory and a processor, wherein the memory is coupled to the processor;
wherein the memory is used for storing program data, and the processor is used for executing the program data to realize the Taijiquan recognition method.
The present application also provides a computer storage medium for storing program data which, when executed by a processor, is used to implement the method of taijiquan recognition described above.
The beneficial effects of this application are as follows: the terminal device acquires a first depth image and sensor data; inputs the first depth image into a preset deep learning model and acquires the output first skeleton joint information; acquires second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and the sensor data; performs cross validation on the first skeleton joint information using the second skeleton joint information; and outputs the first skeleton joint information after the validation succeeds. In this way, the Taijiquan recognition method uses the sensor data to cross-validate the output of the deep learning model, effectively improving the accuracy of Taijiquan action recognition.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort. Wherein:
fig. 1 is a schematic flowchart of an embodiment of a taijiquan recognition method provided in the present application;
FIG. 2 is a schematic diagram of the Taijiquan recognition flow provided by the present application;
FIG. 3 is a schematic illustration of calculating the right-side orientation of the body from inertial sensors, as provided by the present application;
fig. 4 is a schematic structural diagram of an embodiment of a terminal device provided in the present application;
fig. 5 is a schematic structural diagram of another embodiment of a terminal device provided in the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Existing human motion posture recognition methods fall mainly into two categories. In the first, a visual acquisition device extracts the human skeleton using a traditional computer vision algorithm or a deep learning algorithm, and then recognizes the motion posture; representative examples include Microsoft Kinect, Intel RealSense, and OpenPose. These methods recognize the human skeleton from color and depth images and, based on that result, calculate the position, velocity, acceleration, and other information of the joint nodes on the skeleton. The second category directly acquires the positions, velocities, accelerations, and other information of the joint nodes on the skeleton through worn inertial sensors.
However, current human motion posture recognition methods have low accuracy for certain actions, for example, recognizing the side profile of a human body, and especially distinguishing the front and back of the body as it turns. In addition, existing vision-based methods first extract the human skeleton and then calculate joint information from the skeleton's motion; once the skeleton information is extracted incorrectly, the calculated joint motion data differs substantially from the real data, so the precision of the joint motion data is not high.
To solve these problems, the application provides a Taijiquan recognition method and device based on fusion information. The terminal device takes the color depth image captured by a depth camera as one input; meanwhile, the subject wears inertial sensors on key parts of the body, and the system acquires the inertial sensor data as a second input. The terminal device inputs the color depth image into the trained deep learning model to obtain a preliminary extraction of the human skeleton information, cross-validates this result against the joint position information acquired by the inertial sensors, and finally fuses the two kinds of information to obtain accurate human skeleton information and accurate joint motion data. The resulting data are also used as training data to continually strengthen the training of the deep learning model.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic flow chart of an embodiment of the Taijiquan recognition method provided by the present application, and fig. 2 is a schematic flow chart of the Taijiquan recognition process provided by the present application.
The Taijiquan recognition method is applied to a terminal device, where the terminal device may be a server, or a system in which a server and a terminal device cooperate with each other. Accordingly, the parts included in the terminal device, such as units, sub-units, modules, and sub-modules, may all be disposed in the server, or may be disposed separately in the server and the terminal device.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing distributed servers, or as a single software or software module, and is not limited herein. In some possible implementations, the taijiquan recognition method of embodiments of the present application may be implemented by a processor calling computer readable instructions stored in a memory.
The motions in the embodiments of the present application include motions from many fields and of many kinds, for example, Taijiquan motions, dance motions, fitness motions, everyday motions, and the like. The following embodiments do not further enumerate the types of actions that may be applicable.
Specifically, as shown in fig. 1, the method for identifying a taijiquan in the embodiment of the present application specifically includes the following steps:
step S11: a first depth image is acquired, along with sensor data.
In the embodiment of the present application, as shown in fig. 2, the terminal device uses a depth camera to acquire a color depth image including a motion of a user, that is, a first depth image; in addition, the terminal equipment adopts an inertial sensor to acquire sensor data of each skeleton joint of the user in the motion process, wherein the sensor data specifically comprises speed data, acceleration data and the like of each skeleton joint.
Specifically, before the taijiquan recognition method according to the embodiment of the application is implemented, a user needs to build a hardware environment first, so as to initialize the depth camera and the inertial sensor.
On one hand, a depth camera needs to be set up to capture the user's motion, primarily as a color RGB image data stream and a depth image data stream. On the other hand, the user needs to wear N lightweight inertial sensors at the human joints A_1 to A_N; in joint order, S_1 to S_N denote the inertial sensors at the corresponding joints. All inertial sensors are connected through Bluetooth, WiFi, or another wireless transmission mode, and each is connected to the computing host.
Further, initializing the inertial sensor also includes a first calibration of the inertial sensor position.
When the Taijiquan recognition method formally starts, the user is required to face the depth camera with both feet apart and both hands hanging naturally. When the depth camera detects the user's image for the first time, it inputs the captured color depth image into the pre-established preset deep learning model, which computes and recognizes the user's skeleton information, identifying the human joints A_1 to A_N in the skeleton information.
At this time, the terminal device maps the index or number of each human skeleton joint to the index or number of the corresponding inertial sensor, and records the position of each inertial sensor in the world coordinate system as its starting position p_i.
In the world coordinate system of the embodiment of the present application, the midpoint between the two feet of the human skeleton model is the origin, the direction straight ahead of the body is the positive z-axis, the right side of the body is the positive x-axis, and the head direction is the positive y-axis.
In addition, the preset deep learning model of the embodiment of the present application can be established by traditional supervised deep learning over a large amount of labeled video data. The deep learning model takes the color depth image captured by the depth camera as input and computes the motion skeleton information of the human body in the image.
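For illustration, the following is a minimal sketch of this calibration step in Python, assuming the sensors are worn in the same order as the recognized joints; the function name, the dictionary layout, and the index-to-index mapping are illustrative assumptions, not details from the patent:

```python
import numpy as np

def initialize_sensor_mapping(skeleton_joints, sensor_positions):
    # skeleton_joints  : {joint_index: (x, y, z)} from the deep learning model
    # sensor_positions : {sensor_index: (x, y, z)} reported during calibration
    joint_to_sensor = {}   # joint index -> sensor index mapping
    start_positions = {}   # starting position p_i of each sensor
    for i in sorted(skeleton_joints):
        joint_to_sensor[i] = i  # sensors assumed worn in joint order
        start_positions[i] = np.asarray(sensor_positions[i], dtype=float)
    return joint_to_sensor, start_positions
```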
Step S12: and inputting the first depth image into a preset deep learning model, and acquiring the output first skeleton joint information.
In the embodiment of the application, as shown in fig. 2, the terminal device inputs the first depth image into the preset deep learning model, so as to identify the skeleton and the nodes of the human body, and obtain the first skeleton joint information. The preset deep learning model is obtained by training in advance according to the offline training process introduced in the step S11.
Step S13: and acquiring second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and the sensor data.
In the embodiment of the application, the terminal device obtains the time interval Δt from the difference between the acquisition times of the first depth image and the previously acquired depth image. The terminal device can configure the depth camera in advance to capture at a uniform interval, so Δt is typically a fixed value.
The terminal device then combines the sensor data captured by the inertial sensors, including the acceleration data and velocity data, with the time interval Δt, and calculates the new position of each inertial sensor in the world coordinate system by time integration.

Specifically, taking the forward Euler method as an example, the position p_i^(n+1) of inertial sensor i at step n+1 is calculated as:

p_i^(n+1) = p_i^n + Δt · v_i^n + (Δt²/2) · a_i^n

where v_i^n is the velocity and a_i^n is the acceleration of inertial sensor i at step n.
In other embodiments, other time integration methods may be employed, such as the Runge-Kutta method.
By this method, the positions of all the inertial sensors in the world coordinate system at the moment of the currently acquired depth image can be calculated, thereby obtaining the second skeleton joint information corresponding to the first depth image.
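As a concrete illustration, the following sketch applies the update above to every sensor; the exact form of the reconstructed formula (a velocity term plus a half-acceleration term) and all function names are assumptions:

```python
import numpy as np

def integrate_sensor_position(p_n, v_n, a_n, dt):
    # One forward-Euler step for a single inertial sensor:
    # p^(n+1) = p^n + dt * v^n + 0.5 * dt^2 * a^n
    return p_n + dt * v_n + 0.5 * dt**2 * a_n

def second_skeleton_info(positions, velocities, accelerations, dt):
    # Updating every sensor yields the second skeleton joint
    # information corresponding to the current depth image.
    return {i: integrate_sensor_position(np.asarray(positions[i], float),
                                         np.asarray(velocities[i], float),
                                         np.asarray(accelerations[i], float),
                                         dt)
            for i in positions}
```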
Step S14: and performing cross validation on the first skeleton joint information by adopting the second skeleton joint information.
In this embodiment, the terminal device may perform cross-validation on the second skeleton joint information and the first skeleton joint information by using the human body orientation.
Specifically, from the first skeleton joint information calculated by the preset deep learning model, the terminal device may obtain the positions of the user's left foot and right foot, denoted p'_l and p'_r. The current right-side orientation of the human body can then be calculated as:

d' = (p'_r - p'_l) / ‖p'_r - p'_l‖

The terminal device may obtain the positions of all the inertial sensors in the world coordinate system from the second skeleton joint information. If the index of the left-foot sensor is m and the index of the right-foot sensor is n, then, referring to fig. 3, the current right-side orientation of the human body is:

d = (p_n - p_m) / ‖p_n - p_m‖
further, in the embodiment of the present application, μ = d · d' is defined as a matching degree of the second skeleton joint information detected by the inertial sensor and the first skeleton joint information calculated by the deep learning model in the human body orientation.
When the matching degree is greater than 0, the human body orientation calculated by the deep learning model is the same as the orientation detected by the inertial sensors, and the process proceeds to step S15.
When the matching degree is less than 0, the human body orientation calculated by the deep learning model is opposite to the orientation detected by the inertial sensors. In this case, the orientation calculated from the inertial sensors is taken as authoritative; that is, the second skeleton joint information is output to represent the human motion posture and joint motion data.
Further, if the matching degree is less than 0, the human body orientation output by the deep learning model is considered erroneous, and the model's result can be corrected. Specifically, the terminal device can discard the erroneous result of the deep learning model and save the correct result calculated from the inertial sensors as training data with which to continue training the deep learning model.
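The orientation check itself is a short computation. Below is a minimal sketch of the cross-validation, assuming both sources expose foot positions under a shared index; the helper names and index arguments are illustrative assumptions:

```python
import numpy as np

def right_side_orientation(left_foot, right_foot):
    # Unit vector pointing toward the body's right side,
    # computed from the left- and right-foot positions.
    d = np.asarray(right_foot, float) - np.asarray(left_foot, float)
    return d / np.linalg.norm(d)

def cross_validate(first_info, second_info, left_idx, right_idx):
    # mu = d . d': matching degree between the model's orientation d'
    # and the sensors' orientation d. Returns (info_to_output, mu).
    d_prime = right_side_orientation(first_info[left_idx], first_info[right_idx])
    d = right_side_orientation(second_info[left_idx], second_info[right_idx])
    mu = float(np.dot(d, d_prime))
    if mu > 0:
        return first_info, mu    # orientations agree: keep the model result
    return second_info, mu       # orientations opposed: fall back to sensors
```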
Step S15: and outputting the first skeleton joint information after the verification is successful.
In the embodiment of the application, when the matching degree is larger than 0, the terminal device outputs the first skeleton joint information to represent the human motion posture and the joint motion data.
Further, when the matching degree is greater than 0, the human body orientation output by the deep learning model can be considered correct. At this time, the positions of the inertial sensors can be recalibrated to avoid the position drift caused by the accumulation of error in the time integration.
Specifically, when the deep learning model of the embodiment of the present application computes and outputs the skeleton model and joint positions of the human body, it simultaneously produces a value between 0 and 1, denoted η, representing the probability that the output result is correct. When the user faces the vision sensor directly, η is high; when the user turns around, η is low.
When η is greater than 0.8, the terminal device can recalibrate, that is, reposition the inertial sensors so that their positions are consistent with the joint positions calculated by the deep learning model. The calibration process is the same as during initialization and is not repeated here.
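A short sketch of this recalibration rule follows; the 0.8 threshold is taken from the description above, while the function name and the mutable dictionary of sensor positions are assumptions:

```python
ETA_THRESHOLD = 0.8  # preset probability value from the description

def maybe_recalibrate(mu, eta, first_info, sensor_positions):
    # When verification succeeds (mu > 0) and the model's confidence eta
    # exceeds the threshold, snap each sensor's recorded position back to
    # the joint position in the first skeleton joint information,
    # countering drift accumulated by time integration.
    if mu > 0 and eta > ETA_THRESHOLD:
        for i, joint_pos in first_info.items():
            sensor_positions[i] = joint_pos
    return sensor_positions
```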
In the embodiment of the application, the terminal device acquires a first depth image and sensor data; inputs the first depth image into a preset deep learning model and acquires the output first skeleton joint information; acquires second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and the sensor data; performs cross validation on the first skeleton joint information using the second skeleton joint information; and outputs the first skeleton joint information after the validation succeeds. In this way, the Taijiquan recognition method uses the sensor data to cross-validate the output of the deep learning model, effectively improving the accuracy of action recognition.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
To implement the taijiquan recognition method according to the foregoing embodiment, the present application further provides a terminal device, and specifically refer to fig. 4, where fig. 4 is a schematic structural diagram of an embodiment of the terminal device according to the present application.
The terminal device 400 of the embodiment of the application includes an acquisition module 41, an image module 42, a sensor module 43, and an action recognition module 44; wherein:
the acquisition module 41 is configured to acquire the first depth image and the sensor data.
The image module 42 is configured to input the first depth image into a preset deep learning model, and obtain output first skeleton joint information.
The sensor module 43 is configured to obtain second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and sensor data.
The motion recognition module 44 is configured to perform cross validation on the first skeleton joint information by using the second skeleton joint information, and output the first skeleton joint information after the cross validation is successful.
To implement the taijiquan recognition method according to the foregoing embodiment, the present application further provides another terminal device, and specifically please refer to fig. 5, where fig. 5 is a schematic structural diagram of another embodiment of the terminal device according to the present application.
The terminal device 500 of the embodiment of the present application includes a memory 51 and a processor 52, wherein the memory 51 and the processor 52 are coupled.
The memory 51 is used for storing program data, and the processor 52 is used for executing the program data to realize the taijiquan recognition method according to the above-mentioned embodiment.
In the present embodiment, the processor 52 may also be referred to as a CPU (Central Processing Unit). The processor 52 may be an integrated circuit chip having signal processing capabilities. The processor 52 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor 52 may be any conventional processor or the like.
The present application also provides a computer storage medium, as shown in fig. 6, a computer storage medium 600 is used for storing program data 61, and when the program data 61 is executed by a processor, the method for identifying a taijiquan is implemented as described in the above embodiments.
The present application also provides a computer program product, wherein the computer program product comprises a computer program operable to cause a computer to execute the method for identifying a taijiquan as described in the embodiments of the present application. The computer program product may be a software installation package.
When implemented in the form of a software functional unit and sold or used as a stand-alone product, the Taijiquan recognition method of the above embodiments may be stored in a device, for example, a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, or in whole or in part, may be embodied in a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (11)

1. A Taijiquan recognition method based on fusion information is characterized by comprising the following steps:
acquiring a first depth image and sensor data;
inputting the first depth image into a preset deep learning model, and acquiring output first skeleton joint information;
acquiring second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and sensor data;
performing cross validation on the first skeleton joint information by using the second skeleton joint information;
and outputting the first skeleton joint information after the verification is successful.
2. The Taijiquan recognition method according to claim 1, wherein
the Taijiquan identification method further comprises the following steps:
initializing a depth camera and an inertial sensor;
acquiring a second depth image by using the depth camera;
inputting the second depth image into the preset deep learning model, and acquiring output third skeleton joint information, wherein the third skeleton joint information comprises a plurality of skeleton joint information;
and establishing a mapping relation between each piece of skeleton joint information and the inertial sensor at the corresponding position, and recording the initial position of each inertial sensor in a world coordinate system.
3. The Taijiquan recognition method according to claim 2, wherein
the acquiring of the second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and the sensor data comprises:
acquiring a time interval between the first depth image and an adjacent depth image;
acquiring speed information and acceleration information of the inertial sensor at acquisition time corresponding to the first depth image based on the sensor data;
and acquiring the second skeleton joint information by using the speed information and acceleration information of the inertial sensor and the time interval.
4. The Taijiquan recognition method according to claim 1 or 3, wherein
the cross-validating of the first skeleton joint information using the second skeleton joint information comprises:
extracting first human body orientation information from the first skeleton joint information;
extracting second human body orientation information from the second skeleton joint information;
acquiring orientation matching degree based on the first human body orientation information and the second human body orientation information;
and performing cross validation on the first skeleton joint information through the matching degree.
5. The Taijiquan recognition method according to claim 4, wherein
the extracting of first human body orientation information from the first skeleton joint information comprises:
acquiring a first joint point position and a second joint point position from the first skeleton joint information;
determining the first human orientation information based on the first joint point position and the second joint point position;
the extracting of second human body orientation information from the second skeleton joint information comprises:
acquiring a third joint position and a fourth joint position from the second skeleton joint information;
determining the second human body orientation information based on the third joint point position and the fourth joint point position.
6. The Taijiquan recognition method according to claim 4, wherein
the Taijiquan recognition method further comprises:
when the matching degree is greater than 0, the verification succeeds, and the first skeleton joint information is output;
and when the matching degree is less than 0, the verification fails, and the second skeleton joint information is output.
7. The Taijiquan recognition method according to claim 6, wherein
the Taijiquan recognition method further comprises:
when the matching degree is less than 0, the verification fails, the first skeleton joint information is discarded, and the preset deep learning model is trained using the second skeleton joint information.
8. The Taijiquan recognition method according to claim 6, wherein
the Taijiquan recognition method further comprises:
when the matching degree is greater than 0, the verification succeeds, and a correctness probability value output by the preset deep learning model is acquired;
judging whether the correctness probability value is greater than a preset probability value;
and if so, repositioning the inertial sensors based on the first skeleton joint information.
9. A terminal device, characterized by comprising an acquisition module, an image module, a sensor module, and an action recognition module; wherein:
the acquisition module is used for acquiring a first depth image and sensor data;
the image module is used for inputting the first depth image into a preset deep learning model and acquiring output first skeleton joint information;
the sensor module is used for acquiring second skeleton joint information corresponding to the first depth image based on the acquisition time of the first depth image and sensor data;
the action recognition module is used for carrying out cross verification on the first skeleton joint information by adopting the second skeleton joint information and outputting the first skeleton joint information after the verification is successful.
10. A terminal device, comprising a memory and a processor, wherein the memory is coupled to the processor;
wherein the memory is used for storing program data, and the processor is used for executing the program data to implement the Taijiquan recognition method according to any one of claims 1-8.
11. A storage medium for storing program data which, when executed by a processor, implements the Taijiquan recognition method according to any one of claims 1-8.
CN202111301208.3A 2021-11-04 2021-11-04 Taijiquan identification method based on fusion information, terminal device and storage medium Pending CN115775347A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111301208.3A CN115775347A (en) 2021-11-04 2021-11-04 Taijiquan identification method based on fusion information, terminal device and storage medium
PCT/CN2021/143893 WO2023077659A1 (en) 2021-11-04 2021-12-31 Fusion information-based tai chi recognition method, terminal device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111301208.3A CN115775347A (en) 2021-11-04 2021-11-04 Taijiquan identification method based on fusion information, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN115775347A true CN115775347A (en) 2023-03-10

Family

ID=85388398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111301208.3A Pending CN115775347A (en) 2021-11-04 2021-11-04 Taijiquan identification method based on fusion information, terminal device and storage medium

Country Status (2)

Country Link
CN (1) CN115775347A (en)
WO (1) WO2023077659A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130077820A1 (en) * 2011-09-26 2013-03-28 Microsoft Corporation Machine learning gesture detection
CN104298358B (en) * 2014-10-29 2017-11-21 指挥家(厦门)科技有限公司 A kind of dynamic 3D gesture identification methods based on joint space position data
JP2017091377A (en) * 2015-11-13 2017-05-25 日本電信電話株式会社 Attitude estimation device, attitude estimation method, and attitude estimation program
CN109086659B (en) * 2018-06-13 2023-01-31 深圳市感动智能科技有限公司 Human behavior recognition method and device based on multi-channel feature fusion
CN113591726B (en) * 2021-08-03 2023-07-14 电子科技大学 Cross mode evaluation method for Taiji boxing training action

Also Published As

Publication number Publication date
WO2023077659A1 (en) 2023-05-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination