CN108096807A - A kind of exercise data monitoring method and system - Google Patents


Info

Publication number
CN108096807A
CN108096807A (application CN201711310325.XA)
Authority
CN
China
Prior art keywords
data, sensor, user, gas, motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711310325.XA
Other languages
Chinese (zh)
Inventor
丁贤根
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201711310325.XA
Publication of CN108096807A
Priority to PCT/CN2018/120363 (published as WO2019114708A1)
Legal status: Pending

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63B — APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 — Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 — Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 — Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B2071/065 — Visualisation of specific exercise parameters
    • A63B69/00 — Training appliances or apparatus for special sports
    • A63B69/20 — Punching balls, e.g. for boxing; other devices for striking used during training of combat sports, e.g. bags
    • A63B69/32 — Punching balls or other striking devices with indicating devices
    • A63B2220/00 — Measuring of physical parameters relating to sporting activity
    • A63B2220/10 — Positions
    • A63B2220/20 — Distances or displacements
    • A63B2220/30 — Speed
    • A63B2220/40 — Acceleration
    • A63B2220/50 — Force related parameters
    • A63B2220/56 — Pressure
    • A63B2230/00 — Measuring physiological parameters of the user
    • A63B2244/00 — Sports without balls
    • A63B2244/10 — Combat sports
    • A63B2244/102 — Boxing

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention proposes an exercise data monitoring method and system. Exercise data are acquired with a first sensor and a second sensor and converted to images by data imaging, with 2D images combined into 3D; striking force is measured indirectly through the motion sensor. The method supports learning, training, sparring, feature extraction and strong/weak-item countermeasures, and realizes automatic user identification, automatic action identification, strong-item identification, weak-item identification, automatic refereeing and automatic generation of match odds; it can also perform roll call, registration, notification, positioning, alarm and similar functions. The system comprises hardware such as sensors, micro base stations, smartphone APPs and PCs, together with cloud-center software and application software.

Description

Motion data monitoring method and system
Technical Field
The invention relates to the application of artificial intelligence within information technology, specifically to artificial intelligence in sports: image recognition, motion recognition, personnel recognition, intelligent competition training and automatic refereeing, and more specifically to a motion data monitoring method and system.
Background
Physical exercise is an ancient and traditional human activity, and the sports industry is likewise a traditional industry. The application of artificial intelligence in sports is still in its infancy; a search of the relevant patent websites found no patent applications related to the present invention.
The defects of the prior art are as follows:
1. Sports technology remains largely traditional, with little intervention from advanced technology.
2. There is no good method for measuring sports data: human motion is highly random and varies greatly with the venue and the sport.
3. There is no effective way to analyze and identify motion data.
4. The achievements of artificial intelligence are not yet applied in sports.
The invention aims to use artificial intelligence to solve these problems in sports and to remedy the shortcomings of current sports technology in mechanical measurement, action recognition, personnel recognition, learning, training, sparring, refereeing, evaluation and odds calculation for dynamic human motion (such as fighting). To this end it creatively proposes a data imaging method, so that the current achievements of artificial intelligence in image recognition can be borrowed for sports measurement data.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention is realized by the following technical scheme:
as shown in FIG. 1, the invention comprises sensors 104, 105-10n and 10n+1-10m+1, a terminal 101 and a fighting information system 2 (103). The sensors include a motion sensor, a physiological sensor, a user number generator, a geographic coordinate sensor, a pressure sensor and the like, and the terminal also contains a fighting information system 1 (102). The method comprises the following steps:
a method of athletic data monitoring, including but not limited to: a step of monitoring first data D1 with a first sensor S1 provided on the user's body.
A step of transmitting the first data D1 to a motion information system over a sensor network, and a step of processing the first data D1.
As shown in figs. 2, 3 and 4, the first sensor comprises one of, or a combination of, five devices: a motion sensor, a physiological sensor, a pressure sensor, a user number generator and a geographic coordinate sensor, all operating under the management of a processor and powered by an included power subsystem. For example, for one user, a first sensor containing a motion sensor may be worn on the limbs to monitor limb motion, whereas physiological monitoring can be performed at any position on the body. Some sports (e.g. fighting) may also require pressure monitoring (e.g. the striking force of a fist), in which case both a motion sensor and a pressure sensor may need to be placed at a specific position (e.g. on the fist). For managing people or animals, a user number generator or a geographic coordinate sensor alone may suffice. Which of the five devices to use, alone or in combination, is therefore determined by the specific application scenario.
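The composition described above can be sketched as a simple data model. This is an illustrative sketch only; the field names, units and example values are assumptions, not part of the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstSensorSample:
    """One sample from a first sensor S1; which fields are populated
    depends on which sub-sensors the application scenario requires."""
    user_id: str                          # from the user number generator
    t: float                              # system time value T, seconds
    gyro: Optional[tuple] = None          # angular velocity (x, y, z)
    accel: Optional[tuple] = None         # acceleration (x, y, z)
    heart_rate: Optional[float] = None    # physiological sub-sensor
    pressure: Optional[float] = None      # strike pressure
    geo: Optional[tuple] = None           # (lat, lon), geographic sensor

# A limb-worn sensor for fighting might populate motion and pressure only:
sample = FirstSensorSample(user_id="U001", t=12.50,
                           gyro=(10.0, -3.0, 0.5),
                           accel=(0.1, 9.8, 0.0),
                           pressure=850.0)
```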
On the basis of the technical scheme, the invention comprises the following improvements and combinations thereof:
the monitoring of the first data D1 using the first sensor S1 provided on the user's body includes:
a step of collecting the user motion data with the motion sensor in the first sensor S1.
A step of acquiring the user motion data with a motion sensor built into a mobile phone and transmitting it directly, inside the phone, to the motion information system.
A step of acquiring the user physiological data with the physiological sensor in the first sensor S1.
A step of collecting pressure data, with the pressure sensor in the first sensor S1, as the user strikes or uses a target or an opponent.
A step of generating the user's user number data with the user number generator in the first sensor S1.
A step of generating the user's geographic coordinate data with the geographic coordinate sensor in the first sensor S1.
The step of monitoring second data D2, with a second sensor S2 disposed on a target, while the user strikes and uses the target includes, but is not limited to:
a step of collecting pressure data of the user striking and using the target with the pressure sensor in the second sensor S2.
A step of collecting position data of the target, while the user strikes and uses it, with the position sensor in the second sensor S2.
A step of connecting all the first sensors S1 worn by one user to the motion information system through a personal sensor network, a venue sensor network and a cell sensor network, as shown in fig. 7.
A step of connecting all the second sensors S2 equipped in one set of targets to the motion information system through a personal sensor network, a venue sensor network and a cell sensor network, as shown in fig. 8.
A step of collecting and monitoring a system time value T of the time when the first data D1 and the second data D2 occur, and recording the system time value T into the first data D1 and the second data D2.
A step of analog-to-digital (A/D) converting the first data D1 and the second data D2.
A step of adjusting the sampling frequency and sampling precision of the first sensor S1 and the second sensor S2 according to the motion category attribute data D4.
A step of interpolating and filling the first data D1 and the second data D2 to a predetermined scale, and merging them into associated data D3.
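A minimal sketch of this interpolate-and-merge step: resample D1 and D2 onto a common timebase using the recorded system time T, then concatenate them column-wise into the associated data D3. Linear interpolation and the uniform 100 Hz grid are illustrative assumptions, not values from the patent:

```python
import numpy as np

def merge_to_d3(t1, d1, t2, d2, rate_hz=100.0):
    """Interpolate first data D1 and second data D2 onto a shared
    time grid and merge them into associated data D3 (rows [T, D1, D2])."""
    t0 = max(t1[0], t2[0])                 # overlap of the two recordings
    t_end = min(t1[-1], t2[-1])
    grid = np.arange(t0, t_end, 1.0 / rate_hz)
    d1_i = np.interp(grid, t1, d1)         # fill D1 onto the grid
    d2_i = np.interp(grid, t2, d2)         # fill D2 onto the grid
    return np.column_stack([grid, d1_i, d2_i])

# D1 sampled at 50 Hz, D2 at 20 Hz, both resampled to 100 Hz:
t1 = np.arange(0.0, 1.0, 1 / 50); d1 = np.sin(2 * np.pi * t1)
t2 = np.arange(0.0, 1.0, 1 / 20); d2 = np.cos(2 * np.pi * t2)
d3 = merge_to_d3(t1, d1, t2, d2)
```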
Wherein the first sensor S1 is disposed at a wrist, ankle, joint and/or strike location of the user.
A step of extracting the user's habitual action feature data from the user motion data with the artificial intelligence algorithm and recording it in the user's personal profile data D5.
A step of extracting the user's voiceprint feature data from the user's voice data with the artificial intelligence algorithm and recording it in the user's personal profile data D5.
A step of extracting the motion feature data of a sport from the motion category attribute data with the artificial intelligence algorithm and recording it in the motion category attribute data D4.
The motion category attribute data D4 includes, but is not limited to: the exercise rule data, and the exercise force data, exercise level data, exercise amplitude data, injury degree data, duration data, physical-consumption data, physiological data and/or competition rule data corresponding to those exercise rules.
The motion rules include, but are not limited to: free combat, standing combat, unlimited combat, MMA, UFC, martial arts, Taijiquan, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, athletics, gymnastics, ball games and the like.
The user has personal profile data D5, including but not limited to: the user's height, weight, body measurements, arm span, arm weight, fist weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical motion data, strong-and-weak-item motion data, voiceprint data, image data and video data.
The motion sensor includes, but is not limited to, an angular velocity sub-sensor, an acceleration sub-sensor and a magnetic sub-sensor, and its axis system comprises at least the three axes X, Y and Z.
Fig. 9 is a structure diagram of the micro base station.
On the basis of the technical scheme, the invention comprises the following improvements and combinations thereof:
a step of formatting the associated data D3 according to its data content, including but not limited to sampling type, sampling frequency, sampling precision and data format.
A step of decomposing the motion data portion of the associated data D3 into motion units according to the characteristics of the motion, to calculate unit data D3-U.
As shown in figs. 10 to 13, 1001 is the associated data D3, which is converted into the unit data 1004, i.e. D3-U, by data formatting 1002 and motion decomposition.
An image point mapping step of mapping the unit data D3-U into a moving image: taking the three-axis data of each motion-sensor acquisition in the unit data D3-U as one group, each group is mapped, in acquisition order, to one pixel point of the moving image.
As in fig. 11, the unit data D3-U at 1004 is decomposed into angular velocity (gyro) sensor data 1015 and acceleration sensor data 1025, where 1016 is one acquisition point of group 1015 and 1026 is one acquisition point of group 1025.
A multi-map mapping step: the acquired data of each sub-sensor (X-, Y- and Z-axis) of the motion sensor in the unit data D3-U is mapped to its own moving image, and each acquisition point of each sub-sensor is mapped to a pixel point in the corresponding moving image; the X, Y, Z three-axis data of the acquisition point serve as the argument x of the pixel's RGB three-primary-color data, a function y(x) of the RGB color-code value y is established, and the RGB three-primary-color data are calculated.
As in fig. 11, group 1015 of the angular velocity sensor is mapped to g-map 1018, and acquisition point 1016 in group 1015 is mapped to pixel point 1017 in g-map 1018; group 1025 of the acceleration sensor is mapped to a-map 1028, and acquisition point 1026 in group 1025 is mapped to pixel point 1027 in a-map 1028.
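The multi-map mapping step can be sketched as follows: each sub-sensor's acquisition points become the pixels of its own image, with the X, Y, Z values of a point used as the argument x of a function y(x) that yields the RGB channel values. The linear scaling used for y(x), the 8-bit channel range and the ±500 sensor range are illustrative assumptions:

```python
import numpy as np

def map_to_image(samples, width, lo, hi):
    """Map N triaxial acquisition points to an image of shape
    (rows, width, 3): one pixel per point, in acquisition order.
    y(x) here linearly rescales [lo, hi] to 0..255 per channel."""
    x = np.asarray(samples, dtype=float)             # shape (N, 3)
    y = np.clip((x - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
    n_rows = -(-len(y) // width)                     # ceiling division
    img = np.zeros((n_rows * width, 3), dtype=np.uint8)
    img[:len(y)] = y                                 # unused tail stays black
    return img.reshape(n_rows, width, 3)

# 64 gyro acquisition points become an 8x8 RGB image (the g-map);
# a separate image would be built the same way for the accelerometer.
gyro = np.random.uniform(-500, 500, size=(64, 3))
g_map = map_to_image(gyro, width=8, lo=-500.0, hi=500.0)
```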
A single-image multi-channel mapping step: the acquired data of one sub-sensor of the motion sensor in the unit data D3-U is mapped to a moving image, the acquired data of the other sub-sensors are mapped to channels of that moving image, and each acquisition point of each sub-sensor is mapped to the corresponding pixel point in the moving image or channel; the X, Y, Z three-axis data of the acquisition point serve as the argument x of the pixel's RGB data or channel data, a function y(x) of the RGB color-code value y is established, and the RGB data or channel data are calculated.
As in fig. 12, group 1015 of the angular velocity sensor is mapped to g-map 1018, and acquisition point 1016 in group 1015 is mapped to pixel point 1017 in g-map 1018; group 1025 of the acceleration sensor is mapped to c-channel 1038, and acquisition point 1026 in group 1025 is mapped to pixel point 1037 in c-channel 1038.
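The single-image multi-channel variant can be sketched by stacking the second sub-sensor as extra channels of the first sub-sensor's image instead of building a second image. The channel layout (channels 0-2 from the gyro, 3-5 from the accelerometer) and the scaling are illustrative assumptions:

```python
import numpy as np

def map_to_multichannel(gyro, accel, width, lo, hi):
    """Map two triaxial sub-sensors into one (rows, width, 6) array:
    channels 0-2 form the gyro's moving image, channels 3-5 carry the
    accelerometer data mapped as additional channels of that image."""
    def scale(x):   # same linear y(x) applied to both sub-sensors
        return np.clip((np.asarray(x, float) - lo) / (hi - lo) * 255, 0, 255)
    g, a = scale(gyro), scale(accel)
    n = min(len(g), len(a))
    rows = n // width                     # drop any incomplete final row
    both = np.concatenate([g[:rows * width], a[:rows * width]], axis=1)
    return both.reshape(rows, width, 6).astype(np.uint8)

gyro  = np.random.uniform(-500, 500, size=(64, 3))
accel = np.random.uniform(-500, 500, size=(64, 3))
img6 = map_to_multichannel(gyro, accel, width=8, lo=-500.0, hi=500.0)
```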
An image deep-learning step: deep learning is performed on a plurality of moving-image data with an artificial-intelligence image recognition and classification algorithm; feature data are summarized and calculated, including motion-type features, action-type features, pressure-magnitude features and user-identification features; and when the next associated data D3 is collected, it is calculated and compared against these feature data.
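As a toy stand-in for this image deep-learning step (not the patent's method; a real system would use one of the image recognition algorithms enumerated later, such as a CNN), the following sketch shows the summarize-then-compare flow: feature data are accumulated per class from moving images, and newly collected data are compared against them by nearest centroid:

```python
import numpy as np

class MotionImageClassifier:
    """Stand-in for the deep-learning step: the per-class mean of
    flattened moving images serves as the 'feature data', and a
    nearest-centroid comparison classifies the next collected D3."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def learn(self, image, label):
        # Summarize feature data: accumulate per-class pixel sums.
        v = np.asarray(image, float).ravel()
        self.sums[label] = self.sums.get(label, 0) + v
        self.counts[label] = self.counts.get(label, 0) + 1

    def classify(self, image):
        # Compare a new moving image against each class centroid.
        v = np.asarray(image, float).ravel()
        dist = {k: np.linalg.norm(v - self.sums[k] / self.counts[k])
                for k in self.sums}
        return min(dist, key=dist.get)

clf = MotionImageClassifier()
clf.learn(np.full((8, 8, 3), 200), "straight punch")
clf.learn(np.full((8, 8, 3), 40), "hook")
# A new image near the first centroid classifies as "straight punch".
```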
A step of converting the multi-map mapping and the single-image mapping into image and video files according to image and video file formats, so that images and video viewable by the human eye on a display can be reconstructed.
As shown in fig. 13, one way to reconstruct the image and video files is to calculate and add header files, i.e. 1119, 1129 and 1139 in fig. 13.
The artificial intelligence algorithm includes, but is not limited to: artificial neural network algorithms, convolutional neural network (CNN) algorithms, recurrent neural network (RNN) algorithms, deep neural network (DNN) algorithms, support vector machine (SVM) algorithms, genetic algorithms, ant colony algorithms, simulated annealing algorithms, particle swarm algorithms and Bayesian algorithms.
The RGB functions include, but are not limited to, the linear function y = kx + j and nonlinear functions, where k and j are tuning constants.
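For example, with the linear color-code function y = kx + j, the tuning constants k and j can be chosen so that the sensor's full range maps onto the 0-255 channel range. The ±500 input range below is an assumed example:

```python
def rgb_code(x, lo=-500.0, hi=500.0):
    """Linear RGB color-code function y = k*x + j, with k and j
    chosen so that y(lo) = 0 and y(hi) = 255."""
    k = 255.0 / (hi - lo)         # slope: full range -> 255 levels
    j = -k * lo                   # offset: y(lo) = 0
    y = k * x + j
    return max(0, min(255, round(y)))   # clamp and quantize to 8 bits

# Range endpoints map to channel extremes; mid-range maps near mid-gray.
assert rgb_code(-500.0) == 0 and rgb_code(500.0) == 255
```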
On the basis of the technical scheme, the invention comprises the following improvements and combinations thereof:
a step of having one or more video image sensors S3 shoot one or more video images D6 of the user's competition training.
A step of having the one or more video image sensors S3 communicate with the motion information system via the sensor network.
A step of performing three-dimensional vectorized synthesis of motion actions with the artificial intelligence algorithm, based on the video image D6 and the first data D1 and according to the position of the first sensor S1 in the video image D6, to obtain three-dimensional vectorized data D7.
A step of associating the three-dimensional vectorized data D7 with the second data D2, the associated data D3, the motion category attribute data D4 and/or the personal profile data D5.
A step of identifying the motion action in the video image D6 with the artificial intelligence algorithm, according to the three-dimensional vectorized data D7 and the motion category attribute data D4, and marking and synchronizing the start and end time points of the motion action in the video image D6.
The competition training includes, but is not limited to, single-person training, single-person competition and multi-player competition.
On the basis of the technical scheme, the invention comprises the following improvements and combinations thereof:
a coach-learning step: a coach user strikes a target with standard actions according to the motion category attribute data D4 to obtain the coach's associated data D3; machine learning is performed on the coach's associated data D3 with the artificial intelligence algorithm to obtain the coach's association result D3-AI1 and the coach's confidence result D3-AI2, and the coach user's personal profile data D5 is updated.
A self-training step: a trainee user strikes a target according to the motion category attribute data D4 to obtain the trainee's associated data D3; machine learning is performed on the trainee's associated data D3 with the artificial intelligence algorithm to obtain the trainee's association result D3-AI1 and the trainee's confidence result D3-AI2, and the trainee user's personal profile data D5 is updated.
A step of cyclically comparing the trainee's association result D3-AI1 with the coach's association result D3-AI1, and the trainee's confidence result D3-AI2 with the coach's confidence result D3-AI2.
A strong-and-weak-item countermeasure step: according to the trainee's association result D3-AI1 and confidence result D3-AI2, the trainee's strong items, weak items and gaps are calculated and analyzed, the trainee's personal profile data D5 is updated, and training suggestion information is calculated, generated and output.
An opponent-training step: the opponent user's personal profile data D5 and the trainee's personal profile data D5 are retrieved; their typical motion data, strong-item data and weak-item data are compared, the differences are calculated and analyzed, a targeted training suggestion plan is made, and the training is supervised and its results checked.
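The coach/trainee comparison loop can be sketched as follows: for each action, the trainee's confidence result is compared against the coach's, and actions falling below a threshold ratio are reported as weak items. The 0.8 ratio and the action names are illustrative assumptions, not from the patent:

```python
def strong_weak_items(coach_conf, trainee_conf, ratio=0.8):
    """Compare trainee confidence results D3-AI2 against the coach's,
    action by action; return (strong, weak, gap), where gap maps each
    action to the trainee/coach confidence ratio."""
    strong, weak, gap = [], [], {}
    for action, c in coach_conf.items():
        t = trainee_conf.get(action, 0.0)
        gap[action] = t / c if c else 0.0
        (strong if gap[action] >= ratio else weak).append(action)
    return strong, weak, gap

coach   = {"jab": 0.95, "hook": 0.92, "low kick": 0.90}
trainee = {"jab": 0.88, "hook": 0.55, "low kick": 0.81}
strong, weak, gap = strong_weak_items(coach, trainee)
# "hook" falls below the threshold, so it is reported as a weak item.
```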
On the basis of the technical scheme, the invention comprises the following improvements and combinations thereof:
a single-sensor user identification step: when the first data D1 of the user is collected, the user is identified with the artificial intelligence algorithm according to the first data D1, the association result D3-AI1, the confidence result D3-AI2 and/or the three-dimensional vectorized data D7.
A habitual-action user identification step: when the first data D1 of the user is collected, the user is identified with the artificial intelligence algorithm according to the first data D1 and the habitual action feature data.
A voiceprint-feature user identification step: when the collected first data D1 of the user includes voice data, the user is identified with the artificial intelligence algorithm according to the voice data and the voiceprint feature data.
A dual-sensor user identification step: when the first data D1 and the second data D2 of the user are collected, the user is identified with the artificial intelligence algorithm according to the first data D1, the user's association result D3-AI1 and confidence result D3-AI2, and the three-dimensional vectorized data D7.
A single-sensor action identification step: when the first data D1 of the user is collected, the action in the motion category attribute data D4 is identified with the artificial intelligence algorithm according to the user, the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2 and/or the three-dimensional vectorized data D7.
A dual-sensor action identification step: when the first data D1 and the second data D2 of the user are collected, the action in the motion category attribute data D4 is identified with the artificial intelligence algorithm according to the user, the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2 and the three-dimensional vectorized data D7.
A motion-feature action identification step: when the first data D1 of the user is collected, the action in the motion category attribute data D4 is identified with the artificial intelligence algorithm according to the first data D1 and the motion feature data.
A step of calculating the pressure data generated by the user's striking action based on the image deep-learning step and the calibration data D8.
A calibration step: the user strikes a target; the angular velocity and acceleration data from the first sensor S1 and the pressure data from the second sensor S2 are obtained, and an acceleration-pressure correlation D8 is established according to Newtonian mechanics.
A pressure recognition step: when the user strikes a target or an opponent using only the first sensor S1, without the second sensor S2, the pressure is recognized from the first data D1 using the acceleration-pressure correlation D8.
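The acceleration-pressure correlation D8 rests on Newton's second law, F = m·a: during calibration, strikes measured by both the fist-worn accelerometer (S1) and the target's pressure sensor (S2) fit an effective striking mass, which later converts acceleration alone into a force estimate. The least-squares fit and the example numbers below are an illustrative sketch, not the patent's procedure:

```python
import numpy as np

def fit_strike_mass(peak_accel, peak_force):
    """Calibration: fit F = m*a by least squares over strikes where
    both S1 (acceleration) and S2 (force) readings were available."""
    a = np.asarray(peak_accel, float)
    f = np.asarray(peak_force, float)
    return float(np.sum(a * f) / np.sum(a * a))   # effective mass m

def estimate_force(m, peak_accel):
    """Pressure recognition with only S1 present: F = m*a."""
    return m * peak_accel

# Calibration strikes consistent with a ~4 kg effective fist+arm mass:
accel = [150.0, 200.0, 250.0]            # peak acceleration at impact
force = [600.0, 800.0, 1000.0]           # peak force from the target's S2
m = fit_strike_mass(accel, force)
# Later, without the target sensor, acceleration alone yields force:
f_est = estimate_force(m, 220.0)
```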
On the basis of the technical scheme, the invention comprises the following improvements and combinations thereof:
the method comprises the following judging steps:
a step of calculating, with the artificial intelligence algorithm, the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user while a plurality of users compete, according to the competition rules in the motion category attribute data D4.
A step of comparing the users' association results D3-AI1 and confidence results D3-AI2 according to the competition rules in the motion category attribute data D4, and obtaining real-time match process data including the degree and number of mutual hits, the degree and number of injuries, the number and duration of standing counts, TKO and KO.
A step of calculating and outputting the match's dynamic odds and predicted result data based on the match process data.
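One hedged way to turn real-time match process data into dynamic odds: accumulate a score differential from hits and standing counts, map it through a logistic function to a win probability, and quote decimal odds as its reciprocal. The weights, the logistic scale and the bookmaker margin are illustrative assumptions, not from the patent:

```python
import math

def win_probability(hits_a, hits_b, counts_a, counts_b, scale=5.0):
    """Logistic win probability for player A from match process data;
    standing counts are weighted more heavily than ordinary hits."""
    diff = (hits_a - hits_b) - 3.0 * (counts_a - counts_b)
    return 1.0 / (1.0 + math.exp(-diff / scale))

def decimal_odds(p, margin=0.05):
    """Decimal odds for probability p with a bookmaker margin folded in."""
    return round(1.0 / (p * (1.0 + margin)), 2)

# A leads 12 hits to 7, and B has taken one standing count:
p = win_probability(12, 7, 0, 1)
odds_a = decimal_odds(p)
```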
On the basis of the technical scheme, the invention comprises the following improvements and combinations thereof:
a step of having the first sensor S1 and the second sensor S2 communicate with one or more fixed terminals, so as to calculate absolute data of the sensors' own spatial coordinates, motion speed and motion trajectory;
a step of having the first sensor S1 and the second sensor S2 communicate with one or more mobile terminals, first sensors S1 and second sensors S2, so as to calculate relative data of their spatial coordinates, motion speed and motion trajectory.
A step of processing and displaying the result information of the motion information system with the fixed terminals and mobile terminals.
A step of sending the result information and a live playback video including the motion action to one or more display devices, so that the result information is fused and displayed with the live video.
The fixed terminals and mobile terminals include: micro base stations, PCs and smartphones.
The sensor network may be connected in wired or wireless mode.
On the basis of the technical scheme, the invention comprises the following improvements and combinations thereof:
the user wearing the first sensor S1 is searched by the athletic information system and roll call information is sent to it, and the first sensor S1 worn by the user responds upon receipt, thereby implementing the roll call step.
The user wearing the first sensor S1 issues entry information to the sports information system via the first sensor S1 and receives a response, thereby implementing an entry procedure.
A notification step of sending a notification message to the first sensor S1 worn by the user by the sports information system, the first sensor S1 responding to the sports information system after receiving the notification message, and displaying and/or vibrating on the first sensor S1.
-performing a positioning step for said user wearing said first sensor S1 by said sports information system through one or more of said terminals.
An active alarm step of sending alarm information to the sports information system by the user wearing the first sensor S1 according to the user' S subjective intention
An abnormality warning step of sending warning information to the motion information system by the first sensor S1 according to the abnormal value of the first data D1.
The motion information system and the first sensor S1 are communicated through a sensor network, and the abnormal value includes an alarm triggering condition preset by the user and/or the motion information system.
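The abnormality-warning step can be sketched as a threshold check run over each sample of first data D1 against preset trigger conditions. The rule names and the heart-rate and temperature limits below are illustrative presets, not values from the patent:

```python
def check_abnormal(sample, rules):
    """Return the list of triggered alarm conditions for one sample of
    first data D1; a non-empty list would be sent as warning
    information to the motion information system."""
    alarms = []
    for key, (lo, hi) in rules.items():
        v = sample.get(key)
        if v is not None and not (lo <= v <= hi):
            alarms.append(f"{key}={v} outside [{lo}, {hi}]")
    return alarms

# Preset trigger conditions (illustrative):
rules = {"heart_rate": (40, 190), "body_temp": (35.0, 39.0)}
ok    = check_abnormal({"heart_rate": 120, "body_temp": 36.8}, rules)
alert = check_abnormal({"heart_rate": 205, "body_temp": 36.8}, rules)
```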
A system for athletic data monitoring, comprising: a first sensor S1, a terminal and a motion information system; the first sensor S1 is connected to the terminal, which is connected to the motion information system.
On the basis of the foregoing technical solutions, the present invention further includes, but is not limited to, the following matters and combinations thereof:
further comprising: a second sensor S2, a video image sensor S3; the second sensor S2 and the video image sensor S3 are connected to terminals, respectively.
On the basis of the foregoing technical solutions, the present invention further includes, but is not limited to, the following matters and combinations thereof:
the first sensor S1 is formed by connecting a processor with a motion sensor, a physiological sensor, a pressure sensor, a user number generator and a geographic coordinate sensor; each of these is connected to the processor, and the processor is connected to the terminal.
The second sensor S2 includes a pressure sensor and a position sensor.
The connection mode of the terminal and the motion information system comprises wired connection and wireless sensor network connection, and the connection mode of the processor and the terminal comprises wired connection and wireless sensor network connection.
The motion sensor includes: three-axis angular velocity sensors, three-axis acceleration sensors, three-axis magnetic sensors, electronic compass sensors, speed sensors, motion direction sensors, displacement sensors, trajectory sensors, light sensors, and combinations thereof.
The physiological sensor includes: blood oxygen sensor, blood pressure sensor, pulse sensor, temperature sensor, perspiration level sensor, sound sensor, light sensor.
The pressure sensor includes: pressure sensor, impulsive force sensor, impulse sensor.
The position sensor includes: a spatial position sensor, a spatial coordinate sensor, an optical sensor, a camera.
The user number generator includes: a user number storage, editing and sending module.
The geographic coordinate sensor includes: a navigation satellite positioning module.
The video image sensor is a visible light camera or an invisible light camera.
The motion category attribute data D4 includes: motion rule data and, corresponding to the motion rules, motion force data, motion level data, motion amplitude data, injury degree data, duration data, physical-consumption degree data, physiological degree data and match rule data.
Wherein the motion rules comprise at least: Sanda (free sparring), standing striking, no-holds-barred fighting, MMA, UFC rules, free fighting, martial arts, Taijiquan, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, athletics, gymnastics, ball sports and the like.
The user has personal profile data D5, the personal profile data D5 including: the user's height, weight, body measurements, arm span, arm weight, fist weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical match results, typical motion data, strong- and weak-event motion data, voice data, voiceprint data, image data and video data.
On the basis of the foregoing technical solutions, the present invention further includes, but is not limited to, the following matters and combinations thereof:
the sensing network comprises fixed terminals and mobile terminals; the terminals include micro base stations, mobile phones and PCs (personal computers). The connection modes of the sensing network include wired and wireless;
the micro base station includes: one or more downlink interfaces, a processor, a power subsystem and an uplink interface. The downlink interfaces are connected to the processor, and the processor is connected to the uplink interface; the power subsystem supplies power to the downlink interfaces, the processor and the uplink interface. The downlink interfaces connect and communicate with the first sensor S1, the second sensor S2 and the video image sensor S3 through a wireless sensor network, and the uplink interface communicates with the motion information system through a wired or wireless network.
The motion information system comprises a terminal unit and a cloud system which are communicated with each other; the terminal unit and the terminal are integrally or separately arranged, and the cloud system is arranged in a network cloud.
The target includes boxing targets, balls, rackets and sports equipment; use of a boxing target includes striking it with the fists, feet and other body parts.
On the basis of the foregoing technical solutions, the present invention further includes, but is not limited to, the following matters and combinations thereof:
the motion information system comprises cloud center software and application software, wherein:
the application software running on the terminal connects to, collects and processes the data of the user, the first data D1, the second data D2, the motion category attribute data D4, the user profile data D5 and the video data D6, completes user interaction, and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7 and the calibration data D8.
The application software running on the terminal transmits data to the cloud center to form big data.
The application software running on the terminal cooperates with the cloud center software to complete the functions of learning, training, user identification, action identification and pressure identification.
The cloud center software running in the cloud center is responsible for processing the big data, including deep learning, data mining, classification algorithms, artificial intelligence processing, generating the associated data D3, the video data D6 and the calibration data D8, updating D5, cloud center computing, cloud center management, and communicating with the application software.
The motion information system comprises the application software and the cloud center software.
One user is connected to and managed by one copy of the application software to form one motion information system; a plurality of users are connected to and managed by a plurality of copies of the application software to form a plurality of motion information systems.
The motion information systems of the plurality of users communicate with each other and complete interaction.
Compared with the prior art, the invention has the following beneficial effects:
1. The measurement of dynamic striking force and striking energy during human fighting is realized.
2. The problem of imaging conversion of the motion data is solved, and visualization is achieved.
3. The problem of identification of an artificial intelligent image identification algorithm on sports is solved.
4. The method successfully solves the problems of personnel identification, motion identification, mechanical measurement, automatic judgment and dynamic odds calculation.
5. Artificial intelligence is introduced for big data analysis management of sports.
Drawings
FIG. 1 is a system diagram;
FIG. 2 is a first structural diagram of the first sensor;
FIG. 3 is a second structural diagram of the first sensor;
FIG. 4 is a third structural diagram of the first sensor;
FIG. 5 is a first structural diagram of the second sensor;
FIG. 6 is a second structural diagram of the second sensor;
FIG. 7 is a first structural diagram of the cell sensor network;
FIG. 8 is a second structural diagram of the cell sensor network;
FIG. 9 is a structural diagram of the micro base station;
FIG. 10 is a first data-imaging map;
FIG. 11 is a second data-imaging map;
FIG. 12 is a third data-imaging map;
FIG. 13 is a fourth data-imaging map.
Detailed Description
I. Fighting match training system
(I) System overview
The fighting match training system is mainly intended for combat sports users. As shown in FIG. 1, the system comprises sensors 104, 105-10n and 10n+1-10m+1, a terminal 101 and a fighting information system 2 (103). The sensors include motion sensors, physiological sensors, user number generators, geographic coordinate sensors, pressure sensors and the like, and the terminal internally contains fighting information system 1 (102).
For individuals or small clubs, the minimum unit is defined as a motion detection group, comprising:
4 first sensors S1 (104, 105, 106 and 107) and 1 terminal 101 consisting of a micro base station, which contains fighting information system 1 (102). The 4 first sensors S1 are connected to the micro base station, and the micro base station is connected to fighting information system 2. The 4 first sensors S1 are worn at the user's wrists and ankles; 1 of them is a variant with a physiological sensor, a motion sensor and a user number generator, as shown in FIG. 3, while the other 3 are variants with only a motion sensor and a user number generator and no physiological sensor, as shown in FIG. 4. As an extension, 2 pressure-sensor variants for boxing gloves can be selected. The motion sensor is a variant with a three-axis gyroscope and a three-axis acceleration sensor, and the physiological sensor is a pulse-sensor variant.
According to the speed of movement, the sampling frequency of the motion sensor is set between 10 and 200 frames per second, the heart-rate sensor samples once per minute, and the overall sampling precision is 8-16 bits.
As a motion detection group, further comprising: and 1 second sensor S2, shown in fig. 5, connected to the micro base station. The second sensor S2 is a matrix film pressure sensor with a pressure and position detection circuit.
The measuring range is divided into several pressure/striking-force levels such as 50 kg, 200 kg and 500 kg. Second sensors with different pressure levels and mounting configurations may be selected according to user needs, typically varying with the target shape.
As an option, a 4-way high definition camera may also be provided as the video image sensor S3. It is connected with the micro base station to complete the image acquisition function.
As shown in FIG. 9, the micro base station includes: 9 downlink interfaces, a processor, a power subsystem and an uplink interface. The 9 downlink interfaces are connected to the processor, the processor is connected to the uplink interface, and the power subsystem supplies power to the downlink interfaces, the processor and the uplink interface. The downlink interfaces connect and communicate with the 4 first sensors S1, the 1 second sensor S2 and the 4 video image sensors S3 through a wireless sensor network, and the uplink interface communicates with the fighting information system through a wired fiber-optic network.
The micro base station collects the sensor signals and is connected to the fighting information system through optical fiber.
The striking sensor S2 serves two main functions:
First, it is used together with the first sensor to correlate and calibrate striking data. That is, when the user strikes the target several times, the system simultaneously records the angular velocity and acceleration data from the first sensor S1 and the striking force data from the second sensor S2, and, from the correspondence between these readings over many strikes, establishes a mapping function according to Newton's laws of motion. Thereafter the user needs only the motion sensor: the striking force can be derived from the angular velocity and acceleration measured at the moment of impact, without a pressure sensor. A pressure sensor is troublesome to install and use (it must, for example, be mounted on the striking surface of the fist), which limits the usage scenarios; eliminating it through this indirect measurement greatly simplifies use.
Second, the striking force data of the user is measured directly by the second sensor S2.
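The calibration workflow above can be sketched as follows: synchronized readings from the motion sensor S1 and the force sensor S2 are paired strike by strike, producing the data from which the conversion function is later fitted. This is a minimal illustration only; the names (`Sample`, `pair_strikes`) and the 50 ms matching window are assumptions, not the patent's implementation.

```python
# Pair each S2 force reading with the nearest S1 acceleration peak in time.
from dataclasses import dataclass

@dataclass
class Sample:
    t: float        # system timestamp in seconds
    value: float    # acceleration magnitude (S1) or striking force (S2)

def pair_strikes(s1_peaks, s2_peaks, window_s=0.05):
    """Match each S2 force reading to the nearest S1 acceleration peak
    within a small time window; unmatched readings are discarded."""
    pairs = []
    for f in s2_peaks:
        best = min(s1_peaks, key=lambda a: abs(a.t - f.t), default=None)
        if best is not None and abs(best.t - f.t) <= window_s:
            pairs.append((best.value, f.value))  # (acceleration, force)
    return pairs

accel = [Sample(1.00, 90.0), Sample(2.10, 120.0)]
force = [Sample(1.02, 180.0), Sample(2.11, 250.0), Sample(5.0, 60.0)]
print(pair_strikes(accel, force))  # the 5.0 s reading finds no S1 peak
```

The resulting (acceleration, force) pairs are exactly the input needed to fit the force-estimation relation described later in the text.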
A server with GPU graphics cards is selected to provide image computation, big data processing and cloud services for the system.
As a large club, the following extensions may be chosen:
as shown in FIG. 7 and FIG. 8, the first sensors S1 worn by each user form a cell sensor network, and the sensors of a plurality of targets form a cell sensor network; the cell sensor networks are organized into personal sensor networks or site sensor networks and connected to the fighting information system.
As an expanded option, the first sensor S1 comprises a processor connected to a motion sensor, a physiological sensor and a pressure sensor; each sensor is connected to the processor, and the processor is connected to the micro base station terminal.
The connection mode of the micro base station terminal and the fighting information system comprises wired connection and wireless sensor network connection, and the connection mode of the processor and the terminal comprises wired connection and wireless sensor network connection.
The motion sensor includes: a three-axis angular velocity sensor, a three-axis acceleration sensor and a three-axis magnetic sensor.
The physiological sensor includes: pulse sensor, temperature sensor, sound sensor.
The pressure sensor includes: matrix film pressure sensor.
The position sensor includes: and a spatial coordinate sensor.
The video image sensor is a visible light camera.
The terminal includes: micro base station, smart mobile phone, PC.
The motion category attribute data D4 includes, but is not limited to: motion rule data and, corresponding to the motion rules, motion force data, motion level data, motion amplitude data, injury degree data, duration data, physical-consumption degree data, physiological degree data and match rule data.
Wherein the motion rules include, but are not limited to: Sanda (free sparring), standing striking, no-holds-barred fighting, MMA, UFC rules, free fighting, martial arts, Taijiquan, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, athletics, gymnastics, ball sports and the like.
The user has personal profile data D5 including, but not limited to: the user's height, weight, body measurements, arm span, arm weight, fist weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical match results, typical motion data, strong- and weak-event motion data, voiceprint data, image data and video data.
The fighting information system comprises a terminal unit and a cloud system which are communicated with each other; the terminal unit and the terminal are integrally or separately arranged, and the cloud system is arranged in the network cloud.
The application software running on the terminal connects to, collects and processes the data of the user, the first data D1, the second data D2, the motion category attribute data D4, the user personal profile data D5 and the video data D6, completes user interaction, and assists in generating the associated data D3, the user personal profile data D5, the three-dimensional vectorized data D7 and the calibration data D8.
The application software running on the terminal transmits the data to the cloud center to form big data.
The application software running on the terminal cooperates with the cloud center software to complete the functions of learning, training, user identification, action identification and pressure identification.
The cloud center software running in the cloud center is responsible for processing the big data, including deep learning, data mining, classification algorithms, artificial intelligence processing, generating the associated data D3, the video data D6 and the calibration data D8, updating D5, cloud center computing, cloud center management, and communicating with the application software.
The motion information system comprises application software and cloud center software.
A user is connected and managed by application software to form a fight information system; a plurality of users are connected and managed by a plurality of application software to form a plurality of fighting information systems.
The motion information systems of a plurality of users communicate with each other and complete interaction.
(II) description of configuration section
1. Mobile phone configuration
This system connects the micro base station to 2 wristbands, 2 ankle bands and 1 second sensor, communicating via the BLE (Bluetooth Low Energy) protocol or the Wi-Fi protocol; by analogy, other WSN protocols may also be adopted. The micro base station transmits the data collected by the above 5 sensors to the cloud database of the fighting information system. The 5 sensors synchronize their collected data by timestamping against the system time to obtain the user's motion data and, in cooperation with the cloud center configuration, realize the functions of the fighting information system.
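The timestamp synchronization described above can be sketched as follows: each of the 5 sensors tags its samples with the shared system time, and the base station groups them into common time slots before uploading. The slot size and function names here are illustrative assumptions, not values from the patent.

```python
# Group timestamped samples from multiple sensors into common time slots.
from collections import defaultdict

SLOT_MS = 10  # one slot per 10 ms, i.e. a 100 Hz common timeline (assumed)

def synchronize(streams):
    """streams: dict sensor_id -> list of (timestamp_ms, reading).
    Returns dict slot -> {sensor_id: reading}, keeping the latest
    reading per sensor within each slot."""
    slots = defaultdict(dict)
    for sensor_id, samples in streams.items():
        for ts, reading in sorted(samples):
            slots[ts // SLOT_MS][sensor_id] = reading
    return dict(slots)

merged = synchronize({
    "wrist_L": [(1001, 0.3), (1012, 0.5)],
    "ankle_R": [(1004, 1.1)],
})
print(merged)  # samples at 1001 ms and 1004 ms fall into the same slot
```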
The configuration running on the mobile phone connects to, collects and processes the data of the user, the first data D1, the second data D2, the motion category attribute data D4, the user personal profile data D5 and the video data D6, completes user interaction, and assists in generating the associated data D3, the user personal profile data D5, the three-dimensional vectorized data D7 and the calibration data D8.
The configuration running on the mobile phone transmits the data to the cloud center to form big data.
The configuration running on the mobile phone cooperates with the cloud center configuration to complete the functions of learning, training, user identification, action identification and pressure identification.
2. Cloud centric configuration
The cloud center configuration running in the cloud center is responsible for processing the big data, including deep learning, data mining, classification algorithms, artificial intelligence processing, generating the associated data D3, the video data D6 and the calibration data D8, updating D5, cloud center computing, cloud center management, and communicating with the terminal application configuration.
The motion information system comprises terminal application configuration and cloud center configuration.
One user is connected to and managed by one application configuration to form one motion information system; a plurality of users are connected to and managed by a plurality of application configurations to form a plurality of motion information systems.
The motion information systems of a plurality of users communicate with each other and complete the steps of interaction and social contact.
(III) Key method steps
1. The first data D1 is monitored by the first sensors S1 (2 wristbands and 2 ankle bands) worn on the user's body, and is transmitted to the fighting information system through the sensor network. The first data D1 is processed at the same time.
2. The second data D2 is monitored while the user strikes the target, using a second sensor S2 provided on the target. While the user hits the target, the first data D1 and the second data D2 are acquired simultaneously in chronological order, and the associated data D3 is generated. The second data D2 and the associated data D3 are transmitted to the fighting information system through the sensing network. The users here include: trainee users, coach users and opponent users. The sensing network comprises terminals, including fixed terminals and mobile terminals: micro base stations, smartphones and PCs. The target includes a dummy, a sandbag, a hand target, a foot target, a wall target and the like. Use of a fighting target includes striking it with the fists, feet and other body parts.
3. Monitoring first data D1 with a first sensor S1 disposed on the user' S body, including:
the motion sensor in the first sensor S1 is used for collecting the motion data of the user, the physiological sensor in S1 is used for collecting the physiological data of the user, and the pressure sensor in S1 is used for collecting the pressure data when the user hits the target and hits the opponent. The second data D2 is monitored when the user strikes the target using the second sensor S2 provided on the target, the pressure data when the user strikes the target is collected using the pressure sensor in S2, and the position data when the user strikes the target is collected using the position sensor in S2.
All the first sensors S1 worn by one user are connected to the personal sensor network, the site sensor network, the fighting information system using the cell sensor network. All the second sensors S2 provided for one set of targets are connected to the personal sensor network, the site sensor network, the fighting information system by using the cell sensor network.
The system time value T at which the first data D1 and the second data D2 are monitored is collected and recorded into the first data D1 and the second data D2.
The first data D1 and the second data D2 are A/D converted.
The sampling frequency and sampling precision of S1 and S2 are adjusted according to the motion category attribute data D4.
The first data D1 and the second data D2 are interpolated and gap-filled to a predetermined scale, and then merged into the associated data D3.
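The interpolation-and-merge step above can be illustrated with a minimal sketch: D1 (motion) and D2 (force) are sampled at different rates, so D2 is linearly interpolated onto D1's timestamps and the two series are merged into associated records. The function names and data shapes are illustrative assumptions.

```python
# Linearly interpolate D2 onto D1's timeline, then merge into D3 records.
def lerp_series(times, values, t):
    """Piecewise-linear interpolation of (times, values) at time t,
    clamped at the ends of the series."""
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    for i in range(1, len(times)):
        if t <= times[i]:
            w = (t - times[i - 1]) / (times[i] - times[i - 1])
            return values[i - 1] + w * (values[i] - values[i - 1])

def merge_d1_d2(d1, d2):
    """d1: [(t, motion)], d2: [(t, force)] -> D3 records [(t, motion, force)]."""
    t2 = [t for t, _ in d2]
    v2 = [v for _, v in d2]
    return [(t, m, lerp_series(t2, v2, t)) for t, m in d1]

d1 = [(0.0, 10.0), (0.5, 12.0), (1.0, 11.0)]   # dense motion samples
d2 = [(0.0, 0.0), (1.0, 100.0)]                # sparse force samples
print(merge_d1_d2(d1, d2))
```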
S1 is provided at the wrist, ankle, joint, or hitting position of the user.
An artificial intelligence algorithm is adopted to summarize and extract the user's habitual-action feature data from the user's motion data, and the feature data is recorded into the user's personal profile data D5.
An artificial intelligence algorithm is adopted to summarize and extract the user's voiceprint feature data from the user's voice data, and the feature data is recorded into the user's personal profile data D5.
An artificial intelligence algorithm is adopted to summarize and extract the motion feature data of each sport from the motion category attribute data, and the feature data is recorded into the motion category attribute data D4.
The motion category attribute data D4 includes, but is not limited to: motion rule data and, corresponding to the motion rules, motion force data, motion level data, motion amplitude data, injury degree data, duration data, physical-consumption degree data, physiological degree data and match rule data.
Wherein the motion rules include, but are not limited to: Sanda (free sparring), standing striking, no-holds-barred fighting, MMA, UFC rules, free fighting, martial arts, Taijiquan, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, athletics, gymnastics, ball sports and the like.
The user has personal profile data D5 including, but not limited to: the user's height, weight, body measurements, arm span, arm weight, fist weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical match results, typical motion data, strong- and weak-event motion data, voiceprint data, image data and video data.
The motion sensor comprises an angular velocity sub-sensor, an acceleration sub-sensor and a magnetic force sub-sensor, and the axis system comprises but is not limited to XYZ three axes.
4. The associated data D3 is formatted according to the data content including but not limited to the sampling type, sampling frequency, sampling precision, data format. The motion data portion in the associated data D3 is divided into motion units according to the characteristics of the motion, and the unit data D3-U is calculated.
The mapping unit data D3-U is a moving image, and according to the collected sequence, in the unit data D3-U, the three-axis data of the motion sensor collected at each time is set as a group, and one group is mapped as image point mapping of one pixel point in the moving image.
The data acquired by each sub-sensor of the X-axis, the Y-axis and the Z-axis of the motion sensor in the mapping unit data D3-U is a moving image, each acquisition point of each sub-sensor is mapped to be a pixel point in the corresponding moving image, the X, Y, Z three-axis data of the acquisition point is used as an independent variable X of the RGB three-primary-color data of the pixel point, a function Y (X) of the RGB color value Y is established, and the multi-map mapping of the RGB three-primary-color data is calculated.
The following multi-channel mapping method can also be used:
the method comprises the steps of enabling collected data of one sub-sensor in a motion sensor in mapping unit data D3-U to be a moving image, mapping collected data of other sub-sensors to be a channel of the moving image, mapping each collecting point of each sub-sensor to be a corresponding moving image or a pixel point in the channel, enabling X, Y, Z triaxial data of the collecting point to serve as independent variables x of RGB data or channel data of the pixel point, establishing a function y (x) of RGB color code values y, and calculating single-image multi-channel mapping of the RGB data or the channel data.
The RGB functions include linear functions of the form y = kx + j, where k and j are adjustment constants, as well as nonlinear functions.
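The pixel mapping and linear color function above can be sketched as follows: each three-axis sample (x, y, z) becomes one RGB pixel via y = kx + j, clipped to the valid 8-bit range. The values of k and j here are illustrative assumptions, not constants from the patent.

```python
# Map three-axis motion samples to RGB pixels via y = K*x + J, clipped to 0-255.
K, J = 8.0, 128.0  # illustrative adjustment constants

def to_color(v):
    """Linear map y = K*v + J, clipped to a valid 8-bit channel value."""
    return max(0, min(255, int(K * v + J)))

def samples_to_pixels(samples):
    """samples: list of (x, y, z) motion readings -> list of RGB pixels."""
    return [(to_color(x), to_color(y), to_color(z)) for x, y, z in samples]

print(samples_to_pixels([(0.0, 1.0, -1.0), (100.0, 0.0, 0.0)]))
# first pixel: (128, 136, 120); the 100.0 reading saturates to 255
```

Rows of such pixels, one row per sampling instant, form the "moving image" that the image recognition algorithms later consume.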
An artificial-intelligence image recognition and classification algorithm performs deep learning on a plurality of moving-image data sets, summarizing and computing feature data including, but not limited to, motion type features, action type features, pressure magnitude features and user recognition features; when the next associated data D3 is collected, the image deep learning computes and compares against this feature data.
According to image and video file formats, the multi-image mappings and single-image mappings are converted into image and video files, so that images viewable by the human eye on a display can conveniently be reconstructed.
Artificial intelligence algorithms include, but are not limited to: artificial neural network algorithms, CNN algorithms, RNN algorithms, SVM algorithms, genetic algorithms, ant colony algorithms, simulated annealing algorithms, particle swarm algorithms and Bayesian algorithms.
Action recognition is realized in two steps: first, an action feature library is established; second, the action feature library is queried.
To establish the action feature library, users with standardized technique are selected; they wear the first sensor S1 and perform various actions, yielding action data and the corresponding action names. The action features are extracted through artificial intelligence analysis including, but not limited to, CNN and SVM algorithms, and recorded as an action feature library in the database of the cloud center.
Second, after the data of an unknown action is obtained, its feature data is extracted using the CNN and SVM algorithms, the action feature library of the cloud center is searched for the entry with the highest similarity, and its action code is retrieved, thereby realizing action recognition.
When a user needs to be identified from his or her actions, the user's action data is obtained first and the user's behavior features are extracted, including, but not limited to, by CNN (convolutional neural network) and SVM (support vector machine) algorithms; the database of the cloud center is then searched with this feature data for the entry with the highest similarity, thereby realizing user identification.
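The two-step recognition above (build the feature library, then query it) can be sketched with a plain nearest-neighbour lookup standing in for the CNN/SVM feature extractor and the cloud-center database. All names and feature vectors here are illustrative assumptions.

```python
# Step 1: build an action feature library; step 2: query it by similarity.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Step 1: features recorded from users with standardized technique.
library = {
    "straight_punch": [0.9, 0.1, 0.0],
    "hook":           [0.2, 0.9, 0.1],
    "front_kick":     [0.0, 0.2, 0.9],
}

# Step 2: return the library entry with the highest similarity.
def recognize(feature):
    return max(library, key=lambda name: cosine(library[name], feature))

print(recognize([0.8, 0.3, 0.1]))  # closest to "straight_punch"
```

The same lookup pattern applies to user identification: replace the action library with per-user habitual-action feature vectors.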
5. Three-dimensional reconstruction from 2D images is also included.
The 4-channel video image sensors S3 capture the video images D6 of the user's matches from 4 viewpoints.
The 4-channel video image sensors S3 communicate with the fighting information system through the sensing network.
Based on the video image D6 and the first data D1, and according to the position of the first sensor S1 within the video image D6, an artificial intelligence algorithm performs three-dimensional vectorized synthesis of the motion actions, yielding the three-dimensional vectorized data D7.
Associations are established between the three-dimensional vectorized data D7 and the second data D2, the associated data D3, the motion category attribute data D4 and the personal profile data D5.
According to the three-dimensional vectorized data D7 and the motion category attribute data D4, an artificial intelligence algorithm identifies the motion actions in the video image D6, and the time points before and after each identified action are marked and synchronized in the video image D6.
The match training includes individual training, one-on-one matches and multi-player competitive matches.
6. Coach learning: a coach user strikes the target with standardized actions according to the motion category attribute data D4 to obtain the coach's associated data D3; machine learning is performed on the coach's associated data D3 with an artificial intelligence algorithm to obtain the coach's association result D3-AI1 and confidence result D3-AI2, and the coach user's personal profile data D5 is updated.
Self-training: a trainee user strikes the target according to the motion category attribute data D4 to obtain the trainee's associated data D3; machine learning is performed on the trainee's associated data D3 with an artificial intelligence algorithm to obtain the trainee's association result D3-AI1 and confidence result D3-AI2, and the trainee user's personal profile data D5 is updated.
The trainee's association result D3-AI1 is cyclically compared with the coach's association result D3-AI1, and the trainee's confidence result D3-AI2 with the coach's confidence result D3-AI2.
From the trainee's association result D3-AI1 and confidence result D3-AI2, the trainee's strong points, weak points and their differences are calculated and analyzed, the trainee's personal profile data D5 is updated, and training-suggestion information addressing the strong and weak points is calculated and output.
Opponent training: the personal profile data D5 of the opponent user and of the trainee are retrieved; their typical motion data, strong-event data and weak-event data are compared, the differences are calculated and analyzed, a targeted training-suggestion plan is made, and the trainee's training results are supervised and checked.
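The trainee-versus-coach comparison above can be sketched as a per-action gap analysis: actions where the trainee lags the coach's reference by more than a tolerance are flagged as weak points. Metric names, scores and the threshold are illustrative assumptions, not the patent's criteria.

```python
# Flag actions where the trainee's score lags the coach's reference.
def compare(student, coach, tolerance=0.10):
    """student/coach: dict action -> score (e.g. derived from D3-AI1).
    Returns the actions where the trainee lags or keeps pace."""
    weak, strong = [], []
    for action, ref in coach.items():
        gap = (ref - student.get(action, 0.0)) / ref
        (weak if gap > tolerance else strong).append(action)
    return {"weak": weak, "strong": strong}

coach   = {"jab": 100.0, "hook": 120.0, "kick": 150.0}
student = {"jab":  95.0, "hook":  80.0, "kick": 148.0}
print(compare(student, coach))  # only the hook lags by more than 10%
```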
7. When the first data D1 of a user is collected, an artificial intelligence algorithm performs single-sensor user identification based on the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2 and the three-dimensional vectorized data D7.
When the first data D1 of a user is collected, an artificial intelligence algorithm performs habitual-action user identification based on the first data D1 and the habitual-action feature data.
When the collected first data D1 of a user includes voice data, an artificial intelligence algorithm performs voiceprint user identification based on the voice data and the voiceprint feature data.
When the first data D1 and the second data D2 of a user are collected, an artificial intelligence algorithm performs dual-sensor user identification based on the first data D1, the association result D3-AI1, the user's confidence result D3-AI2 and the three-dimensional vectorized data D7.
When the first data D1 of a user is collected, an artificial intelligence algorithm performs single-sensor action identification against the motion category attribute data D4 based on the user, the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2 and the three-dimensional vectorized data D7.
When the first data D1 and the second data D2 of a user are collected, an artificial intelligence algorithm performs dual-sensor action identification against the motion category attribute data D4 based on the user's first data D1, association result D3-AI1, confidence result D3-AI2 and the three-dimensional vectorized data D7.
When the first data D1 of a user is collected, an artificial intelligence algorithm performs action-feature action identification against the motion category attribute data D4 based on the first data D1 and the action feature data.
The pressure data generated by the user's striking motion is calculated from the image deep-learning step and the calibration data D8.
The user is made to strike the target; the angular velocity and acceleration data from the first sensor S1 and the pressure data from the second sensor S2 are acquired, and an acceleration-pressure correlation D8 is established according to Newtonian mechanics.
When the user strikes the target or an opponent using only the first sensor S1, without the second sensor S2, pressure recognition is performed from the first data D1 using the acceleration-pressure correlation D8.
Taking boxing as an example, let the force with which a user strikes an opponent be F, and decompose F into the tension F1 generated by the arm muscles and the impulsive force F2. By Newton's law of mechanics, F = F1 + F2 = F1 + ma, where m is the equivalent mass of the fist (the influence exerted on the fist by the movement of the parts of the body other than the glove) and a is the acceleration of the fist. On the basis that the user's body size data and body-part mass data do not change in the short term, and that the body and muscles form a memory effect through extensive training, the inventor's judgment is: for the same action, the acceleration a is the same, and the value of the output striking force F is also the same. Thus, once S1 is measured together with S2, the relationship between D1 and D2 is established; thereafter, as long as D1 is measured, the value of D2 can be estimated. This is the principle and method of striking-force recognition proposed by the inventor.
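The striking-force principle above (F = F1 + F2 = F1 + ma, with F1 and m constant for a given user and action) can be sketched as a simple calibration-then-estimation procedure. The sketch below is illustrative only: the patent does not specify the fitting method, and the function names, synthetic numbers and linear least-squares fit are assumptions.

```python
import numpy as np

# During calibration the wrist sensor S1 supplies fist acceleration a (D1)
# and the target sensor S2 supplies striking force F (D2). Per F = F1 + m*a,
# a linear fit recovers the muscle tension F1 (intercept) and the equivalent
# fist mass m (slope); afterwards F is estimated from a alone.

def calibrate(accelerations, forces):
    """Least-squares fit of F = F1 + m*a from paired S1/S2 samples."""
    m, f1 = np.polyfit(accelerations, forces, 1)  # slope, intercept
    return m, f1

def estimate_force(acceleration, m, f1):
    """Estimate striking force from acceleration alone (S1 only)."""
    return f1 + m * acceleration

if __name__ == "__main__":
    # Synthetic calibration session: assumed true muscle tension 50 N,
    # equivalent fist mass 2.5 kg, small measurement noise.
    rng = np.random.default_rng(0)
    a = rng.uniform(10, 80, size=200)             # m/s^2
    f = 50.0 + 2.5 * a + rng.normal(0, 1, 200)    # N
    m, f1 = calibrate(a, f)
    print(m, f1)                                  # near 2.5 and 50.0
    print(estimate_force(40.0, m, f1))            # near 150 N
```

Once the pair (m, F1) is stored in the calibration data D8, the second sensor S2 is no longer required for that user and action.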
8. According to the competition rules in the sports type attribute data D4, when a plurality of users compete, an artificial intelligence algorithm is adopted to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user.
According to the game rules in the sports type attribute data D4, the association results D3-AI1 and confidence results D3-AI2 corresponding to a plurality of users are compared, and real-time game process data are obtained, including the degree and number of hits received, the degree and number of injuries, the counts of seconds and their number of occurrences, TKO and KO.
Dynamic odds and prediction result data for the game are calculated and output based on the game process data.
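The dynamic-odds output mentioned above can be sketched as a normalization of per-user scores into win probabilities and decimal odds. This is a minimal illustration, not the patent's algorithm: the score inputs, the bookmaker margin and the function name are all assumptions standing in for the unspecified D3-AI2 comparison.

```python
# Map each user's live score (a stand-in for the confidence result) to a
# win probability and a decimal odds quote with a bookmaker margin.

def dynamic_odds(scores, margin=0.05):
    """Return {user: (win_probability, decimal_odds)} from live scores."""
    total = sum(scores.values())
    result = {}
    for user, score in scores.items():
        p = score / total
        # Decimal odds shrink as the margin grows, in the book's favor.
        result[user] = (p, round(1.0 / (p * (1.0 + margin)), 2))
    return result

if __name__ == "__main__":
    live = {"user_A": 6.0, "user_B": 4.0}
    print(dynamic_odds(live))  # A near 1.59, B near 2.38
```

Recomputing this on every update of the game process data yields odds that track the match in real time.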
9. The first sensor S1 and the second sensor S2 are made to communicate with more than one fixed terminal to calculate the absolute data of the own space coordinates, the motion speed and the motion track of the first sensor S1 and the second sensor S2.
The first sensor S1 and the second sensor S2 are made to communicate with more than one mobile terminal and with each other, so as to calculate the relative data of the space coordinates, the motion speed and the motion track of the first sensor S1 and the second sensor S2.
The result information of the fighting information system is processed and displayed using the fixed terminal and the mobile terminal.
The live playback video, including the result information and the motion actions, is sent to more than one display device, so that the result information and the live video are displayed in a fused manner.
The fixed terminal and the mobile terminal include: a micro base station, a PC and a smart phone. The connection modes of the sensing network include wired and wireless.
10. The user wearing the first sensor S1 is searched by the fighting information system and roll call information is sent to the user, and the first sensor S1 worn by the user responds after receiving the roll call information, so that roll call is realized.
The user wearing the first sensor S1 sends out registration information to the fighting information system through the first sensor S1, and obtains a response from the fighting information system, thereby realizing registration.
The fighting information system sends notification information to the first sensor S1 worn by the user; upon receiving it, the first sensor S1 responds to the fighting information system and displays, vibrates and sounds the notification on the device.
The fighting information system locates the user wearing the first sensor S1 through one or more terminals, using any of a variety of positioning algorithms.
The user wearing the first sensor S1 sends active alarm information to the fighting information system according to the user's subjective intention.
The first sensor S1 issues abnormality alarm information to the sports information system based on abnormal values of the first data D1.
The fighting information system and the first sensor S1 communicate through the sensing network; abnormal values comprise alarm triggering conditions preset by the user and by the motion information system.
Therefore, the fighting information system can realize the functions of positioning, registering, roll calling, informing, alarming and the like of the user, and provides technical support for strengthening management.
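The roll-call, registration, notification and alarm exchanges described above can be sketched as a small request/response protocol. The patent specifies no wire format, so the message fields, type names and acknowledgment scheme below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Message:
    kind: str        # e.g. "roll_call", "register", "notify", "alarm"
    user_id: str
    payload: str = ""

class WornSensor:
    """Minimal stand-in for a first sensor S1 answering the system."""

    def __init__(self, user_id):
        self.user_id = user_id

    def handle(self, msg):
        # Roll call: answer only when addressed to this wearer.
        if msg.kind == "roll_call" and msg.user_id == self.user_id:
            return Message("roll_call_ack", self.user_id)
        # Notification: a real device would also display, vibrate
        # and sound here; this sketch just acknowledges.
        if msg.kind == "notify":
            return Message("notify_ack", self.user_id, msg.payload)
        return None

if __name__ == "__main__":
    s1 = WornSensor("U42")
    ack = s1.handle(Message("roll_call", "U42"))
    print(ack.kind)  # roll_call_ack
```

Registration and alarms travel in the opposite direction (sensor to system) but would follow the same message shape.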
11. The system comprises: a first sensor S1, a terminal and a fight information system; the first sensor S1 is connected to a terminal, which is connected to a fight information system and processes data from the first sensor S1.
12. Further comprising: a second sensor S2, a video image sensor S3; the second sensor S2 and the video image sensor S3 are connected to a terminal, which is connected to a fighting information system, respectively.
13. The first sensor S1 is composed of a processor, a motion sensor, a physiological sensor, a pressure sensor, a user number generator and a geographic coordinate sensor. The motion sensor, the physiological sensor, the pressure sensor, the user number generator and the geographic coordinate sensor are respectively connected with the processor, and the processor is connected with the terminal.
The second sensor S2 includes a pressure sensor and a position sensor. The connection mode of the terminal and the fighting information system comprises wired connection and wireless sensor network connection, and the connection mode of the processor and the terminal comprises wired connection and wireless sensor network connection.
The motion sensor includes: three-axis angular velocity sensors, three-axis acceleration sensors, three-axis magnetic sensors, electronic compass sensors, speed sensors, motion direction sensors, displacement sensors, trajectory sensors, light sensors, and combinations thereof.
The physiological sensor includes: blood oxygen sensor, blood pressure sensor, pulse sensor, temperature sensor, perspiration level sensor, sound sensor, light sensor.
The pressure sensor includes: pressure sensor, impulsive force sensor, impulse sensor.
The position sensor includes: a spatial position sensor, a spatial coordinate sensor, an optical sensor, a camera.
The user number generator includes: a user number storage, editing and sending module.
The geographic coordinate sensor includes: a navigation satellite positioning module.
The video image sensor is a visible light camera or an invisible light camera.
14. The sensing network comprises a fixed terminal and a mobile terminal. The terminal comprises a micro base station, a smart phone and a PC; the connection mode of the sensing network comprises a wired mode and a wireless mode.
The micro base station includes: more than one downlink interface, a processor, a power subsystem and an uplink interface. Each downlink interface is connected to the processor, and the processor is connected to the uplink interface; the power subsystem supplies power to the downlink interfaces, the processor and the uplink interface. The downlink interfaces communicate with the first sensor S1, the second sensor S2 and the video image sensor S3 through a wireless sensor network, and the uplink interface communicates with the fighting information system through a wired or wireless network.
The motion information system comprises a terminal unit and a cloud system which are communicated with each other; the terminal unit and the terminal are integrally or separately arranged, and the cloud system is arranged in the network cloud.
Targets include boxing targets, balls, rackets and sports equipment; the uses of a boxing target include striking the target with fists, feet and other body parts.
15. The functions of connecting, collecting and processing the data including the user, the first data D1, the second data D2, the motion category attribute data D4, the user personal profile data D5 and the video data D6, completing user interaction and assisting in generating the association data D3, the user personal profile data D5, the three-dimensional vectorization data D7 and the calibration data D8 are configured by the application running on the terminal.
The application running on the terminal is configured to complete the function of forming big data including transmitting data to the cloud center.
The functions of learning, training, user identification, action identification and pressure identification are completed by the application configuration running on the terminal and the cloud center software.
The cloud center software running in the cloud center is configured to complete the processing of big data, including deep learning, data mining, classification algorithms, artificial intelligence processing, generation of the associated data D3, the video data D6 and the calibration data D8, updating of D5, cloud center computing and cloud center management, as well as the communication with the application software.
The motion information system comprises an application configuration and a cloud center configuration.
One application software connects to and manages one user to form one motion information system; a plurality of application software connect to and manage a plurality of users to form a plurality of sports information systems.
The motion information systems of a plurality of users communicate with each other and complete interaction.
(IV) description of advantageous effects
1. According to the pressure identification step, the problem of dynamically measuring the hitting force when only angular velocity and acceleration sensors are adopted during fighting is solved, the implementation is convenient, and the cost is reduced.
2. According to the key method step 4, the problem of imaging conversion of the motion data is solved, visualization is achieved, and application of the existing artificial intelligent image recognition algorithm is facilitated.
3. According to the key method step 5, the problem of 3D vectorization of the 2D video is solved.
4. According to the key method steps 7 and 8, personnel identification, motion identification, mechanical measurement, automatic judgment and dynamic odds calculation are achieved.
5. According to the key method step 6, the function of artificial intelligence auxiliary fight coaching is introduced.
6. According to the key method step 10, new functions of user positioning, registration, roll calling, notification, alarming and the like are developed.
II. Motion recognition system-bracelet edition
Overview of the System
The system is mainly used for identity recognition, motion recognition and management of personal motion users, and particularly has the functions of recognizing the identity and motion actions of the users under the support of cloud big data through extraction and comparison of the personal motion characteristics of the users by the bracelet sensors.
Compared with the fight match training system, identical points are not repeated; the differences are as follows:
1. The first sensor is a bracelet device. As shown in Fig. 3, it comprises a motion sensor formed by a three-axis gyroscope and a three-axis accelerometer, a physiological sensor formed by a heart rate sensor, and a user number generator, and may also comprise a geographic coordinate sensor and a voice sensor. The sampling frequency of the motion sensor is set to 5-50 frames/second, the heart rate sensor collects once per minute, the overall sampling precision is 8-16 bits, and the sampling frequency of the voice sensor is set to 8 kHz-2.8224 MHz.
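The sampling ranges stated above (5-50 frames/second, 8-16 bits, voice at 8 kHz to 2.8224 MHz, heart rate once per minute) can be enforced with a small configuration helper. The clamping approach and the dictionary layout below are illustrative assumptions; only the numeric ranges come from the text.

```python
# Clamp requested settings into the ranges the bracelet edition fixes.

def clamp(value, lo, hi):
    """Constrain value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

def bracelet_config(motion_fps, precision_bits, voice_hz):
    return {
        "motion_fps": clamp(motion_fps, 5, 50),
        "precision_bits": clamp(precision_bits, 8, 16),
        "voice_hz": clamp(voice_hz, 8_000, 2_822_400),
        "heart_rate_per_min": 1,  # one heart-rate sample per minute
    }

if __name__ == "__main__":
    # An out-of-range request is pulled back into the allowed ranges.
    print(bracelet_config(100, 24, 44_100))
```

A request of 100 frames/second at 24 bits would thus be clamped to 50 frames/second at 16 bits, while 44.1 kHz voice sampling passes through unchanged.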
2. Instead of a micro base station, the user's smart phone is used to connect to the first sensor S1.
3. And voiceprint characteristic user identification and habit action user identification are added to synchronously identify the user identity.
4. Identification by action features: outdoor running, race walking, fitness walking and strolling; running and walking on an indoor treadmill; step counting by arm swing, appliance step counting with the sensor placed on a "pedometer" device, animal step counting with the sensor tied to an animal's body, and the like.
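The step-counting modes listed above (arm-swing, appliance, animal) are commonly implemented by counting threshold-crossing peaks in the acceleration magnitude. The patent does not disclose its counting algorithm, so the threshold, the synthetic walking signal and the crossing rule below are illustrative assumptions.

```python
import numpy as np

# Count steps as upward crossings of a threshold in the acceleration
# magnitude |a| (m/s^2); gravity contributes a ~9.8 m/s^2 baseline.

def count_steps(acc_magnitude, threshold=11.5):
    """Count transitions of |a| from below to above the threshold."""
    above = acc_magnitude > threshold
    return int(np.sum(above[1:] & ~above[:-1]))

if __name__ == "__main__":
    t = np.linspace(0, 10, 500)                  # 10 s at 50 frames/s
    # Synthetic walking at 2 steps/s around gravity:
    acc = 9.8 + 2.5 * np.sin(2 * np.pi * 2 * t)
    print(count_steps(acc))                      # 20
```

The same detector applies unchanged whether the sensor swings on a wrist, sits in a pedometer appliance, or is tied to an animal; only the threshold would need retuning.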
5. The exercise rules only include running, race walking, fitness walking and strolling, and do not include other exercises.
6. The method does not include the hitting force identification and the 2D data 3D identification.
7. The motion-recognition actions of the user are statistically managed.
(II) description of configuration section
1. Mobile phone configuration
The system connects to the bracelet sensor through the mobile phone to obtain the user's motion data, and cooperates with the cloud center configuration of the cloud center to realize the functions of the motion information system.
The APP software running on the mobile phone completes the functions of connecting, collecting and processing the data including the user, the first data D1, the second data D2, the motion category attribute data D4 and the user personal profile data D5, completing user interaction and assisting in generating the associated data D3 and the user personal profile data D5.
The APP application running on the mobile phone is configured to complete the function of forming big data by transmitting data to the cloud center.
The functions of learning, training, user identification, action identification and pressure identification are completed by the APP configuration running on the mobile phone and the cloud center software.
2. Cloud centric configuration
The cloud center software running in the cloud center is responsible for completing the processing of big data, including deep learning, data mining, classification algorithms, artificial intelligence processing, generation of the associated data D3, updating of D5, cloud center computing and cloud center management, as well as the communication with the application configuration.
The motion recognition information system comprises an application configuration and a cloud center configuration.
One application configuration connects to and manages one user to form one sports information system; a plurality of application configurations connect to and manage a plurality of users to form a plurality of motion information systems.
The motion recognition information systems of a plurality of users communicate with each other and complete interaction.
(III) Key method steps
Compared with the fight match training system, the differences are as follows:
1. only one bracelet sensor is used, the rest being the same.
2. There is no second sensor and the rest are the same.
3. The first data D1 is monitored with a first sensor S1 disposed on the user's body, including:
User motion data is collected using the motion sensor in the first sensor S1; user physiological data is collected using the physiological sensor; the user number data is collected using the user number generator; and the geographic coordinate data is collected using the geographic coordinate sensor.
The first data D1 is subjected to analog-to-digital (A/D) conversion.
The sampling frequency of the first sensor S1 is adjusted to 5-50 frames/second according to the motion type attribute data D4, and the sampling precision is 8-16 bits.
The first sensor S1 is disposed at the wrist or ankle of the user.
An artificial intelligence algorithm is adopted to extract the user's habitual action feature data from the user motion data and record it into the user's personal profile data D5.
An artificial intelligence algorithm is adopted to extract the user's voiceprint feature data from the user's voice data and record it into the user's personal profile data D5.
An artificial intelligence algorithm is adopted to extract the motion characteristic data of the sport from the motion type attribute data D4 and record it into the motion type attribute data D4.
The rest of the content of the project is the same as the fight match training system.
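Matching a freshly extracted habitual-action or voiceprint feature vector against those recorded in the personal profile data D5, as the steps above describe, can be sketched with a nearest-profile search. Cosine similarity is one common stand-in for the unspecified "artificial intelligence algorithm"; the profile values, threshold and function names are invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def identify(feature, profiles, threshold=0.9):
    """Return the best-matching user id, or None below the threshold."""
    best_user, best_sim = None, threshold
    for user, stored in profiles.items():
        sim = cosine(feature, stored)
        if sim > best_sim:
            best_user, best_sim = user, sim
    return best_user

if __name__ == "__main__":
    d5 = {"alice": [0.9, 0.1, 0.4], "bob": [0.1, 0.8, 0.5]}
    print(identify([0.88, 0.12, 0.41], d5))  # alice
```

A feature that matches no stored profile closely enough returns None, which would trigger enrollment rather than recognition.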
4. The same is true.
5. This item is not included.
6. The same is true.
7. When the first data D1 of the user is collected, an artificial intelligence algorithm is adopted to identify the single-sensor user identification of the user according to the first data D1, the user association result D3-AI1 and the user confidence result D3-AI 2.
When the first data D1 of the user is collected, the habitual action user identification of the user is identified by adopting an artificial intelligence algorithm according to the first data D1 and the habitual action feature data.
When the collected first data D1 of the user includes voice data, the user identification of the voiceprint characteristics of the user is identified by using an artificial intelligence algorithm according to the voice data and the voiceprint characteristic data.
When the first data D1 of the user is collected, the motion-characteristic action recognition of the motion type attribute data D4 is performed by adopting an artificial intelligence algorithm according to the first data D1 and the motion characteristic data.
The pressure data generated by the user's striking motion is calculated from the image deep-learning step and the calibration data D8.
Taking boxing as an example, let the force with which a user strikes an opponent be F, decomposed into the muscle tension F1 and the impulsive force F2. By Newton's law of mechanics, F = F1 + F2 = F1 + ma, where m is the equivalent mass of the fist and a is the acceleration of the fist. On the basis that the user's body size data and body-part mass data do not change in the short term, the acceleration a is the same for the same motion and the value of the output striking force F is the same; therefore, as long as S2 is provided in advance and S1 is measured, the correlation between D1 and D2 is established, and thereafter the value of D2 can be estimated as long as D1 is measured. This is the principle and method of striking-force recognition.
8. According to the competition rules in the sports type attribute data D4, when a plurality of users compete, an artificial intelligence algorithm is adopted to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user.
According to the game rule in the sports category attribute data D4, the corresponding association results D3-AI1 and confidence results D3-AI2 of a plurality of users are compared and instant data are obtained.
Dynamic odds and prediction result data for the game are calculated and output based on the game process data.
9. The first sensor S1 is made to communicate with more than one fixed terminal to calculate the absolute data of the own spatial coordinates, the moving speed, the moving track of the first sensor S1.
The first sensor S1 is caused to communicate with more than one mobile terminal to calculate relative data of the first sensor S1 own spatial coordinates, motion speed, motion trajectory.
10, 11. The same.
12. This item is not included.
13. There is no second sensor, pressure sensor or video image sensor; the rest is the same.
15. There is no second sensor, pressure sensor, video image sensor, video data D6, three-dimensional vectorized data D7 or calibration data D8; the rest is the same.
(IV) description of advantageous effects
1. By making the user wear the bracelet sensor, the problem of personnel identification is solved.
2. The problem of motion identification is solved, in particular the identification of outdoor running, outdoor race walking, outdoor fitness walking, indoor treadmill running, indoor treadmill fitness walking, arm-swing step counting, appliance step counting, animal step counting and the like.
3. The artificial intelligence auxiliary fight coaching function is introduced.
4. The system has new functions of user positioning, registration, roll calling, notification, alarm and the like.
III. Motion recognition system-pure APP edition
Overview of the System
The system is mainly used for identity recognition, motion recognition and management of personal motion users, and particularly has the functions of recognizing the identity of the users and recognizing motion actions under the support of cloud big data through extraction and comparison of personal motion characteristics of the users by a gyroscope and an accelerometer which are arranged in a smart phone.
The mobile terminal is configured to collect user data using its own motion sensor; during use, the mobile phone needs to be held in the hand or worn on the wrist.
The contents identical to the motion recognition system-bracelet edition of the embodiment are not repeated. The difference lies in that the three-axis gyroscope, three-axis accelerometer and three-axis magnetometer built into the mobile phone replace the first sensor S1, and the APP application software reads the sampled data of the phone's motion sensors through direct low-level drivers and performs recognition with an artificial intelligence algorithm.
(II) Key method steps
Compared with the motion recognition system-bracelet edition, the differences and similarities are as follows:
1. the motion sensor carried by the smart phone replaces a bracelet sensor to acquire user motion data, and the rest are the same.
2. The same is true.
3. The first data D1 is monitored using the smartphone; the rest is the same.
4-15. The same.
(III) description of advantageous effects
1. Only the user's mobile phone is needed; the problem of personnel identification is solved without a bracelet sensor.
2. The problem of motion identification is solved, in particular the identification of outdoor running, outdoor race walking, outdoor fitness walking, indoor treadmill running, indoor treadmill fitness walking, arm-swing step counting, appliance step counting, animal step counting and the like.
3. The artificial intelligence auxiliary fight coaching function is introduced.
IV. Ball/track and field sports recognition system
Overview of the System
The system is mainly used for identifying and managing users of ball and track-and-field sports. Compared with the fight match training system, identical parts are not described; the differences are as follows:
1. The first sensor S1 is used to detect the velocity and acceleration of limb movement, and does not need to detect striking force. In addition, for accurate speed measurement, the distance from the racket to the wrist-worn S1 needs to be converted for different rackets.
2. The amount of exercise, including horizontal running and vertical jumping, and the calorie consumption of the limbs are calculated.
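The racket-distance conversion in item 1 above can be sketched under a rigid-swing assumption: if the whole arm-plus-racket rotates about one pivot, the racket head at radius (r_wrist + racket_length) moves faster than the wrist by the ratio of radii. The single-pivot model and the example lengths are illustrative assumptions, not the patent's method.

```python
# Convert the wrist-worn S1 speed reading to racket-head speed,
# assuming a rigid swing about a single pivot (e.g. the shoulder).

def racket_head_speed(wrist_speed, r_wrist, racket_length):
    """Scale wrist speed (m/s) out to the racket head."""
    omega = wrist_speed / r_wrist               # swing angular velocity
    return omega * (r_wrist + racket_length)

if __name__ == "__main__":
    # Wrist 0.6 m from the pivot moving at 8 m/s, 0.68 m tennis racket:
    print(round(racket_head_speed(8.0, 0.6, 0.68), 1))  # 17.1 m/s
```

Changing `racket_length` per equipment type is what "converted for different rackets" amounts to in this sketch.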
3. The racket is provided with a motion sensor and incorporated into the management of the sports category attribute data D4 and the management of the user personal profile data D5.
(II) Key method steps
Compared with the fight match training system, the differences are as follows:
1. the same is true.
2. This item is not included.
3. The first data D1 is monitored with a first sensor S1 disposed on the user's body, including:
User motion data is collected using the motion sensor in the first sensor S1; user physiological data is collected using the physiological sensor in the first sensor S1; the user number is collected using the user number generator in the first sensor S1; geographic coordinates are collected using the geographic coordinate sensor in the first sensor S1; and all the first sensors S1 worn by one user are connected by a unit sensing network to the personal sensor network, the venue sensor network and the motion information system.
The first data D1 is subjected to analog-to-digital (A/D) conversion.
The sampling frequency and the sampling precision of the first sensor S1 are adjusted according to the motion category attribute data D4.
The first sensor S1 is disposed at the wrist, ankle, or joint of the user.
An artificial intelligence algorithm is adopted to extract the user's habitual action feature data from the user motion data and record it into the user's personal profile data D5.
An artificial intelligence algorithm is adopted to extract the user's voiceprint feature data from the user's voice data and record it into the user's personal profile data D5.
An artificial intelligence algorithm is adopted to extract the motion characteristic data of the sport from the motion type attribute data and record it into the motion type attribute data D4.
The motion category attribute data D4 includes: the exercise rule data and, corresponding to it, exercise force data, exercise level data, exercise amplitude data, injury degree data, duration data, physical consumption data, physiological data and match rule data.
The motion rules include at least, but are not limited to: track and field, gymnastics and ball sports.
The user has personal profile data D5, which includes: the user's height, weight, measurements, arm span, arm weight, fist weight, heart rate, blood oxygen, body temperature, lung capacity, date and time, calorie consumption, historical athletic records, historical competition results, typical motion data, force-exertion motion data, voiceprint data, image data and video data.
The motion sensor comprises an angular velocity sub-sensor, an acceleration sub-sensor and a magnetic force sub-sensor, and the axis system at least comprises XYZ three axes.
4. The same is true.
5. This item is not included.
6. The same is true.
7. When the first data D1 of the user is collected, adopting an artificial intelligence algorithm to identify the single-sensor user identification of the user according to the first data D1, the user association result D3-AI1, the user confidence result D3-AI2 and the three-dimensional vectorization data D8.
When the first data D1 of the user is collected, the habitual action user identification of the user is identified by adopting an artificial intelligence algorithm according to the first data D1 and the habitual action feature data.
When the collected first data D1 of the user includes voice data, the user identification of the voiceprint characteristics of the user is identified by using an artificial intelligence algorithm according to the voice data and the voiceprint characteristic data.
When the first data D1 of the user is collected, the motion-characteristic action recognition of the motion type attribute data D4 is performed by adopting an artificial intelligence algorithm according to the first data D1 and the motion characteristic data.
8. According to the competition rules in the sports type attribute data D4, when a plurality of users compete, an artificial intelligence algorithm is adopted to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user.
According to the game rules in the sports category attribute data D4, the association results D3-AI1 and confidence results D3-AI2 corresponding to a plurality of users are compared, and instant game process data are obtained.
Dynamic odds and prediction result data for the game are calculated and output based on the game process data.
9. There is no second sensor and video image sensor, the others being the same.
10. The same is true.
11. No second sensor, pressure sensor, position sensor and video image sensor, the others being the same.
12. This item is not included.
13. No second sensor, pressure sensor, position sensor and video image sensor, the others being the same.
14. The same is true.
15. No second sensor, pressure sensor, position sensor and video image sensor, the others being the same.
(III) Description of advantageous effects
1. By making the user wear the bracelet sensor, the problem of personnel identification is solved.
2. The problem of motion recognition is solved, in particular the problems of action recognition and motion management for various track-and-field sports.
3. The artificial intelligence auxiliary fight coaching function is introduced.
4. The system has new functions of user positioning, registration, roll calling, notification, alarm and the like.
V. Personnel and action recognition system
Overview of the System
The system is mainly oriented to institutions that need to identify persons.
The differences between individuals are analyzed and sought through the collection of people's motions and voices, so that individuals are identified, i.e., person recognition is achieved. Meanwhile, typical movement actions are classified, so that identity identification is further achieved from the movement actions of the same individual.
The system comprises an artificial intelligence bracelet, a mobile phone APP and cloud center software. The specifics are as follows:
(II) Key method steps
Compared with the motion recognition system-bracelet edition, the differences and similarities are as follows:
1, 2. The same.
3. The exercise rules only include daily activity rules; the others are the same.
4, 5, 6. The same.
7. When the first data D1 of the user is collected, an artificial intelligence algorithm is adopted to identify the single-sensor user identification of the user according to the first data D1, the user association result D3-AI1 and the user confidence result D3-AI 2. When the first data D1 of the user is collected, the habitual action user identification of the user is identified by adopting an artificial intelligence algorithm according to the first data D1 and the habitual action feature data. When the collected first data D1 of the user includes voice data, the user identification of the voiceprint characteristics of the user is identified by using an artificial intelligence algorithm according to the voice data and the voiceprint characteristic data. When the first data D1 of the user is collected, the motion characteristic and motion recognition of the motion category attribute data D4 is recognized by using an artificial intelligence algorithm according to the first data D1 and the motion characteristic data.
8, 9. These items are not included.
10-15. The same.
(III) description of advantageous effects
1. The problem of personal authentication is solved, and the function of authenticating people can be realized.
2. The health condition of the user is detected.
3. An artificial intelligence auxiliary exercise coach function is introduced.
4. The system has new functions of user positioning, registration, roll calling, notification, alarm and the like.
VI. Dangerous work security rescue system
(I) Overview of the system
The system is mainly used for security and rescue management by detecting the physiological characteristics of individuals in dangerous working environments, for example, firefighters in a fire, muggy ship-cabin environments during summer shipbuilding, mine tunnel environments, and the like.
The system comprises a plurality of artificial intelligence bracelets, a micro base station, a mobile phone APP, and cloud center software. Specifically:
(II) Key method steps
Compared with the personnel and motion recognition system (bracelet edition), methods and systems 1-15 are basically the same; the differences lie only in the security and rescue software functions, which are adapted accordingly. These are functional points that a person skilled in the art will understand and can design without inventive effort, so they are not described here.
(III) description of advantageous effects
1. The problem of personnel identification is solved; the function of identifying people is realized.
2. The problems of motion recognition and physiological recognition are solved, and the functions of life danger early warning and rescue guiding are provided.
3. The health condition of the user is detected.
4. An artificial intelligence auxiliary exercise coach function is introduced.
5. The system has new functions of user positioning, registration, roll calling, notification, alarm and the like.
Seven: pasture positioning and alarm system
(I) Overview of the system
The system is mainly a management system for animal-breeding detection and security alarming on a pasture.
The system comprises a plurality of artificial intelligence sensors, a micro base station, a mobile phone APP, and cloud center software. Specifically:
(II) Key method steps
Compared with the fight match training system, the differences and similarities of the key methods and systems are as follows:
1. The user is changed to an animal.
2. Not used.
3. The first sensor S1 is provided at the horn and ankle positions of the animal. An artificial intelligence algorithm extracts the animal's habitual action feature data from the animal motion data and records it into the animal's individual profile data D5. An artificial intelligence algorithm extracts the animal's voiceprint feature data from the animal's call data and records it into the animal's individual profile data D5. An artificial intelligence algorithm extracts motion feature data from the motion category attribute data D4 and records it into the motion category attribute data D4.
The rest of this item is the same as in the fight match training system.
4. The same.
5, 6. Not used.
7. When the first data D1 of an animal is collected, an artificial intelligence algorithm identifies the animal from the first data D1, the association result D3-AI1, and the confidence result D3-AI2 (single-sensor animal identification).
When the first data D1 of an animal is collected, an artificial intelligence algorithm identifies the animal from the first data D1 and the habitual action feature data (habitual-action animal identification).
When the collected first data D1 of an animal includes call data, an artificial intelligence algorithm identifies the animal from the call data and the voiceprint feature data (voiceprint animal identification).
8, 9. Not used.
10. The animal information system searches for the animal wearing the first sensor S1 and sends roll call information to the animal, and the first sensor S1 worn by the animal responds after receiving the roll call information, so that roll call is realized.
The registration is realized by the animal wearing the first sensor S1 sending registration information to the animal information system via the first sensor S1 and receiving a response.
The positioning is achieved by the animal information system via more than one terminal for the animal wearing the first sensor S1.
The first sensor S1 sends alarm information to the animal information system when an abnormal value appears in the first data D1, realizing the abnormality alarm.
The animal information system and the first sensor S1 communicate through a sensor network; the abnormal values include alarm trigger conditions preset in the animal information system.
Through roll call, an animal's position, physiological condition, and movement situation can be checked; through the abnormality alarm, it can be learned whether the animal is out of bounds, has abnormal physiological data, is ill, and the like.
11. The system comprises a first sensor S1, a terminal, and an animal information system; the first sensor S1 is connected to the terminal, and the terminal is connected to the animal information system and processes data from the first sensor S1.
12. There is no such item.
13. The first sensor S1 includes, but is not limited to: a processor connected to a motion sensor, a physiological sensor, a user number generator, and a geographic coordinate sensor; the motion sensor, the physiological sensor, the user number generator, and the geographic coordinate sensor are respectively connected to the processor, and the processor is connected to the terminal. The connection between the terminal and the animal information system may be wired or via a wireless sensor network, as may the connection between the processor and the terminal.
The rest of the content of the project is the same as the fight match training system.
14, 15. The human user is changed to an animal user; the rest is the same.
(III) description of advantageous effects
1. The problem of identifying individual pasture animals is solved.
2. The problems of out-of-bounds alarming, positioning, and illness alarming are solved.
3. The health status of the animal is detected.
4. Artificial intelligence is introduced to assist animal feeding.

Claims (15)

1. A method of athletic data monitoring, comprising:
a step of monitoring first data (D1) with a first sensor (S1) provided on a user's body;
a step of transmitting said first data (D1) to a motion information system using a sensor network;
and/or,
-a step of processing said first data (D1).
2. The method of claim 1, further comprising:
a step of monitoring second data (D2) while the user uses the target with a second sensor (S2) provided on the target;
a step of simultaneously acquiring the first data (D1) and the second data (D2) in chronological order while the user is using the target, and generating associated data (D3); and/or,
a step of transmitting the second data (D2) and the associated data (D3) to a motion information system using the sensing network; and/or,
the user at least comprises: student users, coach users, opponent users, and animal users; the sensing network comprises fixed terminals and mobile terminals, including micro base stations, smart phones, and PCs (personal computers); the target includes boxing targets, balls, rackets, and sports equipment, and use of the targets includes striking them with fists, feet, and other body parts.
3. The method of claim 2, wherein the step of monitoring the first data (D1) with the first sensor (S1) disposed on the user's body comprises:
a step of collecting the user motion data using a motion sensor of the first sensors (S1); and/or,
acquiring the user motion data using a motion sensor included in the smart phone and transmitting it directly to the motion information system from within the smart phone; and/or,
a step of acquiring the user physiological data using a physiological sensor of the first sensors (S1); and/or,
a step of collecting pressure data of the user while using the target and/or striking an opponent with a pressure sensor of the first sensors (S1); and/or,
a step of generating user number data of the user using a user number generator included in the first sensor (S1); and/or,
a step of generating geographic coordinate data of the user using a geographic coordinate sensor included in the first sensor (S1); and/or,
the monitoring second data (D2) while the user uses the target with a second sensor (S2) disposed on the target includes:
a step of collecting pressure data of the user while using the target with a pressure sensor of the second sensors (S2); and/or,
a step of collecting position data of the user using the target with a position sensor of the second sensors (S2); and/or,
a step of connecting all of said first sensors (S1) worn by one of said users to a personal sensor network and/or a venue sensor network and/or said sports information system using a cellular sensor network; and/or,
a step of connecting all said second sensors (S2) of a set of targets equipped with a cellular sensor network to a personal sensor network and/or a venue sensor network and/or said sports information system; and/or,
a step of acquiring a system time value (T) monitoring the time of occurrence of the first data (D1) and the second data (D2) and recording the system time value (T) into the first data (D1) and the second data (D2); and/or,
a step of analog/digital (A/D) converting the first data (D1) and the second data (D2); and/or,
a step of adjusting the sampling frequency and sampling precision of the first sensor (S1) and the second sensor (S2) according to the motion category attribute data (D4); and/or,
a step of interpolating and filling up the first data (D1) and the second data (D2) in accordance with a predetermined scale based on the first data (D1) and the second data (D2), and merging the first data (D1) and/or the second data (D2) into the associated data (D3);
wherein the first sensor (S1) is disposed at a wrist, ankle, joint, and/or striking location of the user; and/or,
extracting habitual movement feature data of the user from the user motion data using an artificial intelligence algorithm, and recording it into the personal profile data (D5); and/or,
extracting voiceprint feature data of the user from the user voice data using the artificial intelligence algorithm, and recording it into the personal profile data (D5) of the user; and/or,
extracting motion feature data of the motion from the motion category attribute data (D4) using the artificial intelligence algorithm, and recording it into the motion category attribute data (D4); and/or,
the motion category attribute data (D4) includes: exercise rule data and the corresponding exercise force data, exercise level data, exercise amplitude data, injury degree data, duration data, physical consumption data, physiological data, and/or competition rule data; wherein the exercise rules at least comprise: free combat, standing combat, unrestricted combat, MMA, UFC, free sparring, martial arts, Taijiquan, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, athletics, gymnastics, and ball games;
the user has personal profile data (D5), the personal profile data (D5) including: the user's height, weight, body measurements, arm reach, arm weight, fist weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical exercise data, exertion exercise data, voice data, voiceprint data, image data, and video data.
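The "interpolate and fill according to a predetermined scale" step in claim 3 amounts to resampling the two sensor streams onto a shared, evenly spaced timeline before merging them into the associated data D3. The sketch below is a minimal illustration assuming simple linear interpolation; the function and parameter names (`align_streams`, `rate_hz`) are hypothetical, and the patent does not specify the interpolation method.

```python
import numpy as np

def align_streams(t1, d1, t2, d2, rate_hz=100.0):
    """Resample first data D1 (timestamps t1, values d1) and second
    data D2 (timestamps t2, values d2) onto one shared timeline so the
    two streams can be merged into associated data D3."""
    t_start = max(t1[0], t2[0])          # overlap region only
    t_end = min(t1[-1], t2[-1])
    t = np.arange(t_start, t_end, 1.0 / rate_hz)
    return t, np.interp(t, t1, d1), np.interp(t, t2, d2)

# Two streams sampled at different, irregular rates.
t1 = np.array([0.0, 0.02, 0.04]); d1 = np.array([0.0, 1.0, 0.0])
t2 = np.array([0.0, 0.03]);       d2 = np.array([5.0, 8.0])
t, a, b = align_streams(t1, d1, t2, d2)
```

After alignment, each shared timestamp carries one value from each stream, matching the claim's merging of D1 and/or D2 into D3 with the system time value T.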
4. The method of claim 3, further comprising:
a step of formatting data for the associated data (D3) according to data content including sampling type, sampling frequency, sampling accuracy, and data format; and/or,
a step of calculating unit data (D3-U) by decomposing a motion sequence into motion units in the motion data portion of the associated data (D3) according to the characteristics of motion;
a step of mapping the unit data (D3-U) into a moving image: in the unit data (D3-U), the three-axis data of the motion sensor acquired at each moment is taken as a group, and each group becomes one pixel point in the moving image, in acquisition order;
mapping the data acquired by each of the X-axis, Y-axis, and Z-axis sub-sensors of the motion sensor in the unit data (D3-U) into a moving image: each acquisition point of each sub-sensor is mapped to a corresponding pixel point in the moving image, the X, Y, Z three-axis data of the acquisition point is taken as the independent variable x of the pixel's RGB three-primary-color data, a function y = f(x) for the RGB color value y is established, and the RGB data is calculated; and/or,
mapping the acquired data of one sub-sensor of the motion sensor in the unit data (D3-U) into a moving image and the acquired data of the other sub-sensors into channels of the moving image: each acquisition point of each sub-sensor is mapped to a corresponding pixel point in the moving image or channel, the X, Y, Z three-axis data of the acquisition point is taken as the independent variable x of the pixel's RGB three-primary-color data or channel data, a function y = f(x) for the RGB color value y is established, and the RGB three-primary-color data or channel data is calculated; and/or,
a step of performing deep learning on a plurality of moving image data using image recognition and classification algorithms from artificial intelligence, calculating the user's habitual action features, the user's voiceprint features, the motion's action features, and pressure magnitude features, and comparing feature data when the next associated data (D3) is collected; and/or,
converting the multi-image mapping and the single-image multi-channel mapping into image and video files according to image and video file formats, so that they can be displayed on a display and conveniently viewed by human eyes;
the artificial intelligence algorithm at least comprises: an artificial neural network algorithm, a CNNs convolutional neural network algorithm, an RNN recurrent neural network algorithm, an SVM support vector machine algorithm, a genetic algorithm, an ant colony algorithm, a simulated annealing algorithm, a particle swarm algorithm and a Bayes algorithm;
the RGB functions include the linear function y = kx + j and nonlinear functions, where k and j are tuning constants.
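The pixel mapping of claim 4 with the linear color function y = kx + j might look like the following sketch, where each three-axis motion sample becomes one RGB pixel. The function name `unit_to_image` and the particular constants k = 0.5, j = 128 are illustrative assumptions; the patent leaves f(x) and the tuning constants open.

```python
import numpy as np

def unit_to_image(samples, k=0.5, j=128.0):
    """Map one motion unit (an N x 3 array of X/Y/Z sensor samples)
    to an N-pixel RGB row using the linear color function y = k*x + j,
    clipped to the valid 0-255 RGB range."""
    rgb = np.clip(k * np.asarray(samples, dtype=float) + j, 0, 255)
    return rgb.astype(np.uint8)  # shape (N, 3): one RGB pixel per sample

# Two hypothetical three-axis samples from one motion unit.
unit = [[-256.0, 0.0, 254.0], [100.0, -100.0, 0.0]]
pixels = unit_to_image(unit)
```

Stacking such rows for successive motion units yields the "moving image" that the image recognition and classification algorithms then learn from.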
5. The method of claim 3 or 4, further comprising:
a step of capturing one or more video images (D6) of the user's competition training with one or more video image sensors (S3); and/or,
a step of communicating said one or more video image sensors (S3) with said motion information system via said sensing network; and/or,
a step of performing three-dimensional vectorized composition of the motion actions using the artificial intelligence algorithm, based on the video image (D6) and the first data (D1) and according to the position of the first sensor (S1) in the video image (D6), to obtain three-dimensional vectorized data (D7); and/or,
a step of associating the three-dimensional vectorized data (D7) with the second data (D2), the associated data (D3), the motion category attribute data (D4), and/or the personal profile data (D5); and/or,
a step of identifying a motion in the video image (D6) from the three-dimensional vectorized data (D7) and the motion category attribute data (D4) using the artificial intelligence algorithm, and synchronously labeling the time points before and after the motion in the video image (D6);
wherein the competition training comprises individual training, one-on-one matches, and multi-player matches.
6. The method of claim 4, further comprising:
a step of having the coach user strike a target with normative motions according to the motion category attribute data (D4) to obtain the coach's associated data (D3), performing machine learning on the coach's associated data (D3) with the artificial intelligence algorithm to obtain the coach's association result (D3-AI1) and confidence result (D3-AI2), and updating the coach user's personal profile data (D5); and/or,
a step of having the trainee user strike a target according to the motion category attribute data (D4) to obtain the trainee's associated data (D3), performing machine learning on the trainee's associated data (D3) with the artificial intelligence algorithm to obtain the trainee's association result (D3-AI1) and confidence result (D3-AI2), and updating the trainee user's personal profile data (D5); and/or,
a step of cyclically comparing the trainee's association result (D3-AI1) with the coach's association result (D3-AI1), and cyclically comparing the trainee's confidence result (D3-AI2) with the coach's confidence result (D3-AI2); and/or,
a step of calculating and analyzing the trainee's typical exercise data, strong points, weak points, and gaps from the trainee's association result (D3-AI1) and confidence result (D3-AI2), updating the trainee's personal profile data (D5), and generating and outputting training advice information; and/or,
searching the personal profile data (D5) of the opponent user and of the trainee, comparing their typical exercise data, strong-point data, and weak-point data, calculating and analyzing the differences, making a targeted training plan, and supervising and prompting the training results.
7. The method of claim 5, further comprising:
a step of identifying the user, when the first data (D1) of the user is acquired, from the first data (D1) and the association results (D3-AI1) and/or the confidence results (D3-AI2) and/or the three-dimensional vectorized data (D7), using the artificial intelligence algorithm; or,
a step of identifying the user from the first data (D1) and the habitual action feature data using the artificial intelligence algorithm when the first data (D1) of the user is collected; or,
a step of recognizing the user from the voice data and the voiceprint feature data using the artificial intelligence algorithm when the first data (D1) collected from the user includes voice data; or,
a step of identifying the user, when the first data (D1) and the second data (D2) of the user are acquired, from the first data (D1) and the association results (D3-AI1) and/or the confidence results (D3-AI2) and/or the three-dimensional vectorized data (D7), using the artificial intelligence algorithm; and/or,
a step of identifying the motion category attribute data (D4), when the first data (D1) of the user is acquired, from the user, the first data (D1), and the association results (D3-AI1) and/or the confidence results (D3-AI2) and/or the three-dimensional vectorized data (D7), using the artificial intelligence algorithm; or,
a step of identifying the motion category attribute data (D4) from the user, the first data (D1), and the association results (D3-AI1) and/or the confidence results (D3-AI2) and/or the three-dimensional vectorized data (D7) using the artificial intelligence algorithm when the first data (D1) and the second data (D2) of the user are acquired;
a step of identifying the motion category attribute data (D4) based on the first data (D1) and the motion characteristic data by using the artificial intelligence algorithm when the first data (D1) of the user is collected;
a step of calculating the pressure data generated by the user's striking motion based on the image deep-learning step and the calibration data (D8);
a step of having the user strike the target, obtaining the angular velocity data and acceleration data from the first sensor (S1) and the pressure data from the second sensor (S2), and establishing an acceleration-pressure correlation (D8) according to Newtonian mechanics; and/or,
a step of recognizing pressure from the first data (D1) using the acceleration-pressure correlation (D8) when the user strikes a target or an opponent using only the first sensor (S1), without a second sensor (S2).
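The acceleration-pressure correlation D8 of claim 7 can be illustrated as a calibration fit followed by single-sensor estimation: paired acceleration and pressure readings from a calibration session yield an effective striking mass (in the spirit of F = m·a from Newtonian mechanics), which later converts acceleration-only readings from the first sensor S1 into pressure estimates. The least-squares formulation and all names below are assumptions for illustration, not the patented method.

```python
def fit_accel_pressure(accel, pressure):
    """Calibration: the user strikes a target carrying the second
    sensor S2, giving paired (acceleration, pressure) readings.
    A least-squares line pressure = m_eff * accel + bias is fitted;
    the slope m_eff plays the role of an effective striking mass."""
    n = len(accel)
    mean_a = sum(accel) / n
    mean_p = sum(pressure) / n
    num = sum((a - mean_a) * (p - mean_p) for a, p in zip(accel, pressure))
    den = sum((a - mean_a) ** 2 for a in accel)
    m_eff = num / den
    bias = mean_p - m_eff * mean_a
    return m_eff, bias

def estimate_pressure(a, m_eff, bias):
    """Later, with only the first sensor S1, estimate strike pressure
    from acceleration alone using the fitted correlation D8."""
    return m_eff * a + bias

# Hypothetical calibration readings: acceleration vs. measured pressure.
m, b = fit_accel_pressure([10.0, 20.0, 30.0], [52.0, 102.0, 152.0])
```

Once fitted, `estimate_pressure` lets strikes on an opponent (who wears no second sensor) still be scored for force.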
8. The method of claim 7, further comprising:
calculating the association result (D3-AI1) and the confidence result (D3-AI2) corresponding to each user with the artificial intelligence algorithm while a plurality of users train according to the competition rules in the motion category attribute data (D4);
comparing the association results (D3-AI1) and confidence results (D3-AI2) of the users according to the competition rules in the motion category attribute data (D4) to obtain real-time match process data, including the degree and number of strikes, the degree and number of injuries, the number of knockdowns and count time, TKO, and KO;
and calculating and outputting dynamic odds and prediction result data for the match based on the match process data.
9. The method of claim 2, further comprising:
a step of communicating the first sensor (S1) and/or the second sensor (S2) with more than one fixed terminal to calculate absolute data of the spatial coordinates, motion speed, and motion trajectory of the first sensor (S1) and/or the second sensor (S2); and/or,
a step of communicating the first sensor (S1) and/or the second sensor (S2) with more than one mobile terminal, first sensor (S1), and/or second sensor (S2) to calculate relative data of the spatial coordinates, motion speed, and motion trajectory of the first sensor (S1) and/or the second sensor (S2); and/or,
processing and displaying the result information of the motion information system with the fixed terminal and/or the mobile terminal; and/or,
and sending live or replay video including the result information and/or the motion actions to more than one display device, so that the result information and the live video are displayed in a fused manner.
10. The method of claim 3, further comprising:
a step of the sports information system searching for the user wearing the first sensor (S1) and sending roll call information thereto, the first sensor (S1) worn by the user responding upon receipt; and/or,
a step of the user wearing the first sensor (S1) sending registration information to the sports information system via the first sensor (S1) and acquiring a response; and/or,
a step of the sports information system sending a notification message to the first sensor (S1) worn by the user, the first sensor (S1) responding to the sports information system after receipt, and displaying and/or vibrating on the first sensor (S1); and/or,
a step of the sports information system positioning the user wearing the first sensor (S1) through one or more of the terminals; and/or,
a step of the user wearing the first sensor (S1) sending alarm information to the sports information system according to the user's subjective intention; and/or,
a step of the first sensor (S1) sending alarm information to the motion information system according to abnormal values of the first data (D1); and/or,
communication between the motion information system and the first sensor (S1) is achieved through a sensing network; the abnormal value includes an alarm trigger condition preset by the user and/or the motion information system.
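The abnormal-value alarm of claim 10 reduces to checking monitored fields of the first data D1 against preset alarm trigger conditions. Below is a minimal sketch with hypothetical field names and threshold values; the patent only states that the conditions are preset by the user and/or the motion information system.

```python
def check_alarms(sample, thresholds):
    """Evaluate one reading of first data D1 against preset alarm
    trigger conditions; returns the alarm messages the first sensor S1
    would send to the motion information system."""
    alarms = []
    for field, (lo, hi) in thresholds.items():
        value = sample.get(field)
        if value is not None and not (lo <= value <= hi):
            alarms.append(f"{field} abnormal: {value}")
    return alarms

# Hypothetical preset trigger conditions (field -> allowed range).
thresholds = {"heart_rate": (40, 180), "body_temp": (35.0, 38.5)}
alarms = check_alarms({"heart_rate": 195, "body_temp": 36.6}, thresholds)
```

In the deployed system, each produced message would travel over the sensing network to the motion (or animal) information system to trigger the alarm function.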
11. A system for athletic data monitoring, comprising: a first sensor (S1), a terminal and a motion information system; the first sensor (S1) is connected with the terminal, and the terminal is connected with the motion information system and processes data from the first sensor (S1).
12. The system of claim 11, further comprising: a second sensor (S2) and/or a video image sensor (S3); the second sensor (S2) and the video image sensor (S3) are connected to a terminal, respectively, which is connected to the motion information system.
13. The system according to claim 11 or 12, characterized in that:
the first sensor (S1) is formed by connecting a processor with a motion sensor and/or a physiological sensor and/or a pressure sensor and/or a user number generator and/or a geographic coordinate sensor; the motion sensor, the physiological sensor, the pressure sensor, the user number generator and the geographic coordinate sensor are respectively connected with the processor, and the processor is connected with the terminal; and/or the presence of a gas in the gas,
the second sensor (S2) includes a pressure sensor and a position sensor,
the connection mode of the terminal and the motion information system comprises wired connection and wireless sensor network connection, and the connection mode of the processor and the terminal comprises wired connection and wireless sensor network connection;
the motion sensor includes: three-axis angular velocity sensors, three-axis acceleration sensors, three-axis magnetic sensors, electronic compass sensors, speed sensors, motion direction sensors, displacement sensors, trajectory sensors, light sensors, and combinations thereof;
the physiological sensor includes: blood oxygen sensors, blood pressure sensors, pulse sensors, temperature sensors, perspiration level sensors, sound and/or light sensors;
the pressure sensor includes: a pressure sensor, an impact force sensor, and/or an impulse sensor;
the position sensor includes: a spatial position sensor, a spatial coordinate sensor, a light sensor and/or a camera;
the user number generator includes: a user number storage, editing and sending module;
the geographic coordinate sensor includes: a navigation satellite positioning module;
the video image sensor is a visible light and/or invisible light camera.
14. The system of claim 13, wherein:
the sensing network comprises a fixed terminal and a mobile terminal, wherein the terminal comprises a micro base station and/or a mobile phone and/or a PC; the connection mode of the sensing network comprises a wired mode and a wireless mode;
the micro base station includes: the system comprises more than one downlink interface, a processor, a power subsystem and an uplink interface, wherein the more than one downlink interface is connected with the processor, the processor is connected with the uplink interface, the power subsystem provides power for the downlink interface, the processor and the uplink interface, the downlink interface is connected with the first sensor (S1) and/or the second sensor (S2) and/or the video image sensor (S3) through a wireless sensor network for communication, and the uplink interface is communicated with the motion information system through a wired or wireless network;
the motion information system comprises a terminal unit and a cloud center which are communicated with each other; the terminal unit and the terminal are integrally or separately arranged;
the target includes boxing targets, balls, rackets, sports equipment, and uses of the boxing targets include boxing, foot and body part striking of the targets.
15. The system of claim 14, wherein the cloud center is configured to:
the terminal performs the functions of downward connection, collection, and processing of the user, the first data (D1), the second data (D2), the motion category attribute data (D4), the user profile data (D5), and the video data (D6), performs user interaction, and assists in generating the associated data (D3), the user profile data (D5), the three-dimensional vectorized data (D7), and the calibration data (D8);
the terminal completes the function of forming big data by transmitting data to the cloud center;
the terminal interacts with a cloud center to complete the functions of learning, training, user identification, action identification and pressure identification;
the cloud center completes the functions of processing the big data, including deep learning, data mining, classification algorithms, artificial intelligence processing, generating the associated data (D3), the video data (D6), and the calibration data (D8), updating (D5), cloud computing, and cloud management, and of communicating with the application software;
the motion information system is configured between the terminal and the cloud center.
Publication: CN108096807A, published 2018-06-01.

Family

ID=62208337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711310325.XA Pending CN108096807A (en) 2017-12-11 2017-12-11 A kind of exercise data monitoring method and system

Country Status (2)

Country Link
CN (1) CN108096807A (en)
WO (1) WO2019114708A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109107136A (en) * 2018-09-07 2019-01-01 广州仕伯特体育文化有限公司 A kind of kinematic parameter monitoring method and device
CN109718528A (en) * 2018-11-28 2019-05-07 浙江骏炜健电子科技有限责任公司 Personal identification method and system based on kinematic feature factor
CN109769213A (en) * 2019-01-25 2019-05-17 努比亚技术有限公司 Method, mobile terminal and the computer storage medium of user behavior track record
CN109800860A (en) * 2018-12-28 2019-05-24 北京工业大学 A kind of Falls in Old People detection method of the Community-oriented based on CNN algorithm
WO2019114708A1 (en) * 2017-12-11 2019-06-20 丁贤根 Motion data monitoring method and system
CN110314346A (en) * 2019-07-03 2019-10-11 重庆道吧网络科技有限公司 Intelligent combat-sports boxing glove, foot cover, system and method based on big data analysis
CN110412627A (en) * 2019-05-30 2019-11-05 沈恒 Boat and paddle data acquisition and application method for still-water events
CN110507969A (en) * 2019-08-30 2019-11-29 佛山市启明星智能科技有限公司 A kind of training system and method for tae kwon do
CN112884062A (en) * 2021-03-11 2021-06-01 四川省博瑞恩科技有限公司 Motor imagery classification method and system based on CNN classification model and generation countermeasure network
CN113317783A (en) * 2021-04-20 2021-08-31 港湾之星健康生物(深圳)有限公司 Multimode personalized longitudinal and transverse calibration method
WO2021253296A1 (en) * 2020-06-17 2021-12-23 华为技术有限公司 Exercise model generation method and related device
CN113996048A (en) * 2021-11-18 2022-02-01 宜宾显微智能科技有限公司 Fighting scoring system and method based on posture recognition and electronic protector monitoring
CN114886387A (en) * 2022-07-11 2022-08-12 深圳市奋达智能技术有限公司 Method and system for calculating walking and running movement calorie based on pressure sensation and storage medium
CN115869608A (en) * 2022-11-29 2023-03-31 京东方科技集团股份有限公司 Referee method, device and system for fencing competition and computer readable storage medium
WO2023026256A3 (en) * 2021-08-27 2023-04-06 Rapsodo Pte. Ltd. Intelligent analysis and automatic grouping of activity sensors
TWI803833B (en) * 2021-03-02 2023-06-01 國立屏東科技大學 Cloud sport action image training system and method thereof for ball games
CN116269266A (en) * 2023-05-22 2023-06-23 广州培生智能科技有限公司 AI-based old people health monitoring method and system
TWI824882B (en) * 2022-09-02 2023-12-01 宏達國際電子股份有限公司 Posture correction system and method

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
AT527014A1 (en) * 2023-02-24 2024-09-15 Res Industrial Systems Engineering Rise Forschungs Entwicklungs Und Grossprojektberatung Gmbh Procedure for calibrating a batting glove
CN117100255B (en) * 2023-10-25 2024-01-23 四川大学华西医院 Method for judging fall prevention based on neural network model and related products

Citations (6)

Publication number Priority date Publication date Assignee Title
JPH112574A (en) * 1997-06-11 1999-01-06 Casio Comput Co Ltd Impact force estimating device, impact force estimating method and memory medium storing impact force estimating process program
CN202366428U (en) * 2011-12-22 2012-08-08 钟亚平 Digital acquisition system for beating training of taekwondo
CN103463804A (en) * 2013-09-06 2013-12-25 南京物联传感技术有限公司 Boxing training perception system and method thereof
CN105183152A (en) * 2015-08-25 2015-12-23 小米科技有限责任公司 Sport ability analysis method, apparatus and terminal
KR20160074289A (en) * 2014-12-18 2016-06-28 조선아 Device and method for judging hit
CN107126680A (en) * 2017-06-13 2017-09-05 广州体育学院 A kind of running monitoring and speech prompting system based on motion class sensor

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US9314666B2 (en) * 2013-03-15 2016-04-19 Ficus Ventures, Inc. System and method for identifying and interpreting repetitive motions
EP3005280B1 (en) * 2013-05-30 2019-05-08 Atlas Wearables, Inc. Portable computing device and analyses of personal data captured therefrom
CN106823348A (en) * 2017-01-20 2017-06-13 广东小天才科技有限公司 Motion data management method, device and system and user equipment
CN107213619A (en) * 2017-07-04 2017-09-29 曲阜师范大学 Sports training assessment system
CN108096807A (en) * 2017-12-11 2018-06-01 丁贤根 A kind of exercise data monitoring method and system


Cited By (27)

Publication number Priority date Publication date Assignee Title
WO2019114708A1 (en) * 2017-12-11 2019-06-20 丁贤根 Motion data monitoring method and system
CN109107136A (en) * 2018-09-07 2019-01-01 广州仕伯特体育文化有限公司 A kind of kinematic parameter monitoring method and device
CN109718528B (en) * 2018-11-28 2021-06-04 浙江骏炜健电子科技有限责任公司 Identity recognition method and system based on motion characteristic parameters
CN109718528A (en) * 2018-11-28 2019-05-07 浙江骏炜健电子科技有限责任公司 Identity recognition method and system based on motion characteristic parameters
CN109800860A (en) * 2018-12-28 2019-05-24 北京工业大学 A kind of Falls in Old People detection method of the Community-oriented based on CNN algorithm
CN109769213A (en) * 2019-01-25 2019-05-17 努比亚技术有限公司 Method, mobile terminal and the computer storage medium of user behavior track record
CN109769213B (en) * 2019-01-25 2022-01-14 努比亚技术有限公司 Method for recording user behavior track, mobile terminal and computer storage medium
CN110412627A (en) * 2019-05-30 2019-11-05 沈恒 Boat and paddle data acquisition and application method for still-water events
CN110314346A (en) * 2019-07-03 2019-10-11 重庆道吧网络科技有限公司 Intelligent combat-sports boxing glove, foot cover, system and method based on big data analysis
CN110507969A (en) * 2019-08-30 2019-11-29 佛山市启明星智能科技有限公司 A kind of training system and method for tae kwon do
WO2021253296A1 (en) * 2020-06-17 2021-12-23 华为技术有限公司 Exercise model generation method and related device
TWI803833B (en) * 2021-03-02 2023-06-01 國立屏東科技大學 Cloud sport action image training system and method thereof for ball games
CN112884062B (en) * 2021-03-11 2024-02-13 四川省博瑞恩科技有限公司 Motor imagery classification method and system based on CNN classification model and generated countermeasure network
CN112884062A (en) * 2021-03-11 2021-06-01 四川省博瑞恩科技有限公司 Motor imagery classification method and system based on CNN classification model and generation countermeasure network
CN113317783B (en) * 2021-04-20 2022-02-01 港湾之星健康生物(深圳)有限公司 Multimode personalized longitudinal and transverse calibration method
CN113317783A (en) * 2021-04-20 2021-08-31 港湾之星健康生物(深圳)有限公司 Multimode personalized longitudinal and transverse calibration method
WO2023026256A3 (en) * 2021-08-27 2023-04-06 Rapsodo Pte. Ltd. Intelligent analysis and automatic grouping of activity sensors
US12039804B2 (en) 2021-08-27 2024-07-16 Rapsodo Pte. Ltd. Intelligent analysis and automatic grouping of activity sensors
GB2624999A (en) * 2021-08-27 2024-06-05 Rapsodo Pte Ltd Intelligent analysis and automatic grouping of activity sensors
CN113996048A (en) * 2021-11-18 2022-02-01 宜宾显微智能科技有限公司 Fighting scoring system and method based on posture recognition and electronic protector monitoring
CN113996048B (en) * 2021-11-18 2023-03-14 宜宾显微智能科技有限公司 Fighting scoring system and method based on posture recognition and electronic protector monitoring
CN114886387A (en) * 2022-07-11 2022-08-12 深圳市奋达智能技术有限公司 Method and system for calculating walking and running movement calorie based on pressure sensation and storage medium
CN114886387B (en) * 2022-07-11 2023-02-14 深圳市奋达智能技术有限公司 Method and system for calculating walking and running movement calorie based on pressure sensation and storage medium
TWI824882B (en) * 2022-09-02 2023-12-01 宏達國際電子股份有限公司 Posture correction system and method
CN115869608A (en) * 2022-11-29 2023-03-31 京东方科技集团股份有限公司 Referee method, device and system for fencing competition and computer readable storage medium
CN116269266B (en) * 2023-05-22 2023-08-04 广州培生智能科技有限公司 AI-based old people health monitoring method and system
CN116269266A (en) * 2023-05-22 2023-06-23 广州培生智能科技有限公司 AI-based old people health monitoring method and system

Also Published As

Publication number Publication date
WO2019114708A1 (en) 2019-06-20

Similar Documents

Publication Publication Date Title
CN108096807A (en) A kind of exercise data monitoring method and system
US11990160B2 (en) Disparate sensor event correlation system
US11355160B2 (en) Multi-source event correlation system
US9911045B2 (en) Event analysis and tagging system
KR101687252B1 (en) Management system and the method for customized personal training
AU2017331639B2 (en) A system and method to analyze and improve sports performance using monitoring devices
US9401178B2 (en) Event analysis system
Baca et al. Ubiquitous computing in sports: A review and analysis
CN109692003B (en) Training system is corrected to children gesture of running
US20150318015A1 (en) Multi-sensor event detection system
CN105498188A (en) Physical activity monitoring device
Saponara Wearable biometric performance measurement system for combat sports
JP2018523868A (en) Integrated sensor and video motion analysis method
CN104126185A (en) Fatigue indices and uses thereof
JP2017521017A (en) Motion event recognition and video synchronization system and method
CN104075731A (en) Methods Of Determining Performance Information For Individuals And Sports Objects
WO2017011818A1 (en) Sensor and media event detection and tagging system
WO2017011811A1 (en) Event analysis and tagging system
Hu et al. Application of intelligent sports goods based on human-computer interaction concept in training management
WO2017218962A1 (en) Event detection, confirmation and publication system that integrates sensor data and social media
WO2023150715A2 (en) Systems and methods for measuring and analyzing the motion of a swing and matching the motion of a swing to optimized swing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2018-06-01