WO2019114708A1 - A motion data monitoring method and system (一种运动数据监测方法和系统) - Google Patents
A motion data monitoring method and system
- Publication number: WO2019114708A1 (application PCT/CN2018/120363)
- Authority: WIPO (PCT)
- Prior art keywords
- data
- sensor
- user
- motion
- information system
- Prior art date
Classifications (all under A—HUMAN NECESSITIES › A63—SPORTS; GAMES; AMUSEMENTS › A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT):
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
- A63B71/0619—Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
- A63B2071/065—Visualisation of specific exercise parameters
- A63B69/00—Training appliances or apparatus for special sports
- A63B69/20—Punching balls, e.g. for boxing; other devices for striking used during training of combat sports, e.g. bags
- A63B69/32—Punching balls or other striking devices with indicating devices
- A63B2220/00—Measuring of physical parameters relating to sporting activity
- A63B2220/10—Positions
- A63B2220/20—Distances or displacements
- A63B2220/30—Speed
- A63B2220/40—Acceleration
- A63B2220/50—Force related parameters
- A63B2220/56—Pressure
- A63B2230/00—Measuring physiological parameters of the user
- A63B2244/00—Sports without balls
- A63B2244/10—Combat sports
- A63B2244/102—Boxing
Definitions
- the invention relates to the field of artificial intelligence applications in information technology, in particular to applications of artificial intelligence in sports, specifically to methods and systems for image recognition, motion recognition, personnel identification, intelligent training, and automatic evaluation, and particularly to a motion data monitoring method and system.
- the intent of the present invention is to solve related problems in sports using artificial intelligence technology, and to remedy the shortcomings of current sports intelligence technologies in mechanical measurement, motion recognition, personnel recognition, learning, training, and dynamic body movement (such as combat sports), including practice refereeing, evaluation, and odds calculation.
- the invention creatively introduces a method of data imaging, so that artificial-intelligence results from the field of image recognition can be applied to sports measurement data.
- the present invention includes sensors 104, 105 to 10n, and 10n+1 to 10m+1; a terminal 101; and a combat information system 2 (103).
- the sensors include a motion sensor, a physiological sensor, a user number generator, a geographic coordinate sensor, a pressure sensor, and the like; the terminal further includes a combat information system 1 (102). Specifically:
- a method of motion data monitoring includes, but is not limited to, the step of monitoring the first data D1 with a first sensor S1 disposed on a user's body.
- the structure of the first sensor includes one of, or a combination of, a motion sensor, a physiological sensor, a pressure sensor, a user number generator, and a geographic coordinate sensor, all working under the management of a processor that includes a power subsystem.
- which of the motion sensor, physiological sensor, pressure sensor, user number generator, and geographic coordinate sensor is used depends on the application scenario. For example, for the same user, a first sensor with a motion sensor may need to be worn on all four limbs to monitor limb movement, whereas physiological monitoring can be performed at any single location on a limb.
- in addition, some sports (such as combat sports) may require monitoring pressure (such as the impact of a fist), in which case not only the motion sensor but also the pressure sensor must be placed at a specific part (such as the fist).
- for simple identification or location tracking, the user number generator or the geographic coordinate sensor alone can meet the requirements. Therefore, which one or combination of the motion sensor, physiological sensor, pressure sensor, user number generator, and geographic coordinate sensor is used is determined by the specific application scenario.
- the present invention includes but is not limited to the following improvement measures and combinations thereof:
- the step of monitoring the first data D1 by using the first sensor S1 disposed on the user's body includes:
- the step of monitoring the second data D2, generated when the user strikes or uses the target device, by using the second sensor S2 disposed on the target device.
- the step of connecting all of the first sensors S1 worn by the user to the personal sensor network, the location sensor network, and the motion information system using the unit sensing network is as shown in FIG.
- the step of connecting all of the second sensors S2 equipped with a set of target devices to the personal sensor network, the location sensor network, and the motion information system using the unit sensing network is as shown in FIG.
- the step of monitoring the system time value T at which the first data D1 and the second data D2 occur, and recording T in the first data D1 and the second data D2.
- the step of adjusting the sampling frequency and sampling accuracy of the first sensor S1 and the second sensor S2 according to the motion type attribute data D4.
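The adjustment step above can be sketched as a simple lookup from D4's motion rule to a sampling profile. The profile values here are illustrative assumptions only; the embodiment below mentions 10–200 frames/second and 8–16 bit accuracy as the configurable ranges.

```python
# Hypothetical sampling profiles keyed by the motion rule in D4.
# Frequencies and bit depths are illustrative, within the 10-200 Hz
# and 8-16 bit ranges given in the embodiment.
SAMPLING_PROFILES = {
    "free_combat": {"frequency_hz": 200, "accuracy_bits": 16},  # fast strikes
    "tai_chi":     {"frequency_hz": 20,  "accuracy_bits": 12},  # slow movement
    "default":     {"frequency_hz": 50,  "accuracy_bits": 8},
}

def configure_sensors(d4_motion_rule):
    """Return the sampling profile for a motion rule taken from D4."""
    return SAMPLING_PROFILES.get(d4_motion_rule, SAMPLING_PROFILES["default"])
```

The same profile would be pushed to both S1 and S2 so their streams stay comparable.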
- the step of interpolating the first data D1 and the second data D2 according to a predetermined scale.
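A minimal sketch of this interpolation step, assuming linear interpolation of a one-dimensional sample stream onto a fixed number of samples (the "predetermined scale"), so streams captured at different rates can be compared point for point:

```python
def interpolate_to_scale(samples, target_len):
    """Linearly resample a 1-D sequence onto target_len points, so that
    D1 and D2 captured at different sampling rates share one scale."""
    if len(samples) == 1:
        return [float(samples[0])] * target_len
    out = []
    step = (len(samples) - 1) / (target_len - 1)
    for i in range(target_len):
        pos = i * step                       # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

For example, resampling the two points [0, 10] onto five points yields [0.0, 2.5, 5.0, 7.5, 10.0].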
- the first sensor S1 is disposed at a wrist, an ankle, a joint, and/or a striking position of the user.
- the step of extracting the motion feature data of the motion according to the motion category attribute data, and recording the motion category attribute data D4.
- the motion category attribute data D4 includes, but is not limited to: motion rule data and the corresponding motion intensity data, motion level data, motion amplitude data, injury degree data, duration data, physical energy consumption degree data, physiological data, and/or competition rules data.
- the rules of the exercise include but are not limited to: free combat, standing fighting, unrestricted fighting, MMA, UFC, Sanda, martial arts, Tai Chi, Muay Thai, kickboxing, K1 rules, fencing, judo, wrestling, track and field, gymnastics, and ball sports.
- the user has personal profile data D5, including but not limited to: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical sports records, historical competition results, typical sports data, strong sports project data, weak sports project data, voiceprint data, image data, and video data.
- the motion sensor includes, but is not limited to, an angular velocity sub-sensor, an acceleration sub-sensor, and a magnetic sub-sensor; the axis system includes at least the X, Y, and Z axes.
- the present invention includes but is not limited to the following improvement measures and combinations thereof:
- the data formatting step is performed on the associated data D3 according to data contents including, but not limited to, sampling type, sampling frequency, sampling precision, and data format.
- the step of decomposing the action sequence into action units and calculating the unit data D3-U.
- 1001 is the associated data D3, which is formatted into data 1002 and decomposed by action into unit data 1004, that is, D3-U.
- the unit data D3-U (1004) is decomposed into an angular velocity (gyroscope) sensor data set 1015 and an acceleration sensor data set 1025, where a single collection point is 1016 for group 1015 and 1026 for group 1025.
- group 1015 of angular velocity sensor data is mapped to image 1018, and collection points 1016 in group 1015 are mapped to pixel points 1017 in image 1018; group 1025 of acceleration sensor data is mapped to image 1028, and collection points 1026 in group 1025 are mapped to pixel points 1027 in image 1028.
- each collection point corresponds to one pixel in the moving image or in a channel; the X, Y, and Z triaxial data of the collection point are used as the pixel's RGB three-primary-color values, or as the arguments of the channel data, establishing the XYZ-to-RGB mapping.
- alternatively, group 1015 of angular velocity sensor data is mapped to image 1018, with collection points 1016 mapped to pixel points 1017 in image 1018, while group 1025 of acceleration sensor data is mapped to c-channel 1038, with collection points 1026 mapped to pixel points 1037 in c-channel 1038.
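The XYZ-to-RGB data-imaging step described above can be sketched as follows. The full-scale range is a hypothetical parameter (real gyroscopes and accelerometers have configurable measurement ranges); the mapping simply rescales each axis reading into one 8-bit color channel.

```python
def xyz_to_rgb_pixel(x, y, z, full_scale=2000.0):
    """Map one collection point's X/Y/Z triaxial reading (assumed range
    -full_scale..+full_scale, e.g. deg/s for a gyroscope) to an RGB pixel."""
    def chan(v):
        v = max(-full_scale, min(full_scale, v))          # clamp to range
        return int(round((v + full_scale) / (2 * full_scale) * 255))
    return (chan(x), chan(y), chan(z))

def unit_to_image_row(points):
    """Map a unit-data group (list of (x, y, z) collection points) to one
    row of RGB pixels, i.e. one line of the 'moving image'."""
    return [xyz_to_rgb_pixel(x, y, z) for (x, y, z) in points]
```

A zero reading lands at mid-gray, full positive deflection at channel value 255, and full negative deflection at 0, so strike dynamics become visible image texture.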
- the artificial intelligence image recognition and classification algorithm is used to perform deep learning on a plurality of the moving image data, summarizing and calculating feature data including motion type features, action type features, pressure magnitude features, and user identification features.
- the step of comparing the feature data by image deep learning.
- the multi-map mapping and the single-map mapping are adapted into image and video files, which facilitates displaying the images and reconstructing images and video files viewable by the human eye.
- one of the methods of reconstructing the illustrated image and video file is to calculate and add a header file, that is, 1119, 1129, 1139 in FIG.
- the artificial intelligence algorithm includes, but is not limited to: artificial neural network algorithms, the Convolutional Neural Networks (CNNs) algorithm, the Recurrent Neural Networks (RNN) algorithm, the Deep Neural Networks (DNN) algorithm, the Support Vector Machine (SVM) algorithm, the genetic algorithm, the ant colony algorithm, the simulated annealing algorithm, the particle swarm algorithm, and the Bayesian algorithm.
- CNNs: Convolutional Neural Networks
- RNN: Recurrent Neural Networks
- DNN: Deep Neural Networks
- SVM: Support Vector Machine
- genetic algorithm
- ant colony algorithm
- simulated annealing algorithm
- particle swarm algorithm
- Bayesian algorithm
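As background for the CNN-based recognition that the data-imaging approach relies on, here is a minimal pure-Python sketch of the convolution operation at the heart of a CNN (single channel, "valid" mode, no padding or stride). This is an illustration of the building block, not the patent's trained model.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the image
    and sum elementwise products, the core operation of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    acc += image[i + a][j + b] * kernel[a][b]
            row.append(acc)
        out.append(row)
    return out
```

Applied to the data-mapped images above, a horizontal-difference kernel such as [[1, -1]] responds to sudden left-to-right changes in pixel value, i.e. abrupt changes in the underlying sensor readings.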
- the present invention includes but is not limited to the following improvement measures and combinations thereof:
- the artificial intelligence algorithm is used to perform three-dimensional vector synthesis of motion actions to obtain a three-dimensional vector.
- the step of identifying the motion action in the video image D6 according to the three-dimensional vectorized data D7 and the motion category attribute data D4, and synchronously marking the motion action, and the moments before and after it, in the video image D6.
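One plausible reading of "three-dimensional vector synthesis" is combining the triaxial components into a magnitude and direction cosines. This sketch is an assumption about the operation, not the patent's exact formula.

```python
import math

def synthesize_vector(x, y, z):
    """Combine triaxial components into a 3-D vector magnitude plus
    direction cosines - one plausible form of the patent's
    'three-dimensional vector synthesis' of motion actions."""
    mag = math.sqrt(x * x + y * y + z * z)
    if mag == 0:
        return 0.0, (0.0, 0.0, 0.0)
    return mag, (x / mag, y / mag, z / mag)
```

For instance, an acceleration sample of (3, 4, 0) m/s² synthesizes to magnitude 5.0 with direction (0.6, 0.8, 0.0).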
- the game includes, but is not limited to, single-player training, single-player races, and multiplayer confrontation competitions.
- the present invention includes but is not limited to the following improvement measures and combinations thereof:
- the step of obtaining, through learning, the coach's association result D3-AI1 and the coach's confidence result D3-AI2, and updating the learning profile in the coach user's profile data D5.
- the step of cyclically comparing the student's association result D3-AI1 with the coach's association result D3-AI1, and the student's confidence result D3-AI2 with the coach's confidence result D3-AI2.
- the present invention includes but is not limited to the following improvement measures and combinations thereof:
- the artificial intelligence algorithm is adopted, according to the first data D1 and the association result D3-AI1, the confidence result D3-AI2, and/or the three-dimensional vectorized data D8, to identify the user in a single-sensor user identification step.
- the artificial intelligence algorithm is used to identify the user by habitual actions, according to the first data D1 and the habitual action feature data (a habitual-action user identification step).
- the artificial intelligence algorithm is used to identify the user by voiceprint, according to the voice data and the voiceprint feature data (a voiceprint user identification step).
- the artificial intelligence algorithm is adopted, according to the first data D1, the association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8, to identify the user in a dual-sensor user identification step.
- the artificial intelligence algorithm is used, according to the user, the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2, and/or the three-dimensional vectorized data D8, to recognize the motion type attribute data D4 in a single-sensor motion recognition step.
- the artificial intelligence algorithm is adopted, according to the user, the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8, to recognize the motion type attribute data D4 in a dual-sensor motion recognition step.
- the artificial intelligence algorithm is used to identify an action feature action identifying step of the motion category attribute data D4 according to the first data D1 and the action feature data.
- the step of calculating the pressure data generated by the striking action of the user is performed according to the image depth learning step and the calibration data D8.
- the present invention includes but is not limited to the following improvement measures and combinations thereof:
- the artificial intelligence algorithm is used to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user during game training among a plurality of users.
- the present invention includes but is not limited to the following improvement measures and combinations thereof:
- one or more fixed or mobile terminals communicate with the first sensor S1 and the second sensor S2 to calculate the spatial coordinates of the first sensor S1 and the second sensor S2.
- the fixed terminal and the mobile terminal include: a micro base station, a PC, and a smart phone.
- connection manner of the sensing network includes a wired mode and a wireless mode.
- the present invention includes but is not limited to the following improvement measures and combinations thereof:
- the motion information system searches for a user wearing the first sensor S1 and sends roll-call information to the user; the first sensor S1 worn by the user responds after receiving it, thereby implementing the roll-call step.
- the user who wears the first sensor S1 sends registration information to the motion information system through the first sensor S1, and obtains a response, thereby implementing the registration step.
- the positioning step is implemented by the motion information system through the one or more terminals for the user wearing the first sensor S1.
- the abnormality alarm step, in which the first sensor S1 sends alarm information to the motion information system according to an abnormal value of the first data D1.
- the communication between the motion information system and the first sensor S1 is implemented by a sensor network, and the abnormal value includes an alarm trigger condition preset by the user and/or the motion information system.
- a system for monitoring motion data comprising: a first sensor S1, a terminal and a motion information system; the first sensor S1 is connected to the terminal, and the terminal is connected to the motion information system.
- the present invention further includes, but is not limited to, the following contents and combinations thereof:
- the system further includes: a second sensor S2 and a video image sensor S3; the second sensor S2 and the video image sensor S3 are respectively connected to the terminal.
- the present invention further includes, but is not limited to, the following contents and combinations thereof:
- the first sensor S1 is composed of a processor and a motion sensor, a physiological sensor, a pressure sensor, a user number generator, and a geographic coordinate sensor; wherein the motion sensor, the physiological sensor, the pressure sensor, and the user The number generator, the geographic coordinate sensor are respectively connected to the processor, and the processor is connected to the terminal.
- the second sensor S2 includes a pressure sensor and a position sensor.
- the manner in which the terminal and the motion information system are connected includes a wired connection and a wireless sensor network connection
- the manner in which the processor and the terminal are connected includes a wired connection and a wireless sensor network connection.
- the motion sensor includes a three-axis angular velocity sensor, a three-axis acceleration sensor, a three-axis magnetic sensor, an electronic compass sensor, a speed sensor, a motion direction sensor, a displacement sensor, a trajectory sensor, a light sensor, and combinations thereof.
- the physiological sensor includes a blood oxygen sensor, a blood pressure sensor, a pulse sensor, a temperature sensor, a sweating degree sensor, a sound, and a light sensor.
- the pressure sensor includes: a pressure sensor, a momentum sensor, and an impulse sensor.
- the position sensor includes: a space position sensor, a space coordinate sensor, a light sensor, and a camera.
- the user number generator includes: a user number storage edit sending module.
- the geographic coordinate sensor includes: a navigation satellite positioning module.
- the video image sensor is a visible light, invisible light camera.
- the motion category attribute data D4 includes: motion rule data and the corresponding motion intensity data, motion level data, motion amplitude data, injury degree data, duration data, physical energy consumption degree data, physiological degree data, and game rules data.
- the exercise rules include at least: free combat, standing fighting, unlimited fighting, MMA, UFC, Sanda, martial arts, Tai Chi, Muay Thai, kickboxing, K1 rules, fencing, judo, wrestling, track and field, gymnastics, and ball sports.
- the user has personal profile data D5, including: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical sports records, historical competition results, typical sports data, strong sports project data, weak sports project data, voice data, voiceprint data, image data, and video data.
- the present invention further includes, but is not limited to, the following contents and combinations thereof:
- the sensing network includes a fixed terminal and a mobile terminal, and the terminal includes a micro base station, a mobile phone, and a PC; and the connection manner of the sensing network includes a wired mode and a wireless mode;
- the micro base station includes: one or more downlink interfaces, a processor, a power subsystem, and an uplink interface, where the one or more downlink interfaces are connected to the processor and the processor is connected to the uplink interface; the power subsystem provides power for the downlink interfaces, the processor, and the uplink interface; the downlink interfaces communicate with the first sensor S1, the second sensor S2, and the video image sensor S3 through a wireless sensor network; and the uplink interface communicates with the motion information system over a wired or wireless network.
- the motion information system includes a terminal unit and a cloud system that communicate with each other; the terminal unit and the terminal are integrated or separately, and the cloud system is disposed in a network cloud.
- the target includes a combat target, a ball, a racquet, a sports apparatus, and the use of the combat target includes a punch, a foot, and a body part hitting the target.
- the present invention further includes, but is not limited to, the following contents and combinations thereof:
- the motion information system includes cloud center software and application software, among which:
- the application software running on the terminal completes the connection, collection, and processing of the first data D1, the second data D2, the motion type attribute data D4, the user profile data D5, and the video data D6, completes user interaction, and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
- the function of transmitting data to the cloud center to form big data is completed by the application software running on the terminal.
- the functions of the learning, the training, the user identification, the motion recognition, and the pressure recognition are performed by the application software running on the terminal in conjunction with the cloud center software.
- the motion information system includes the application software and the cloud center software.
- the motion information systems of the plurality of users communicate with each other and complete the interaction steps.
- the present invention has the following beneficial effects:
- Figure 1 is a system diagram
- Figure 2 is the first structural view of the first sensor
- Figure 3 is the second structural view of the first sensor
- Figure 4 is the third structural view of the first sensor
- Figure 5 is the first structural view of the second sensor
- Figure 6 is the second structural view of the second sensor
- Figure 7 is the first structural view of the unit sensor network
- Figure 8 is the second structural view of the unit sensor network
- Figure 9 is a structural diagram of the micro base station
- Figure 10 is the first data-image mapping diagram
- Figure 11 is the second data-image mapping diagram
- Figure 12 is the third data-image mapping diagram
- Figure 13 is the fourth data-image mapping diagram.
- the combat training system is mainly used for combat sports users.
- the system includes sensors 104, 105 to 10n, and 10n+1 to 10m+1; a terminal 101; and a combat information system 2 (103).
- the sensors include a motion sensor, a physiological sensor, a user number generator, a geographic coordinate sensor, a pressure sensor, and the like; the terminal further includes a combat information system 1 (102).
- the smallest unit is defined as a motion detection group, including:
- the four first sensors S1 are 104, 105, 106, and 107, respectively, together with one terminal 101 composed of a micro base station, which includes combat information system 1 (102).
- the four first sensors S1 are connected to the micro base station, and the micro base station is connected to combat information system 2.
- the four first sensors S1 are worn on the user's wrists and ankles. One is a variant with a physiological sensor, a motion sensor, and a user number generator, as shown in Figure 3; the other three are variants with only a motion sensor and a user number generator, without a physiological sensor, as shown in Figure 4.
- the motion sensor is a variant with a three-axis gyroscope and a three-axis acceleration sensor, and the physiological sensor is a pulse sensor.
- the sampling frequency of the motion sensor is set between 10 frames/second and 200 frames/second, and the heart rate sensor is set to collect once per minute.
- the sampling accuracy is 8 to 16 bits.
- a second sensor S2 which is connected to the micro base station as shown in FIG.
- the second sensor S2 is composed of a matrix film pressure sensor and has a pressure and position detecting circuit.
- the range can be divided into several pressure/strike levels such as 50 kg, 200 kg, and 500 kg.
- the second sensor can be selected for different pressure levels and mounting styles, depending on the shape of the target.
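A hedged sketch of mapping a measured pressure to the strike levels mentioned above; the thresholds come from the 50 kg / 200 kg / 500 kg ranges in the text, while the level names are hypothetical.

```python
# Thresholds follow the 50 kg, 200 kg, 500 kg ranges described for the
# matrix film pressure sensor; the level names are illustrative.
STRIKE_LEVELS = [(50.0, "light"), (200.0, "medium"), (500.0, "heavy")]

def classify_strike(pressure_kg):
    """Map a pressure measured by the second sensor S2 to a strike level."""
    for limit, name in STRIKE_LEVELS:
        if pressure_kg <= limit:
            return name
    return "out_of_range"
```

Selecting the sensor variant for a target would then amount to choosing the smallest range whose upper limit covers the expected strikes.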
- four HD cameras can also be equipped as the video image sensor S3; they are connected to the micro base station to complete the image acquisition function.
- the micro base station includes: 9 downlink interfaces, a processor, a power subsystem, and an uplink interface, wherein the 9 downlink interfaces are connected to the processor, the processor is connected to the uplink interface, and the power subsystem provides power for the downlink interfaces, the processor, and the uplink interface.
- the downlink interfaces communicate with the four first sensors S1, the second sensor S2, and the four video image sensors S3 through the wireless sensor network, and the uplink interface communicates with the combat information system through the fiber-optic network.
- the micro base station aggregates the signals of the above sensors and connects them to the combat information system through optical fiber.
- the main functions of the equipped striking sensor S2 are as follows:
- One is to cooperate with the first sensor to correlate and calibrate hit data. When the user hits the target multiple times, the system simultaneously measures the angular velocity and acceleration data of the first sensor S1 and the striking force data of the second sensor S2, and establishes the correspondence between the multi-strike angular velocity/acceleration data and the striking force data on the basis of Newton's laws of motion.
- Thereafter the user needs only the motion sensor, not the pressure sensor: the striking force data is converted from the user's angular velocity and acceleration data at the moment of the strike.
- Installing a pressure sensor is cumbersome, as it must be mounted on a striking surface such as the fist, which limits the usage scenarios; this method eliminates the pressure sensor through indirect measurement, which greatly facilitates the user's use.
- The other is that the user's striking force data is directly measured by the second sensor S2.
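The indirect force estimation described above, correlating S1 acceleration with S2 force via Newton's second law, might be sketched as fitting an "effective mass" from calibration strikes. The least-squares slope-through-origin model here is an illustrative assumption, not the patent's stated method.

```python
def fit_effective_mass(accelerations, forces):
    """From calibration strikes where both S1 acceleration (m/s^2) and S2
    force (N) were measured, fit F = m_eff * a by least squares (slope
    through the origin). m_eff is a hypothetical 'effective mass' standing
    in for the Newtonian correspondence the patent describes."""
    num = sum(a * f for a, f in zip(accelerations, forces))
    den = sum(a * a for a in accelerations)
    return num / den

def estimate_force(m_eff, acceleration):
    """Later, estimate striking force from S1 acceleration alone (no S2)."""
    return m_eff * acceleration
```

Calibration pairs of (10 m/s², 50 N) and (20 m/s², 100 N) yield m_eff = 5.0 kg, after which a 30 m/s² strike is estimated at 150 N without any pressure sensor.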
- the server, which is equipped with a GPU graphics card, provides image computing, big data, and cloud services to the system.
- the first sensor S1 worn by one user constitutes a unit sensor network
- a plurality of target devices constitute a unit sensor network
- the unit sensor networks constitute a personal sensor network or a location sensor network, which is then connected to the combat information system.
- the first sensor S1 is composed of a processor and a motion sensor, a physiological sensor, and a pressure sensor.
- the motion sensor, the physiological sensor, and the pressure sensor are respectively connected to the processor, and the processor and the micro base station terminal are connected.
- the manner in which the micro base station terminal and the combat information system are connected includes a wired connection and a wireless sensor network connection
- the manner in which the processor and the terminal are connected includes a wired connection and a wireless sensor network connection.
- the motion sensor includes a three-axis angular velocity sensor, a three-axis acceleration sensor, and a three-axis magnetic sensor.
- Physiological sensors include: a pulse sensor, a temperature sensor, and a sound sensor.
- the pressure sensor includes: a matrix film pressure sensor.
- the position sensor includes: a space coordinate sensor.
- the video image sensor is a visible light camera.
- the terminal includes: a micro base station, a smart phone, and a PC.
- the sport type attribute data D4 includes, but is not limited to: motion rule data and the corresponding exercise intensity data, exercise level data, exercise amplitude data, injury degree data, duration data, physical energy consumption degree data, physiological degree data, and match rule data.
- the exercise rules include but are not limited to: free combat, standing fighting, unlimited fighting, MMA, UFC, Sanda, martial arts, Tai Chi, Muay Thai, kickboxing, K1 rules, fencing, judo, wrestling, track and field, gymnastics, and ball sports.
- the user has personal profile data D5, which includes but is not limited to: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical sports data, strong sports project data, weak sports project data, voice data, voiceprint data, image data, and video data.
- the combat information system includes a terminal unit and a cloud system that communicate with each other; the terminal unit and the terminal are integrated or discretely arranged, and the cloud system is disposed in the network cloud.
- the application software running on the terminal completes the connection, collection, and processing of the user, the first data D1, the second data D2, the sport type attribute data D4, the user profile data D5, and the video data D6; completes user interaction; and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
- the function of transmitting data to the cloud center to form big data is completed by the application software running on the terminal.
- the application software running on the terminal cooperates with the cloud center software to complete the functions of learning, training, user identification, motion recognition, and pressure recognition.
- the cloud center software running in the cloud center is responsible for big-data processing, including deep learning, data mining, classification algorithms, artificial intelligence processing, generating the associated data D3, video data D6, and calibration data D8, updating D5, cloud center computing, cloud center management, and communication with the application software.
- the sports information system includes application software and cloud center software.
- one application software connection manages one user to form one combat information system; multiple application software connections manage multiple users to form multiple combat information systems.
- the system is connected by a micro base station and two bracelets, two foot loops and one second sensor.
- communication is through the BLE Bluetooth low-energy protocol or the WiFi protocol; by analogy, other WSN protocols may also be used between the sensors and the micro base station.
- the collected data of the above five sensors are transmitted to the cloud database of the combat information system.
- the above five sensors synchronize their collected data by time-stamping against the system time, to obtain the user's motion data, and cooperate with the cloud center configuration to realize the functions of the combat information system.
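The time-stamp synchronization described above can be sketched as follows; the sensor IDs and sample layout are illustrative assumptions, not taken from the patent. Each sensor labels every sample with the shared system time, and the per-sensor streams are merged into one chronologically ordered record:

```python
# Sketch (assumed data layout): merge samples from several wearable sensors
# into one time-ordered stream using the shared system-time stamp.
import heapq

def merge_by_timestamp(*streams):
    """Each stream: a time-sorted list of (timestamp, sensor_id, value).
    Returns one globally time-ordered list across all sensors."""
    return list(heapq.merge(*streams, key=lambda sample: sample[0]))

wristband_left = [(0.00, "S1-WL", 0.1), (0.02, "S1-WL", 0.3)]
foot_loop_right = [(0.01, "S1-FR", 0.7), (0.03, "S1-FR", 0.2)]
merged = merge_by_timestamp(wristband_left, foot_loop_right)
# samples from both sensors are now interleaved in acquisition order
```

Because each input stream is already sorted, `heapq.merge` produces the global order in a single pass without re-sorting everything.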
- the configuration running on the mobile phone completes the connection, collection, and processing of the user, the first data D1, the second data D2, the motion type attribute data D4, the user profile data D5, and the video data D6; completes user interaction; and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
- the configuration completed by running on the mobile phone includes the function of transmitting data to the cloud center to form big data.
- the functions of learning, training, user identification, motion recognition and pressure recognition are configured by the configuration running on the mobile phone in conjunction with the cloud center configuration.
- the cloud center configuration running in the cloud center is responsible for big-data processing, including deep learning, data mining, classification algorithms, artificial intelligence processing, generating the associated data D3, video data D6, and calibration data D8, updating D5, cloud center computing, cloud center management, and communication with the terminal application configuration.
- the sports information system includes terminal application configuration and cloud center configuration.
- An application configuration connection manages a user to form a motion information system; a plurality of application configuration connections manage multiple users to form a plurality of motion information systems.
- the first data D1 is monitored by the first sensor S1 (two wristbands and two foot loops) provided on the user's body; the first data D1 is transmitted to the combat information system using the sensor network and processed at the same time.
- the second data D2 is monitored by the second sensor S2 disposed on the target when the user strikes the target. While the user hits the target, the first data D1 and the second data D2 are acquired simultaneously in chronological order, and the associated data D3 is generated. The second data D2 and the associated data D3 are transmitted to the combat information system using the sensor network.
- the users here include: student users, coach users, and opponent users.
- the sensing network includes a terminal, and the terminal includes a fixed terminal and a mobile terminal, including a micro base station, a smart phone, and a PC.
- the targets include a dummy, a sandbag, a hand target, a foot target, and a wall target.
- the use of combat targets includes striking the target with fists, feet, and other body parts.
- the user motion data is collected by the motion sensor in the first sensor S1
- the physiological data of the user is collected by the physiological sensor in S1
- the pressure sensor in S1 is used to collect the pressure data when the user hits the target or strikes an opponent.
- the second sensor S2 disposed on the target device monitors the second data D2 when the user hits the target, uses the pressure sensor in S2 to collect the pressure data when the user hits the target, and uses the position sensor in S2 to collect the target when the user hits the target. Location data.
- All of the first sensors S1 worn by one user are connected to the personal sensor network, the location sensor network, and the combat information system using the unit sensing network.
- All of the second sensors S2 equipped with a set of target devices are connected to the personal sensor network, the location sensor network, and the combat information system using the unit sensing network.
- the system time value T at the moment when the first data D1 and the second data D2 are generated is collected and recorded into the first data D1 and the second data D2.
- A/D conversion is performed on the first data D1 and the second data D2.
- sampling frequency and sampling accuracy of S1 and S2 are adjusted according to the motion type attribute data D4.
- the first data D1 and the second data D2 are interpolated according to a predetermined scale, and the first data D1 and the second data D2 are merged into the associated data D3.
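The interpolation-and-merge step above can be sketched as below; the helper names and the simple linear interpolation scheme are assumptions for illustration, since the patent does not fix a particular interpolation method:

```python
# Sketch (hypothetical helpers): resample D1 and D2 onto a common time base
# by linear interpolation, then zip the aligned samples into associated data D3.
def interp(times, values, t):
    """Linear interpolation of a sampled signal at time t (clamped at ends)."""
    if t <= times[0]:
        return values[0]
    for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return values[-1]

def merge_d1_d2(d1, d2, step=0.01):
    """d1, d2: (times, values) pairs. Returns D3 as [(t, d1_val, d2_val), ...]."""
    start = max(d1[0][0], d2[0][0])
    stop = min(d1[0][-1], d2[0][-1])
    n = int((stop - start) / step) + 1
    return [(round(start + i * step, 6),
             interp(*d1, start + i * step),
             interp(*d2, start + i * step)) for i in range(n)]

d1 = ([0.0, 0.02, 0.04], [0.0, 2.0, 4.0])   # e.g. acceleration samples
d2 = ([0.0, 0.04],       [10.0, 30.0])      # e.g. target pressure samples
d3 = merge_d1_d2(d1, d2, step=0.02)
```

The common `step` plays the role of the "predetermined scale": both signals end up sampled at the same instants, so each D3 row pairs a D1 value with the D2 value at the same moment.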
- S1 is set at the user's wrist, ankle, joint, and striking position.
- an artificial intelligence algorithm is used to summarize and extract the user's custom action feature data from the user's motion data, and record it into the user's profile data D5.
- an artificial intelligence algorithm is used to summarize and extract the user's voiceprint feature data from the user's voice data, and record it into the user's profile data D5.
- an artificial intelligence algorithm is used to summarize and extract the motion feature data of a sport from the motion category attribute data, and record it into the motion category attribute data D4.
- the sport type attribute data D4 includes, but is not limited to: motion rule data and, corresponding to the motion rule data, exercise intensity data, exercise level data, exercise amplitude data, damage degree data, persistence data, physical energy consumption data, physiological degree data, and match rule data.
- the motion rules include at least, but are not limited to: free combat, stand-up fighting, unrestricted fighting, MMA, UFC, Sanda, martial arts, Tai Chi, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, track and field, gymnastics, and ball sports.
- the user has personal profile data D5, including but not limited to: the user's height, weight, body measurements, wingspan, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical motion data, strong-motion data, weak-motion data, voice data, voiceprint data, image data, and video data.
- the motion sensor includes an angular velocity sub-sensor, an acceleration sub-sensor, and a magnetic sub-sensor; the axis system includes, but is not limited to, the XYZ three axes.
- data formatting is performed on the associated data D3.
- the action sequence is decomposed into action units, and the unit data D3-U is calculated.
- the unit data D3-U is mapped to a moving image: following the acquisition order, the tri-axial data of each motion sensor sample in the unit data D3-U is taken as a group, and each group is mapped to one pixel in the moving image (multi-point mapping).
- alternatively, the data collected by each of the X-axis, Y-axis, and Z-axis sub-sensors of a motion sensor in the unit data D3-U is mapped to a moving image, with each sub-sensor mapping to a pixel in its corresponding image.
- alternatively, the collected data of one sub-sensor of a motion sensor in the unit data D3-U forms a moving image, the collected data of the other sub-sensors form channels of that image, and each sub-sensor is thus mapped to a corresponding moving image.
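The first mapping variant (one tri-axial sample per pixel) can be sketched as follows; the value range and scaling are illustrative assumptions:

```python
# Sketch (assumed sensor range +/-16 units): map one action unit's tri-axial
# samples to pixels, one sample -> one RGB pixel (X, Y, Z -> R, G, B), so
# standard image-recognition networks can consume motion data.
def axis_to_byte(v, lo=-16.0, hi=16.0):
    """Scale a sensor reading in [lo, hi] to a 0..255 channel value."""
    v = min(max(v, lo), hi)
    return int(round((v - lo) / (hi - lo) * 255))

def unit_to_pixels(samples):
    """samples: list of (x, y, z) readings for one action unit D3-U."""
    return [tuple(axis_to_byte(c) for c in sample) for sample in samples]

pixels = unit_to_pixels([(0.0, 16.0, -16.0), (8.0, -8.0, 0.0)])
```

Laying the resulting pixels out in acquisition order yields an image whose spatial pattern encodes the temporal structure of the action, which is what lets image deep-learning models be reused on motion data.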
- an artificial intelligence image recognition and classification algorithm is used to perform deep learning on multiple moving-image data, summarizing and calculating feature data including, but not limited to, motion type features, action type features, pressure magnitude features, and user identification features. When the next associated data D3 arrives, image deep learning is computed against this comparison feature data.
- the multi-point mapping and the single-point mapping are adapted into image and video files, which is convenient for displaying images and reconstructing video files for human viewing.
- Artificial intelligence algorithms include but are not limited to: artificial neural network algorithm, CNNs algorithm, RNN algorithm, SVM algorithm, genetic algorithm, ant colony algorithm, simulated annealing algorithm, particle swarm algorithm, Bayes algorithm.
- motion recognition is realized by first establishing an action feature library and then querying that library.
- to establish the action feature library, some users with standard technique are first selected; they wear the first sensor S1 and perform various actions to obtain action data labeled with action names, which is analyzed with artificial intelligence including, but not limited to, CNNs and SVM algorithms.
- the action features are extracted and recorded as an action feature library in the database of the cloud center.
- to query, the feature data of an action is obtained, including but not limited to using CNNs and SVM algorithms, and then used to search the action feature library of the cloud center to determine similarity; the action code of the entry with the highest similarity is taken out, realizing motion recognition.
- for user identification, the user's action data is first obtained, including but not limited to using CNNs and SVM algorithms to obtain the user's behavioral features; the feature data is then used to search the database of the cloud center to determine the entry with the highest similarity, thereby realizing user identification.
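The build-then-query flow can be sketched with a toy feature extractor standing in for the CNN/SVM features the patent names; the feature definition and similarity measure here are illustrative assumptions:

```python
# Sketch (toy stand-in for CNN/SVM features): enroll reference actions into a
# feature library, then recognize a new action by highest cosine similarity.
import math

def features(samples):
    """Toy feature vector: per-axis mean and peak magnitude of the samples."""
    n = len(samples)
    feats = []
    for axis in range(3):
        col = [s[axis] for s in samples]
        feats += [sum(col) / n, max(abs(v) for v in col)]
    return feats

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

library = {}                        # action name -> reference feature vector

def enroll(name, samples):          # "establish the action feature library"
    library[name] = features(samples)

def recognize(samples):             # "query the action feature library"
    f = features(samples)
    return max(library, key=lambda name: cosine(library[name], f))

enroll("jab",  [(5, 0, 0), (9, 1, 0)])
enroll("kick", [(0, 0, 7), (1, 0, 9)])
result = recognize([(6, 0, 0), (8, 1, 1)])   # closest to the enrolled jab
```

User identification works the same way with per-user feature vectors in place of per-action ones.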
- the four-channel video image sensor S3 captures the video images D6 of the user's match from four angles.
- the four-channel video image sensor S3 communicates with the combat information system through the sensor network.
- an artificial intelligence algorithm is used to perform three-dimensional vector synthesis of the motion action, and the three-dimensional vectorized data D7 is obtained.
- the three-dimensional vectorized data D7 is associated with the second data D2, the associated data D3, the sport type attribute data D4, and the profile data D5.
- an artificial intelligence algorithm is used, based on the three-dimensional vectorized data D7 and the motion type attribute data D4, to identify the motion actions in the video image D6, and to synchronously mark the start and end points of each action in the video image D6.
- the training includes single-person training, single-handed routines, and multiplayer competitions.
- the coach user strikes the target with standard actions according to the sports category attribute data D4, obtaining the coach's associated data D3; machine learning is performed on the coach's associated data D3 according to an artificial intelligence algorithm, obtaining the coach's association result D3-AI1 and the coach's confidence result D3-AI2, and the coach user's profile data D5 is updated (coach learning).
- the student user strikes the target according to the sport type attribute data D4, obtaining the student's associated data D3; machine learning is performed on the student's associated data D3 according to an artificial intelligence algorithm, obtaining the student's association result D3-AI1 and the student's confidence result D3-AI2, and the student user's profile data D5 is updated (self-training).
- the student's association result D3-AI1 is cyclically compared with the coach's association result D3-AI1, and the student's confidence result D3-AI2 with the coach's confidence result D3-AI2.
- the student's strengths, weaknesses, and gaps are calculated and analyzed, the student's personal profile data D5 is updated, and measures addressing the strengths and weaknesses are calculated to generate and output training suggestion information.
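The student-versus-coach comparison can be sketched as below; the field names, score scale, and tolerance are hypothetical, since the patent does not define D3-AI1's concrete representation:

```python
# Sketch (hypothetical fields): compare a student's per-action association
# results D3-AI1 against the coach's to surface strengths, weaknesses, gaps.
def compare_to_coach(student, coach, tolerance=0.10):
    """student/coach: dicts mapping action name -> association score in 0..1."""
    report = {"strong": [], "weak": [], "gap": {}}
    for action, ref in coach.items():
        gap = ref - student.get(action, 0.0)
        report["gap"][action] = gap
        (report["weak"] if gap > tolerance else report["strong"]).append(action)
    return report

coach_ai1   = {"jab": 0.95, "hook": 0.90, "front kick": 0.92}
student_ai1 = {"jab": 0.91, "hook": 0.62, "front kick": 0.88}
report = compare_to_coach(student_ai1, coach_ai1)
# training suggestions can then target the actions with the largest gaps
```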
- an artificial intelligence algorithm is used to identify the user from the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8 (single-sensor user identification).
- the artificial intelligence algorithm is used to identify the user's custom action user identification according to the first data D1 and the custom action feature data.
- the artificial intelligence algorithm is used to identify the user's voiceprint feature user identification according to the voice data and the voiceprint feature data.
- an artificial intelligence algorithm is used to identify the user from the first data D1, the association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8 (dual-sensor user identification).
- an artificial intelligence algorithm is used to identify the motion category attribute data D4 for the user from the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8 (single-sensor motion recognition).
- an artificial intelligence algorithm is used to identify the motion category attribute data D4 for the user from the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8 (dual-sensor motion recognition).
- an artificial intelligence algorithm is used to identify the motion feature motion recognition of the motion type attribute data D4 according to the first data (D1) and the motion feature data.
- the pressure data generated by the user's striking action is calculated based on the image depth learning step and the calibration data D8.
- the user strikes the target, and according to a Newtonian mechanics algorithm, the angular velocity and acceleration data from the first sensor S1 and the pressure data from the second sensor S2 are used to establish the acceleration-pressure correlation D8.
- pressure is then recognized from the first data D1 via the acceleration-pressure correlation D8.
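The calibration step can be sketched as follows; the numbers are simulated and the linear model F = k·a + b is an illustrative reading of the Newtonian (F = m·a) relation, not the patent's exact procedure:

```python
# Sketch (simulated calibration data): during calibration, the wristband's
# acceleration and the target's measured pressure are recorded together; a
# least-squares line F = k*a + b serves as the acceleration-pressure
# correlation D8, later used to estimate striking force from D1 alone.
def fit_d8(accels, forces):
    """Ordinary least squares for F = k*a + b."""
    n = len(accels)
    mean_a = sum(accels) / n
    mean_f = sum(forces) / n
    k = sum((a - mean_a) * (f - mean_f) for a, f in zip(accels, forces)) / \
        sum((a - mean_a) ** 2 for a in accels)
    return k, mean_f - k * mean_a

def estimate_force(d8, accel):
    k, b = d8
    return k * accel + b

d8 = fit_d8([10.0, 20.0, 30.0], [52.0, 101.0, 150.0])
# once calibrated, pressure recognition needs only the acceleration in D1
force = estimate_force(d8, 25.0)
```

This is what makes strike-force measurement possible without a pressure sensor on the glove: the target's pressure sensor is only needed during calibration.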
- the artificial intelligence algorithm is used to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user.
- the corresponding association results D3-AI1 and confidence results D3-AI2 of multiple users are compared to obtain match process data, including the strength and number of hits, the degree of damage, knockdown counts and counts-out, and TKO and KO.
- the dynamic odds and predicted result data of the game are calculated based on the game process data and output.
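The dynamic-odds calculation can be sketched with a toy model; the logistic mapping and the steepness constant `k` are illustrative assumptions, not a formula from the patent:

```python
# Sketch (toy model): derive dynamic decimal odds from running match-process
# scores (e.g. hit counts / damage scores) as the match unfolds.
import math

def win_probability(score_a, score_b, k=0.15):
    """Logistic mapping of the running score difference to P(A wins)."""
    return 1.0 / (1.0 + math.exp(-k * (score_a - score_b)))

def decimal_odds(p):
    """European decimal odds for probability p (no bookmaker margin)."""
    return round(1.0 / p, 2)

# e.g. fighter A leads 18-10 on the accumulated match-process score
p_a = win_probability(18, 10)
odds_a, odds_b = decimal_odds(p_a), decimal_odds(1.0 - p_a)
```

Recomputing after every scored exchange yields the "dynamic" odds: each new hit shifts the score difference and hence the published prices.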
- the first sensor S1 and the second sensor S2 communicate with one or more fixed terminals to calculate absolute data of the spatial coordinates, motion speed, and motion trajectory of the first sensor S1 and the second sensor S2.
- the first sensor S1 and the second sensor S2 communicate with one or more mobile terminals to calculate relative data of the spatial coordinates, motion speed, and motion trajectory of the first sensor S1 and the second sensor S2.
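One way to compute absolute coordinates from fixed terminals is trilateration; the sketch below is a 2-D illustration under assumed anchor positions and range measurements (e.g. from RSSI or time-of-flight), not the patent's specified algorithm:

```python
# Sketch (2-D, three fixed terminals at known positions): solve for a sensor's
# absolute coordinates from measured ranges by linearized trilateration.
def trilaterate(p1, p2, p3, r1, r2, r3):
    """Anchors p1..p3 as (x, y); ranges r1..r3. Returns the (x, y) fix."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise gives A*[x, y] = b (linear).
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Micro base stations at assumed ring-side positions; sensor at the centre.
x, y = trilaterate((0, 0), (10, 0), (0, 10),
                   5 * 2**0.5, 5 * 2**0.5, 5 * 2**0.5)
```

Differencing successive fixes then gives the motion speed and trajectory mentioned above.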
- the result information of the combat information system is processed and displayed by the fixed terminal and the mobile terminal.
- the result information and the live replay video of the motion actions are transmitted to one or more display devices, so that the result information is displayed fused with the live video.
- the fixed terminal and the mobile terminal include: a micro base station, a PC, and a smart phone.
- the connection method of the sensor network includes wired mode and wireless mode.
- the combat information system searches for the user wearing the first sensor S1 and sends roll-call information to the user; the first sensor S1 worn by the user responds upon receipt, thereby realizing roll call.
- the user who wears the first sensor S1 sends the registration information to the combat information system through the first sensor S1, and obtains the response of the combat information system, thereby realizing the registration.
- notification information is sent by the combat information system to the first sensor S1 worn by the user; after receiving the notification information, the first sensor S1 answers the combat information system and displays, vibrates, and makes a voice announcement on the first sensor S1.
- the combat information system locates the user wearing the first sensor S1 through one or more terminals, using methods including but not limited to a plurality of positioning algorithms.
- the user wearing the first sensor S1 issues an active alarm, sending active alarm information to the combat information system at the user's own volition.
- the first sensor S1 issues an abnormality alarm, sending alarm information to the motion information system based on abnormal values in the first data D1.
- the communication between the combat information system and the first sensor S1 is realized through the sensor network, and the abnormal value includes an alarm trigger condition preset by the user and the motion information system.
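The preset trigger conditions for the abnormality alarm can be sketched as a rule table; the quantities and thresholds below are illustrative assumptions, not values from the patent:

```python
# Sketch (assumed thresholds): the first sensor raises an automatic alarm when
# first data D1 crosses preset trigger conditions, e.g. heart rate or impact g.
ALARM_RULES = {
    "heart_rate_bpm": lambda v: v < 40 or v > 190,
    "impact_g":       lambda v: v > 80,           # possibly dangerous blow
    "body_temp_c":    lambda v: v < 34.0 or v > 39.5,
}

def check_alarms(sample):
    """sample: dict of D1 readings; returns the list of triggered alarm keys."""
    return [key for key, rule in ALARM_RULES.items()
            if key in sample and rule(sample[key])]

alarms = check_alarms({"heart_rate_bpm": 201, "impact_g": 12.0})
```

Keeping the rules in a table like this makes them user-configurable, matching the statement that the trigger conditions are preset by the user and the motion information system.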
- the combat information system can realize functions such as positioning, registration, roll call, notification, and alarm for the user, providing technical support for strengthened management.
- the system comprises: a first sensor S1, a terminal and a combat information system; the first sensor S1 is connected to the terminal, the terminal is connected to the combat information system, and processes data from the first sensor S1.
- a second sensor S2, a video image sensor S3; a second sensor S2 and a video image sensor S3 are respectively connected to the terminal, and the terminal is connected to the combat information system.
- the first sensor S1 is composed of a processor and a motion sensor, a physiological sensor, a pressure sensor, a user number generator, and a geographic coordinate sensor.
- the motion sensor, the physiological sensor, the pressure sensor, the user number generator, and the geographic coordinate sensor are respectively connected to the processor, and the processor and the terminal are connected.
- the second sensor S2 includes a pressure sensor and a position sensor.
- the way the terminal and the combat information system are connected includes a wired connection and a wireless sensor network connection, and the way the processor and the terminal are connected includes a wired connection and a wireless sensor network connection.
- the motion sensor includes a three-axis angular velocity sensor, a three-axis acceleration sensor, a three-axis magnetic sensor, an electronic compass sensor, a speed sensor, a motion direction sensor, a displacement sensor, a trajectory sensor, a light sensor, and combinations thereof.
- the physiological sensor includes a blood oxygen sensor, a blood pressure sensor, a pulse sensor, a temperature sensor, a sweating-degree sensor, a sound sensor, and a light sensor.
- the pressure sensor includes: a force sensor, a pressure-intensity sensor, a momentum sensor, and an impulse sensor.
- the position sensor includes: a space position sensor, a space coordinate sensor, a light sensor, and a camera.
- the user number generator includes: a user number storage editing transmission module.
- the geographic coordinate sensor includes: a navigation satellite positioning module.
- the video image sensor is a visible-light or invisible-light camera.
- the sensor network includes a fixed terminal and a mobile terminal.
- the terminal includes a micro base station, a smart phone, and a PC; the connection mode of the sensing network includes a wired mode and a wireless mode.
- the micro base station includes: one or more downlink interfaces, a processor, a power subsystem, and an uplink interface.
- the one or more downlink interfaces are connected to the processor, the processor is connected to the uplink interface, and the power subsystem supplies power to the downlink interfaces, the processor, and the uplink interface; the downlink interfaces communicate with the first sensor S1, the second sensor S2, and the video image sensor S3 through the wireless sensor network, and the uplink interface communicates with the combat information system via a wired or wireless network.
- the motion information system includes a terminal unit and a cloud system that communicate with each other; the terminal unit and the terminal are integrated or separately, and the cloud system is disposed in the network cloud.
- targets include combat targets, balls, racquets, and sports equipment; the use of combat targets includes striking the target with fists, feet, and other body parts.
- the application configuration running on the terminal completes the connection, collection, and processing of the user, the first data D1, the second data D2, the motion type attribute data D4, the user profile data D5, and the video data D6; completes user interaction; and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
- the configuration of the application running on the terminal completes the function of transmitting data to the cloud center to form big data.
- the function of learning, training, user identification, motion recognition and pressure recognition is completed by the application running on the terminal and the cloud center software.
- the cloud center configuration running in the cloud center is responsible for big-data processing, including deep learning, data mining, classification algorithms, artificial intelligence processing, generating the associated data D3, video data D6, and calibration data D8, updating D5, cloud center computing, cloud center management, and communication with the application software.
- the sports information system includes application configuration and cloud center configuration.
- one application software connection manages one user to form one motion information system; multiple application software connections manage multiple users to form multiple motion information systems.
- this solves the problem of dynamically measuring the impact force in combat using only angular velocity and acceleration sensors, which eases implementation and reduces cost.
- step 4 solves the problem of converting motion data into images, making the data visual and convenient for applying existing artificial intelligence image recognition algorithms.
- step 6 the artificial intelligence assisted combat coaching function is introduced.
- the system is mainly used for personal-sports user identification, motion recognition, and management. Specifically, the wristband sensor extracts and compares the user's personal motion characteristics, and, with the support of cloud big data, the user's identity and motion actions are recognized.
- the first sensor is a wristband. As shown in Figure 3, it contains a motion sensor consisting of a three-axis gyroscope and a three-axis accelerometer.
- a physiological sensor consisting of a heart rate sensor, and a user number generator, can also be used, along with a geographic coordinate sensor and a voice sensor. The sampling frequency of the motion sensor is set to 5 to 50 frames/second, the heart rate sensor collects once per minute, the sampling accuracy is 8 to 16 bits, and the sampling frequency of the voice sensor is set to 8 kHz to 2.8224 MHz.
- the user's smart phone is connected to the first sensor S1.
- step 4 uses action characteristics to distinguish: outdoor running, walking, race walking, and strolling; running and walking on an indoor treadmill; and faked step counts, e.g. placing the sensor on a "step-counter artifact" shaker, or tying the sensor to an animal so that the animal's motion is counted.
- the motion rules here include only running, walking, race walking, and strolling, and no other sports.
- the system is connected to the mobile phone and the wristband sensor to obtain the user's motion data, and cooperate with the cloud center's cloud center configuration to realize the function of the motion information system.
- the APP application software running on the mobile phone completes the connection, collection, and processing of the user, the first data D1, the second data D2, the sport category attribute data D4, and the user profile data D5; completes user interaction; and assists in generating the associated data.
- the configuration of the APP application running on the mobile phone completes the function of transmitting data to the cloud center to form big data.
- the APP application software running on the mobile phone cooperates with the cloud center software to complete the functions of learning, training, user identification, motion recognition, and pressure recognition.
- the cloud center software running in the cloud center is responsible for big-data processing, including deep learning, data mining, classification algorithms, artificial intelligence processing, generating the associated data D3, updating D5, cloud center computing, cloud center management, and the steps of communicating with the application configuration.
- the motion recognition information system includes an application configuration and a cloud center configuration.
- An application configuration connection manages one user to form a motion information system; a plurality of application configuration connections manage multiple users to form a plurality of motion information systems.
- User motion data is acquired using motion sensors in the first sensor S1.
- User physiological data, user number data, and geographic coordinate data are collected by the physiological sensor in the first sensor S1.
- A/D conversion is performed on the first data D1 and the second data D2.
- the sampling frequency of the first sensor S1 is adjusted between 5 and 50 frames/second according to the motion type attribute data D4, with a sampling precision of 8 to 16 bits.
- the first sensor S1 is disposed at the wrist or the ankle of the user.
- the artificial intelligence algorithm is used to extract the user's custom action feature data according to the user's motion data, and record it in the user's profile data D5.
- the artificial intelligence algorithm is used to extract the voiceprint feature data of the user according to the user voice data, and record it into the user's profile data D5.
- the motion feature data of the motion is extracted based on the motion type attribute data D4 using an artificial intelligence algorithm, and is recorded in the motion type attribute data D4.
- the rest of the project is the same as the combat training system.
- an artificial intelligence algorithm is used to identify the user from the first data D1, the user association result D3-AI1, and the user confidence result D3-AI2 (single-sensor user identification).
- the artificial intelligence algorithm is used to identify the user's custom action user identification according to the first data D1 and the custom action feature data.
- the artificial intelligence algorithm is used to identify the user's voiceprint feature user identification according to the voice data and the voiceprint feature data.
- an artificial intelligence algorithm is used to identify the motion feature motion recognition of the motion type attribute data D4 according to the first data (D1) and the motion feature data.
- the pressure data generated by the user's striking action is calculated based on the image depth learning step and the calibration data D8.
- the artificial intelligence algorithm is used to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user.
- the corresponding association results D3-AI1 and confidence results D3-AI2 of multiple users are compared, and real-time data is obtained.
- the dynamic odds and predicted result data of the game are calculated based on the game process data and output.
- the first sensor S1 is communicated with more than one fixed terminal to calculate absolute data of the first sensor S1's own spatial coordinates, motion speed, and motion trajectory.
- the first sensor S1 is caused to communicate with more than one mobile terminal to calculate relative data of the first sensor S1's own spatial coordinates, motion speed, and motion trajectory.
- the system is mainly used for personal-sports user identification, motion recognition, and management. Specifically, the gyroscope and accelerometer built into the smartphone extract and compare the user's personal motion characteristics, and, with the support of cloud big data, user identification and motion recognition are realized.
- the mobile terminal captures user data using its own motion sensor, and must therefore be held in the hand or worn on the wrist.
- content identical to the embodiment "motion recognition system - bracelet version" is not repeated; the difference is that the three-axis gyroscope, three-axis accelerometer, and three-axis magnetometer built into the mobile phone are used instead of the first sensor S1.
- the APP application software uses artificial intelligence algorithms to identify the data by directly driving and reading the sampled data in the mobile motion sensor.
- the system is mainly used for the identification and management of ball-sport and track-and-field users. Content shared with the combat training system is not repeated; the differences are:
- the first sensor S1 is used to detect the movement speed and acceleration of the hands and feet, and does not need to detect striking force. In addition, for precise speed measurement, the distance from the racket to the wrist-worn S1 must be converted for each different racket.
- the racket carries a motion sensor and is incorporated into the management of the sport type attribute data D4 and the user profile data D5.
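The racket-to-wrist conversion mentioned above can be sketched with a rigid-rotation approximation; the distances and the shared-rotation-centre assumption are illustrative, not from the patent:

```python
# Sketch (rigid-arm approximation): the wrist sensor measures speed at the
# wrist; for racket sports the speed at the racket head scales with distance
# from the rotation centre, so each racket needs its own conversion factor.
def racket_head_speed(wrist_speed, r_wrist, r_head):
    """Assume wrist and racket head rotate about the same centre (shoulder):
    v = omega * r, so v_head = v_wrist * r_head / r_wrist."""
    return wrist_speed * r_head / r_wrist

# e.g. wrist 0.6 m from the shoulder moving at 8 m/s, racket head at 1.3 m
v_head = racket_head_speed(8.0, 0.6, 1.3)
```

Storing `r_head` per racket model in the sport type attribute data D4 is one natural way to realize the per-racket conversion the text calls for.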
- the geographic coordinate sensor collects geographic coordinates, and the unit sensing network connects all the first sensors S1 worn by one user to the personal sensor network, the venue sensor network, and the motion information system.
- analog-to-digital (A/D) conversion is performed on the first data D1.
- the sampling frequency and sampling accuracy of the first sensor S1 are adjusted according to the sport type attribute data D4.
- the first sensor S1 is disposed at the user's wrist, ankle, and joint positions.
- the artificial intelligence algorithm is used to extract the user's habitual action feature data from the user's motion data and record it in the user's profile data D5.
- the artificial intelligence algorithm is used to extract the user's voiceprint feature data from the user's voice data and record it in the user's profile data D5.
- the artificial intelligence algorithm is used to extract the action feature data of the sport from the sport type attribute data D4 and record it in the sport type attribute data D4.
- the sport type attribute data D4 includes: sport rule data and, corresponding to the sport rule data, exercise intensity data, exercise level data, exercise amplitude data, injury degree data, duration data, physical energy consumption data, physiological degree data, and competition rule data.
- the sport rules include at least, but are not limited to: track and field, gymnastics, and ball sports.
- the user has personal profile data D5, which includes: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical sports data, strong-event data, weak-event data, voice data, voiceprint data, image data, and video data.
- the motion sensor includes an angular velocity sub-sensor, an acceleration sub-sensor, and a magnetic sub-sensor, and each axis system includes at least the X, Y, and Z axes.
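The three triaxial sub-sensors described above produce nine values per synchronized sampling point. A minimal sketch of that data layout, with the grouping that the later motion-image mapping operates on (the class and field names are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class ImuSample:
    """One synchronized reading from the three sub-sensors, each XYZ triaxial."""
    t: float       # system time value T of the sampling point
    gyro: tuple    # angular velocity (x, y, z), rad/s
    accel: tuple   # acceleration (x, y, z), m/s^2
    mag: tuple     # magnetic field (x, y, z), uT

def as_groups(samples):
    """Flatten each sample into one 9-value group, the unit later mapped
    to pixels in the motion-image encoding of the associated data."""
    return [s.gyro + s.accel + s.mag for s in samples]
```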
- the artificial intelligence algorithm is used to perform single-sensor user identification of the user according to the first data D1, the user association result D3-AI1, the user confidence result D3-AI2, and the three-dimensional vectorized data D8.
- the artificial intelligence algorithm is used to perform habitual-action user identification of the user according to the first data D1 and the habitual action feature data.
- the artificial intelligence algorithm is used to perform voiceprint user identification of the user according to the voice data and the voiceprint feature data.
- the artificial intelligence algorithm is used to perform action-feature motion recognition of the sport type attribute data D4 according to the first data D1 and the action feature data.
- the artificial intelligence algorithm is used to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user.
- the association results D3-AI1 and confidence results D3-AI2 corresponding to the multiple users are compared to obtain real-time competition process data.
- the dynamic odds and predicted result data of the competition are calculated based on the competition process data and output.
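One simple way the dynamic-odds step could work, offered purely as a sketch and not as the method the patent claims: turn each competitor's running process score into a naive win probability and invert it into decimal odds with a bookmaker margin. The scoring scheme and the `margin` parameter are assumptions:

```python
def dynamic_odds(process_scores, margin=0.05):
    """Convert running competition-process scores (e.g. weighted counts of
    heavy blows, standing counts, injuries) into decimal odds per user."""
    total = sum(process_scores.values())
    odds = {}
    for user, score in process_scores.items():
        p = score / total                      # naive win probability
        odds[user] = round(1.0 / (p * (1 + margin)), 2)
    return odds
```

Recomputing this after every scored event yields odds that drift with the live competition process data.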
- the system is mainly used by organizations for personnel identification.
- the system includes an artificial intelligence bracelet, a mobile APP, and cloud center software. Details are as follows:
- the exercise rules contain only rules for daily activities; everything else is the same.
- the artificial intelligence algorithm is used to perform single-sensor user identification of the user according to the first data D1, the user association result D3-AI1, and the user confidence result D3-AI2.
- the artificial intelligence algorithm is used to perform habitual-action user identification of the user according to the first data D1 and the habitual action feature data.
- the voice data is included in the first data D1 collected from the user.
- the artificial intelligence algorithm is used to perform voiceprint user identification of the user according to the voice data and the voiceprint feature data.
- the artificial intelligence algorithm is used to perform action-feature motion recognition of the sport type attribute data D4 according to the first data D1 and the action feature data.
- the system is mainly used to manage safety and rescue by detecting the physiological characteristics of individuals in dangerous working environments.
- for example, firefighters in a firefighting environment, shipbuilders working in cabins in hot summer conditions, or miners in tunnel environments.
- the system includes several personal smart bracelets, micro base stations, a mobile APP, and cloud center software. Details are as follows:
- the key methods and systems are essentially the same as methods and systems 1 to 15, strengthened only in the safety and rescue software functions. These are functional points that can be understood by mid-level technicians in the industry and designed without innovation, and are therefore not described here.
- the system is mainly used as a management system for detecting animals and raising security alarms on a pasture.
- the system includes several personal intelligent sensors, micro base stations, a mobile APP, and cloud center software. Details are as follows:
- the user is changed to an animal.
- the first sensor S1 is disposed at the animal's horns and ankle positions.
- the artificial intelligence algorithm is used to extract the animal's habitual action feature data from the animal's motion data and record it in the animal's individual profile data D5.
- the artificial intelligence algorithm is used to extract the animal's voiceprint feature data from the animal's sound data and record it in the animal's individual profile data D5.
- the action feature data of the motion is extracted from the sport type attribute data D4 and recorded in the sport type attribute data D4.
- the rest of the items are the same as in the combat training system.
- the artificial intelligence algorithm is used to perform single-sensor animal identification of the animal according to the first data D1, the association result D3-AI1, and the confidence result D3-AI2.
- the artificial intelligence algorithm is used to perform habitual-action animal identification of the animal according to the first data D1 and the habitual action feature data.
- the artificial intelligence algorithm is used to perform voiceprint animal recognition of the animal according to the sound data and the voiceprint feature data.
- the animal information system looks up the animal wearing the first sensor S1 and sends roll-call information to it; the first sensor S1 worn by the animal responds upon receipt, thereby realizing the roll call.
- the animal wearing the first sensor S1 sends registration information to the animal information system through the first sensor S1 and obtains a response, thereby realizing the registration.
- the animal wearing the first sensor S1 is located by the animal information system through more than one terminal.
- the first sensor S1 issues an abnormality alarm to the animal information system based on abnormal values of the first data D1; the abnormal values include alarm trigger conditions preset in the animal information system.
- communication between the animal information system and the first sensor S1 is realized via the sensor network.
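The preset-trigger-condition alarm described above can be sketched as a simple bounds check over the physiological fields of the first data D1. This is a minimal illustration; the field names and threshold values are assumptions, not values from the patent:

```python
def check_alarm(sample, thresholds):
    """Return the list of fields in a D1-style sample that violate their
    preset trigger conditions. `thresholds` maps field -> (low, high)."""
    alarms = []
    for field, (low, high) in thresholds.items():
        value = sample.get(field)
        if value is not None and not (low <= value <= high):
            alarms.append(field)
    return alarms
```

On each violation the sensor (or the micro base station relaying for it) would push an alarm message to the information system over the sensor network.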
- a first sensor S1, a terminal, and an animal information system; the first sensor S1 is connected to the terminal, and the terminal is connected to the animal information system and processes the data from the first sensor S1.
- the first sensor S1 includes, but is not limited to, a processor and a motion sensor, a physiological sensor, a user number generator, and a geographic coordinate sensor; the motion sensor, physiological sensor, user number generator, and geographic coordinate sensor are connected to the processor, and the processor is connected to the terminal.
- the terminal and the animal information system are connected by wired connection or wireless sensor network connection, and the processor and the terminal are connected by wired connection or wireless sensor network connection.
- the rest of the items are the same as in the combat training system.
Claims (15)
- A method for monitoring motion data, characterized by comprising: a step of monitoring first data (D1) using a first sensor (S1) disposed on a user's body; a step of transmitting the first data (D1) to a motion information system via a sensor network; and/or, a step of processing the first data (D1).
- The method according to claim 1, further comprising: a step of monitoring second data (D2) using a second sensor (S2) disposed on a target apparatus while the user uses the target apparatus; a step of simultaneously collecting, in chronological order, the first data (D1) and the second data (D2) while the user uses the target apparatus, and generating associated data (D3); and/or, a step of transmitting the second data (D2) and the associated data (D3) to the motion information system via the sensor network; and/or, the users at least include: student users, coach users, opponent users, and animal users; the sensor network includes fixed terminals and mobile terminals, including micro base stations, smartphones, and PCs; the target apparatus includes combat targets, balls, rackets, and sports equipment, and use of the combat targets includes strikes on the targets by fists, feet, and body parts.
- The method according to claim 2, wherein the step of monitoring the first data (D1) using the first sensor (S1) disposed on the user's body comprises: a step of collecting the user's motion data using a motion sensor in the first sensor (S1); and/or, a step of collecting the user's motion data using a motion sensor included in the smartphone and transmitting it directly to the motion information system inside the smartphone; and/or, a step of collecting the user's physiological data using a physiological sensor in the first sensor (S1); and/or, a step of collecting pressure data when the user uses the target apparatus and/or strikes an opponent, using a pressure sensor in the first sensor (S1); and/or, a step of generating the user's user-number data using a user-number generator included in the first sensor (S1); and/or, a step of generating the user's geographic coordinate data using a geographic coordinate sensor included in the first sensor (S1); and/or, the step of monitoring the second data (D2) using the second sensor (S2) disposed on the target apparatus while the user uses the target apparatus comprises: a step of collecting pressure data when the user uses the target apparatus, using a pressure sensor in the second sensor (S2); and/or, a step of collecting position data when the user uses the target apparatus, using a position sensor in the second sensor (S2); and/or, a step of connecting all the first sensors (S1) worn by one user to a personal sensor network and/or a venue sensor network and/or the motion information system using a unit sensor network; and/or, a step of connecting all the second sensors (S2) equipped on one set of target apparatus to a personal sensor network and/or a venue sensor network and/or the motion information system using a unit sensor network; and/or, a step of collecting the system time value (T) at which the first data (D1) and the second data (D2) are monitored, and recording it into the first data (D1) and the second data (D2); and/or, a step of performing analog-to-digital (A/D) conversion on the first data (D1) and the second data (D2); and/or, a step of adjusting the sampling frequency and sampling accuracy of the first sensor (S1) and the second sensor (S2) according to sport type attribute data (D4); and/or, a step of interpolating the first data (D1) and the second data (D2) to a predetermined scale according to the first data (D1) and the second data (D2), and merging the first data (D1) and/or the second data (D2) into the associated data (D3); wherein the first sensor (S1) is disposed at the user's wrist, ankle, joint, and/or striking positions; and/or, a step of extracting the user's habitual action feature data from the user's motion data using an artificial intelligence algorithm and recording it into the user's personal profile data (D5); and/or, a step of extracting the user's voiceprint feature data from the user's voice data using the artificial intelligence algorithm and recording it into the user's personal profile data (D5); and/or, a step of extracting action feature data of the sport from the sport type attribute data (D4) using the artificial intelligence algorithm and recording it into the sport type attribute data (D4); and/or, the sport type attribute data (D4) includes: sport rule data and, corresponding to the sport rule data, exercise intensity data, exercise level data, exercise amplitude data, injury degree data, duration data, physical energy consumption data, physiological degree data, and/or competition rule data; wherein the sport rules at least include: freestyle fighting, stand-up fighting, no-holds-barred fighting, MMA, UFC, Sanda, Wushu, Tai Chi, Muay Thai, kickboxing, K1 rules, fencing, judo, wrestling, track and field, gymnastics, and ball sports; the user has personal profile data (D5), which includes: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical sports data, strong-event data, weak-event data, voice data, voiceprint data, image data, and video data.
- The method according to claim 3, further comprising: a step of formatting the associated data (D3) according to data content including sampling type, sampling frequency, sampling accuracy, and data format; and/or, a step of decomposing, in the motion-data part of the associated data (D3) and according to the characteristics of the sport actions, the action sequence into action units and computing unit data (D3-U); a step of mapping the unit data (D3-U) into a motion image: according to the collection sequence, taking the triaxial data of each motion-sensor sample in the unit data (D3-U) as one group and mapping one group to one pixel in the motion image; a step of mapping the collected data of each sub-sensor of the motion sensor's X, Y, and Z axes in the unit data (D3-U) into one motion image, mapping each sampling point of each sub-sensor to a corresponding pixel in the motion image, taking the X, Y, Z triaxial data of the sampling point as the independent variable x of the pixel's RGB primary-color data, establishing a function y=f(x) for the RGB color-code value y, and computing the RGB primary-color data; and/or, a step of mapping the collected data of one sub-sensor of the motion sensor in the unit data (D3-U) into one motion image, mapping the collected data of the other sub-sensors into channels of that motion image, mapping each sampling point of each sub-sensor to a corresponding pixel in the motion image or channel, taking the X, Y, Z triaxial data of the sampling point as the independent variable x of the pixel's RGB primary-color data or channel data, establishing a function y=f(x) for the RGB color-code value y, and computing the RGB primary-color data or channel data; and/or, a step of performing deep learning on multiple motion-image data using image recognition and classification algorithms from artificial intelligence, computing the user's habitual action features, the user's voiceprint features, the sport's action features, and the pressure magnitude features, and comparing feature data when the next associated data (D3) is collected; and/or, a step of adapting the multi-image mapping and the single-image multi-channel mapping into image and video files according to image and video file formats, for display on a monitor and viewing by the human eye; and/or, the artificial intelligence algorithms at least include: artificial neural network algorithms, CNN convolutional neural network algorithms, RNN recurrent neural network algorithms, SVM support vector machine algorithms, genetic algorithms, ant colony algorithms, simulated annealing algorithms, particle swarm algorithms, and Bayesian algorithms; the RGB function includes the linear function y=kx+j and nonlinear functions, where k and j are adjustment constants.
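Claim 4 above maps each triaxial sampling point to an image pixel through a color-code function y = f(x), including the linear form y = kx + j. A minimal sketch of that mapping; the particular k and j values and the clamping to the 0..255 byte range are illustrative choices, not specified by the claim:

```python
def axis_to_rgb(sample_xyz, k=1.0, j=128.0):
    """Map one triaxial sampling point (x, y, z) to an RGB pixel using the
    linear color-code function y = k*x + j, clamped to valid byte range."""
    def channel(v):
        return max(0, min(255, int(round(k * v + j))))
    return tuple(channel(v) for v in sample_xyz)

def unit_to_image_row(unit_data, k=1.0, j=128.0):
    """One pixel per sampling group: a window of triaxial samples in the
    unit data (D3-U) becomes one row of the motion image."""
    return [axis_to_rgb(s, k, j) for s in unit_data]
```

Stacking successive rows yields the motion image on which image-classification networks such as CNNs can then be trained.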
- The method according to claim 3 or 4, further comprising: a step of causing one or more video image sensors (S3) to capture one or more channels of video images (D6) of the user's training or competition; and/or, a step of causing the one or more video image sensors (S3) to communicate with the motion information system via the sensor network; and/or, a step of performing, based on the video images (D6) and the first data (D1) and according to the position of the first sensor (S1) in the video images (D6), three-dimensional vectorized synthesis of the sport actions using the artificial intelligence algorithm to obtain three-dimensional vectorized data (D7); and/or, a step of associating the three-dimensional vectorized data (D7) with the second data (D2), the associated data (D3), the sport type attribute data (D4), and/or the personal profile data (D5); and/or, a step of recognizing, using the artificial intelligence algorithm and according to the three-dimensional vectorized data (D7) and the sport type attribute data (D4), the sport actions in the video images (D6) and synchronously annotating in the video images (D6) the time points before and after the sport actions; wherein the training and competition include single-person training, single-person routine competitions, and multi-person confrontation competitions.
- The method according to claim 4, further comprising: a step in which the coach user strikes the target apparatus with standard actions according to the sport type attribute data (D4) to obtain the coach's associated data (D3), machine learning is performed on the coach's associated data (D3) according to the artificial intelligence algorithm to obtain the coach's association result (D3-AI1) and the coach's confidence result (D3-AI2), and the coach user's personal profile data (D5) is updated; and/or, a step in which the student user strikes the target apparatus according to the sport type attribute data (D4) to obtain the student's associated data (D3), machine learning is performed on the student's associated data (D3) according to the artificial intelligence algorithm to obtain the student's association result (D3-AI1) and the student's confidence result (D3-AI2), and the student user's personal profile data (D5) is updated; and/or, a step of cyclically comparing the student's association result (D3-AI1) with the coach's association result (D3-AI1), and a step of cyclically comparing the student's confidence result (D3-AI2) with the coach's confidence result (D3-AI2); and/or, a step of calculating and analyzing, according to the student's association result (D3-AI1) and confidence result (D3-AI2), the student's typical sports data, strengths, weaknesses, and gaps, updating the student's personal profile data (D5), and computing and outputting training suggestion information; and/or, a step of looking up the opponent user's personal profile data (D5) and the student's personal profile data (D5), comparing the typical sports data, strong-event data, and weak-event data therein, calculating and analyzing the gap between the two, formulating a targeted training suggestion plan, and supervising and checking the training results.
- The method according to claim 5, further comprising: a step of, when the user's first data (D1) is collected, identifying the user using the artificial intelligence algorithm according to the first data (D1) and the association result (D3-AI1) and/or the confidence result (D3-AI2) and/or the three-dimensional vectorized data (D8); or, a step of, when the user's first data (D1) is collected, identifying the user using the artificial intelligence algorithm according to the first data (D1) and the habitual action feature data; or, a step of, when the collected first data (D1) of the user includes the voice data, identifying the user using the artificial intelligence algorithm according to the voice data and the voiceprint feature data; or, a step of, when the user's first data (D1) and second data (D2) are collected, identifying the user using the artificial intelligence algorithm according to the first data (D1) and the association result (D3-AI1) and/or the confidence result (D3-AI2) and/or the three-dimensional vectorized data (D8); and/or, a step of, when the user's first data (D1) is collected, identifying the sport type attribute data (D4) using the artificial intelligence algorithm according to the user, the first data (D1), and the association result (D3-AI1) and/or the confidence result (D3-AI2) and/or the three-dimensional vectorized data (D8); or, a step of, when the user's first data (D1) and second data (D2) are collected, identifying the sport type attribute data (D4) using the artificial intelligence algorithm according to the user, the first data (D1), and the association result (D3-AI1) and/or the confidence result (D3-AI2) and/or the three-dimensional vectorized data (D8); a step of, when the user's first data (D1) is collected, identifying the sport type attribute data (D4) using the artificial intelligence algorithm according to the first data (D1) and the action feature data; a step of calculating the pressure data generated by the user's striking actions according to the image deep-learning step and the calibration data (D8); a step of causing the user to strike the target apparatus and, according to Newtonian mechanics algorithms, obtaining the angular velocity data and acceleration data in the first sensor (S1) and the pressure data in the second sensor (S2) to establish an acceleration-pressure association (D8); and/or, a step of, when the user strikes the target apparatus or an opponent using only the first sensor (S1) without the second sensor (S2), performing pressure identification in the acceleration-pressure association (D8) according to the first data (D1).
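Claim 7 above establishes an acceleration-pressure association (D8) via Newtonian mechanics so that, once calibrated against the target's pressure sensor, strike force can later be estimated from the wrist sensor alone. A minimal sketch under the assumption of a single effective striking mass, F = m * a; the function names and all numeric values are illustrative:

```python
def calibrate_effective_mass(measured_force_n, peak_accel_ms2):
    """Fit the effective mass once from paired readings: S2 target pressure
    (converted to force) and S1 peak acceleration during the same strike."""
    return measured_force_n / peak_accel_ms2

def strike_force(effective_mass_kg, peak_accel_ms2):
    """Newtonian estimate F = m * a of strike force from wrist acceleration,
    usable when only the first sensor S1 is worn (no target sensor S2)."""
    return effective_mass_kg * peak_accel_ms2
```

In practice several calibration strikes would be averaged, and the fitted mass stored in the calibration data alongside the profile fields such as arm weight and punch weight.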
- The method according to claim 7, further comprising: a step of, according to the competition rules in the sport type attribute data (D4), during the training or competition of multiple users, calculating the association result (D3-AI1) and confidence result (D3-AI2) corresponding to each user using the artificial intelligence algorithm; a step of, according to the competition rules in the sport type attribute data (D4), comparing the association results (D3-AI1) and confidence results (D3-AI2) corresponding to the multiple users, and obtaining real-time competition process data including the degree and number of heavy blows, the degree and number of injuries, standing counts and their number, and TKOs and KOs; and a step of calculating and outputting the dynamic odds and predicted result data of the competition based on the competition process data.
- The method according to claim 2, further comprising: a step of causing the first sensor (S1) and/or the second sensor (S2) to communicate with one or more fixed terminals to calculate absolute data of the first sensor's (S1) and/or the second sensor's (S2) own spatial coordinates, motion speed, and motion trajectory; and/or, a step of causing the first sensor (S1) and/or the second sensor (S2) to communicate with one or more mobile terminals, first sensors (S1), and/or second sensors (S2) to calculate relative data of the first sensor's (S1) and/or the second sensor's (S2) own spatial coordinates, motion speed, and motion trajectory; and/or, a step of processing and displaying result information of the motion information system using the fixed terminals and/or mobile terminals; and/or, a step of sending the result information and/or live replay video of the sport actions to one or more display devices, so that the result information is displayed fused with the live video.
- The method according to claim 3, further comprising: a step in which the motion information system looks up the user wearing the first sensor (S1) and sends roll-call information to the user, and the first sensor (S1) worn by the user responds upon receipt; and/or, a step in which the user wearing the first sensor (S1) sends registration information to the motion information system through the first sensor (S1) and obtains a response; and/or, a step in which the motion information system sends notification information to the first sensor (S1) worn by the user, and the first sensor (S1), after receiving the notification information, responds to the motion information system and displays and/or vibrates on the first sensor (S1); and/or, a step in which the motion information system locates the user wearing the first sensor (S1) through one or more of the terminals; and/or, a step in which the user wearing the first sensor (S1) sends alarm information to the motion information system according to the user's own subjective will; and/or, a step in which the first sensor (S1) sends alarm information to the motion information system according to abnormal values of the first data (D1); and/or, communication between the motion information system and the first sensor (S1) is realized via the sensor network; the abnormal values include alarm trigger conditions preset by the user and/or the motion information system.
- A system for monitoring motion data, characterized by comprising: a first sensor (S1), a terminal, and a motion information system; the first sensor (S1) is connected to the terminal, and the terminal is connected to the motion information system and processes data from the first sensor (S1).
- The system according to claim 11, further comprising: a second sensor (S2) and/or a video image sensor (S3); the second sensor (S2) and the video image sensor (S3) are each connected to a terminal, and the terminal is connected to the motion information system.
- The system according to claim 11 or 12, wherein: the first sensor (S1) is composed of a processor connected with a motion sensor and/or a physiological sensor and/or a pressure sensor and/or a user-number generator and/or a geographic coordinate sensor; wherein the motion sensor, the physiological sensor, the pressure sensor, the user-number generator, and the geographic coordinate sensor are each connected to the processor, and the processor is connected to the terminal; and/or, the second sensor (S2) includes a pressure sensor and a position sensor; the terminal and the motion information system are connected by wired connection or wireless sensor network connection, and the processor and the terminal are connected by wired connection or wireless sensor network connection; the motion sensor includes: a triaxial angular velocity sensor, a triaxial acceleration sensor, a triaxial magnetic sensor, an electronic compass sensor, a speed sensor, a motion direction sensor, a displacement sensor, a trajectory sensor, a light sensor, and combinations thereof; the physiological sensor includes: a blood oxygen sensor, a blood pressure sensor, a pulse sensor, a temperature sensor, a perspiration sensor, and a sound and/or light sensor; the pressure sensor includes: a pressure sensor, a pressure-intensity sensor, an impact-force sensor, and/or an impulse sensor; the position sensor includes: a spatial position sensor, a spatial coordinate sensor, a light sensor, and/or a camera; the user-number generator includes: a user-number storage, editing, and sending module; the geographic coordinate sensor includes: a navigation satellite positioning module; the video image sensor is a visible-light and/or invisible-light camera.
- The system according to claim 13, wherein the sensor network includes fixed terminals and mobile terminals, and the terminals include micro base stations and/or mobile phones and/or PCs; the connection modes of the sensor network include wired and wireless; the micro base station includes: one or more downlink interfaces, a processor, a power subsystem, and an uplink interface, wherein the one or more downlink interfaces are connected to the processor, the processor is connected to the uplink interface, the power subsystem supplies power to the downlink interfaces, the processor, and the uplink interface, the downlink interfaces connect and communicate with the first sensor (S1) and/or the second sensor (S2) and/or the video image sensor (S3) via a wireless sensor network, and the uplink interface communicates with the motion information system via a wired or wireless network; the motion information system includes a terminal unit and a cloud center that communicate with each other; the terminal unit and the terminal are integrated or separately arranged; the target apparatus includes combat targets, balls, rackets, and sports equipment, and use of the combat targets includes strikes on the targets by fists, feet, and body parts.
- The system according to claim 14, wherein the cloud center is configured such that: the terminal completes, downward, the connection, collection, and processing of data including the user, the first data (D1), the second data (D2), the sport type attribute data (D4), the user's personal profile data (D5), and the video data (D6), completes user interaction, and assists in generating the associated data (D3), the personal profile data (D5), the three-dimensional vectorized data (D7), and the calibration data (D8); the terminal completes, upward, functions including transmitting data to the cloud center to form big data; the terminal interacts with the cloud center to complete the learning, training, user identification, action identification, and pressure identification functions; the cloud center completes processing of the big data, including the deep learning, data mining, classification algorithms, artificial intelligence processing, generation of the associated data (D3), the video data (D6), and the calibration data (D8), updating of (D5), cloud computing, and cloud management, and communicates with the application software; the motion information system is configured in the terminal and the cloud center.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711310325.XA CN108096807A (zh) | 2017-12-11 | 2017-12-11 | 一种运动数据监测方法和系统 |
CN201711310325.X | 2017-12-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019114708A1 true WO2019114708A1 (zh) | 2019-06-20 |
Family
ID=62208337
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/120363 WO2019114708A1 (zh) | 2017-12-11 | 2018-12-11 | 一种运动数据监测方法和系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108096807A (zh) |
WO (1) | WO2019114708A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117100255A (zh) * | 2023-10-25 | 2023-11-24 | 四川大学华西医院 | 一种基于神经网络模型进行防摔倒判定的方法和相关产品 |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108096807A (zh) * | 2017-12-11 | 2018-06-01 | 丁贤根 | 一种运动数据监测方法和系统 |
CN109107136A (zh) * | 2018-09-07 | 2019-01-01 | 广州仕伯特体育文化有限公司 | 一种运动参数监测方法及装置 |
CN109718528B (zh) * | 2018-11-28 | 2021-06-04 | 浙江骏炜健电子科技有限责任公司 | 基于运动特征参数的身份识别方法和系统 |
CN109800860A (zh) * | 2018-12-28 | 2019-05-24 | 北京工业大学 | 一种面向社区基于cnn算法的老年人跌倒检测方法 |
CN109769213B (zh) * | 2019-01-25 | 2022-01-14 | 努比亚技术有限公司 | 用户行为轨迹记录的方法、移动终端及计算机存储介质 |
CN110412627A (zh) * | 2019-05-30 | 2019-11-05 | 沈恒 | 一种静水项目船、桨数据采集的应用方法 |
CN110314346A (zh) * | 2019-07-03 | 2019-10-11 | 重庆道吧网络科技有限公司 | 基于大数据分析的智能格斗竞技拳套、脚套、系统及方法 |
CN110507969A (zh) * | 2019-08-30 | 2019-11-29 | 佛山市启明星智能科技有限公司 | 一种跆拳道的训练系统与方法 |
CN114080258B (zh) * | 2020-06-17 | 2022-08-09 | 华为技术有限公司 | 一种运动模型生成方法及相关设备 |
TWI803833B (zh) * | 2021-03-02 | 2023-06-01 | 國立屏東科技大學 | 雲端化球類運動之動作影像訓練系統及其方法 |
CN112884062B (zh) * | 2021-03-11 | 2024-02-13 | 四川省博瑞恩科技有限公司 | 一种基于cnn分类模型和生成对抗网络的运动想象分类方法及系统 |
CN113317783B (zh) * | 2021-04-20 | 2022-02-01 | 港湾之星健康生物(深圳)有限公司 | 多模个性化纵横校准的方法 |
US20230060394A1 (en) * | 2021-08-27 | 2023-03-02 | Rapsodo Pte. Ltd. | Intelligent analysis and automatic grouping of activity sensors |
CN113996048B (zh) * | 2021-11-18 | 2023-03-14 | 宜宾显微智能科技有限公司 | 一种基于姿势识别及电子护具监测的搏击计分系统及方法 |
CN114886387B (zh) * | 2022-07-11 | 2023-02-14 | 深圳市奋达智能技术有限公司 | 基于压感的走跑运动卡路里计算方法、系统及存储介质 |
US20240078842A1 (en) * | 2022-09-02 | 2024-03-07 | Htc Corporation | Posture correction system and method |
CN115869608A (zh) * | 2022-11-29 | 2023-03-31 | 京东方科技集团股份有限公司 | 击剑比赛裁判方法及装置、系统、计算机可读存储介质 |
CN116269266B (zh) * | 2023-05-22 | 2023-08-04 | 广州培生智能科技有限公司 | 基于ai的老年人健康监测方法和系统 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140270375A1 (en) * | 2013-03-15 | 2014-09-18 | Focus Ventures, Inc. | System and Method for Identifying and Interpreting Repetitive Motions |
CN105183152A (zh) * | 2015-08-25 | 2015-12-23 | 小米科技有限责任公司 | 运动能力的分析方法、装置及终端 |
CN105453128A (zh) * | 2013-05-30 | 2016-03-30 | 阿特拉斯维拉伯斯公司 | 便携式计算设备以及对从其捕捉的个人数据的分析 |
CN106823348A (zh) * | 2017-01-20 | 2017-06-13 | 广东小天才科技有限公司 | 一种运动数据管理方法、装置及系统、用户设备 |
CN107213619A (zh) * | 2017-07-04 | 2017-09-29 | 曲阜师范大学 | 体育运动训练评估系统 |
CN108096807A (zh) * | 2017-12-11 | 2018-06-01 | 丁贤根 | 一种运动数据监测方法和系统 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3949226B2 (ja) * | 1997-06-11 | 2007-07-25 | カシオ計算機株式会社 | 衝撃力推定装置、衝撃力推定方法、及び衝撃力推定処理プログラムを記憶した記憶媒体 |
CN202366428U (zh) * | 2011-12-22 | 2012-08-08 | 钟亚平 | 一种跆拳道击打训练数字采集系统 |
CN103463804A (zh) * | 2013-09-06 | 2013-12-25 | 南京物联传感技术有限公司 | 拳击训练感知系统及其方法 |
KR20160074289A (ko) * | 2014-12-18 | 2016-06-28 | 조선아 | 타격 판정 장치 및 방법 |
CN107126680A (zh) * | 2017-06-13 | 2017-09-05 | 广州体育学院 | 一种基于运动类传感器的跑步监测和语音提醒系统 |
- 2017-12-11: CN application CN201711310325.XA, published as CN108096807A (zh), status: active, Pending
- 2018-12-11: WO application PCT/CN2018/120363, published as WO2019114708A1 (zh), status: active, Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117100255A (zh) * | 2023-10-25 | 2023-11-24 | 四川大学华西医院 | 一种基于神经网络模型进行防摔倒判定的方法和相关产品 |
CN117100255B (zh) * | 2023-10-25 | 2024-01-23 | 四川大学华西医院 | 一种基于神经网络模型进行防摔倒判定的方法和相关产品 |
Also Published As
Publication number | Publication date |
---|---|
CN108096807A (zh) | 2018-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019114708A1 (zh) | 一种运动数据监测方法和系统 | |
Rana et al. | Wearable sensors for real-time kinematics analysis in sports: A review | |
US11990160B2 (en) | Disparate sensor event correlation system | |
US11355160B2 (en) | Multi-source event correlation system | |
US10124210B2 (en) | Systems and methods for qualitative assessment of sports performance | |
US9911045B2 (en) | Event analysis and tagging system | |
KR101687252B1 (ko) | 맞춤형 개인 트레이닝 관리 시스템 및 방법 | |
Baca et al. | Ubiquitous computing in sports: A review and analysis | |
US9401178B2 (en) | Event analysis system | |
US9406336B2 (en) | Multi-sensor event detection system | |
CN109692003B (zh) | 一种儿童跑步姿态纠正训练系统 | |
US20180160943A1 (en) | Signature based monitoring systems and methods | |
CN107211109B (zh) | 视频和运动事件集成系统 | |
CN105498188A (zh) | 一种体育活动监控装置 | |
JP2018523868A (ja) | 統合されたセンサおよびビデオモーション解析方法 | |
Saponara | Wearable biometric performance measurement system for combat sports | |
JP2017521017A (ja) | モーション事象認識およびビデオ同期システム並びに方法 | |
CN104075731A (zh) | 确定个人和运动物体的表现信息的方法 | |
KR20160045833A (ko) | 에너지 소모 디바이스 | |
WO2017011811A1 (en) | Event analysis and tagging system | |
Kos et al. | Tennis stroke consistency analysis using miniature wearable IMU | |
CN111672089B (zh) | 一种针对多人对抗类项目的电子计分系统及实现方法 | |
US20160180059A1 (en) | Method and system for generating a report for a physical activity | |
US20230302325A1 (en) | Systems and methods for measuring and analyzing the motion of a swing and matching the motion of a swing to optimized swing equipment | |
Hu et al. | Application of intelligent sports goods based on human-computer interaction concept in training management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18888948 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18888948 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/11/2020) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18888948 Country of ref document: EP Kind code of ref document: A1 |