WO2019114708A1 - Motion data monitoring method and system - Google Patents

Motion data monitoring method and system

Info

Publication number
WO2019114708A1
WO2019114708A1 (PCT/CN2018/120363, CN2018120363W)
Authority
WO
WIPO (PCT)
Prior art keywords
data
sensor
user
motion
information system
Prior art date
Application number
PCT/CN2018/120363
Other languages
English (en)
French (fr)
Inventor
丁贤根
Original Assignee
丁贤根
Priority date
Filing date
Publication date
Application filed by 丁贤根 filed Critical 丁贤根
Publication of WO2019114708A1 publication Critical patent/WO2019114708A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B69/00 Training appliances or apparatus for special sports
    • A63B69/20 Punching balls, e.g. for boxing; Other devices for striking used during training of combat sports, e.g. bags
    • A63B69/32 Punching balls, e.g. for boxing; Other devices for striking used during training of combat sports, e.g. bags with indicating devices
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B2071/065 Visualisation of specific exercise parameters
    • A63B2220/00 Measuring of physical parameters relating to sporting activity
    • A63B2220/10 Positions
    • A63B2220/20 Distances or displacements
    • A63B2220/30 Speed
    • A63B2220/40 Acceleration
    • A63B2220/50 Force related parameters
    • A63B2220/56 Pressure
    • A63B2230/00 Measuring physiological parameters of the user
    • A63B2244/00 Sports without balls
    • A63B2244/10 Combat sports
    • A63B2244/102 Boxing

Definitions

  • the invention relates to the field of artificial intelligence applications in information technology, in particular to the application of artificial intelligence in sports, including image recognition, motion recognition, personnel identification, intelligent training, and automatic evaluation, and specifically to a motion data monitoring method and system.
  • the intent of the present invention is to solve related problems in sports using artificial intelligence technology and to remedy the shortcomings of current intelligent sports technologies in mechanical measurement, motion recognition, personnel recognition, learning, training, dynamic human motion (such as fighting), practice refereeing, evaluation, and odds calculation. The invention creatively introduces a method of data imaging, so that artificial intelligence results from the field of image recognition can be applied to sports measurement data.
  • the present invention includes sensors 104, 105 to 10n, and 10n+1 to 10m+1, a terminal 101, and a combat information system 2 (103).
  • the sensors include motion sensors, physiological sensors, a user number generator, a geographic coordinate sensor, pressure sensors, and the like; the terminal further includes combat information system 1 (102). Specifically:
  • a method of motion data monitoring includes, but is not limited to, the step of monitoring the first data D1 with a first sensor S1 disposed on a user's body.
  • the structure of the first sensor includes one of, or a combination of, a motion sensor, a physiological sensor, a pressure sensor, a user number generator, and a geographic coordinate sensor, working under the management of a processor that includes a power subsystem.
  • which of the motion sensor, physiological sensor, pressure sensor, user number generator, and geographic coordinate sensor is used depends on the application scenario. For example, for the same user, a first sensor with a motion sensor may need to be worn on all four limbs to monitor limb movements, whereas physiological monitoring can be performed at any single limb position. In addition, some sports (such as fighting) may require monitoring pressure (such as the impact of a fist).
  • for such striking sports, not only the motion sensor but also the pressure sensor must be placed at a specific part (such as the fist).
  • for identity or location monitoring alone, the user number generator or the geographic coordinate sensor can meet the requirements. Therefore, which one of the motion sensor, physiological sensor, pressure sensor, user number generator, and geographic coordinate sensor, or which combination thereof, is used is determined by the specific application scenario.
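  The scenario-dependent choice of sub-sensors described above can be sketched as a simple lookup table; the scenario names and sensor sets below are illustrative assumptions, not definitions from the specification:

```python
# Illustrative mapping from application scenario to the sub-sensor
# combination of the first sensor S1 (scenario names are hypothetical).
SENSOR_SETS = {
    "fighting": {"motion", "pressure", "user_number"},    # strikes need force data
    "physiological": {"physiological", "user_number"},    # any limb position works
    "identity_only": {"user_number"},
    "location_only": {"geographic_coordinate"},
}

def select_sensors(scenario):
    """Return the sub-sensor combination chosen for an application scenario."""
    return SENSOR_SETS[scenario]
```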
  • the present invention includes but is not limited to the following improvement measures and combinations thereof:
  • the step of monitoring the first data D1 by using the first sensor S1 disposed on the user's body includes:
  • the step of monitoring the second data D2 when the user strikes and uses the target device by using the second sensor S2 disposed on the target device includes but is not limited to:
  • the step of connecting all of the first sensors S1 worn by the user to the personal sensor network, the location sensor network, and the motion information system using the unit sensing network is as shown in FIG.
  • the step of connecting all of the second sensors S2 equipped with a set of target devices to the personal sensor network, the location sensor network, and the motion information system using the unit sensing network is as shown in FIG.
  • the step of monitoring the system time value T at which the first data D1 and the second data D2 occur, and recording T in the first data D1 and the second data D2.
  • the step of adjusting the sampling frequency and sampling accuracy of the first sensor S1 and the second sensor S2 according to the motion category attribute data D4.
  • the step of interpolating the first data D1 and the second data D2 according to a predetermined scale.
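  The interpolation of D1 and D2 onto one predetermined scale might look like the following sketch, which linearly resamples a sampled stream onto a fixed number of evenly spaced points (the output length is an assumed parameter):

```python
def resample(samples, n_out):
    """Linearly interpolate a list of samples onto n_out evenly spaced
    points, so that D1 and D2 share one predetermined scale."""
    if len(samples) == 1:
        return [samples[0]] * n_out
    out = []
    step = (len(samples) - 1) / (n_out - 1)
    for i in range(n_out):
        pos = i * step
        lo = int(pos)                       # lower neighbour index
        hi = min(lo + 1, len(samples) - 1)  # upper neighbour index
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```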
  • the first sensor S1 is disposed at a wrist, an ankle, a joint, and/or a striking position of the user.
  • the step of extracting the motion feature data of the motion according to the motion category attribute data, and recording the motion category attribute data D4.
  • the motion category attribute data D4 includes, but is not limited to: motion rule data and the corresponding motion intensity data, motion level data, motion amplitude data, injury degree data, duration data, physical energy consumption degree data, physiological data and/or competition rules data.
  • the rules of the exercise include but are not limited to: free combat, standing fighting, unrestricted fighting, MMA, UFC, Sanda, martial arts, Tai Chi, Muay Thai, kick boxing, K1 rules, fencing, judo, wrestling, track and field, gymnastics, and ball games.
  • the user has personal profile data D5, including but not limited to: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical sports records, historical competition results, typical sports data, strong sports project data, weak sports project data, voiceprint data, image data, and video data.
  • the motion sensor includes, but is not limited to, an angular velocity sub-sensor, an acceleration sub-sensor, and a magnetic sub-sensor, and the axis system includes at least the three X, Y, and Z axes.
  • the present invention includes but is not limited to the following improvement measures and combinations thereof:
  • the data formatting step is performed on the associated data D3 according to data contents including, but not limited to, sampling type, sampling frequency, sampling precision, and data format.
  • the step of decomposing the action sequence into action units and calculating the unit data D3-U.
  • 1001 is the associated data D3, which is formatted into data 1002 and decomposed by action into unit data 1004, that is, D3-U.
  • the unit data D3-U (1004) is decomposed into an angular velocity (gyroscope) sensor data group 1015 and an acceleration sensor data group 1025, where 1016 is one of the collection points of group 1015 and 1026 is one of the collection points of group 1025.
  • angular velocity sensor group 1015 is mapped to g-image 1018, and collection points 1016 in group 1015 are mapped to pixel points 1017 in g-image 1018; acceleration sensor group 1025 is mapped to a-image 1028, and collection points 1026 in group 1025 are mapped to pixel points 1027 in a-image 1028.
  • each collection point corresponds to one pixel in the moving image or channel, and the X, Y, and Z triaxial data of the collection point is used as the RGB three-primary-color data of the pixel, or as the argument of the channel data, to establish the XYZ-to-RGB mapping.
  • angular velocity sensor group 1015 is mapped to g-image 1018, and collection points 1016 in group 1015 are mapped to pixel points 1017 in g-image 1018; acceleration sensor group 1025 is mapped to c-channel 1038, and collection points 1026 in group 1025 are mapped to pixel points 1037 in c-channel 1038.
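  The XYZ-to-RGB data imaging described above can be sketched as follows; the signed full-scale value and the image width are illustrative assumptions, not parameters from the specification:

```python
def xyz_to_rgb(x, y, z, full_scale=2048):
    """Map one triaxial collection point to one RGB pixel: each signed
    axis value in [-full_scale, full_scale) becomes one 8-bit channel."""
    def chan(v):
        v = max(-full_scale, min(full_scale - 1, v))   # clamp to range
        return (v + full_scale) * 255 // (2 * full_scale - 1)
    return (chan(x), chan(y), chan(z))

def samples_to_image(samples, width=16):
    """Lay triaxial samples out row-major as a width-column pixel grid,
    one pixel per collection point (as in groups 1015 and 1025)."""
    pixels = [xyz_to_rgb(*s) for s in samples]
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]
```

Once sensor data has this image form, off-the-shelf image classification pipelines can be applied to it, which is the point of the data imaging idea.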
  • an artificial intelligence image recognition and classification algorithm is used to perform deep learning on a plurality of the moving image data, and feature data including motion type features, action type features, pressure magnitude features, and user identification features is summarized and calculated.
  • the step of performing image deep learning comparison on the feature data.
  • the step of adapting the multi-image mapping and the single-image mapping into image and video files, which facilitates displaying the images and reconstructing image and video files viewable by the human eye.
  • one method of reconstructing the image and video files is to calculate and add a header, that is, 1119, 1129, and 1139 in FIG.
  • the artificial intelligence algorithm includes, but is not limited to, an artificial neural network algorithm, a Convolutional Neural Networks (hereinafter CNNs) algorithm, a Recurrent Neural Networks (hereinafter RNN) algorithm, a Deep Neural Network (hereinafter DNN) algorithm, a Support Vector Machine (SVM) algorithm, a genetic algorithm, an ant colony algorithm, a simulated annealing algorithm, a particle swarm algorithm, and a Bayes algorithm.
  • CNNs: Convolutional Neural Networks
  • RNN: Recurrent Neural Networks
  • DNN: Deep Neural Network
  • SVM: Support Vector Machine
  • the present invention includes but is not limited to the following improvement measures and combinations thereof:
  • the artificial intelligence algorithm is used to perform three-dimensional vector synthesis of motion actions to obtain a three-dimensional vector.
  • the step of identifying the motion action in the video image D6 according to the three-dimensional vectorized data D7 and the motion category attribute data D4, and marking the identified action in the video image D6 in synchronization with the frames before and after the action.
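  A minimal sketch of the three-dimensional vector synthesis step, combining triaxial components into a magnitude and a unit direction (an illustration of the idea, not the patent's exact algorithm):

```python
import math

def synthesize(x, y, z):
    """Combine triaxial sensor components into (magnitude, unit direction)."""
    mag = math.sqrt(x * x + y * y + z * z)
    if mag == 0.0:
        return 0.0, (0.0, 0.0, 0.0)
    return mag, (x / mag, y / mag, z / mag)
```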
  • the game includes, but is not limited to, single-player training, single-player races, and multiplayer confrontation competitions.
  • the present invention includes but is not limited to the following improvement measures and combinations thereof:
  • the step of obtaining, through learning, the coach's association result D3-AI1 and confidence result D3-AI2, and updating the learning profile in the coach user's profile data D5.
  • the step of cyclically comparing the student's association result D3-AI1 with the coach's association result D3-AI1, and the student's confidence result D3-AI2 with the coach's confidence result D3-AI2.
  • the present invention includes but is not limited to the following improvement measures and combinations thereof:
  • the single sensor user identification step: the artificial intelligence algorithm is adopted to identify the user according to the first data D1 and the association result D3-AI1, the confidence result D3-AI2, and/or the three-dimensional vectorized data D7.
  • the artificial intelligence algorithm is used to identify the habit action user identification step of the user according to the first data D1 and the custom action feature data.
  • the voiceprint user identification step: the artificial intelligence algorithm is used to identify the user's voiceprint feature according to the voice data and the voiceprint feature data.
  • the dual sensor user identification step: the artificial intelligence algorithm is adopted to identify the user according to the first data D1, the association result D3-AI1, the confidence result D3-AI2, and the three-dimensional vectorized data D7.
  • the single sensor motion recognition step: the artificial intelligence algorithm is used to recognize the motion category attribute data D4 according to the user's first data D1, association result D3-AI1, confidence result D3-AI2, and/or the three-dimensional vectorized data D7.
  • the dual sensor motion recognition step: the artificial intelligence algorithm is adopted to recognize the motion category attribute data D4 according to the user's first data D1, association result D3-AI1, confidence result D3-AI2, and the three-dimensional vectorized data D7.
  • the artificial intelligence algorithm is used to identify an action feature action identifying step of the motion category attribute data D4 according to the first data D1 and the action feature data.
  • the step of calculating the pressure data generated by the user's striking action according to the image deep learning step and the calibration data D8.
  • the present invention includes but is not limited to the following improvement measures and combinations thereof:
  • the step of using the artificial intelligence algorithm to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user during game training of a plurality of users.
  • the present invention includes but is not limited to the following improvement measures and combinations thereof:
  • the first sensor S1 and the second sensor S2 communicate with one or more fixed and/or mobile terminals to calculate the spatial coordinates of the first sensor S1 and the second sensor S2.
  • the fixed terminal and the mobile terminal include: a micro base station, a PC, and a smart phone.
  • connection manner of the sensing network includes a wired mode and a wireless mode.
  • the present invention includes but is not limited to the following improvement measures and combinations thereof:
  • the roll call step: the motion information system searches for the user wearing the first sensor S1 and sends roll call information to the user; the first sensor S1 worn by the user responds after receiving it.
  • the user who wears the first sensor S1 sends registration information to the motion information system through the first sensor S1, and obtains a response, thereby implementing the registration step.
  • the positioning step is implemented by the motion information system through the one or more terminals for the user wearing the first sensor S1.
  • the abnormality alarm step: the first sensor S1 sends alarm information to the motion information system according to an abnormal value of the first data D1.
  • the communication between the motion information system and the first sensor S1 is implemented by a sensor network, and the abnormal value includes an alarm trigger condition preset by the user and/or the motion information system.
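  The abnormal-value alarm above reduces to checking D1 fields against preset trigger bounds; this sketch uses an assumed heart-rate bound for illustration (the field names and limits are hypothetical, and would in practice be preset by the user and/or the motion information system):

```python
# Illustrative alarm trigger conditions: (low, high) bounds per D1 field.
TRIGGERS = {"heart_rate": (40, 180)}  # beats per minute (assumed values)

def check_abnormal(d1):
    """Return alarm messages for any first-data D1 field outside its bounds."""
    alarms = []
    for field, (low, high) in TRIGGERS.items():
        value = d1.get(field)
        if value is not None and not (low <= value <= high):
            alarms.append(f"ALARM {field}={value} outside [{low}, {high}]")
    return alarms
```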
  • a system for monitoring motion data comprising: a first sensor S1, a terminal and a motion information system; the first sensor S1 is connected to the terminal, and the terminal is connected to the motion information system.
  • the present invention further includes, but is not limited to, the following contents and combinations thereof:
  • the method further includes: a second sensor S2, a video image sensor S3; the second sensor S2 and the video image sensor S3 are respectively connected to the terminal.
  • the present invention further includes, but is not limited to, the following contents and combinations thereof:
  • the first sensor S1 is composed of a processor and a motion sensor, a physiological sensor, a pressure sensor, a user number generator, and a geographic coordinate sensor; wherein the motion sensor, the physiological sensor, the pressure sensor, and the user The number generator, the geographic coordinate sensor are respectively connected to the processor, and the processor is connected to the terminal.
  • the second sensor S2 includes a pressure sensor and a position sensor.
  • the manner in which the terminal and the motion information system are connected includes a wired connection and a wireless sensor network connection.
  • the manner in which the processor and the terminal are connected includes a wired connection and a wireless sensor network connection.
  • the motion sensor includes a three-axis angular velocity sensor, a three-axis acceleration sensor, a three-axis magnetic sensor, an electronic compass sensor, a speed sensor, a motion direction sensor, a displacement sensor, a trajectory sensor, a light sensor, and combinations thereof.
  • the physiological sensor includes a blood oxygen sensor, a blood pressure sensor, a pulse sensor, a temperature sensor, a sweating degree sensor, a sound sensor, and a light sensor.
  • the pressure sensor includes: a force sensor, a pressure sensor, a momentum sensor, and an impulse sensor.
  • the position sensor includes: a space position sensor, a space coordinate sensor, a light sensor, and a camera.
  • the user number generator includes: a user number storage edit sending module.
  • the geographic coordinate sensor includes: a navigation satellite positioning module.
  • the video image sensor is a visible light and/or invisible light camera.
  • the motion category attribute data D4 includes: motion rule data and the corresponding motion intensity data, motion level data, motion amplitude data, injury degree data, duration data, physical energy consumption degree data, physiological degree data, and game rules data.
  • the exercise rules include at least: free combat, standing fighting, unlimited fighting, MMA, UFC, Sanda, martial arts, Tai Chi, Muay Thai, kick boxing, K1 rules, fencing, judo, wrestling, track and field, gymnastics, and ball games.
  • the user has personal profile data D5, including: the user's height, weight, body measurements, wingspan, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical sports records, historical competition results, typical sports data, strong sports project data, weak sports project data, voice data, voiceprint data, image data, and video data.
  • the present invention further includes, but is not limited to, the following contents and combinations thereof:
  • the sensing network includes a fixed terminal and a mobile terminal, and the terminal includes a micro base station, a mobile phone, and a PC; and the connection manner of the sensing network includes a wired mode and a wireless mode;
  • the micro base station includes: one or more downlink interfaces, a processor, a power subsystem, and an uplink interface. The downlink interfaces are connected to the processor, and the processor is connected to the uplink interface; the power subsystem provides power for the downlink interfaces, the processor, and the uplink interface. The downlink interfaces are connected to the first sensor S1, the second sensor S2, and the video image sensor S3 through a wireless sensor network, and the uplink interface communicates with the motion information system over a wired or wireless network.
  • the motion information system includes a terminal unit and a cloud system that communicate with each other; the terminal unit and the terminal are integrated or separately, and the cloud system is disposed in a network cloud.
  • the target includes a combat target, a ball, a racquet, and a sports apparatus, and use of the combat target includes striking it with a fist, a foot, or another body part.
  • the present invention further includes, but is not limited to, the following contents and combinations thereof:
  • cloud center software and application software, among which:
  • the application software running on the terminal completes the connection, collection, and processing of the first data D1, the second data D2, the motion category attribute data D4, the personal profile data D5, and the video data D6, completes user interaction, and assists in generating the associated data D3, updates to the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
  • the function of transmitting data to the cloud center to form big data is completed by the application software running on the terminal.
  • the functions of the learning, the training, the user identification, the motion recognition, and the pressure recognition are performed by the application software running on the terminal in conjunction with the cloud center software.
  • the motion information system includes the application software and the cloud center software.
  • the motion information systems of the plurality of users communicate with each other and complete the interactive steps.
  • the present invention has the following beneficial effects:
  • Figure 1 is a system diagram
  • Figure 2 is a first structural view of the first sensor
  • Figure 3 is a second structural view of the first sensor
  • Figure 4 is a third structural view of the first sensor
  • Figure 5 is a first structural view of the second sensor
  • Figure 6 is a second structural view of the second sensor
  • Figure 7 is a first structural view of the unit sensor network
  • Figure 8 is a second structural view of the unit sensor network
  • Figure 9 is a structural diagram of the micro base station
  • Figure 10 is the first of the data image mappings
  • Figure 11 is the second of the data image mappings
  • Figure 12 is the third of the data image mappings
  • Figure 13 is the fourth of the data image mappings.
  • the combat training system is mainly used for combat sports users.
  • the system includes sensors 104, 105 to 10n, and 10n+1 to 10m+1, a terminal 101, and a combat information system 2 (103).
  • the sensors include motion sensors, physiological sensors, a user number generator, a geographic coordinate sensor, pressure sensors, and the like; the terminal further includes combat information system 1 (102).
  • the smallest unit is defined as a motion detection group, including:
  • the four first sensors S1 are 104, 105, 106, and 107, respectively, together with one terminal 101 composed of a micro base station that includes combat information system 1 (102).
  • the four first sensors S1 are connected to the micro base station, and the micro base station is connected to combat information system 2.
  • the four first sensors S1 are worn on the user's wrists and ankles. One of them is a variant with a physiological sensor, a motion sensor, and a user number generator, as shown in FIG. 3; the other three are variants with only a motion sensor and a user number generator, without a physiological sensor, as shown in FIG. 4.
  • the motion sensor is a variant with a three-axis gyroscope and a three-axis acceleration sensor, and the physiological sensor is a pulse sensor.
  • the sampling frequency of the motion sensor is set to 10 frames/second to 200 frames/second, and the heart rate sensor is set to collect once every minute.
  • the sampling accuracy is 8 to 16 bits.
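  Those sampling settings imply a raw per-sensor payload rate of frames × channels × bits per second; a back-of-envelope sketch, where six channels assumes the three gyroscope axes plus the three accelerometer axes named above:

```python
def raw_bitrate(frames_per_s, channels=6, bits=16):
    """Raw payload bit rate of one motion sensor: frames/s x channels x bits.
    channels=6 assumes 3 gyroscope axes + 3 accelerometer axes."""
    return frames_per_s * channels * bits

# At the extremes of the stated sampling range:
low = raw_bitrate(10, bits=8)     # 10 frames/s at 8-bit accuracy
high = raw_bitrate(200, bits=16)  # 200 frames/s at 16-bit accuracy
```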
  • a second sensor S2 which is connected to the micro base station as shown in FIG.
  • the second sensor S2 is composed of a matrix film pressure sensor and has a pressure and position detecting circuit.
  • the range can be divided into several pressure/strike levels such as 50 kg, 200 kg, and 500 kg.
  • the second sensor can be selected for different pressure levels and mounting styles, depending on the shape of the target.
  • four HD cameras can also be equipped as the video image sensor S3, connected to the micro base station to complete the image acquisition function.
  • the micro base station includes: 9 downlink interfaces, a processor, a power subsystem, and an uplink interface. The 9 downlink interfaces are connected to the processor, and the processor is connected to the uplink interface; the power subsystem provides power for the downlink interfaces, the processor, and the uplink interface. The downlink interfaces communicate with the four first sensors S1, the second sensor S2, and the four video image sensors S3 through the wireless sensor network, and the uplink interface communicates with the combat information system through the fiber-optic cable network.
  • the micro base station aggregates the signals of the above sensors and connects them to the combat information system through the optical fibers.
  • the equipped striking sensor S2 has two main functions:
  • one is to cooperate with the first sensor in correlating and calibrating the hit data: when the user hits the target multiple times, the system simultaneously measures the angular velocity and acceleration data of the first sensor S1 and the striking force data of the second sensor S2, and establishes, based on Newton's laws of motion, the correspondence between the angular velocity and acceleration data of the multiple strikes and the striking force data of the second sensor.
  • thereafter, the user needs only the motion sensor rather than the pressure sensor, and the striking force data is converted from the user's angular velocity and acceleration data at the moment of the strike.
  • installing a pressure sensor is cumbersome, since it must be mounted on a surface such as the fist, which limits the usable scenarios; this method eliminates the pressure sensor through indirect measurement, greatly facilitating the user's use.
  • the other is that the user's striking force data is directly measured by the second sensor S2.
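  The indirect force conversion in the first function amounts to fitting an effective striking mass from the paired S1/S2 calibration strikes and then applying F = m * a. A hedged sketch follows; the numbers are illustrative, and a real calibration would be more elaborate than this origin-constrained least squares:

```python
def calibrate_mass(peak_accels, forces):
    """Fit an effective striking mass m from paired calibration strikes,
    F = m * a, by least squares through the origin."""
    num = sum(a * f for a, f in zip(peak_accels, forces))
    den = sum(a * a for a in peak_accels)
    return num / den

def estimate_force(mass, peak_accel):
    """Convert a strike's peak acceleration (m/s^2) to force (N), F = m * a."""
    return mass * peak_accel

# Example: three calibration strikes (accelerations paired with measured
# forces), then an uninstrumented strike converted via the fitted mass.
m = calibrate_mass([50.0, 80.0, 120.0], [150.0, 240.0, 360.0])
```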
  • the server, which is equipped with a GPU graphics card, provides image computing, big data, and cloud services to the system.
  • the first sensor S1 worn by one user constitutes a unit sensor network
  • a plurality of target devices constitute a unit sensor network
  • the unit sensor networks constitute a personal sensor network or a location sensor network, which is then connected to the combat information system.
  • the first sensor S1 is composed of a processor and a motion sensor, a physiological sensor, and a pressure sensor.
  • the motion sensor, the physiological sensor, and the pressure sensor are respectively connected to the processor, and the processor and the micro base station terminal are connected.
  • the manner in which the micro base station terminal and the combat information system are connected includes a wired connection and a wireless sensor network connection
  • the manner in which the processor and the terminal are connected includes a wired connection and a wireless sensor network connection.
  • the motion sensor includes a three-axis angular velocity sensor, a three-axis acceleration sensor, and a three-axis magnetic sensor.
  • Physiological sensors include: a pulse sensor, a temperature sensor, and a sound sensor.
  • the pressure sensor includes: a matrix film pressure sensor.
  • the position sensor includes: a space coordinate sensor.
  • the video image sensor is a visible light camera.
  • the terminal includes: a micro base station, a smart phone, and a PC.
  • the sport type attribute data D4 includes, but is not limited to, motion rule data and exercise intensity data corresponding to the exercise rule data, exercise level data, exercise amplitude data, damage degree data, persistence data, physical energy consumption degree data, physiological degree data, Match rule data.
  • the rules of exercise include but are not limited to: free combat, standing fighting, unlimited fighting, MMA, UFC, Sanda, martial arts, Tai Chi, Muay Thai, kickboxing, K1 rules, fencing, judo, wrestling, track and field, gymnastics, and ball sports.
  • the user has personal profile data D5, which includes but is not limited to: the user's height, weight, body measurements, wingspan, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical sports data, strong sports data, weak sports data, voice data, voiceprint data, image data, and video data.
  • the combat information system includes a terminal unit and a cloud system that communicate with each other; the terminal unit and the terminal are integrated or discretely arranged, and the cloud system is disposed in the network cloud.
  • the application software running on the terminal, connected to the user, completes the connection, collection, and processing of the user, the first data D1, the second data D2, the sport type attribute data D4, the user profile data D5, and the video data D6, completes user interaction, and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
  • the function of transmitting data to the cloud center to form big data is completed by the application software running on the terminal.
  • the application software running on the terminal cooperates with the cloud center software to complete the functions of learning, training, user identification, motion recognition, and pressure recognition.
  • the cloud center software running in the cloud center is responsible for big data processing including deep learning, data mining, classification algorithms, and artificial intelligence processing; for generating the associated data D3, video data D6, and calibration data D8; for updating D5; and for cloud center computing, cloud center management, and communication with the application software.
  • the sports information system includes application software and cloud center software.
  • An application software connection manages one user to form a combat information system; multiple application software connections manage multiple users to form multiple combat information systems.
  • the system is connected by a micro base station and two bracelets, two foot loops and one second sensor.
  • the communication is through the BLE Bluetooth low power protocol or the WIFI protocol.
  • by analogy, other WSN protocols can also be used for communication with the micro base station.
  • the collected data of the above five sensors are transmitted to the cloud database of the combat information system.
  • the above five sensors synchronize their collected data by time-stamping against the system clock, yielding the user's motion data, and cooperate with the cloud center configuration to realize the functions of the combat information system.
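The timestamp-mode synchronization of the five sensor streams can be sketched as follows; this is a minimal illustration, not the patented implementation, and the function name, tolerance value, and data layout are assumptions:

```python
from bisect import bisect_left

def align_by_timestamp(streams, tolerance_ms=20):
    """Group samples from several sensor streams onto common time stamps.

    streams: dict name -> sorted list of (timestamp_ms, value).
    Returns a list of (timestamp_ms, {name: value}) rows, keeping only
    ticks for which every stream has a sample within `tolerance_ms`.
    """
    names = list(streams)
    ref = streams[names[0]]            # first stream provides reference ticks
    rows = []
    for t, _ in ref:
        row = {}
        for name in names:
            ts = [s[0] for s in streams[name]]
            i = bisect_left(ts, t)
            best = None
            # nearest neighbour among positions i-1 and i
            for j in (i - 1, i):
                if 0 <= j < len(ts) and abs(ts[j] - t) <= tolerance_ms:
                    if best is None or abs(ts[j] - t) < abs(ts[best] - t):
                        best = j
            if best is None:
                row = None             # this tick has no match in some stream
                break
            row[name] = streams[name][best][1]
        if row is not None:
            rows.append((t, row))
    return rows
```

A shared clock (here, millisecond system time) is what allows the wristband, foot-loop, and target-sensor samples to be grouped into one synchronized record.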
  • the configuration running on the mobile phone completes the connection, collection, and processing of the user, the first data D1, the second data D2, the motion type attribute data D4, the user profile data D5, and the video data D6, completes user interaction, and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
  • the configuration completed by running on the mobile phone includes the function of transmitting data to the cloud center to form big data.
  • the functions of learning, training, user identification, motion recognition and pressure recognition are configured by the configuration running on the mobile phone in conjunction with the cloud center configuration.
  • the cloud center configuration running in the cloud center is responsible for big data processing including deep learning, data mining, classification algorithms, and artificial intelligence processing; for generating the associated data D3, video data D6, and calibration data D8; for updating D5; and for cloud center computing, cloud center management, and communication with the terminal application configuration.
  • the sports information system includes terminal application configuration and cloud center configuration.
  • An application configuration connection manages a user to form a motion information system; a plurality of application configuration connections manage multiple users to form a plurality of motion information systems.
  • the first data D1 is monitored by the first sensor S1 (2 wristbands and 2 foot loops) provided on the user's body, and the first data D1 is transmitted to the combat information system using the sensor network. At the same time, the first data D1 is processed.
  • the second data D2 is monitored by the second sensor S2 disposed on the target when the user strikes the target. While the user hits the target, the first data D1 and the second data D2 are acquired simultaneously in chronological order, and the associated data D3 is generated. The second data D2 and the associated data D3 are transmitted to the combat information system using the sensor network.
  • the users here include: student users, coach users, and opponent users.
  • the sensing network includes a terminal, and the terminal includes a fixed terminal and a mobile terminal, including a micro base station, a smart phone, and a PC.
  • the target includes targets such as a dummy, a sandbag, a hand target, a foot target, and a wall target.
  • the use of combat targets includes the impact of the punches, feet, and body parts on the target.
  • the user motion data is collected by the motion sensor in the first sensor S1
  • the physiological data of the user is collected by the physiological sensor in S1
  • the pressure sensor in S1 is used to collect the pressure data when the user hits the target or strikes an opponent.
  • the second sensor S2 disposed on the target device monitors the second data D2 when the user hits the target, uses the pressure sensor in S2 to collect the pressure data when the user hits the target, and uses the position sensor in S2 to collect the target when the user hits the target. Location data.
  • All of the first sensors S1 worn by one user are connected to the personal sensor network, the location sensor network, and the combat information system using the unit sensing network.
  • All of the second sensors S2 equipped with a set of target devices are connected to the personal sensor network, the location sensor network, and the combat information system using the unit sensing network.
  • the system time value T at which the first data D1 and the second data D2 are generated is collected and recorded into the first data D1 and the second data D2.
  • A/D conversion is performed on the first data D1 and the second data D2.
  • sampling frequency and sampling accuracy of S1 and S2 are adjusted according to the motion type attribute data D4.
  • the first data D1 and the second data D2 are interpolated according to a predetermined scale, and the first data D1 and the second data D2 are merged into the associated data D3.
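The interpolate-and-merge step can be sketched with NumPy; the uniform 50 Hz scale, the function name, and the single-channel streams are illustrative assumptions:

```python
import numpy as np

def merge_to_d3(t1, d1, t2, d2, scale_hz=50):
    """Resample D1 and D2 onto one predetermined time scale and merge.

    t1, t2: sample times in seconds; d1, d2: 1-D value arrays.
    Returns (t, D3) where D3 stacks the interpolated streams column-wise.
    """
    t1, d1, t2, d2 = map(np.asarray, (t1, d1, t2, d2))
    start = max(t1[0], t2[0])          # overlap of the two recordings
    stop = min(t1[-1], t2[-1])
    t = np.arange(start, stop, 1.0 / scale_hz)   # common ticks
    d1i = np.interp(t, t1, d1)         # linear interpolation of D1
    d2i = np.interp(t, t2, d2)         # linear interpolation of D2
    return t, np.column_stack([d1i, d2i])
```

Interpolating both streams onto the same predetermined scale is what makes the per-tick merge into associated data D3 well defined even when S1 and S2 sample at different rates.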
  • S1 is set at the user's wrist, ankle, joint, and striking position.
  • the artificial intelligence algorithm is used to summarize and extract the user's custom action feature data from the user's motion data, and record it into the user's profile data D5.
  • the artificial intelligence algorithm is used to summarize and extract the user's voiceprint feature data from the user's voice data, and record it into the user's profile data D5.
  • the artificial intelligence algorithm is used to summarize and extract the motion feature data of the sport from the motion category attribute data, and record it into the motion category attribute data D4.
  • the sport type attribute data D4 includes, but is not limited to, motion rule data and the corresponding exercise intensity data, exercise level data, exercise amplitude data, damage degree data, persistence data, physical energy consumption data, physiological data, and match rule data.
  • the rules of exercise include at least but are not limited to: free combat, standing fighting, unrestricted fighting, MMA, UFC, Sanda, martial arts, Tai Chi, Muay Thai, kickboxing, K1 rules, fencing, judo, wrestling, track and field, gymnastics, and ball sports.
  • the user has personal profile data D5, including but not limited to: the user's height, weight, body measurements, wingspan, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical game results, typical sports data, strong sports data, weak sports data, voice data, voiceprint data, image data, and video data.
  • the motion sensor includes an angular velocity sub-sensor, an acceleration sub-sensor, a magnetic sub-sensor, and the shafting includes, but is not limited to, an XYZ triaxial.
  • the data is formatted for the associated data D3.
  • the decomposition action sequence is an action unit, and the unit data D3-U is calculated.
  • the unit data D3-U is mapped to a moving image: following the acquisition sequence, each acquired tri-axial motion sensor sample in the unit data D3-U forms a group, and each group is mapped to one pixel of the moving image (point mapping).
  • the data collected by each sub-sensor of the X-axis, the Y-axis, and the Z-axis of the motion sensor in the mapping unit data D3-U is a moving image, and each sub-sensor is mapped to a pixel point in the corresponding moving image.
  • the collected data of one of the motion sensors in the mapping unit data D3-U is a moving image
  • the collected data of the other sub-sensors is a channel of the moving image
  • each sub-sensor is mapped to a corresponding moving image.
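The point mapping described above can be realized as follows; the image width, the ±2000 full-scale range (typical of a gyroscope reporting deg/s), and the X/Y/Z-to-RGB channel assignment are assumptions for illustration, not values fixed by the patent:

```python
import numpy as np

def unit_to_image(d3u, width=16, full_scale=2000.0):
    """Map unit data D3-U (N x 3 tri-axial samples) to a moving image.

    Each sample becomes one pixel; the X, Y, Z axes fill the three
    colour channels, scaled from [-full_scale, +full_scale] to 0..255.
    """
    d3u = np.asarray(d3u, dtype=float)
    n = len(d3u)
    height = -(-n // width)                       # ceiling division
    img = np.zeros((height * width, 3), dtype=np.uint8)
    scaled = np.clip((d3u + full_scale) / (2 * full_scale) * 255, 0, 255)
    img[:n] = scaled.astype(np.uint8)             # trailing pixels stay black
    return img.reshape(height, width, 3)
```

Once motion data is in image form, off-the-shelf image classification networks (e.g. CNNs, as the document names) can be applied to it directly.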
  • the artificial intelligence image recognition and classification algorithm is used to perform deep learning on a plurality of moving image data, summarizing and computing feature data including but not limited to motion type features, action type features, pressure magnitude features, and user identification features. When the next associated data D3 arrives, image deep learning is computed by comparison against this feature data.
  • the multi-image mapping and the single-image mapping are adapted into image and video files, which is convenient for image display and video reconstruction for human viewing.
  • Artificial intelligence algorithms include but are not limited to: artificial neural network algorithm, CNNs algorithm, RNN algorithm, SVM algorithm, genetic algorithm, ant colony algorithm, simulated annealing algorithm, particle swarm algorithm, Bayes algorithm.
  • motion recognition is realized by first establishing an action feature library and then querying it.
  • to establish the action feature library, users with standard form are first selected to wear the first sensor S1 and perform various actions; the resulting action data and action name data are analyzed with artificial intelligence methods including but not limited to CNNs and SVM algorithms, and the extracted action features are recorded as the action feature library in the cloud center database.
  • to query, CNNs and SVM algorithms (among others) are used to obtain the feature data of an action; the feature data is then used to search the cloud center's action feature library, the action entry with the highest similarity is determined, and its action code is retrieved, realizing motion recognition.
  • for user identification, the user's action data is first obtained; CNNs and SVM algorithms (among others) extract the user's behavioral features, which are then searched against the cloud center database to determine the entry with the highest similarity, thereby realizing user identification.
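The build-then-query flow can be illustrated with a toy feature library; the class and method names are assumptions, cosine similarity stands in for the similarity measure, and a production system would store learned CNN/SVM feature vectors rather than raw ones:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class ActionLibrary:
    """Minimal action feature library: store labelled feature vectors,
    then return the action name with the highest similarity to a query."""

    def __init__(self):
        self.entries = []                 # list of (name, feature_vector)

    def add(self, name, feature):
        self.entries.append((name, feature))

    def query(self, feature):
        # highest-similarity entry realizes the recognition step
        return max(self.entries, key=lambda e: cosine(e[1], feature))[0]
```

The same library shape serves both motion recognition (entries keyed by action name) and user identification (entries keyed by user), as the surrounding bullets describe.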
  • four video image sensors S3 capture four channels of video D6 of the users' match.
  • the four video image sensors S3 communicate with the combat information system through the sensor network.
  • an artificial intelligence algorithm is used to perform three-dimensional vector synthesis of the motion action, and the three-dimensional vectorized data D7 is obtained.
  • the three-dimensional vectorized data D7 is associated with the second data D2, the associated data D3, the sport type attribute data D4, and the profile data D5.
  • the artificial intelligence algorithm is used to identify the motion actions in the video image D6 according to the three-dimensional vectorized data D7 and the motion type attribute data D4, and to mark and synchronize the start and end points of each motion in the video image D6.
  • the training includes single-person training, single-handed routines, and multiplayer competitions.
  • the coach user strikes the target with standard actions according to the sports category attribute data D4, yielding the coach's associated data D3; machine learning is performed on the coach's associated data D3 according to the artificial intelligence algorithm, yielding the coach's association result D3-AI1 and confidence result D3-AI2, and the coach user's profile data D5 is updated (coach learning).
  • the student user strikes the target according to the sport type attribute data D4, yielding the student's associated data D3; machine learning is performed on the student's associated data D3 according to the artificial intelligence algorithm, yielding the student's association result D3-AI1 and confidence result D3-AI2, and the student user's profile data D5 is updated (self-training).
  • the student's association result D3-AI1 is cyclically compared with the coach's association result D3-AI1, and the student's confidence result D3-AI2 with the coach's confidence result D3-AI2.
  • the student's strengths, weaknesses, and gaps are calculated and analyzed, the student's personal profile data D5 is updated, and strength and weakness measures are computed to generate and output training suggestion information.
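The student-versus-coach comparison can be sketched as a per-action score comparison; the 0.8 threshold and the dictionary representation of D3-AI1 scores are illustrative assumptions:

```python
def training_gaps(coach, student, threshold=0.8):
    """Compare per-action scores; return actions where the student's
    score falls below `threshold` of the coach's score (weaknesses),
    and those at or above it (strengths).

    coach, student: dict action_name -> score (e.g. D3-AI1 values).
    """
    weak, strong = [], []
    for action, c_score in coach.items():
        s_score = student.get(action, 0.0)      # missing action counts as 0
        (strong if s_score >= threshold * c_score else weak).append(action)
    return {"strengths": strong, "weaknesses": weak}
```

The "weaknesses" list is the raw material for the training suggestion information the bullet above describes.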
  • single-sensor user identification: the artificial intelligence algorithm identifies the user according to the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D7.
  • custom-action user identification: the artificial intelligence algorithm identifies the user according to the first data D1 and the custom action feature data.
  • voiceprint user identification: the artificial intelligence algorithm identifies the user according to the voice data and the voiceprint feature data.
  • dual-sensor user identification: the artificial intelligence algorithm identifies the user according to the first data D1, the second data D2, the association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D7.
  • single-sensor motion recognition: the artificial intelligence algorithm identifies the motion category attribute data D4 according to the user, the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D7.
  • dual-sensor motion recognition: the artificial intelligence algorithm identifies the motion category attribute data D4 according to the user, the first data D1, the second data D2, the user's association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D7.
  • motion-feature motion recognition: an artificial intelligence algorithm identifies the motion type attribute data D4 according to the first data D1 and the motion feature data.
  • the pressure data generated by the user's striking action is calculated based on the image depth learning step and the calibration data D8.
  • the user strikes the target, and according to a Newtonian mechanics algorithm, the angular velocity and acceleration data from the first sensor S1 and the pressure data from the second sensor S2 are used to establish the acceleration-pressure correlation D8.
  • the pressure is recognized from the first data D1 using the acceleration-pressure correlation D8.
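The acceleration-pressure correlation D8 can be illustrated as a linear fit, consistent with Newton's second law F = m·a (the fitted slope acting as an effective striking mass); the linear model and the function names are assumptions, not the patent's prescribed form:

```python
import numpy as np

def fit_calibration(peak_accel, measured_force):
    """Fit a linear acceleration-to-force relation F ~ k*a + b (D8).

    During calibration the user strikes the target: peak_accel comes
    from S1, measured_force from the pressure sensor in S2.
    """
    k, b = np.polyfit(peak_accel, measured_force, 1)
    return k, b

def estimate_force(peak_accel, calibration):
    """After calibration, force is recognized from D1 alone via D8."""
    k, b = calibration
    return k * peak_accel + b
```

This is the sense in which the pressure sensor can later be "eliminated by indirect measurement": once D8 is established, S1's acceleration data alone yields a force estimate.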
  • the artificial intelligence algorithm is used to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user.
  • the corresponding association results D3-AI1 and confidence results D3-AI2 of the plurality of users are compared to obtain match process data, including the strength and number of hits, the degree of damage, knockdown counts and times, and TKO and KO results.
  • the dynamic odds and predicted result data of the game are calculated based on the game process data and output.
  • the first sensor S1 and the second sensor S2 communicate with one or more fixed terminals to calculate absolute data of the spatial coordinates, movement speed, and motion trajectory of the first sensor S1 and the second sensor S2.
  • the first sensor S1 and the second sensor S2 communicate with one or more mobile terminals to calculate relative data of the spatial coordinates, movement speed, and motion trajectory of the first sensor S1 and the second sensor S2.
  • the battle information system result information is processed and displayed by the fixed terminal and the mobile terminal.
  • the result information, the motion action live playback video is transmitted to more than one display device to cause the result information to be displayed in fusion with the live video.
  • the fixed terminal and the mobile terminal include: a micro base station, a PC, and a smart phone.
  • the connection method of the sensor network includes wired mode and wireless mode.
  • the combat information system searches for the user wearing the first sensor S1 and sends roll-call information to the user; the first sensor S1 worn by the user responds after receiving it, thereby realizing roll call.
  • the user who wears the first sensor S1 sends the registration information to the combat information system through the first sensor S1, and obtains the response of the combat information system, thereby realizing the registration.
  • the combat information system sends notification information to the first sensor S1 worn by the user. After receiving the notification information, the first sensor S1 acknowledges the combat information system and displays the notification, vibrates, and plays a voice announcement on the first sensor S1.
  • the user wearing the first sensor S1 is positioned by the combat information system through more than one terminal, using positioning methods including but not limited to a plurality of positioning algorithms.
  • the user wearing the first sensor S1 sends active alarm information to the combat information system according to the user's subjective will (active alarm).
  • the first sensor S1 sends alarm information to the motion information system when the first data D1 reaches an abnormal value (abnormality alarm).
  • the communication between the combat information system and the first sensor S1 is realized through the sensor network, and the abnormal value includes an alarm trigger condition preset by the user and the motion information system.
  • the combat information system can realize functions including positioning, registration, roll call, notification, and alarms for the user, providing technical support for strengthened management.
  • the system comprises: a first sensor S1, a terminal and a combat information system; the first sensor S1 is connected to the terminal, the terminal is connected to the combat information system, and processes data from the first sensor S1.
  • a second sensor S2, a video image sensor S3; a second sensor S2 and a video image sensor S3 are respectively connected to the terminal, and the terminal is connected to the combat information system.
  • the first sensor S1 is composed of a processor and a motion sensor, a physiological sensor, a pressure sensor, a user number generator, and a geographic coordinate sensor.
  • the motion sensor, the physiological sensor, the pressure sensor, the user number generator, and the geographic coordinate sensor are respectively connected to the processor, and the processor and the terminal are connected.
  • the second sensor S2 includes a pressure sensor and a position sensor.
  • the way the terminal and the combat information system are connected includes a wired connection and a wireless sensor network connection, and the way the processor and the terminal are connected includes a wired connection and a wireless sensor network connection.
  • the motion sensor includes a three-axis angular velocity sensor, a three-axis acceleration sensor, a three-axis magnetic sensor, an electronic compass sensor, a speed sensor, a motion direction sensor, a displacement sensor, a trajectory sensor, a light sensor, and combinations thereof.
  • the physiological sensor includes a blood oxygen sensor, a blood pressure sensor, a pulse sensor, a temperature sensor, a sweating degree sensor, a sound sensor, and a light sensor.
  • the pressure sensor includes: a force sensor, a pressure-intensity sensor, a momentum sensor, and an impulse sensor.
  • the position sensor includes: a space position sensor, a space coordinate sensor, a light sensor, and a camera.
  • the user number generator includes: a user number storage editing transmission module.
  • the geographic coordinate sensor includes: a navigation satellite positioning module.
  • the video image sensor is a visible-light or invisible-light camera.
  • the sensor network includes a fixed terminal and a mobile terminal.
  • the terminal includes a micro base station, a smart phone, and a PC; the connection mode of the sensing network includes a wired mode and a wireless mode.
  • the micro base station includes: one or more downlink interfaces, a processor, a power subsystem, and an uplink interface.
  • the one or more downlink interfaces are connected to the processor; the processor is connected to the uplink interface; the power subsystem supplies power to the downlink interfaces, the processor, and the uplink interface; the downlink interfaces communicate with the first sensor S1, the second sensor S2, and the video image sensor S3 through the wireless sensor network; and the uplink interface communicates with the combat information system via a wired or wireless network.
  • the motion information system includes a terminal unit and a cloud system that communicate with each other; the terminal unit and the terminal are integrated or separately, and the cloud system is disposed in the network cloud.
  • Targets include combat targets, balls, racquets, sports equipment, and the use of combat targets includes the impact of punches, feet, and body parts on the target.
  • the application configuration running on the terminal completes the connection, collection, and processing of the user, the first data D1, the second data D2, the motion type attribute data D4, the user profile data D5, and the video data D6, completes user interaction, and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
  • the configuration of the application running on the terminal completes the function of transmitting data to the cloud center to form big data.
  • the function of learning, training, user identification, motion recognition and pressure recognition is completed by the application running on the terminal and the cloud center software.
  • the cloud center configuration running in the cloud center is responsible for big data processing including deep learning, data mining, classification algorithms, and artificial intelligence processing; for generating the associated data D3, video data D6, and calibration data D8; for updating D5; and for cloud center computing, cloud center management, and communication with the application software.
  • the sports information system includes application configuration and cloud center configuration.
  • An application software connection manages one user to form a motion information system; multiple application software connections manage multiple users to form multiple motion information systems.
  • the problem of dynamically measuring impact force in combat using only angular velocity and acceleration sensors is solved, which simplifies implementation and reduces cost.
  • step 4 solves the problem of converting motion data into images, making the data visual and convenient for applying existing artificial intelligence image recognition algorithms.
  • step 6 the artificial intelligence assisted combat coaching function is introduced.
  • the system is mainly used for personal sports user identification, motion recognition, and management. Specifically, the wristband sensor extracts and compares the user's personal motion characteristics, and with the support of cloud big data the user's identity and motion actions are recognized.
  • the first sensor is a wristband. As shown in Figure 3, it contains a motion sensor consisting of a three-axis gyroscope and a three-axis accelerometer.
  • a physiological sensor consisting of a heart rate sensor and a user number generator can also be used, along with geographic coordinate sensors and voice sensors. The sampling frequency of the motion sensor is set to 5 to 50 frames/second, the heart rate sensor collects once per minute, the sampling accuracy is 8 to 16 bits, and the sampling frequency of the voice sensor is set to 8 kHz to 2.8224 MHz.
  • the user's smart phone is connected to the first sensor S1.
  • step 4 uses action features to identify: outdoor running, race walking, walking, and strolling; running and walking on an indoor treadmill; and fake steps, where the sensor is placed on a 'step-boosting gadget' shaker or tied to an animal so that the animal's movement is counted.
  • the rules of exercise include only running, race walking, walking, and strolling, and do not include other sports.
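A minimal threshold-crossing step detector illustrates how accelerometer features can separate genuine gait from shaker-style fake steps; the threshold, refractory gap, and function name are assumptions for illustration:

```python
def count_steps(accel_mag, threshold=11.5, min_gap=10):
    """Count steps as upward threshold crossings of accelerometer magnitude.

    accel_mag: per-sample magnitude in m/s^2 (gravity included);
    a step is a rise above `threshold` at least `min_gap` samples
    after the previous one, rejecting high-frequency shaking such as
    a 'step-boosting gadget' produces.
    """
    steps, last = 0, -min_gap
    for i in range(1, len(accel_mag)):
        crossed = accel_mag[i - 1] < threshold <= accel_mag[i]
        if crossed and i - last >= min_gap:   # refractory period per step
            steps += 1
            last = i
    return steps
```

Real classification of running vs. race walking vs. strolling would add cadence and amplitude features on top of this, as the document's AI-based recognition implies.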
  • the system is connected to the mobile phone and the wristband sensor to obtain the user's motion data, and cooperate with the cloud center's cloud center configuration to realize the function of the motion information system.
  • the APP application running on the mobile phone completes the connection, collection, and processing of the user, the first data D1, the second data D2, the sport category attribute data D4, and the user profile data D5, completes user interaction, and assists in generating the associated data D3.
  • the configuration of the APP application running on the mobile phone completes the function of transmitting data to the cloud center to form big data.
  • the application of the APP application running on the mobile phone cooperates with the cloud center software to complete the functions of learning, training, user identification, motion recognition and pressure recognition.
  • the cloud center software running in the cloud center is responsible for big data processing including deep learning, data mining, classification algorithms, and artificial intelligence processing; for generating the associated data D3; for updating D5; and for the steps of cloud center computing, cloud center management, and communication with the application configuration.
  • the motion recognition information system includes an application configuration and a cloud center configuration.
  • An application configuration connection manages one user to form a motion information system; a plurality of application configuration connections manage multiple users to form a plurality of motion information systems.
  • User motion data is acquired using motion sensors in the first sensor S1.
  • User physiological data, user number data, and geographic coordinate data are collected by the physiological sensor in the first sensor S1.
  • A/D conversion is performed on the first data D1 and the second data D2.
  • the sampling frequency of the first sensor S1 is adjusted according to the motion type attribute data D4 by 5 frames/second to 50 frames/second, and the sampling precision is 8 to 16 bits.
  • the first sensor S1 is disposed at the wrist or the ankle of the user.
  • the artificial intelligence algorithm is used to extract the user's custom action feature data according to the user's motion data, and record it in the user's profile data D5.
  • the artificial intelligence algorithm is used to extract the voiceprint feature data of the user according to the user voice data, and record it into the user's profile data D5.
  • the motion feature data of the motion is extracted based on the motion type attribute data D4 using an artificial intelligence algorithm, and is recorded in the motion type attribute data D4.
  • the rest of the project is the same as the combat training system.
  • the artificial intelligence algorithm performs single-sensor user identification according to the first data D1, the user's association result D3-AI1, and the user's confidence result D3-AI2.
  • custom-action user identification: the artificial intelligence algorithm identifies the user according to the first data D1 and the custom action feature data.
  • voiceprint user identification: the artificial intelligence algorithm identifies the user according to the voice data and the voiceprint feature data.
  • motion-feature motion recognition: an artificial intelligence algorithm identifies the motion type attribute data D4 according to the first data D1 and the motion feature data.
  • the pressure data generated by the user's striking action is calculated based on the image depth learning step and the calibration data D8.
  • the artificial intelligence algorithm is used to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user.
  • the corresponding association results D3-AI1 and confidence results D3-AI2 of the plurality of users are compared, and real-time match process data is obtained.
  • the dynamic odds and predicted result data of the game are calculated based on the game process data and output.
  • the first sensor S1 is communicated with more than one fixed terminal to calculate absolute data of the first sensor S1's own spatial coordinates, motion speed, and motion trajectory.
  • the first sensor S1 is caused to communicate with more than one mobile terminal to calculate relative data of the first sensor S1's own spatial coordinates, motion speed, and motion trajectory.
  • the system is mainly used for personal sports user identification, motion recognition, and management. Specifically, the gyroscope and accelerometer built into the smart phone extract and compare the user's personal motion characteristics, and with the support of cloud big data the user's identity and motion actions are recognized.
  • the mobile terminal captures user data using its own motion sensor, which requires the phone to be held in the hand or worn on the wrist.
  • the same content as the embodiment "motion recognition system - bracelet version” is not described, except that the three-axis gyroscope, the three-axis accelerometer, and the three-axis magnetometer included in the mobile phone are used instead of the first sensor S1.
  • the APP application software uses artificial intelligence algorithms to identify the data by directly driving and reading the sampled data in the mobile motion sensor.
  • the system is mainly used for the identification and management of ball sport and track and field users. Compared with the combat training system, the similarities are not described. The differences are:
  • the first sensor S1 is used to detect the movement speed and acceleration of the limbs, and does not need to detect striking force. In addition, for precise speed measurement, for different rackets a conversion according to the distance from the racket to the wrist-mounted sensor S1 is required.
  • the racket is fitted with a motion sensor and is incorporated into the management of the sport type attribute data D4 and the user profile data D5.
  • the geographic coordinate sensor collects the geographic coordinates, and uses the unit sensing network to connect all the first sensors S1 worn by one user to the personal sensor network, the location sensor network, and the motion information system.
  • Analog/digital A/D conversion is performed on the first data D1.
  • sampling frequency and sampling accuracy of the first sensor S1 are adjusted according to the motion type attribute data D4.
  • the first sensor S1 is disposed at the wrist, the ankle, and the joint position of the user.
  • the artificial intelligence algorithm is used to extract the user's custom action feature data according to the user's motion data, and record it in the user's profile data D5.
  • the artificial intelligence algorithm is used to extract the voiceprint feature data of the user according to the user voice data, and record it into the user's profile data D5.
  • the artificial intelligence algorithm is used to extract the motion feature data of the motion according to the motion category attribute data, and record it in the motion category attribute data D4.
  • the sport type attribute data D4 includes: motion rule data and, corresponding to the motion rule data, exercise intensity data, exercise level data, exercise amplitude data, damage degree data, duration data, physical energy consumption degree data, physiological degree data, and game rule data.
  • the rules of exercise include at least but not limited to: athletics, gymnastics, and ball.
  • the user has personal profile data D5, and the profile data D5 includes: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical sports data, strong sports data, weak sports data, voice data, voiceprint data, image data, and video data.
  • the motion sensor includes an angular velocity sub-sensor, an acceleration sub-sensor, a magnetic sub-sensor, and the shaft system includes at least an XYZ triaxial.
  • the artificial intelligence algorithm is used to identify the user via single-sensor user identification according to the first data D1 and the user association result D3-AI1, the user confidence result D3-AI2, and the three-dimensional vectorization data D8.
  • the artificial intelligence algorithm is used to identify the user's custom action user identification according to the first data D1 and the custom action feature data.
  • the artificial intelligence algorithm is used to identify the user's voiceprint feature user identification according to the voice data and the voiceprint feature data.
  • an artificial intelligence algorithm is used to perform action-feature action recognition of the motion type attribute data D4 according to the first data D1 and the action feature data.
  • the artificial intelligence algorithm is used to calculate the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user.
  • the corresponding association results D3-AI1 and the confidence results D3-AI2 of the plurality of users are compared, and the instant game process data is obtained.
  • the dynamic odds and predicted result data of the game are calculated based on the game process data and output.
  • the system is mainly used by organizations for personnel identification.
  • the system includes an artificial intelligence bracelet, a mobile APP, and cloud center software. Details are as follows:
  • the exercise rules only contain rules for daily activities, and the others are the same.
  • the artificial intelligence algorithm is used to identify the user via single-sensor user identification according to the first data D1 and the user association result D3-AI1 and the user confidence result D3-AI2.
  • the artificial intelligence algorithm is used to identify the user's custom action user identification according to the first data D1 and the custom action feature data.
  • the voice data is included in the first data D1 collected from the user.
  • the artificial intelligence algorithm is used to identify the user's voiceprint feature user identification according to the voice data and the voiceprint feature data.
  • an artificial intelligence algorithm is used to identify the motion feature motion recognition of the motion type attribute data D4 according to the first data D1 and the motion feature data.
  • the system is mainly used to complete the management of security rescue by detecting the physiological characteristics of the individual in a dangerous working environment.
  • for example, firefighters in a firefighting environment, shipbuilders working in cabins in a hot summer environment, miners in a tunnel environment, etc.
  • the system includes several personal smart bracelets, micro base stations, a mobile APP, and cloud center software. Details are as follows:
  • the key methods and systems are basically the same as in methods and systems 1 to 15, strengthened only in the security-and-rescue software functions. These are functional points that mid-level technicians in the industry can understand and design without inventive effort, so they are not described here.
  • the system is mainly used for monitoring animals raised on a pasture and raising security alarms.
  • the system includes several personal intelligent sensors, micro base stations, a mobile APP, and cloud center software. Details are as follows:
  • the user is changed to an animal.
  • the first sensor S1 is disposed at the animal's horns and at its ankle positions.
  • the artificial intelligence algorithm is used to extract the animal's habitual action feature data based on the animal motion data, and record it into the animal's individual profile data D5.
  • the artificial intelligence algorithm is used to extract the animal's voiceprint feature data according to the animal sound data, and record it into the animal's individual profile data D5.
  • the motion feature data of the motion is extracted based on the motion type attribute data D4, and recorded to the motion type attribute data D4.
  • the rest of the project is the same as the combat training system.
  • the artificial intelligence algorithm is used to identify the animal via single-sensor animal identification according to the first data D1 and the association result D3-AI1 and the confidence result D3-AI2.
  • the artificial intelligence algorithm is used to identify the animal's custom action animal identification based on the first data D1 and the custom action feature data.
  • the artificial intelligence algorithm is used to identify the animal's voiceprint characteristic animal recognition based on the voice data and the voiceprint feature data.
  • the animal information system searches for the animal wearing the first sensor S1 and sends roll-call information to it; the first sensor S1 worn by the animal responds upon receipt, thereby realizing roll call.
  • the animal wearing the first sensor S1 sends the registration information to the animal information system through the first sensor S1, and obtains a response, thereby realizing the registration.
  • the animal wearing the first sensor S1 is positioned by the animal information system through more than one terminal.
  • the first sensor S1 issues an abnormality alarm of the alarm information to the animal information system based on the abnormal value of the first data D1.
  • the animal information system and the first sensor S1 communicate through the sensor network, and the abnormal values include alarm trigger conditions preset in the animal information system.
  • Communication between the animal information system and the first sensor S1 is achieved via a sensor network.
  • a first sensor S1, a terminal, and an animal information system; the first sensor S1 is connected to the terminal, the terminal is connected to the animal information system, and the data from the first sensor S1 is processed.
  • the first sensor S1 includes, but is not limited to, a processor and a motion sensor, a physiological sensor, a user number generator, and a geographic coordinate sensor; the motion sensor, physiological sensor, user number generator, and geographic coordinate sensor are connected to the processor, and the processor is connected to the terminal.
  • the way the terminal and the animal information system are connected includes a wired connection and a wireless sensor network connection, and the way the processor and the terminal are connected includes a wired connection and a wireless sensor network connection.
  • the rest of the project is the same as the combat training system.


Abstract

A motion data monitoring method and system. A first sensor and a second sensor collect motion data; the method employs data imaging, 2D-to-3D image synthesis, indirect measurement of striking force via motion sensors, and supports learning, training, sparring, feature extraction, and strengths-and-weaknesses countermeasures, achieving automatic user identification, automatic action recognition, strong-event identification, weak-event identification, automatic refereeing, and automatic generation of match odds. In addition, functions such as roll call, registration, notification, positioning, and alarms can be performed. The system comprises hardware such as sensors, micro base stations, a smartphone APP, PCs, and a cloud center, together with cloud center software and application software.

Description

Motion data monitoring method and system
Technical field
The present invention relates to the field of artificial intelligence applications within information technology, in particular to the application of artificial intelligence in sports, especially to methods and systems for image recognition, motion recognition, personnel identification, intelligent competition training, and automatic judging, and more particularly to a motion data monitoring method and system.
Background art
Human sporting activity is very old and traditional, and sport as an industry is likewise a traditional one. The application of artificial intelligence technology to sports is still in its infancy; a search of the relevant patent websites reveals no patent applications related to the present invention.
The shortcomings of the prior art are:
1. Sports technology as a whole is rather traditional, with little involvement of advanced technology.
2. There is no good way to measure motion data; human movement is highly arbitrary and varies greatly with venue and sporting event.
3. There is no effective way to analyze and recognize motion data.
4. The achievements of artificial intelligence have not been applied to sports.
The intent of the present invention is to use artificial intelligence technology to solve related problems in sports and to remedy the current shortcomings of intelligent sports technology, for example mechanical measurement in dynamic human motion (such as combat sports), action recognition, personnel identification, learning, training, sparring, refereeing, evaluation, and odds calculation. The invention creatively introduces a data-imaging method, allowing current achievements of artificial intelligence in image recognition to be borrowed for sports measurement data.
Summary of the invention
To overcome the shortcomings of the prior art, the object of the present invention is achieved through the following technical solutions:
As shown in Fig. 1, the present invention comprises the sensors 104, 105-10n and 10n+1-10m+1, the terminal 101, and the combat information system 2 at 103. The sensors include motion sensors, physiological sensors, user number generators, geographic coordinate sensors, pressure sensors, etc., and the terminal also contains the combat information system 1 at 102. Specifically:
A motion data monitoring method, including but not limited to: a step of monitoring first data D1 using a first sensor S1 arranged on the user's body.
A step of transmitting the first data D1 to the motion information system via a sensor network. A step of processing the first data D1.
As shown in Figs. 2, 3 and 4, the first sensor comprises one of, or a combination of, a motion sensor, a physiological sensor, a pressure sensor, a user number generator, and a geographic coordinate sensor, working under the management of a processor and including a power subsystem. Which of the five is used depends on the application scenario. For example, for one user, first sensors with motion sensors may need to be worn on all four limbs to monitor limb movement, whereas physiological monitoring needs only one point on any limb. For some sports (such as combat sports), pressure may also need to be monitored (such as the striking force of a fist), in which case not only a motion sensor but also a pressure sensor at a specific location (such as the fist) is required. For the management of people or animals, a user number generator or a geographic coordinate sensor alone suffices. Therefore, which of the motion sensor, physiological sensor, pressure sensor, user number generator, and geographic coordinate sensor, or which combination, is used is determined by the specific application scenario.
On the basis of the foregoing technical solution, the present invention includes but is not limited to the following improvements and combinations thereof:
The step of monitoring the first data D1 using the first sensor S1 arranged on the user's body comprises:
A step of collecting the user's motion data using the motion sensor in the first sensor S1.
A step of collecting the user's motion data using the motion sensor included in the mobile phone and transmitting it directly within the phone to the motion information system.
A step of collecting the user's physiological data using the physiological sensor in the first sensor S1.
A step of collecting pressure data when the user strikes the target, strikes an opponent, or uses the target, using the pressure sensor in the first sensor S1.
A step of generating the user's user number data using the user number generator in the first sensor S1.
A step of generating the user's geographic coordinate data using the geographic coordinate sensor in the first sensor S1.
The step of monitoring the second data D2 using the second sensor S2 arranged on a target while the user strikes or uses the target comprises but is not limited to:
A step of collecting pressure data when the user strikes or uses the target, using the pressure sensor in the second sensor S2.
A step of collecting position data when the user strikes or uses the target, using the position sensor in the second sensor S2.
A step of connecting all the first sensors S1 worn by one user to a personal sensor network, a venue sensor network, and the motion information system via a unit sensor network, as shown in Fig. 7.
A step of connecting all the second sensors S2 equipping one set of targets to a personal sensor network, a venue sensor network, and the motion information system via a unit sensor network, as shown in Fig. 8.
A step of collecting the system time value T at which the first data D1 and the second data D2 occur, and recording it into the first data D1 and the second data D2.
A step of performing analog/digital (A/D) conversion on the first data D1 and the second data D2.
A step of adjusting the sampling frequency and sampling accuracy of the first sensor S1 and the second sensor S2 according to the motion type attribute data D4.
A step of interpolating the first data D1 and the second data D2 to a predetermined scale, and merging the first data D1 and the second data D2 into the associated data D3.
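The interpolation-and-merge step above can be sketched in code. This is a minimal illustration, not the patented implementation: it assumes each sensor stream is a pair of timestamp and value arrays, that linear interpolation onto a shared grid (the "predetermined scale") is acceptable, and the names `merge_on_grid` and `step` are illustrative:

```python
import numpy as np

def merge_on_grid(t1, d1, t2, d2, step):
    """Resample two sensor streams onto one shared time grid.

    t1/d1 and t2/d2 are timestamps and values for the first data D1 and
    second data D2; `step` is the predetermined grid spacing. Returns the
    grid plus both streams interpolated onto it, ready to be merged into
    the associated data D3.
    """
    t0 = max(t1[0], t2[0])                     # start of overlap
    t_end = min(t1[-1], t2[-1])                # end of overlap
    grid = np.arange(t0, t_end + step / 2, step)
    return grid, np.interp(grid, t1, d1), np.interp(grid, t2, d2)

# Example: a 1 Hz acceleration stream and a sparser force stream.
grid, a, f = merge_on_grid([0.0, 1.0, 2.0], [0.0, 10.0, 20.0],
                           [0.0, 2.0], [0.0, 4.0], 1.0)
```

Here the sparser force stream is filled in at t = 1.0 by linear interpolation, so both streams share one timeline inside D3.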
Wherein the first sensor S1 is arranged at the user's wrist, ankle, joints and/or striking positions.
A step of extracting, using the artificial intelligence algorithm, the user's habitual action feature data from the user's motion data and recording it into the user's personal profile data D5.
A step of extracting, using the artificial intelligence algorithm, the user's voiceprint feature data from the user's voice data and recording it into the user's personal profile data D5.
A step of extracting, using the artificial intelligence algorithm, the action feature data of the motion from the motion type attribute data and recording it into the motion type attribute data D4.
The motion type attribute data D4 includes but is not limited to: motion rule data and, corresponding to the motion rule data, motion intensity data, motion level data, motion amplitude data, injury degree data, duration data, physical energy consumption data, physiological degree data and/or competition rule data.
Wherein the motion rules include but are not limited to: free combat, stand-up fighting, unrestricted fighting, MMA, UFC, Sanda, Wushu, Tai Chi, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, athletics, gymnastics, and ball sports.
The user has personal profile data D5, which includes but is not limited to: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical motion data, strong-event data, weak-event data, voiceprint data, image data, and video data.
The motion sensor includes but is not limited to an angular velocity sub-sensor, an acceleration sub-sensor, and a magnetic sub-sensor; the axis system includes at least the three XYZ axes.
Fig. 9 is a structural diagram of the micro base station.
On the basis of the foregoing technical solution, the present invention includes but is not limited to the following improvements and combinations thereof:
A step of formatting the associated data D3 according to data content including but not limited to sampling type, sampling frequency, sampling accuracy, and data format.
A step of decomposing, in the motion data portion of the associated data D3 and according to the characteristics of the motion, the action sequence into action units and computing the unit data D3-U.
As shown in Figs. 10 to 13, 1001 is the associated data D3, which after data formatting becomes 1002 and after action decomposition becomes the unit data 1004, i.e. D3-U.
An image-point mapping step of mapping the unit data D3-U to a motion image: following the acquisition sequence in the unit data D3-U, the three-axis data of the motion sensor at each sampling instant is taken as one group, and one group is mapped to one pixel of the motion image.
In Fig. 11, the unit data D3-U 1004 is decomposed into the angular velocity (gyroscope) sensor data group 1015 and the acceleration sensor data group 1025; a given sampling point is 1016 in group 1015 and 1026 in group 1025.
A multi-image mapping step of mapping the acquisition data of each sub-sensor of the motion sensor's X, Y and Z axes in the unit data D3-U to one motion image, mapping each sampling point of each sub-sensor to one pixel of the corresponding motion image, taking the X, Y and Z three-axis data of the sampling point as the independent variable x of the pixel's RGB primary-color data, establishing the function y = f(x) for the RGB color-code value y, and computing the RGB primary-color data.
In Fig. 11, group 1015 of the angular velocity sensor is mapped to the g image 1018, and sampling point 1016 in group 1015 is mapped to pixel 1017 in the g image 1018; group 1025 of the acceleration sensor is mapped to the a image 1028, and sampling point 1026 in group 1025 is mapped to pixel 1027 in the a image 1028.
A single-image multi-channel mapping step of mapping the acquisition data of one sub-sensor of the motion sensor in the unit data D3-U to one motion image, mapping the acquisition data of the other sub-sensors to channels of that motion image, mapping each sampling point of each sub-sensor to one pixel of the corresponding motion image or channel, taking the X, Y and Z three-axis data of the sampling point as the independent variable x of the pixel's RGB primary-color data or channel data, establishing the function y = f(x) for the RGB color-code value y, and computing the RGB primary-color data or channel data.
In Fig. 12, group 1015 of the angular velocity sensor is mapped to the g image 1018, and sampling point 1016 in group 1015 is mapped to pixel 1017 in the g image 1018; group 1025 of the acceleration sensor is mapped to the c channel 1038, and sampling point 1026 in group 1025 is mapped to pixel 1037 in the c channel 1038.
An image deep-learning step of performing deep learning on multiple motion image data using artificial intelligence image recognition and classification algorithms, summarizing and computing feature data including motion type features, action type features, pressure magnitude features, and user identification features, and, when the next associated data D3 is acquired, computing and comparing against the feature data.
A step of image and video file reconstruction: according to image and video file formats, the multi-image mapping and single-image mapping are adapted into image and video files suitable for display on a screen and viewing by the human eye.
As shown in Fig. 13, one method of reconstructing the image and video files is to compute and add file headers, i.e. 1119, 1129 and 1139 in Fig. 13.
The artificial intelligence algorithms include but are not limited to: artificial neural network algorithms, convolutional neural network (CNNs) algorithms, recurrent neural network (RNN) algorithms, deep neural network (DNN) algorithms, support vector machine (SVM) algorithms, genetic algorithms, ant colony algorithms, simulated annealing algorithms, particle swarm algorithms, and Bayes algorithms.
The RGB function includes but is not limited to the linear function y = kx + j and nonlinear functions, where k and j are adjustment constants.
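As an illustration of the pixel mapping described above, the sketch below maps 16-bit triaxial samples to RGB pixels with the linear form y = kx + j. The default values of k and j are illustrative choices that center the signed int16 range on mid-gray; they are not values from the patent:

```python
import numpy as np

def samples_to_pixels(samples, k=255.0 / 65535.0, j=127.5):
    """Map N x 3 triaxial sensor samples to N RGB pixels.

    Each sample's X, Y, Z values become the pixel's R, G, B components
    via the linear map y = k*x + j, clipped to the 0..255 color-code
    range (assumes int16-range inputs).
    """
    x = np.asarray(samples, dtype=np.float64)      # shape (N, 3)
    y = k * x + j                                  # y = kx + j
    return np.clip(np.rint(y), 0, 255).astype(np.uint8)

# One gyroscope sample (X, Y, Z) becomes one pixel (R, G, B).
pixels = samples_to_pixels([[0, 32767, -32768]])   # -> [[128, 255, 0]]
```

Stacking the pixels of one action unit D3-U row by row yields the motion image that the image deep-learning step consumes.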
On the basis of the foregoing technical solution, the present invention includes but is not limited to the following improvements and combinations thereof:
A step of causing one or more video image sensors S3 to capture one or more channels of video images D6 of the user's competition training.
A step of causing the one or more video image sensors S3 to communicate with the motion information system via the sensor network.
A step of performing, based on the video images D6 and the first data D1 and according to the position of the first sensor S1 in the video images D6, three-dimensional vectorized synthesis of the motion action using the artificial intelligence algorithm, obtaining the three-dimensional vectorized data D7.
A step of associating the three-dimensional vectorized data D7 with the second data D2, the associated data D3, the motion type attribute data D4 and/or the personal profile data D5.
A step of recognizing, using the artificial intelligence algorithm and according to the three-dimensional vectorized data D7 and the motion type attribute data D4, the motion actions in the video images D6, and synchronously annotating in the video images D6 the time points before and after the motion action.
Wherein the competition training includes but is not limited to individual training, individual routine competition, and multi-person confrontation competition.
On the basis of the foregoing technical solution, the present invention includes but is not limited to the following improvements and combinations thereof:
A learn-from-coach step in which the coach user strikes the target with standard actions according to the motion type attribute data D4, the coach's associated data D3 is obtained, machine learning is performed on the coach's associated data D3 according to the artificial intelligence algorithm, the coach's association result D3-AI1 and the coach's confidence result D3-AI2 are derived, and the coach user's personal profile data D5 is updated.
A self-training step in which the trainee user strikes the target according to the motion type attribute data D4, the trainee's associated data D3 is obtained, machine learning is performed on the trainee's associated data D3 according to the artificial intelligence algorithm, the trainee's association result D3-AI1 and the trainee's confidence result D3-AI2 are derived, and the trainee user's personal profile data D5 is updated.
A step of cyclically comparing the trainee's association result D3-AI1 with the coach's association result D3-AI1, and cyclically comparing the trainee's confidence result D3-AI2 with the coach's confidence result D3-AI2.
A strengths-and-weaknesses countermeasure step of calculating and analyzing, according to the trainee's association result D3-AI1 and confidence result D3-AI2, the trainee's athletic strengths, weaknesses and gaps, updating the trainee's personal profile data D5, and computing, generating and outputting training suggestion information.
An opponent-training step of looking up the opponent user's personal profile data D5 and the trainee's personal profile data D5, comparing the typical motion data, strong-event data and weak-event data in the two, calculating and analyzing the gap between them, formulating a targeted training suggestion plan, and supervising and checking the training results.
On the basis of the foregoing technical solution, the present invention includes but is not limited to the following improvements and combinations thereof:
A single-sensor user identification step of, when the user's first data D1 is collected, identifying the user using the artificial intelligence algorithm according to the first data D1 and the association result D3-AI1, the confidence result D3-AI2 and/or the three-dimensional vectorized data D8.
A habitual-action user identification step of, when the user's first data D1 is collected, identifying the user using the artificial intelligence algorithm according to the first data D1 and the habitual action feature data.
A voiceprint-feature user identification step of, when the collected first data D1 of the user includes voice data, identifying the user using the artificial intelligence algorithm according to the voice data and the voiceprint feature data.
A dual-sensor user identification step of, when the user's first data D1 and the second data D2 are collected, identifying the user using the artificial intelligence algorithm according to the first data D1 and the association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8.
A single-sensor action recognition step of, when the user's first data D1 is collected, recognizing the motion type attribute data D4 using the artificial intelligence algorithm according to the user, the first data D1 and the user's association result D3-AI1, the user's confidence result D3-AI2 and/or the three-dimensional vectorized data D8.
A dual-sensor action recognition step of, when the user's first data D1 and the second data D2 are collected, recognizing the motion type attribute data D4 using the artificial intelligence algorithm according to the user, the first data D1 and the user's association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8.
An action-feature action recognition step of, when the user's first data D1 is collected, recognizing the motion type attribute data D4 using the artificial intelligence algorithm according to the first data D1 and the action feature data.
A step of calculating the pressure data generated by the user's striking action according to the image deep-learning step and the calibration data D8.
A step of causing the user to strike the target and, according to a Newtonian mechanics algorithm, obtaining the angular velocity and acceleration data from the first sensor S1 and the pressure data from the second sensor S2, establishing the acceleration-pressure association D8.
A pressure identification step of, when the user strikes a target or an opponent using only the first sensor S1 and not the second sensor S2, identifying pressure in the acceleration-pressure association D8 according to the first data D1.
On the basis of the foregoing technical solution, the present invention includes but is not limited to the following improvements and combinations thereof:
A refereeing step is included:
A step of calculating, using the artificial intelligence algorithm and according to the competition rules in the motion type attribute data D4, during the competition training of multiple users, the association result D3-AI1 and the confidence result D3-AI2 corresponding to each user.
A step of comparing, according to the competition rules in the motion type attribute data D4, the association results D3-AI1 and confidence results D3-AI2 corresponding to the multiple users, and obtaining instant match process data including the degree and number of heavy blows, the degree and number of injuries, counts and their number, TKOs and KOs.
A step of calculating the dynamic odds and predicted result data of the match based on the match process data, and outputting them.
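The patent does not give a formula for the dynamic odds, so the sketch below shows one hypothetical way to do it: treat each user's running score from the match process data as a win-probability share and convert it to decimal odds with a bookmaker margin. The function name, the score-share model, and the `margin` parameter are all assumptions for illustration:

```python
def dynamic_odds(scores, margin=0.05):
    """Convert running match scores into decimal odds per user.

    `scores` maps user -> accumulated score from the match process data;
    each user's probability is their score share, and the odds include a
    bookmaker margin so the implied probabilities sum to more than 1.
    """
    total = sum(scores.values())
    odds = {}
    for user, score in scores.items():
        p = score / total if total else 1.0 / len(scores)
        odds[user] = round(1.0 / (p * (1.0 + margin)), 2)
    return odds

# Recomputed after every scored exchange, the odds shift as the match runs.
odds = dynamic_odds({"red": 30.0, "blue": 10.0})   # -> {'red': 1.27, 'blue': 3.81}
```

A prediction output could then simply name the user with the lowest odds as the current favourite.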
On the basis of the foregoing technical solution, the present invention includes but is not limited to the following improvements and combinations thereof:
A step of causing the first sensor S1 and the second sensor S2 to communicate with one or more fixed terminals, so as to calculate absolute data of the first sensor S1's and the second sensor S2's own spatial coordinates, movement speed, and movement trajectory;
A step of causing the first sensor S1 and the second sensor S2 to communicate with one or more mobile terminals, first sensors S1 and second sensors S2, so as to calculate relative data of the first sensor S1's and the second sensor S2's own spatial coordinates, movement speed, and movement trajectory.
A step of processing and displaying the result information of the motion information system using the fixed terminals and mobile terminals.
A step of sending the result information and live replay video of the motion action to one or more display devices, so that the result information is displayed fused with the live video.
The fixed terminals and mobile terminals include: micro base stations, PCs, and smartphones.
Connection modes of the sensor network include wired and wireless.
On the basis of the foregoing technical solution, the present invention includes but is not limited to the following improvements and combinations thereof:
A roll-call step in which the motion information system looks up the user wearing the first sensor S1 and sends roll-call information to the user, and the first sensor S1 worn by the user responds upon receipt, thereby achieving roll call.
A registration step in which the user wearing the first sensor S1 sends registration information to the motion information system through the first sensor S1 and obtains a response, thereby achieving registration.
A notification step in which the motion information system sends notification information to the first sensor S1 worn by the user; upon receiving the notification information, the first sensor S1 responds to the motion information system and displays and/or vibrates on the first sensor S1.
A positioning step in which the motion information system positions the user wearing the first sensor S1 through one or more of the terminals.
An active alarm step in which the user wearing the first sensor S1 sends alarm information to the motion information system according to the user's own subjective will.
An abnormal alarm step in which the first sensor S1 sends alarm information to the motion information system according to abnormal values of the first data D1.
Communication between the motion information system and the first sensor S1 is achieved via the sensor network; the abnormal values include alarm trigger conditions preset by the user and/or the motion information system.
A motion data monitoring system, characterized by comprising: a first sensor S1, a terminal, and a motion information system; the first sensor S1 is connected to the terminal, and the terminal is connected to the motion information system.
On the basis of the foregoing technical solution, the present invention further includes but is not limited to the following and combinations thereof:
Also included are: a second sensor S2 and a video image sensor S3; the second sensor S2 and the video image sensor S3 are each connected to the terminal.
On the basis of the foregoing technical solution, the present invention further includes but is not limited to the following and combinations thereof:
The first sensor S1 is formed by connecting a processor with a motion sensor, a physiological sensor, a pressure sensor, a user number generator, and a geographic coordinate sensor; the motion sensor, physiological sensor, pressure sensor, user number generator, and geographic coordinate sensor are each connected to the processor, and the processor is connected to the terminal.
The second sensor S2 comprises a pressure sensor and a position sensor.
Connection modes between the terminal and the motion information system include wired connection and wireless sensor network connection, and connection modes between the processor and the terminal include wired connection and wireless sensor network connection.
The motion sensors include: three-axis angular velocity sensors, three-axis acceleration sensors, three-axis magnetic sensors, electronic compass sensors, speed sensors, movement direction sensors, displacement sensors, trajectory sensors, light sensors, and combinations thereof.
The physiological sensors include: blood oxygen sensors, blood pressure sensors, pulse sensors, temperature sensors, perspiration sensors, sound sensors, and light sensors.
The pressure sensors include: pressure sensors, pressure-intensity sensors, impact force sensors, and impulse sensors.
The position sensors include: spatial position sensors, spatial coordinate sensors, light sensors, and cameras.
The user number generator includes: a user number storage, editing, and sending module.
The geographic coordinate sensor includes: a navigation satellite positioning module.
The video image sensors are visible-light and non-visible-light cameras.
The motion type attribute data D4 includes: motion rule data and, corresponding to the motion rule data, motion intensity data, motion level data, motion amplitude data, injury degree data, duration data, physical energy consumption data, physiological degree data, and competition rule data.
Wherein the motion rules include at least: free combat, stand-up fighting, unrestricted fighting, MMA, UFC, Sanda, Wushu, Tai Chi, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, athletics, gymnastics, and ball sports.
The user has personal profile data D5, which includes: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical motion data, strong-event data, weak-event data, voice data, voiceprint data, image data, and video data.
On the basis of the foregoing technical solution, the present invention further includes but is not limited to the following and combinations thereof:
The sensor network includes fixed terminals and mobile terminals; the terminals include micro base stations, mobile phones, and PCs; connection modes of the sensor network include wired and wireless.
The micro base station includes: one or more downlink interfaces, a processor, a power subsystem, and an uplink interface; the one or more downlink interfaces are connected to the processor, the processor is connected to the uplink interface, and the power subsystem supplies power to the downlink interfaces, the processor, and the uplink interface; the downlink interfaces connect and communicate with the first sensor S1, the second sensor S2, and the video image sensor S3 via a wireless sensor network, and the uplink interface communicates with the motion information system via a wired or wireless network.
The motion information system comprises a terminal unit and a cloud system in mutual communication; the terminal unit is integrated with or separate from the terminal, and the cloud system is arranged in the network cloud.
The targets include combat targets, balls, rackets, and sports apparatus; use of combat targets includes striking the target with fists, feet, and body parts.
On the basis of the foregoing technical solution, the present invention further includes but is not limited to the following and combinations thereof:
Characterized by comprising cloud center software and application software, wherein:
The application software running on the terminal performs downstream connection, collection, and processing of the user, the first data D1, the second data D2, the motion type attribute data D4, the user profile data D5, and the video data D6; completes user interaction; and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
The application software running on the terminal performs the upstream function of transmitting data to the cloud center to form big data.
The application software running on the terminal cooperates with the cloud center software to perform the learning, training, user identification, action recognition, and pressure identification functions.
The cloud center software running in the cloud center is responsible for processing the big data, including the deep learning, data mining, classification algorithms, artificial intelligence processing, generation of the associated data D3, the video data D6 and the calibration data D8, updating of D5, cloud center computing, and cloud center management, and for the step of communicating with the application software.
The motion information system comprises the application software and the cloud center software.
A step in which one application software instance connects to and manages one user, forming one motion information system, and multiple application software instances connect to and manage multiple users, forming multiple motion information systems.
A step in which the motion information systems of the multiple users communicate with each other and complete interaction.
Compared with the prior art, the present invention has the following beneficial effects:
1. It solves the measurement of dynamic striking force and striking energy in human combat.
2. It solves the conversion of motion data into images, achieving visualization.
3. It solves the application of artificial intelligence image recognition algorithms to sports recognition.
4. It successfully solves personnel identification, motion recognition, mechanical measurement, automatic refereeing, and dynamic odds calculation.
5. It introduces artificial intelligence big-data analysis and management to sports.
Brief description of the drawings
Fig. 1 is a system diagram;
Fig. 2 is a structural diagram of the first sensor, variant 1;
Fig. 3 is a structural diagram of the first sensor, variant 2;
Fig. 4 is a structural diagram of the first sensor, variant 3;
Fig. 5 is a structural diagram of the second sensor, variant 1;
Fig. 6 is a structural diagram of the second sensor, variant 2;
Fig. 7 is a structural diagram of the unit sensor network, variant 1;
Fig. 8 is a structural diagram of the unit sensor network, variant 2;
Fig. 9 is a structural diagram of the micro base station;
Fig. 10 is data-to-image mapping, part 1;
Fig. 11 is data-to-image mapping, part 2;
Fig. 12 is data-to-image mapping, part 3;
Fig. 13 is data-to-image mapping, part 4.
Detailed description of the embodiments
I: Combat competition training system
(1) System overview
This combat competition training system is mainly for combat sports users. As shown in Fig. 1, the system comprises the sensors 104, 105-10n and 10n+1-10m+1, the terminal 101, and the combat information system 2 at 103. The sensors include motion sensors, physiological sensors, user number generators, geographic coordinate sensors, pressure sensors, etc., and the terminal also internally contains the combat information system 1 at 102.
For an individual or a small club, the smallest unit is defined as one motion detection group, comprising:
Four first sensors S1 (104, 105, 106 and 107) and one terminal 101 formed by a micro base station, which contains the combat information system 1 at 102. The four first sensors S1 are connected to the micro base station, and the micro base station is connected to the combat information system 2. The four first sensors S1 are worn on the user's wrists and ankles; one of them is a variant with a physiological sensor, a motion sensor, and a user number generator, as shown in Fig. 3, and the other three are variants with only a motion sensor and a user number generator and no physiological sensor, as shown in Fig. 4. As an extension, two pressure sensor variants for boxing gloves may also be selected. The motion sensor is a variant with a three-axis gyroscope and a three-axis accelerometer, and the physiological sensor is a pulse sensor variant.
According to the speed of the motion, the sampling frequency of the motion sensors is set between 10 frames/s and 200 frames/s, the heart rate sensor is set to sample once per minute, and all sampling accuracies are 8-16 bits.
A motion detection group also includes one second sensor S2, as shown in Fig. 5, connected to the micro base station. The second sensor S2 is formed from a matrix thin-film pressure sensor with its own pressure and position detection circuitry.
The measuring range can be divided into several pressure/striking-force levels, such as 50 kg, 200 kg, and 500 kg. Different pressure levels and mounting forms of the second sensor can be chosen according to user needs, usually varying with the shape of the target.
As an option, four HD cameras can be provided as video image sensors S3. They are connected to the micro base station and perform image acquisition.
As shown in Fig. 9, the micro base station comprises: nine downlink interfaces, a processor, a power subsystem, and an uplink interface. The nine downlink interfaces are connected to the processor, the processor is connected to the uplink interface, and the power subsystem supplies power to the downlink interfaces, the processor, and the uplink interface. The downlink interfaces connect and communicate with the four first sensors S1, the one second sensor S2, and the four video image sensors S3 via a wireless sensor network, and the uplink interface communicates with the combat information system via a fiber-optic wired network.
The micro base station aggregates the signals of the above sensors and connects to the combat information system via optical fiber.
The equipped striking sensor S2 has the following two main functions:
One is to work with the first sensor to associate and calculate striking data. That is, while the user strikes the target several times, the system simultaneously measures the angular velocity and acceleration data of the first sensor S1 and the striking force data of the second sensor S2; from the correspondence between the angular velocity/acceleration data and the striking force data over these strikes, a correspondence function is established according to Newtonian kinematics. Thereafter, the user needs only the motion sensor and not the pressure sensor: the striking force data can be converted from the user's angular velocity and acceleration data at the moment of striking. For the user, installing a pressure sensor is cumbersome, since it must be mounted, for example, on the surface of the fist, which limits the usage scenarios; by measuring indirectly, this method eliminates the pressure sensor and greatly facilitates use.
The second is to measure the user's striking force data directly through the second sensor S2.
The server is one equipped with GPU graphics cards, providing image computing, big data, and cloud services for the system.
A larger club can choose the following expansion scheme:
As shown in Figs. 7 and 8, the first sensors S1 worn by one user form a unit sensor network, several targets form a unit sensor network, and these unit sensor networks form a personal sensor network or a venue sensor network, which then connects to the combat information system.
As an expansion option, the first sensor S1 is formed by connecting a processor with a motion sensor, a physiological sensor, and a pressure sensor; the motion sensor, physiological sensor, and pressure sensor are each connected to the processor, and the processor is connected to the micro base station terminal.
Connection modes between the micro base station terminal and the combat information system include wired connection and wireless sensor network connection, and connection modes between the processor and the terminal include wired connection and wireless sensor network connection.
The motion sensors include: three-axis angular velocity sensors, three-axis acceleration sensors, and three-axis magnetic sensors.
The physiological sensors include: pulse sensors, temperature sensors, and sound sensors.
The pressure sensors include: matrix thin-film pressure sensors.
The position sensors include: spatial coordinate sensors.
The video image sensors are visible-light cameras.
The terminals include: micro base stations, smartphones, and PCs.
The motion type attribute data D4 includes but is not limited to: motion rule data and, corresponding to the motion rule data, motion intensity data, motion level data, motion amplitude data, injury degree data, duration data, physical energy consumption data, physiological degree data, and competition rule data.
Wherein the motion rules include but are not limited to: free combat, stand-up fighting, unrestricted fighting, MMA, UFC, Sanda, Wushu, Tai Chi, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, athletics, gymnastics, and ball sports.
The user has personal profile data D5, which includes but is not limited to: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical motion data, strong-event data, weak-event data, voice data, voiceprint data, image data, and video data.
The combat information system comprises a terminal unit and a cloud system in mutual communication; the terminal unit is integrated with or separate from the terminal, and the cloud system is arranged in the network cloud.
The application software running on the terminal performs downstream connection, collection, and processing of the user, the first data D1, the second data D2, the motion type attribute data D4, the user profile data D5, and the video data D6; completes user interaction; and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
The application software running on the terminal performs the upstream function of transmitting data to the cloud center to form big data.
The application software running on the terminal cooperates with the cloud center software to perform the learning, training, user identification, action recognition, and pressure identification functions.
The cloud center software running in the cloud center is responsible for processing the big data, including deep learning, data mining, classification algorithms, artificial intelligence processing, generation of the associated data D3, the video data D6 and the calibration data D8, updating of D5, cloud center computing, and cloud center management, and for communicating with the application software.
The motion information system comprises the application software and the cloud center software.
One application software instance connects to and manages one user, forming one combat information system; multiple application software instances connect to and manage multiple users, forming multiple combat information systems.
The motion information systems of multiple users communicate with each other and complete interaction.
(2) Configuration description
1. Mobile phone configuration
The system connects via the micro base station to two wristbands, two ankle bands, and one second sensor, communicating here via the BLE (Bluetooth Low Energy) protocol or the WiFi protocol; by analogy, other WSN protocols may also be used. The micro base station transmits the data collected by the above five sensors to the cloud database of the combat information system. The five sensors use the system time, in timestamp form, to synchronize the collected data so as to obtain the user's motion data and, together with the cloud center configuration, implement the functions of the combat information system.
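The timestamp-based synchronization described above can be sketched as follows; this is an illustrative model (the sensor ids and reading values are hypothetical), not the base station firmware. Each sensor's readings carry a shared system timestamp, and the base station merges them into one time-ordered stream before upload:

```python
def merge_streams(streams):
    """Merge per-sensor (timestamp, reading) lists into a single
    time-ordered stream tagged with the sensor id, as the micro base
    station would forward the five sensors' data to the cloud database.
    """
    tagged = [(t, sensor_id, reading)
              for sensor_id, readings in streams.items()
              for t, reading in readings]
    return sorted(tagged)  # tuples sort by timestamp first

merged = merge_streams({
    "wrist_L": [(1, "a1"), (3, "a2")],   # hypothetical readings
    "ankle_R": [(2, "b1")],
})
```

Because every reading is stamped with the same system clock, events from different limbs line up on one timeline even though each sensor samples independently.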
The configuration running on the phone performs downstream connection, collection, and processing of the user, the first data D1, the second data D2, the motion type attribute data D4, the user profile data D5, and the video data D6; completes user interaction; and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
The configuration running on the phone performs the upstream function of transmitting data to the cloud center to form big data.
The configuration running on the phone cooperates with the cloud center configuration to perform the learning, training, user identification, action recognition, and pressure identification functions.
2. Cloud center configuration
The cloud center configuration running in the cloud center is responsible for processing the big data, including deep learning, data mining, classification algorithms, artificial intelligence processing, generation of the associated data D3, the video data D6 and the calibration data D8, updating of D5, cloud center computing, and cloud center management, and for communicating with the terminal application configuration.
The motion information system comprises the terminal application configuration and the cloud center configuration.
One application configuration connects to and manages one user, forming one motion information system; multiple application configurations connect to and manage multiple users, forming multiple motion information systems.
The motion information systems of multiple users communicate with each other and complete interaction and social networking.
(3) Key method steps
1. First data D1 is monitored using the first sensors S1 (two wristbands and two ankle bands) arranged on the user's body, and transmitted to the combat information system via the sensor network. The first data D1 is processed at the same time.
2. Second data D2 is monitored using the second sensor S2 arranged on the target while the user strikes the target. While the user strikes the target, the first data D1 and second data D2 are collected simultaneously in time order, and the associated data D3 is generated. The second data D2 and associated data D3 are transmitted to the combat information system via the sensor network. Users here include: trainee users, coach users, and opponent users. The sensor network includes terminals; terminals include fixed and mobile terminals, including micro base stations, smartphones, and PCs. Targets include dummies, sandbags, hand pads, kick pads, wall targets, and similar. Use of combat targets includes striking the target with fists, feet, and body parts.
3. First data D1 is monitored using the first sensors S1 arranged on the user's body, including:
Collecting the user's motion data with the motion sensor in the first sensor S1, the user's physiological data with the physiological sensor in S1, and pressure data when the user strikes the target or an opponent with the pressure sensor in S1. Second data D2 is monitored with the second sensor S2 arranged on the target while the user strikes it: the pressure sensor in S2 collects pressure data, and the position sensor in S2 collects position data, when the user strikes the target.
All first sensors S1 worn by one user are connected via a unit sensor network to the personal sensor network, the venue sensor network, and the combat information system. All second sensors S2 equipping one set of targets are connected via a unit sensor network to the personal sensor network, the venue sensor network, and the combat information system.
The system time value T at which the first data D1 and second data D2 occur is collected and recorded into the first data D1 and second data D2.
A/D conversion is performed on the first data D1 and second data D2.
The sampling frequency and sampling accuracy of S1 and S2 are adjusted according to the motion type attribute data D4.
According to the first data D1 and second data D2, the first data D1 and second data D2 are interpolated to a predetermined scale, and the first data D1 and second data D2 are merged into the associated data D3.
Wherein S1 is arranged at the user's wrists, ankles, joints, and striking positions.
Using the artificial intelligence algorithm, the user's habitual action feature data is summarized and extracted from the user's motion data and recorded into the user's personal profile data D5.
Using the artificial intelligence algorithm, the user's voiceprint feature data is summarized and extracted from the user's voice data and recorded into the user's personal profile data D5.
Using the artificial intelligence algorithm, the action feature data of the motion is summarized and extracted from the motion type attribute data and recorded into the motion type attribute data D4.
The motion type attribute data D4 includes but is not limited to: motion rule data and, corresponding to the motion rule data, motion intensity data, motion level data, motion amplitude data, injury degree data, duration data, physical energy consumption data, physiological degree data, and competition rule data.
Wherein the motion rules include but are not limited to: free combat, stand-up fighting, unrestricted fighting, MMA, UFC, Sanda, Wushu, Tai Chi, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, athletics, gymnastics, and ball sports.
The user has personal profile data D5, which includes but is not limited to: the user's height, weight, body measurements, arm span, arm weight, punch weight, heart rate, blood oxygen, body temperature, vital capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical motion data, strong-event data, weak-event data, voice data, voiceprint data, image data, and video data.
The motion sensor includes angular velocity, acceleration, and magnetic sub-sensors; the axis system includes but is not limited to the three XYZ axes.
4. The associated data D3 is formatted according to data content including but not limited to sampling type, sampling frequency, sampling accuracy, and data format. According to the characteristics of the motion, in the motion data portion of the associated data D3, the action sequence is decomposed into action units and the unit data D3-U is computed.
The unit data D3-U is mapped to a motion image: following the acquisition sequence in the unit data D3-U, the three-axis data of the motion sensor at each sampling instant is taken as one group, and one group is mapped to one pixel of the motion image (image-point mapping).
The acquisition data of each sub-sensor of the motion sensor's X, Y and Z axes in the unit data D3-U is mapped to one motion image, each sampling point of each sub-sensor is mapped to one pixel of the corresponding motion image, the X, Y and Z three-axis data of the sampling point is taken as the independent variable x of the pixel's RGB primary-color data, the function y = f(x) for the RGB color-code value y is established, and the RGB primary-color data is computed (multi-image mapping).
The following multi-channel mapping method may also be used:
The acquisition data of one sub-sensor of the motion sensor in the unit data D3-U is mapped to one motion image, the acquisition data of the other sub-sensors is mapped to channels of that motion image, each sampling point of each sub-sensor is mapped to one pixel of the corresponding motion image or channel, the X, Y and Z three-axis data of the sampling point is taken as the independent variable x of the pixel's RGB primary-color data or channel data, the function y = f(x) for the RGB color-code value y is established, and the RGB primary-color data or channel data is computed (single-image multi-channel mapping).
The RGB function includes the linear function y = kx + j and nonlinear functions, where k and j are adjustment constants.
Using artificial intelligence image recognition and classification algorithms, deep learning is performed on multiple motion image data to summarize and compute feature data including but not limited to motion type features, action type features, pressure magnitude features, and user identification features; when the next associated data D3 is acquired, the feature data is computed and compared (image deep learning).
According to image and video file formats, the multi-image mapping and single-image mapping are adapted into image and video files suitable for display on a screen and viewing by the human eye (image and video file reconstruction).
The artificial intelligence algorithms include but are not limited to: artificial neural network algorithms, CNNs algorithms, RNN algorithms, SVM algorithms, genetic algorithms, ant colony algorithms, simulated annealing algorithms, particle swarm algorithms, and Bayes algorithms.
Action recognition is implemented as follows: first an action feature library is built; then the action feature library is queried.
To build the action feature library, users with standard form are first selected to wear the first sensors S1 and perform various actions, yielding corresponding pairs of action data and action name data; through artificial intelligence analysis including but not limited to CNNs and SVM algorithms, the action features are extracted and recorded in the cloud center database as the action feature library.
Then, once the data of an unknown user action is obtained, feature data for that action is likewise obtained, including but not limited to via CNNs and SVM algorithms; this feature data is then used to search the action feature library in the cloud center to determine the list of most similar actions and retrieve the action code, thereby achieving action recognition.
When the user is known and that user's actions need to be recognized, the user's action data is obtained first; likewise, including but not limited to via CNNs and SVM algorithms, the user's behavior features and an action feature library are obtained; this feature library data is then used to search the cloud center database to determine the list of most similar entries, thereby achieving user identification.
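The build-then-query flow above can be sketched with a toy feature library. The patent names CNNs and SVM algorithms for the feature extraction; this sketch assumes feature vectors have already been extracted and substitutes a simple nearest-neighbour search for the similarity ranking, so every name and vector below is illustrative:

```python
import numpy as np

def build_library(labelled_features):
    """Store (action_name, feature_vector) pairs as the action feature library."""
    names = [name for name, _ in labelled_features]
    vectors = np.array([vec for _, vec in labelled_features], dtype=float)
    return names, vectors

def recognise(library, query, top_k=1):
    """Return the top_k action names most similar to the query features."""
    names, vectors = library
    dists = np.linalg.norm(vectors - np.asarray(query, dtype=float), axis=1)
    return [names[i] for i in np.argsort(dists)[:top_k]]

# Toy library of two actions; the query is closest to the "jab" features.
library = build_library([("jab", [1.0, 0.1]), ("hook", [0.2, 0.9])])
best = recognise(library, [0.9, 0.2])   # -> ['jab']
```

The same lookup, run against per-user feature entries instead of per-action ones, gives the user-identification variant described in the last paragraph.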
5. 2D-to-3D image conversion is also included.
Four video image sensors S3 capture four channels of video images D6 of the user's competition training.
The four video image sensors S3 communicate with the combat information system via the sensor network.
Based on the video images D6 and the first data D1, and according to the position of the first sensor S1 in the video images D6, three-dimensional vectorized synthesis of the motion action is performed using the artificial intelligence algorithm, obtaining the three-dimensional vectorized data D7.
The three-dimensional vectorized data D7 is associated with the second data D2, the associated data D3, the motion type attribute data D4, and the personal profile data D5.
Using the artificial intelligence algorithm, and according to the three-dimensional vectorized data D7 and the motion type attribute data D4, the motion actions in the video images D6 are recognized, and the time points before and after the motion action are synchronously annotated in the video images D6.
Wherein competition training includes individual training, individual routine competition, and multi-person confrontation competition.
6. The coach user strikes the target with standard actions according to the motion type attribute data D4, the coach's associated data D3 is obtained, machine learning is performed on the coach's associated data D3 according to the artificial intelligence algorithm to derive the coach's association result D3-AI1 and the coach's confidence result D3-AI2, and the coach user's personal profile data D5 is updated (learning from the coach).
The trainee user strikes the target according to the motion type attribute data D4, the trainee's associated data D3 is obtained, machine learning is performed on the trainee's associated data D3 according to the artificial intelligence algorithm to derive the trainee's association result D3-AI1 and the trainee's confidence result D3-AI2, and the trainee user's personal profile data D5 is updated (self-training).
The trainee's association result D3-AI1 is cyclically compared with the coach's association result D3-AI1, and the trainee's confidence result D3-AI2 is cyclically compared with the coach's confidence result D3-AI2.
According to the trainee's association result D3-AI1 and confidence result D3-AI2, the trainee's athletic strengths, weaknesses, and gaps are calculated and analyzed, the trainee's personal profile data D5 is updated, and training suggestion information is computed, generated, and output (strengths-and-weaknesses countermeasures).
The opponent user's personal profile data D5 and the trainee's personal profile data D5 are looked up, the typical motion data, strong-event data, and weak-event data of the two are compared, the gap between them is calculated and analyzed, a targeted training suggestion plan is formulated, and the training results are supervised and checked (opponent training for the trainee).
7. When the user's first data D1 is collected, the artificial intelligence algorithm identifies the user according to the first data D1 and the user association result D3-AI1, the user confidence result D3-AI2, and the three-dimensional vectorized data D8 (single-sensor user identification).
When the user's first data D1 is collected, the artificial intelligence algorithm identifies the user according to the first data D1 and the habitual action feature data (habitual-action user identification).
When the user's collected first data D1 includes voice data, the artificial intelligence algorithm identifies the user according to the voice data and the voiceprint feature data (voiceprint-feature user identification).
When the user's first data D1 and second data D2 are collected, the artificial intelligence algorithm identifies the user according to the first data D1 and the association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8 (dual-sensor user identification).
When the user's first data D1 is collected, the artificial intelligence algorithm recognizes the motion type attribute data D4 according to the user, the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8 (single-sensor action recognition).
When the user's first data D1 and second data D2 are collected, the artificial intelligence algorithm recognizes the motion type attribute data D4 according to the user, the first data D1, the user's association result D3-AI1, the user's confidence result D3-AI2, and the three-dimensional vectorized data D8 (dual-sensor action recognition).
When the user's first data D1 is collected, the artificial intelligence algorithm recognizes the motion type attribute data D4 according to the first data D1 and the action feature data (action-feature action recognition).
According to the image deep-learning step and the calibration data D8, the pressure data generated by the user's striking action is calculated.
The user strikes the target and, according to a Newtonian mechanics algorithm, the angular velocity and acceleration data from the first sensor S1 and the pressure data from the second sensor S2 are obtained, establishing the acceleration-pressure association D8.
When the user strikes a target or an opponent using only the first sensor S1 and not the second sensor S2, pressure is identified in the acceleration-pressure association D8 according to the first data D1.
Taking boxing as an example, let the force with which the user strikes the opponent be F. Within F, the tension produced by the arm muscles is F1 and the impact force is F2; then, by Newtonian mechanics, F = F1 + F2 = F1 + ma, where m is the equivalent mass of the fist (an equivalent mass that includes the influence exerted on the fist by the motion of body parts other than the glove) and a is the acceleration of the fist. On the basis that the user's body dimensions and body-part masses do not change over a short time, and that after extensive training the body and muscles form a memory effect, the inventor concludes that under the same action the acceleration a is the same, and the output striking force F is therefore also the same. Thus, it suffices to measure S1 while S2 is present and establish the correspondence between D1 and D2; thereafter, measuring D1 alone allows the value of D2 to be deduced. This is the principle and method of striking force identification proposed by the inventor.
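The calibration relationship F = F1 + m·a above amounts to fitting a line from peak acceleration (from D1) to measured force (from D2). A minimal least-squares sketch, with made-up calibration numbers purely for illustration:

```python
import numpy as np

def calibrate(accelerations, forces):
    """Fit F = F1 + m_eff * a by least squares.

    Returns (m_eff, f1): the equivalent fist mass and the muscle-tension
    offset, i.e. one simple form of the acceleration-pressure association D8.
    """
    m_eff, f1 = np.polyfit(np.asarray(accelerations, dtype=float),
                           np.asarray(forces, dtype=float), 1)
    return m_eff, f1

def estimate_force(m_eff, f1, acceleration):
    """Estimate striking force from acceleration alone, once calibrated."""
    return m_eff * acceleration + f1

# Made-up calibration strikes: (a, F) pairs measured with S1 and S2 together.
m_eff, f1 = calibrate([10.0, 20.0, 30.0], [150.0, 250.0, 350.0])
force = estimate_force(m_eff, f1, 25.0)   # -> 300.0 for this toy data
```

After calibration, only the wrist sensor S1 is needed: each new strike's acceleration is converted to force through the fitted line, which is exactly the indirect measurement the text describes.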
8. According to the competition rules in the motion type attribute data D4, during the competition training of multiple users, the artificial intelligence algorithm calculates the association result D3-AI1 and confidence result D3-AI2 corresponding to each user.
According to the competition rules in the motion type attribute data D4, the association results D3-AI1 and confidence results D3-AI2 corresponding to the multiple users are compared, and instant match process data including the degree and number of heavy blows, the degree and number of injuries, counts and their number, TKOs and KOs is obtained.
The dynamic odds and predicted result data of the match are calculated based on the match process data and output.
9. The first sensor S1 and second sensor S2 communicate with one or more fixed terminals to calculate absolute data of the first sensor S1's and second sensor S2's own spatial coordinates, movement speed, and movement trajectory.
The first sensor S1 and second sensor S2 communicate with one or more mobile terminals, first sensors S1, and second sensors S2 to calculate relative data of the first sensor S1's and second sensor S2's own spatial coordinates, movement speed, and movement trajectory.
The fixed and mobile terminals process and display the result information of the combat information system.
The result information and live replay video of the motion action are sent to one or more display devices, so that the result information is displayed fused with the live video.
Fixed and mobile terminals include: micro base stations, PCs, and smartphones. Connection modes of the sensor network include wired and wireless.
10. The combat information system looks up the user wearing the first sensor S1 and sends roll-call information to the user; the first sensor S1 worn by the user responds upon receipt, achieving roll call.
The user wearing the first sensor S1 sends registration information to the combat information system through the first sensor S1 and obtains the combat information system's response, achieving registration.
The combat information system sends notification information to the first sensor S1 worn by the user; upon receiving the notification information, the first sensor S1 responds to the combat information system and notifies via display, vibration, and voice on the first sensor S1.
The combat information system positions the user wearing the first sensor S1 through one or more terminals, using any of a variety of positioning algorithms.
The user wearing the first sensor S1, according to the user's own subjective will, sends active alarm information to the combat information system (active alarm).
The first sensor S1, according to abnormal values of the first data D1, sends alarm information to the motion information system (abnormal alarm).
Communication between the combat information system and the first sensor S1 is achieved via the sensor network; the abnormal values include alarm trigger conditions preset by the user and the motion information system.
Accordingly, the combat information system can implement positioning, registration, roll call, notification, alarms, and other functions for users, providing technical support for strengthened management.
11. The system comprises: first sensors S1, a terminal, and the combat information system; the first sensors S1 are connected to the terminal, the terminal is connected to the combat information system, and the data from the first sensors S1 is processed.
12. Also included are: a second sensor S2 and video image sensors S3; the second sensor S2 and video image sensors S3 are each connected to the terminal, and the terminal is connected to the combat information system.
13. The first sensor S1 is formed by connecting a processor with a motion sensor, a physiological sensor, a pressure sensor, a user number generator, and a geographic coordinate sensor; the motion sensor, physiological sensor, pressure sensor, user number generator, and geographic coordinate sensor are each connected to the processor, and the processor is connected to the terminal.
The second sensor S2 comprises a pressure sensor and a position sensor. Connection modes between the terminal and the combat information system include wired connection and wireless sensor network connection, and connection modes between the processor and the terminal include wired connection and wireless sensor network connection.
The motion sensors include: three-axis angular velocity sensors, three-axis acceleration sensors, three-axis magnetic sensors, electronic compass sensors, speed sensors, movement direction sensors, displacement sensors, trajectory sensors, light sensors, and combinations thereof.
The physiological sensors include: blood oxygen sensors, blood pressure sensors, pulse sensors, temperature sensors, perspiration sensors, sound sensors, and light sensors.
The pressure sensors include: pressure sensors, pressure-intensity sensors, impact force sensors, and impulse sensors.
The position sensors include: spatial position sensors, spatial coordinate sensors, light sensors, and cameras.
The user number generator includes: a user number storage, editing, and sending module.
The geographic coordinate sensor includes: a navigation satellite positioning module.
The video image sensors are visible-light and non-visible-light cameras.
14. The sensor network includes fixed terminals and mobile terminals. Terminals include micro base stations, smartphones, and PCs; connection modes of the sensor network include wired and wireless.
The micro base station comprises: one or more downlink interfaces, a processor, a power subsystem, and an uplink interface. The one or more downlink interfaces are connected to the processor, the processor is connected to the uplink interface, and the power subsystem supplies power to the downlink interfaces, the processor, and the uplink interface. The downlink interfaces connect and communicate with the first sensors S1, the second sensor S2, and the video image sensors S3 via a wireless sensor network, and the uplink interface communicates with the combat information system via a wired or wireless network.
The motion information system comprises a terminal unit and a cloud system in mutual communication; the terminal unit is integrated with or separate from the terminal, and the cloud system is arranged in the network cloud.
Targets include combat targets, balls, rackets, and sports apparatus; use of combat targets includes striking the target with fists, feet, and body parts.
15. The application configuration running on the terminal performs downstream connection, collection, and processing of the user, the first data D1, the second data D2, the motion type attribute data D4, the user profile data D5, and the video data D6; completes user interaction; and assists in generating the associated data D3, the user profile data D5, the three-dimensional vectorized data D7, and the calibration data D8.
The application configuration running on the terminal performs the upstream function of transmitting data to the cloud center to form big data.
The application configuration running on the terminal cooperates with the cloud center software to perform the learning, training, user identification, action recognition, and pressure identification functions.
The cloud center configuration running in the cloud center is responsible for processing the big data, including deep learning, data mining, classification algorithms, artificial intelligence processing, generation of the associated data D3, the video data D6 and the calibration data D8, updating of D5, cloud center computing, and cloud center management, and for communicating with the application software.
The motion information system comprises the application configuration and the cloud center configuration.
One application software instance connects to and manages one user, forming one motion information system; multiple application software instances connect to and manage multiple users, forming multiple motion information systems.
The motion information systems of multiple users communicate with each other and complete interaction.
(4) Beneficial effects
1. The pressure identification step solves the dynamic measurement of striking force in combat using only angular velocity and acceleration sensors, facilitating implementation and reducing cost.
2. Key method step 4 solves the conversion of motion data into images, achieving visualization and facilitating the application of existing artificial intelligence image recognition algorithms.
3. Key method step 5 solves 3D vectorization of 2D video recordings.
4. Key method steps 7 and 8 solve personnel identification, motion recognition, mechanical measurement, automatic refereeing, and dynamic odds calculation.
5. Key method step 6 introduces an artificial-intelligence-assisted combat coaching function.
6. Key method step 10 develops new functions such as user positioning, registration, roll call, notification, and alarms.
II: Motion recognition system - bracelet version
(1) System overview
This system is mainly for identity recognition, motion recognition, and management of individual sports users; specifically, the user's personal motion characteristics are extracted and compared via the wristband sensor and, supported by cloud big data, the user's identity and motion actions are recognized.
Compared with the "combat competition training system", identical points are not described; the differences are:
1. The first sensor is a single wristband, as shown in Fig. 3, comprising a motion sensor formed by a three-axis gyroscope and a three-axis accelerometer, a physiological sensor formed by a heart rate sensor, and a user number generator; it may also include a geographic coordinate sensor and a voice sensor. The motion sensor's sampling frequency is set between 5 frames/s and 50 frames/s, the heart rate sensor samples once per minute, all sampling accuracies are 8-16 bits, and the voice sensor's sampling frequency is 8 kHz-2.8224 MHz.
2. No micro base station is used; instead, the user's smartphone is connected to the first sensor S1.
3. Voiceprint-feature user identification is added, working alongside habitual-action user identification to identify the user's identity synchronously.
4. Action features are used to recognize: outdoor running, race walking, power walking, and strolling; indoor treadmill running and walking; arm-swing step counting, apparatus step counting with the sensor placed on a "step-counting gadget", and animal step counting with the sensor tied to an animal.
5. The motion rules include only running, race walking, power walking, and strolling, and no other sports.
6. Striking force identification and 2D-to-3D data conversion are not included.
7. The user's recognized motions are statistically managed.
(2) Configuration description
1. Mobile phone configuration
The system connects the mobile phone to the wristband sensor to obtain the user's motion data and, together with the cloud center configuration, implements the functions of the motion information system.
The APP application software running on the phone performs downstream connection, collection, and processing of the user, the first data D1, the second data D2, the motion type attribute data D4, and the user profile data D5; completes user interaction; and assists in generating the associated data D3 and the user profile data D5.
The APP application configuration running on the phone performs the upstream function of transmitting data to the cloud center to form big data.
The APP application configuration running on the phone cooperates with the cloud center software to perform the learning, training, user identification, action recognition, and pressure identification functions.
2. Cloud center configuration
The cloud center software running in the cloud center is responsible for processing the big data, including deep learning, data mining, classification algorithms, artificial intelligence processing, generation of the associated data D3, updating of D5, cloud center computing, and cloud center management, and for communicating with the application configuration.
The motion recognition information system comprises the application configuration and the cloud center configuration.
One application configuration connects to and manages one user, forming one motion information system; multiple application configurations connect to and manage multiple users, forming multiple motion information systems.
The motion recognition information systems of multiple users communicate with each other and complete interaction.
(3) Key method steps
Compared with the combat competition training system, the similarities and differences are:
1. Only one wristband sensor is used; the rest is the same.
2. There is no second sensor; the rest is the same.
3. First data D1 is monitored using the first sensor S1 arranged on the user's body, including:
Collecting the user's motion data with the motion sensor in the first sensor S1, and collecting the user's physiological data, user number data, and geographic coordinate data with the corresponding sensors in the first sensor S1.
A/D conversion is performed on the first data D1 and the second data D2.
According to the motion type attribute data D4, the first sensor S1's sampling frequency is adjusted between 5 frames/s and 50 frames/s and its sampling accuracy between 8 and 16 bits.
Wherein the first sensor S1 is arranged at the user's wrist or ankle.
Using the artificial intelligence algorithm, the user's habitual action feature data is extracted from the user's motion data and recorded into the user's personal profile data D5.
Using the artificial intelligence algorithm, the user's voiceprint feature data is extracted from the user's voice data and recorded into the user's personal profile data D5.
Using the artificial intelligence algorithm, the action feature data of the motion is extracted from the motion type attribute data D4 and recorded into the motion type attribute data D4.
The rest of this item is the same as the combat competition training system.
4. Same.
5. Not applicable.
6. Same.
7. When the user's first data D1 is collected, the artificial intelligence algorithm identifies the user according to the first data D1 and the user association result D3-AI1 and user confidence result D3-AI2 (single-sensor user identification).
When the user's first data D1 is collected, the artificial intelligence algorithm identifies the user according to the first data D1 and the habitual action feature data (habitual-action user identification).
When the user's collected first data D1 includes voice data, the artificial intelligence algorithm identifies the user according to the voice data and the voiceprint feature data (voiceprint-feature user identification).
When the user's first data D1 is collected, the artificial intelligence algorithm recognizes the motion type attribute data D4 according to the first data D1 and the action feature data (action-feature action recognition).
According to the image deep-learning step and the calibration data D8, the pressure data generated by the user's striking action is calculated.
Taking boxing as an example, let the force with which the user strikes the opponent be F. Within F, the tension produced by the arm muscles is F1 and the impact force is F2; then, by Newtonian mechanics, F = F1 + F2 = F1 + ma, where m is the equivalent mass of the fist and a is the acceleration of the fist. On the basis that the user's body dimensions and body-part masses do not change over a short time, under the same action the acceleration a is the same, and the output striking force F is therefore also the same. Thus, it suffices to measure S1 in advance while S2 is present and establish the correspondence between D1 and D2; thereafter, measuring D1 alone allows the value of D2 to be deduced. This is the principle and method of striking force identification.
8. According to the competition rules in the motion type attribute data D4, during the competition training of multiple users, the artificial intelligence algorithm calculates the association result D3-AI1 and confidence result D3-AI2 corresponding to each user.
According to the competition rules in the motion type attribute data D4, the association results D3-AI1 and confidence results D3-AI2 corresponding to the multiple users are compared, and instant data is obtained.
The dynamic odds and predicted result data of the match are calculated based on the match process data and output.
9. The first sensor S1 communicates with one or more fixed terminals to calculate absolute data of the first sensor S1's own spatial coordinates, movement speed, and movement trajectory.
The first sensor S1 communicates with one or more mobile terminals to calculate relative data of the first sensor S1's own spatial coordinates, movement speed, and movement trajectory.
10, 11. Same.
12. Not applicable.
13, 14. No second sensor or pressure sensor and no video image sensor; the rest is the same.
15. No second sensor or pressure sensor, no video image sensor, no video data D6, no three-dimensional vectorized data D7, and no calibration data D8; the rest is the same.
(4) Beneficial effects
1. By having the user wear a wristband sensor, personnel identification is solved.
2. Motion recognition is solved, in particular recognition of outdoor running, outdoor race walking, outdoor power walking, indoor treadmill running, indoor treadmill walking, arm-swing step counting, apparatus step counting, animal step counting, etc.
3. An artificial-intelligence-assisted combat coaching function is introduced.
4. New functions such as user positioning, registration, roll call, notification, and alarms are provided.
III. Activity recognition system: pure-APP version
(I) System overview
This system is mainly used for identity recognition, activity recognition, and management for individual sports users. Specifically, it extracts and compares the user's personal motion features via the gyroscope and accelerometer built into the smartphone and, supported by cloud big data, identifies the user and recognizes their movements.
The mobile terminal is configured to collect user data with its own built-in motion sensors; in use, the phone must be held in the hand or strapped to the wrist.
Content identical to the embodiment "activity recognition system: wristband version" is not repeated. The difference is that the three-axis gyroscope, three-axis accelerometer, and three-axis magnetometer inside the phone replace the first sensor S1: the APP software drives the phone's motion sensors directly at the low level, reads their sampled data, and applies artificial-intelligence algorithms for recognition.
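As a toy illustration of extracting personal motion features from phone IMU samples, the sketch below derives two simple gait features from a vertical-acceleration trace. The feature set (step rate from zero crossings, acceleration spread) is an assumption for illustration; the patent does not fix a particular feature set, and the synthetic signal stands in for real sensor reads.

```python
import math
import statistics

def extract_features(accel_z, rate_hz=50):
    """Return simple gait features from a vertical-acceleration trace."""
    mean = statistics.fmean(accel_z)
    centered = [a - mean for a in accel_z]
    # Upward zero crossings of the mean-removed signal approximate steps.
    steps = sum(1 for prev, cur in zip(centered, centered[1:])
                if prev < 0 <= cur)
    duration_s = len(accel_z) / rate_hz
    return {"step_rate_hz": steps / duration_s,
            "accel_std": statistics.pstdev(accel_z)}

# Synthetic 2 Hz "walking" signal sampled at 50 Hz for 2 seconds.
trace = [9.8 + math.sin(2 * math.pi * 2 * i / 50 + math.pi)
         for i in range(100)]
feats = extract_features(trace)
print(round(feats["step_rate_hz"], 2))  # 2.0
```

Features like these, accumulated per user in D5, are the kind of habitual-action signature the cloud-side classifier would compare against.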
(II) Key method steps
Compared with the wristband version of the activity recognition system, the differences and similarities are:
1. The smartphone's own motion sensors replace the wristband sensor for collecting the user's motion data; the rest is the same.
2. Same.
3. The first data D1 is monitored with the smartphone; the rest is the same.
4.-15. Same.
(III) Beneficial effects
1. Solves person identification with only the user's phone; no wristband sensor is required.
2. Solves activity recognition, in particular distinguishing outdoor running, outdoor race walking, outdoor brisk walking, treadmill running, treadmill brisk walking, arm-swing step counting, device step counting, and animal step counting.
3. Introduces an AI-assisted coaching function.
IV. Ball-sport / track-and-field recognition system
(I) System overview
This system is mainly used to identify and manage ball-sport and track-and-field users. Compared with the combat training and competition system, the common parts are not repeated; the differences are:
1. The first sensor S1 is used to detect the velocity and acceleration of the hands, feet, and limbs; strike-force detection is not needed. In addition, for accurate speed measurement with different rackets, the reading must be converted according to the distance from the racket to the wrist-worn S1.
2. Computing the exercise volume of the limbs, including horizontal running and vertical jumping, and the calorie consumption.
3. Motion sensors are fitted to the racket and brought under the management of the sport category attribute data D4 and the personal profile data D5.
(II) Key method steps
Compared with the combat training and competition system, the differences and similarities are:
1. Same.
2. Not applicable.
3. Monitoring the first data D1 with the first sensor S1 disposed on the user's body, including:
Collecting the user's motion data with the motion sensor in the first sensor S1; collecting the user's physiological data with the physiological sensor in S1; collecting the user ID with the user-ID generator in S1; collecting geographic coordinates with the geographic coordinate sensor in S1; and connecting, through the unit sensor network, all first sensors S1 worn by one user to the personal sensor network, the venue sensor network, and the sports information system.
Performing analog-to-digital (A/D) conversion on the first data D1.
Adjusting the sampling frequency and sampling precision of the first sensor S1 according to the sport category attribute data D4.
The first sensor S1 is worn at the user's wrist, ankle, or joints.
Using an artificial-intelligence algorithm, extracting the user's habitual-action feature data from the user's motion data and recording it in the user's personal profile data D5.
Using an artificial-intelligence algorithm, extracting the user's voiceprint feature data from the user's voice data and recording it in the user's personal profile data D5.
Using an artificial-intelligence algorithm, extracting the sport's action feature data from the sport category attribute data and recording it in the sport category attribute data D4.
The sport category attribute data D4 includes: sport rule data and the corresponding exertion data, level data, amplitude data, injury-severity data, duration data, physical-consumption data, physiological data, and competition rule data.
The sport rules include, but are not limited to: track and field, gymnastics, and ball sports.
The user has personal profile data D5, which includes: the user's height, weight, body measurements, arm span, arm weight, fist weight, heart rate, blood oxygen, body temperature, lung capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical motion data, strong-event data, weak-event data, voice data, voiceprint data, image data, and video data.
The motion sensor includes an angular-velocity sub-sensor, an acceleration sub-sensor, and a magnetic sub-sensor; its axes include at least the X, Y, and Z axes.
4. Same.
5. Not applicable.
6. Same.
7. When the user's first data D1 is collected, using an artificial-intelligence algorithm, performing single-sensor user identification of the user from the first data D1, the user association result D3-AI1, the user confidence result D3-AI2, and the 3D vectorized data D7.
When the user's first data D1 is collected, using an artificial-intelligence algorithm, performing habitual-action user identification of the user from the first data D1 and the habitual-action feature data.
When the collected first data D1 of the user includes voice data, using an artificial-intelligence algorithm, performing voiceprint user identification of the user from the voice data and the voiceprint feature data.
When the user's first data D1 is collected, using an artificial-intelligence algorithm, performing action-feature action recognition against the sport category attribute data D4 from the first data (D1) and the action feature data.
8. According to the competition rules in the sport category attribute data D4, during competition or training among multiple users, using an artificial-intelligence algorithm, computing each user's association result D3-AI1 and confidence result D3-AI2.
According to the competition rules in the sport category attribute data D4, comparing the association results D3-AI1 and confidence results D3-AI2 of the multiple users and obtaining real-time match-process data.
Computing and outputting the match's dynamic odds and predicted-outcome data based on the match-process data.
9. No second sensor or video image sensor; otherwise the same.
10. Same.
11. No second sensor, pressure sensor, position sensor, or video image sensor; otherwise the same.
12. Not applicable.
13. No second sensor, pressure sensor, position sensor, or video image sensor; otherwise the same.
14. Same.
15. No second sensor, pressure sensor, position sensor, or video image sensor; otherwise the same.
(III) Beneficial effects
1. Solves person identification by having the user wear a wristband sensor.
2. Solves activity recognition, in particular action recognition and exercise management for the various track-and-field events.
3. Introduces an AI-assisted coaching function.
4. Provides new functions such as user positioning, registration, roll call, notification, and alarms.
V. Person and action recognition system
(I) System overview
This system is mainly aimed at institutions, for person authentication.
By collecting a person's movements and voice, it analyzes the differences between individuals and thereby recognizes the person. It also classifies typical movements so that identity can be further verified from the same individual's motions.
The system comprises an AI wristband, a phone APP, and cloud-center software, as follows:
(II) Key method steps
Compared with the wristband version of the activity recognition system, the differences and similarities are:
1, 2. Same.
3. The sport rules contain only daily-activity rules; the rest is the same.
4, 5, 6. Same.
7. When the user's first data D1 is collected, using an artificial-intelligence algorithm, performing single-sensor user identification of the user from the first data D1, the user association result D3-AI1, and the user confidence result D3-AI2. When the user's first data D1 is collected, using an artificial-intelligence algorithm, performing habitual-action user identification of the user from the first data D1 and the habitual-action feature data. When the collected first data D1 includes voice data, using an artificial-intelligence algorithm, performing voiceprint user identification of the user from the voice data and the voiceprint feature data. When the user's first data D1 is collected, using an artificial-intelligence algorithm, performing action-feature action recognition against the sport category attribute data D4 from the first data D1 and the action feature data.
8, 9. Not applicable.
10.-15. Same.
(III) Beneficial effects
1. Solves the authentication of persons themselves, enabling a person-recognition function.
2. Monitors the user's health status.
3. Introduces an AI-assisted exercise coaching function.
4. Provides new functions such as user positioning, registration, roll call, notification, and alarms.
VI. Hazardous-work security and rescue system
(I) System overview
This system is mainly used for security and rescue management by monitoring a person's physiological signs in hazardous working environments, for example firefighters at a fire scene, shipyard workers in sweltering cabins in summer, or miners in underground tunnels.
The system comprises several AI wristbands, micro base stations, a phone APP, and cloud-center software, as follows:
(II) Key method steps
Compared with the person and action recognition system (wristband version), the key methods and system are essentially the same in items 1-15, with the security and rescue software functions strengthened accordingly. These are function points that an engineer of ordinary skill in the art can understand and design without inventive effort, so they are not described here.
(III) Beneficial effects
1. Solves the identification of persons themselves, enabling a person-recognition function.
2. Solves activity and physiological recognition, providing life-threat early warning and rescue guidance.
3. Monitors the user's health status.
4. Introduces an AI-assisted exercise coaching function.
5. Provides new functions such as user positioning, registration, roll call, notification, and alarms.
VII. Ranch positioning and alarm system
(I) System overview
This is a management system mainly used for monitoring, husbandry security, and alarms for animals on a ranch.
The system comprises several AI sensors, micro base stations, a phone APP, and cloud-center software, as follows:
(II) Key method steps
Compared with the combat training and competition system, the differences and similarities are:
1. The user becomes an animal.
2. Not applicable.
3. The first sensor S1 is mounted on the animal's horn or ankle. Using an artificial-intelligence algorithm, extracting the animal's habitual-action feature data from the animal's motion data and recording it in the animal's individual profile data D5. Using an artificial-intelligence algorithm, extracting the animal's voiceprint feature data from the animal's call data and recording it in the animal's individual profile data D5. Using an artificial-intelligence algorithm, extracting the action feature data of the activity from the activity category attribute data D4 and recording it in D4.
The remainder of this item is the same as in the combat training and competition system.
4. Same.
5, 6. Not applicable.
7. When the animal's first data D1 is collected, using an artificial-intelligence algorithm, performing single-sensor animal identification from the first data D1, the association result D3-AI1, and the confidence result D3-AI2.
When the animal's first data D1 is collected, using an artificial-intelligence algorithm, performing habitual-action animal identification from the first data D1 and the habitual-action feature data.
When the collected first data D1 includes call data, using an artificial-intelligence algorithm, performing voiceprint animal identification from the call data and the voiceprint feature data.
8, 9. Not applicable.
10. The animal information system looks up an animal wearing a first sensor S1 and sends it a roll-call message; the animal's first sensor S1 responds on receipt, accomplishing roll call.
An animal wearing a first sensor S1 sends a check-in message to the animal information system through the first sensor S1 and receives a response, accomplishing check-in.
The animal information system locates an animal wearing a first sensor S1 through one or more terminals.
The first sensor S1 raises an abnormality alarm to the animal information system based on abnormal values in the first data D1.
The animal information system and the first sensor S1 communicate through the sensor network; the abnormal values include alarm trigger conditions preset for the animal and in the animal information system.
Through roll call, the position, physiological status, and movement status of a given animal can be checked; through abnormality alarms, it can be learned whether an animal has crossed a boundary, shows abnormal physiological data, is ill, and so on.
11. The first sensor S1, the terminal, and the animal information system: the first sensor S1 connects to the terminal, and the terminal connects to the animal information system and processes the data from the first sensor S1.
12. Not applicable.
13. Specifically: the first sensor S1 comprises, but is not limited to, a processor connected with a motion sensor, a physiological sensor, a user-ID generator, and a geographic coordinate sensor; the motion sensor, physiological sensor, user-ID generator, and geographic coordinate sensor each connect to the processor, and the processor connects to the terminal. The terminal connects to the animal information system by wire or by wireless sensor network, and the processor connects to the terminal by wire or by wireless sensor network.
The remainder of this item is the same as in the combat training and competition system.
14, 15. Human users become animal users; otherwise the same.
(III) Beneficial effects
1. Solves the identification of ranch animals themselves.
2. Solves boundary-crossing alarms, positioning, and illness alarms.
3. Monitors the animals' health status.
4. Introduces AI-assisted animal husbandry.

Claims (15)

  1. A method for monitoring motion data, characterized by comprising:
    a step of monitoring first data (D1) using a first sensor (S1) disposed on a user's body;
    a step of transmitting the first data (D1) to a sports information system via a sensor network;
    and/or,
    a step of processing the first data (D1).
  2. The method according to claim 1, characterized by further comprising:
    a step of monitoring second data (D2) using a second sensor (S2) disposed on target equipment while the user uses the target equipment;
    a step of simultaneously collecting, in chronological order, the first data (D1) and the second data (D2) while the user uses the target equipment, and generating associated data (D3); and/or,
    a step of transmitting the second data (D2) and the associated data (D3) to the sports information system via the sensor network; and/or,
    wherein the user includes at least: student users, coach users, opponent users, and animal users; the sensor network includes fixed terminals and mobile terminals, including micro base stations, smartphones, and PCs; the target equipment includes combat targets, balls, rackets, and sports apparatus, and use of the combat targets includes strikes on the targets with fists, feet, and body parts.
  3. The method according to claim 2, characterized in that the step of monitoring the first data (D1) using the first sensor (S1) disposed on the user's body comprises:
    a step of collecting the user's motion data using a motion sensor in the first sensor (S1); and/or,
    a step of collecting the user's motion data using a motion sensor included in the smartphone and transmitting it directly from within the smartphone to the sports information system; and/or,
    a step of collecting the user's physiological data using a physiological sensor in the first sensor (S1); and/or,
    a step of collecting pressure data using a pressure sensor in the first sensor (S1) when the user uses the target equipment and/or strikes an opponent; and/or,
    a step of generating the user's user-ID data using a user-ID generator included in the first sensor (S1); and/or,
    a step of generating the user's geographic coordinate data using a geographic coordinate sensor included in the first sensor (S1); and/or,
    the step of monitoring the second data (D2) using the second sensor (S2) disposed on the target equipment while the user uses the target equipment comprises:
    a step of collecting pressure data using a pressure sensor in the second sensor (S2) when the user uses the target equipment; and/or,
    a step of collecting position data using a position sensor in the second sensor (S2) when the user uses the target equipment; and/or,
    a step of connecting, via a unit sensor network, all first sensors (S1) worn by one user to a personal sensor network and/or a venue sensor network and/or the sports information system; and/or,
    a step of connecting, via a unit sensor network, all second sensors (S2) fitted to one set of target equipment to a personal sensor network and/or a venue sensor network and/or the sports information system; and/or,
    a step of recording the system time value (T) at which the first data (D1) and the second data (D2) are monitored into the first data (D1) and the second data (D2); and/or,
    a step of performing analog-to-digital (A/D) conversion on the first data (D1) and the second data (D2); and/or,
    a step of adjusting the sampling frequency and sampling precision of the first sensor (S1) and the second sensor (S2) according to sport category attribute data (D4); and/or,
    a step of interpolating the first data (D1) and the second data (D2) to a predetermined scale and merging the first data (D1) and/or the second data (D2) into the associated data (D3);
    wherein the first sensor (S1) is disposed at the user's wrist, ankle, joints, and/or striking positions; and/or,
    a step of extracting, with an artificial-intelligence algorithm, the user's habitual-action feature data from the user's motion data and recording it in the user's personal profile data (D5); and/or,
    a step of extracting, with the artificial-intelligence algorithm, the user's voiceprint feature data from the user's voice data and recording it in the user's personal profile data (D5); and/or,
    a step of extracting, with the artificial-intelligence algorithm, the sport's action feature data from the sport category attribute data (D4) and recording it in the sport category attribute data (D4); and/or,
    the sport category attribute data (D4) includes: sport rule data and, corresponding to the sport rule data, exertion data, level data, amplitude data, injury-severity data, duration data, physical-consumption data, physiological data, and/or competition rule data; wherein the sport rules include at least: freestyle fighting, stand-up fighting, no-holds-barred fighting, MMA, UFC, sanda, wushu, tai chi, Muay Thai, kickboxing, K-1 rules, fencing, judo, wrestling, track and field, gymnastics, and ball sports;
    the user has personal profile data (D5), which includes: the user's height, weight, body measurements, arm span, arm weight, fist weight, heart rate, blood oxygen, body temperature, lung capacity, date and time, calorie consumption, historical exercise records, historical competition results, typical motion data, strong-event data, weak-event data, voice data, voiceprint data, image data, and video data.
  4. The method according to claim 3, characterized by further comprising:
    a step of formatting the associated data (D3) according to data content including sampling type, sampling frequency, sampling precision, and data format; and/or,
    a step of decomposing, according to the characteristics of the motion, the action sequence in the motion-data part of the associated data (D3) into action units and computing unit data (D3-U);
    a step of mapping the unit data (D3-U) to a motion image: following the collection sequence, taking each collected three-axis motion-sensor sample in the unit data (D3-U) as a group and mapping each group to one pixel of the motion image;
    a step of mapping the data collected by each of the X-axis, Y-axis, and Z-axis sub-sensors of the motion sensor in the unit data (D3-U) to one motion image, mapping each collection point of each sub-sensor to one pixel of the corresponding motion image, taking the X, Y, Z three-axis data of the collection point as the independent variable x of the pixel's RGB primary-color data, establishing the RGB color-code function y=f(x), and computing the RGB primary-color data; and/or,
    a step of mapping the data collected by one sub-sensor of the motion sensor in the unit data (D3-U) to one motion image and the data collected by the other sub-sensors to channels of that motion image, mapping each collection point of each sub-sensor to one pixel of the corresponding motion image or channel, taking the X, Y, Z three-axis data of the collection point as the independent variable x of the pixel's RGB primary-color data or channel data, establishing the RGB color-code function y=f(x), and computing the RGB primary-color data or channel data; and/or,
    a step of performing deep learning on multiple motion-image data using image recognition and classification algorithms from artificial intelligence, computing the user's habitual-action features, the user's voiceprint features, the sport's action features, and the pressure-magnitude features, and comparing the feature data when the next associated data (D3) is collected; and/or,
    a step of converting the multi-image mapping and the single-image multi-channel mapping into image and video files according to image and video file formats, for display on a screen and viewing by the human eye; and/or,
    wherein the artificial-intelligence algorithm includes at least: artificial neural network algorithms, CNN convolutional neural networks, RNN recurrent neural networks, SVM support vector machines, genetic algorithms, ant-colony algorithms, simulated annealing, particle-swarm algorithms, and Bayesian algorithms;
    the RGB function includes the linear function y=kx+j and nonlinear functions, where k and j are adjustment constants.
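Outside the claim language, the sample-to-pixel mapping of claim 4 can be sketched as follows. This is a minimal illustration assuming the linear code function y = kx + j with clamping to the 0-255 pixel range; the constants k=0.5 and j=128 and the sample values are illustrative, not specified by the claim.

```python
def sample_to_pixel(sample, k=0.5, j=128):
    """Map one (x, y, z) motion-sensor sample to an (R, G, B) pixel
    via the linear code function y = k*x + j, clamped to 0..255."""
    return tuple(max(0, min(255, int(k * axis + j))) for axis in sample)

def unit_to_image(samples, width):
    """Lay out an action unit's samples row by row as a pixel image."""
    pixels = [sample_to_pixel(s) for s in samples]
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

# Four illustrative triaxial samples become a 2x2 "motion image".
samples = [(0, 0, 0), (40, -40, 200), (300, -300, 0), (10, 20, 30)]
img = unit_to_image(samples, width=2)
print(img[0][1])  # (148, 108, 228)
```

The resulting images are what the claim's image-recognition and classification algorithms would then be trained on.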
  5. The method according to claim 3 or 4, characterized by further comprising:
    a step of capturing one or more video images (D6) of the user's training or competition with one or more video image sensors (S3); and/or,
    a step of having the one or more video image sensors (S3) communicate with the sports information system via the sensor network; and/or,
    a step of performing, based on the video images (D6) and the first data (D1) and according to the position of the first sensor (S1) in the video images (D6), three-dimensional vectorized synthesis of the motion with the artificial-intelligence algorithm to obtain 3D vectorized data (D7); and/or,
    a step of associating the 3D vectorized data (D7) with the second data (D2), the associated data (D3), the sport category attribute data (D4), and/or the personal profile data (D5); and/or,
    a step of recognizing, with the artificial-intelligence algorithm and from the 3D vectorized data (D7) and the sport category attribute data (D4), the motions in the video images (D6), and synchronously annotating in the video images (D6) the time points before and after the motions;
    wherein the training or competition includes single-person training, single-person routine competition, and multi-person adversarial competition.
  6. The method according to claim 4, characterized by further comprising:
    a step in which the coach user strikes the targets with standard movements according to the sport category attribute data (D4) to obtain the coach's associated data (D3), machine learning is performed on the coach's associated data (D3) with the artificial-intelligence algorithm to obtain the coach's association result (D3-AI1) and the coach's confidence result (D3-AI2), and the coach user's personal profile data (D5) is updated; and/or,
    a step in which the student user strikes the targets according to the sport category attribute data (D4) to obtain the student's associated data (D3), machine learning is performed on the student's associated data (D3) with the artificial-intelligence algorithm to obtain the student's association result (D3-AI1) and the student's confidence result (D3-AI2), and the student user's personal profile data (D5) is updated; and/or,
    a step of cyclically comparing the student's association result (D3-AI1) with the coach's association result (D3-AI1), and cyclically comparing the student's confidence result (D3-AI2) with the coach's confidence result (D3-AI2); and/or,
    a step of computing and analyzing, from the student's association result (D3-AI1) and confidence result (D3-AI2), the student's typical motion data, strengths, weaknesses, and gaps, updating the student's personal profile data (D5), and generating and outputting training advice; and/or,
    a step of looking up the opponent user's personal profile data (D5) and the student's personal profile data (D5), comparing their typical motion data, strength data, and weakness data, computing and analyzing the gap between the two, formulating a targeted training plan, and supervising and checking the training results.
  7. The method according to claim 5, characterized by further comprising:
    a step of identifying the user, when the user's first data (D1) is collected, with the artificial-intelligence algorithm, from the first data (D1) and the association result (D3-AI1) and/or the confidence result (D3-AI2) and/or the 3D vectorized data (D7); or,
    a step of identifying the user, when the user's first data (D1) is collected, with the artificial-intelligence algorithm, from the first data (D1) and the habitual-action feature data; or,
    a step of identifying the user, when the collected first data (D1) includes the voice data, with the artificial-intelligence algorithm, from the voice data and the voiceprint feature data; or,
    a step of identifying the user, when the user's first data (D1) and second data (D2) are collected, with the artificial-intelligence algorithm, from the first data (D1) and the association result (D3-AI1) and/or the confidence result (D3-AI2) and/or the 3D vectorized data (D7); and/or,
    a step of identifying the sport category attribute data (D4), when the user's first data (D1) is collected, with the artificial-intelligence algorithm, from the user, the first data (D1), and the association result (D3-AI1) and/or the confidence result (D3-AI2) and/or the 3D vectorized data (D7); or,
    a step of identifying the sport category attribute data (D4), when the user's first data (D1) and second data (D2) are collected, with the artificial-intelligence algorithm, from the user, the first data (D1), and the association result (D3-AI1) and/or the confidence result (D3-AI2) and/or the 3D vectorized data (D7);
    a step of identifying the sport category attribute data (D4), when the user's first data (D1) is collected, with the artificial-intelligence algorithm, from the first data (D1) and the action feature data;
    a step of computing the pressure data produced by the user's striking action according to the image deep-learning step and the calibration data (D8);
    a step of having the user strike the targets and, according to Newtonian mechanics, obtaining the angular-velocity data and acceleration data from the first sensor (S1) and the pressure data from the second sensor (S2) to establish the acceleration-pressure association (D8); and/or,
    a step of pressure recognition using the acceleration-pressure association (D8) from the first data (D1) when the user strikes targets or opponents with only the first sensor (S1) and without the second sensor (S2).
  8. The method according to claim 7, characterized by further comprising:
    a step of computing, during competition or training among multiple users and according to the competition rules in the sport category attribute data (D4), each user's association result (D3-AI1) and confidence result (D3-AI2) with the artificial-intelligence algorithm;
    a step of comparing, according to the competition rules in the sport category attribute data (D4), the association results (D3-AI1) and confidence results (D3-AI2) of the multiple users, and obtaining real-time match-process data including the severity and number of heavy blows, the severity and number of injuries, counts and their number, TKOs, and KOs;
    a step of computing and outputting the match's dynamic odds and predicted-outcome data based on the match-process data.
  9. The method according to claim 2, characterized by further comprising:
    a step of having the first sensor (S1) and/or the second sensor (S2) communicate with one or more fixed terminals to compute absolute data on the spatial coordinates, velocity, and trajectory of the first sensor (S1) and/or the second sensor (S2); and/or,
    a step of having the first sensor (S1) and/or the second sensor (S2) communicate with one or more mobile terminals, first sensors (S1), and/or second sensors (S2) to compute relative data on the spatial coordinates, velocity, and trajectory of the first sensor (S1) and/or the second sensor (S2); and/or,
    a step of processing and displaying result information of the sports information system with the fixed terminals and/or mobile terminals; and/or,
    a step of sending the result information and/or live replay video of the motions to one or more display devices, so that the result information is displayed fused with the live video.
  10. The method according to claim 3, characterized by further comprising:
    a step in which the sports information system looks up the user wearing the first sensor (S1) and sends a roll-call message, and the first sensor (S1) worn by the user responds on receipt; and/or,
    a step in which the user wearing the first sensor (S1) sends a registration message to the sports information system through the first sensor (S1) and receives a response; and/or,
    a step in which the sports information system sends a notification message to the first sensor (S1) worn by the user, and the first sensor (S1), on receiving the notification, responds to the sports information system and displays and/or vibrates; and/or,
    a step in which the sports information system locates the user wearing the first sensor (S1) through one or more of the terminals; and/or,
    a step in which the user wearing the first sensor (S1) sends an alarm message to the sports information system at the user's own volition; and/or,
    a step in which the first sensor (S1) sends an alarm message to the sports information system based on abnormal values in the first data (D1); and/or,
    wherein the sports information system and the first sensor (S1) communicate via the sensor network; the abnormal values include alarm trigger conditions preset by the user and/or the sports information system.
  11. A system for monitoring motion data, characterized by comprising: a first sensor (S1), a terminal, and a sports information system; the first sensor (S1) connects to the terminal, and the terminal connects to the sports information system and processes the data from the first sensor (S1).
  12. The system according to claim 11, characterized by further comprising: a second sensor (S2) and/or a video image sensor (S3); the second sensor (S2) and the video image sensor (S3) each connect to a terminal, and the terminal connects to the sports information system.
  13. The system according to claim 11 or 12, characterized in that:
    the first sensor (S1) is formed by connecting a processor with a motion sensor and/or a physiological sensor and/or a pressure sensor and/or a user-ID generator and/or a geographic coordinate sensor; wherein the motion sensor, the physiological sensor, the pressure sensor, the user-ID generator, and the geographic coordinate sensor each connect to the processor, and the processor connects to the terminal; and/or,
    the second sensor (S2) includes a pressure sensor and a position sensor;
    the terminal connects to the sports information system by wired connection or wireless sensor network, and the processor connects to the terminal by wired connection or wireless sensor network;
    the motion sensor includes: three-axis angular-velocity sensors, three-axis acceleration sensors, three-axis magnetic sensors, electronic compass sensors, speed sensors, direction-of-motion sensors, displacement sensors, trajectory sensors, optical sensors, and combinations thereof;
    the physiological sensor includes: blood-oxygen sensors, blood-pressure sensors, pulse sensors, temperature sensors, perspiration sensors, and sound and/or optical sensors;
    the pressure sensor includes: force sensors, pressure sensors, impact-force sensors, and/or impulse sensors;
    the position sensor includes: spatial-position sensors, spatial-coordinate sensors, optical sensors, and/or cameras;
    the user-ID generator includes: a user-ID storage, editing, and transmission module;
    the geographic coordinate sensor includes: a navigation-satellite positioning module;
    the video image sensor is a visible-light and/or invisible-light camera.
  14. The system according to claim 13, wherein:
    the sensor network includes fixed terminals and mobile terminals, the terminals including micro base stations and/or phones and/or PCs; the sensor network connects by wired and wireless means;
    the micro base station includes: one or more downlink interfaces, a processor, a power subsystem, and an uplink interface, wherein the one or more downlink interfaces connect to the processor, the processor connects to the uplink interface, the power subsystem powers the downlink interfaces, the processor, and the uplink interface, the downlink interfaces connect and communicate with the first sensor (S1) and/or the second sensor (S2) and/or the video image sensor (S3) via a wireless sensor network, and the uplink interface communicates with the sports information system via a wired or wireless network;
    the sports information system includes a terminal unit and a cloud center in mutual communication; the terminal unit and the terminal are integrated or separate;
    the target equipment includes combat targets, balls, rackets, and sports apparatus, and use of the combat targets includes strikes on the targets with fists, feet, and body parts.
  15. The system according to claim 14, characterized in that the cloud center is configured such that:
    the terminal performs the downstream functions of connecting, collecting, and processing the user, the first data (D1), the second data (D2), the sport category attribute data (D4), the personal profile data (D5), and the video data (D6); handles user interaction; and assists in generating the associated data (D3), the personal profile data (D5), the 3D vectorized data (D7), and the calibration data (D8);
    the terminal performs the upstream function of transmitting data to the cloud center to form big data;
    the terminal interacts with the cloud center to perform the learning, the training, the user identification, the action recognition, and pressure recognition;
    the cloud center performs, on the big data, processing including the deep learning, data mining, classification algorithms, artificial-intelligence processing, generating the associated data (D3), the video data (D6), and the calibration data (D8), updating (D5), cloud computing, and cloud management, as well as the function of communicating with the application software;
    the sports information system is configured in the terminal and the cloud center.
PCT/CN2018/120363 2017-12-11 2018-12-11 Motion data monitoring method and system WO2019114708A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711310325.XA CN108096807A (zh) 2017-12-11 2017-12-11 Motion data monitoring method and system
CN201711310325.X 2017-12-11

Publications (1)

Publication Number Publication Date
WO2019114708A1 true WO2019114708A1 (zh) 2019-06-20

Family

ID=62208337

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/120363 WO2019114708A1 (zh) 2017-12-11 2018-12-11 Motion data monitoring method and system

Country Status (2)

Country Link
CN (1) CN108096807A (zh)
WO (1) WO2019114708A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117100255A (zh) * 2023-10-25 2023-11-24 四川大学华西医院 Neural-network-model-based fall-prevention determination method and related products

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108096807A (zh) 2017-12-11 2018-06-01 丁贤根 Motion data monitoring method and system
CN109107136A (zh) 2018-09-07 2019-01-01 广州仕伯特体育文化有限公司 Motion parameter monitoring method and device
CN109718528B (zh) * 2018-11-28 2021-06-04 浙江骏炜健电子科技有限责任公司 Identity recognition method and system based on motion characteristic parameters
CN109800860A (zh) 2018-12-28 2019-05-24 北京工业大学 Community-oriented CNN-based fall detection method for the elderly
CN109769213B (zh) * 2019-01-25 2022-01-14 努比亚技术有限公司 Method for recording user behavior tracks, mobile terminal, and computer storage medium
CN110412627A (zh) 2019-05-30 2019-11-05 沈恒 Application method for collecting boat and paddle data in flat-water events
CN110314346A (zh) 2019-07-03 2019-10-11 重庆道吧网络科技有限公司 Smart fighting gloves, foot guards, system, and method based on big-data analysis
CN110507969A (zh) 2019-08-30 2019-11-29 佛山市启明星智能科技有限公司 Taekwondo training system and method
CN114080258B (zh) * 2020-06-17 2022-08-09 华为技术有限公司 Motion model generation method and related device
TWI803833B (zh) * 2021-03-02 2023-06-01 國立屏東科技大學 Cloud-based motion-image training system for ball sports and method thereof
CN112884062B (zh) * 2021-03-11 2024-02-13 四川省博瑞恩科技有限公司 Motor-imagery classification method and system based on a CNN classification model and a generative adversarial network
CN113317783B (zh) * 2021-04-20 2022-02-01 港湾之星健康生物(深圳)有限公司 Multimode personalized horizontal-vertical calibration method
US20230060394A1 (en) * 2021-08-27 2023-03-02 Rapsodo Pte. Ltd. Intelligent analysis and automatic grouping of activity sensors
CN113996048B (зh) * 2021-11-18 2023-03-14 宜宾显微智能科技有限公司 Fight scoring system and method based on posture recognition and electronic protective-gear monitoring
CN114886387B (zh) * 2022-07-11 2023-02-14 深圳市奋达智能技术有限公司 Pressure-sensing-based calorie calculation method, system, and storage medium for walking and running
US20240078842A1 (en) * 2022-09-02 2024-03-07 Htc Corporation Posture correction system and method
CN115869608A (zh) 2022-11-29 2023-03-31 京东方科技集团股份有限公司 Fencing refereeing method, device, and system, and computer-readable storage medium
CN116269266B (zh) * 2023-05-22 2023-08-04 广州培生智能科技有限公司 AI-based health monitoring method and system for the elderly

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140270375A1 (en) * 2013-03-15 2014-09-18 Focus Ventures, Inc. System and Method for Identifying and Interpreting Repetitive Motions
CN105183152A (zh) * 2015-08-25 2015-12-23 小米科技有限责任公司 Method, device, and terminal for analyzing athletic ability
CN105453128A (zh) * 2013-05-30 2016-03-30 阿特拉斯维拉伯斯公司 Portable computing device and analyses of personal data captured therefrom
CN106823348A (zh) * 2017-01-20 2017-06-13 广东小天才科技有限公司 Exercise data management method, device, and system, and user equipment
CN107213619A (zh) * 2017-07-04 2017-09-29 曲阜师范大学 Sports training evaluation system
CN108096807A (zh) * 2017-12-11 2018-06-01 丁贤根 Motion data monitoring method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3949226B2 (ja) * 1997-06-11 2007-07-25 カシオ計算機株式会社 Impact-force estimation device, impact-force estimation method, and storage medium storing an impact-force estimation program
CN202366428U (zh) * 2011-12-22 2012-08-08 钟亚平 Digital acquisition system for taekwondo strike training
CN103463804A (zh) * 2013-09-06 2013-12-25 南京物联传感技术有限公司 Boxing training sensing system and method
KR20160074289A (ko) * 2014-12-18 2016-06-28 조선아 Hit determination apparatus and method
CN107126680A (zh) * 2017-06-13 2017-09-05 广州体育学院 Running monitoring and voice reminder system based on motion sensors



Also Published As

Publication number Publication date
CN108096807A (zh) 2018-06-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18888948

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18888948

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/11/2020)
