CN116747495A - Action counting method and device, terminal equipment and readable storage medium - Google Patents

Action counting method and device, terminal equipment and readable storage medium

Info

Publication number
CN116747495A
Authority
CN
China
Prior art keywords
data
speed information
preset
user
moment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310716784.7A
Other languages
Chinese (zh)
Inventor
游伟强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianhai Yanxiang Asia Pacific Electronic Equipment Technology Co ltd
Original Assignee
Shenzhen Qianhai Yanxiang Asia Pacific Electronic Equipment Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhai Yanxiang Asia Pacific Electronic Equipment Technology Co ltd filed Critical Shenzhen Qianhai Yanxiang Asia Pacific Electronic Equipment Technology Co ltd
Priority to CN202310716784.7A priority Critical patent/CN116747495A/en
Publication of CN116747495A publication Critical patent/CN116747495A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0075 Means for generating exercise programs or schemes, e.g. computerized virtual trainer, e.g. using expert databases
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B2071/0658 Position or arrangement of display
    • A63B2071/0661 Position or arrangement of display arranged on the user
    • A63B2220/00 Measuring of physical parameters relating to sporting activity
    • A63B2220/30 Speed
    • A63B2220/80 Special sensors, transducers or devices therefor
    • A63B2220/803 Motion sensors
    • A63B2225/00 Miscellaneous features of sport apparatus, devices or equipment
    • A63B2225/20 Miscellaneous features of sport apparatus, devices or equipment with means for remote communication, e.g. internet or the like
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G06N3/045 Combinations of networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/0499 Feedforward networks
    • G06N3/08 Learning methods
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to the technical field of artificial intelligence and provides an action counting method, an action counting device, terminal equipment and a readable storage medium. The action counting method comprises the following steps: acquiring speed information at each moment in a preset period, wherein the speed information is data collected by a sensor on a wearable device worn by the user; extracting valid data from the speed information, wherein the valid data are the data corresponding to the moments when the user is in a motion state; inputting the valid data into a preset model to obtain the motion type output by the preset model; and determining the number of the user's actions in the preset period according to the motion type corresponding to each moment. By implementing the application, different types of fitness movements can be recognized and counted, improving the fitness experience.

Description

Action counting method and device, terminal equipment and readable storage medium
Technical Field
The application relates to the technical field of artificial intelligence, and particularly provides an action counting method, an action counting device, terminal equipment and a readable storage medium.
Background
At present, the pace of life is quickening and more and more people enjoy fitness training, especially in gyms, strengthening muscles through a variety of exercises, including the abdominal trainer, seated chest press, lat pulldown and shoulder press, which exercise muscles in different parts of the body. Many exercisers currently count repetitions with a counter. However, in the prior art, counting by reading the number displayed on a counting device is inaccurate, makes it inconvenient for users to track their exercise, and results in low fitness efficiency.
Disclosure of Invention
The application aims to provide an action counting method, an action counting device, terminal equipment and a readable storage medium, so as to solve the existing problem of low efficiency in manually counting fitness movements.
In order to achieve the above purpose, the application adopts the following technical scheme:
In a first aspect, the present application provides an action counting method, comprising: acquiring speed information at each moment in a preset period, wherein the speed information is data collected by a sensor on a wearable device worn by the user; extracting valid data from the speed information, wherein the valid data are the data corresponding to the moments when the user is in a motion state; inputting the valid data into a preset model to obtain the motion type output by the preset model; and determining the number of the user's actions in the preset period according to the motion type corresponding to each moment.
In one embodiment, extracting the valid data from the speed information includes: calculating a first Pearson coefficient of the speed information; and performing window segmentation on the speed information according to the first Pearson coefficient to generate the valid data.
In one embodiment, performing window segmentation on the speed information according to the first Pearson coefficient to generate the valid data includes: if the first Pearson coefficient is greater than a first preset threshold, sliding the segmentation window with the overlap ratio equal to a first overlap value; if the first Pearson coefficient is less than or equal to the first preset threshold, sliding the segmentation window with the overlap ratio equal to a second overlap value, wherein the first overlap value is smaller than the second overlap value; calculating a second Pearson coefficient of the data within the segmentation window; if the second Pearson coefficient is less than or equal to a second preset threshold, taking the second Pearson coefficient as the new first Pearson coefficient; and if the second Pearson coefficient is greater than the second preset threshold, determining the valid data according to the data within the segmentation window.
In one embodiment, after extracting the valid data from the speed information, the method further includes: normalizing the valid data.
In one embodiment, the predetermined model is an LSTM-CNN model.
In one embodiment, the determining the number of actions of the user in the preset period according to the motion type corresponding to each moment includes: determining the times of preset action types according to the motion types corresponding to each moment; and taking the times of the preset action types as the action times of the user in the preset period.
In one embodiment, the method further comprises: and outputting the current action times.
In a second aspect, the present application provides an action counting device comprising: an acquisition module for acquiring speed information at each moment in a preset period, wherein the speed information is data collected by a sensor on a wearable device worn by the user; a valid-data extraction module for extracting valid data from the speed information, wherein the valid data are the data corresponding to the moments when the user is in a motion state; a recognition module for inputting the valid data into a preset model to obtain the motion type output by the preset model; and a count determination module for determining the number of the user's actions in the preset period according to the motion type corresponding to each moment.
In a third aspect, the present application also provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor; the processor, when executing the computer program, implements the action counting method of the first aspect described above.
In a fourth aspect, the present application also provides a computer readable storage medium storing a computer program which when executed by a processor implements the action counting method of the first aspect.
The application has the beneficial effects that:
the motion counting method, the motion counting device, the terminal equipment and the readable storage medium provided by the application can identify different types of body-building motions and count the body-building motions, and improve body-building experience.
Specifically, speed information at each moment in a preset period is first acquired, where the speed information is data collected by a sensor on a wearable device worn by the user, providing the data needed to recognize exercise actions. Valid data are then extracted from the speed information to determine the data corresponding to the moments when the user is in a motion state. The valid data are input into a preset model to obtain the motion type output by the model, so that different exercise motion types are recognized. Finally, the number of the user's actions in the preset period is determined according to the motion type corresponding to each moment. Through this process, the number of fitness movements is counted automatically without manual counting, improving the user's fitness experience.
It will be appreciated that the apparatus, the terminal device and the readable storage medium, which can implement the above method, have the same advantageous effects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an action counting method according to a first embodiment of the present application;
FIG. 2 is a flowchart of extracting valid data (step S12) in the action counting method according to the first embodiment of the present application;
FIG. 3 is a flowchart of window segmentation (steps S1221 to S1225) in the action counting method according to the first embodiment of the present application;
FIG. 4 is a flowchart of determining the number of actions (step S14) in the action counting method according to the first embodiment of the present application;
FIG. 5 is a diagram of a model structure of a second embodiment of the motion counting method of the present application;
FIG. 6 is a block diagram of an embodiment of an action counting device of the present application;
fig. 7 is a block diagram of the terminal device according to the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The action counting method provided by the embodiment of the application can be applied to terminal equipment such as mobile phones, tablet computers and the like, and the embodiment of the application does not limit the specific type of the terminal equipment.
In order to explain the technical scheme of the application, the following examples are used for illustration.
Example 1
Referring to fig. 1, an action counting method provided by an embodiment of the present application includes:
step S11, acquiring speed information of each moment in a preset period, wherein the speed information is data acquired by a sensor on wearing equipment worn by a user.
The smart wearable device may be worn on the wrist, arm, waist, ankle, thigh and so on. The speed information collected by the sensor on the smart wearable device may include data such as movement direction and acceleration. Acceleration data during different exercises may be collected, for example, by a 6-axis accelerometer in the smart wearable device. The 6-axis accelerometer collects speed information at each moment in the preset period; for example, 25 acceleration samples are collected per second, i.e. the speed information is sampled at a frequency of 25 Hz.
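As a rough, hypothetical sketch (the sensor stream and its API are assumptions, not part of the application), buffering 25 Hz accelerometer samples over a preset period might look like this:

```python
from collections import deque

SAMPLE_RATE_HZ = 25   # sampling frequency stated in the text
WINDOW_LEN = 65       # segmentation-window length used later (65 samples)

def buffer_samples(stream, period_s):
    """Collect 6-axis accelerometer samples for `period_s` seconds at 25 Hz.

    `stream` is any iterable yielding (ax, ay, az, gx, gy, gz) tuples;
    a real device driver would replace it.
    """
    n = SAMPLE_RATE_HZ * period_s
    buf = deque(maxlen=n)
    for i, sample in enumerate(stream):
        if i >= n:
            break
        buf.append(sample)
    return list(buf)

# one second of fake sensor data yields 25 samples
fake = ((0.0,) * 6 for _ in range(1000))
print(len(buffer_samples(fake, 1)))  # 25
```

At 25 Hz, one 65-sample segmentation window covers 2.6 seconds of motion.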
And step S12, extracting effective data in the speed information, wherein the effective data is data corresponding to the moment when the user is in a motion state.
Referring to fig. 2, in one embodiment, extracting the valid data from the speed information includes the following steps:
step S121, calculating a first pearson coefficient of the speed information.
This step computes an initial Pearson coefficient for the subsequent determination. The Pearson coefficient is an index that measures the linear correlation of data; its value lies between -1 and 1 and reflects the degree of linear correlation between variables. The Pearson coefficient is therefore used to measure correlation: the similarity between a data segment and the stored, manually segmented motion data can be calculated to perform the data interception. The Pearson coefficient S is calculated as follows:
S = Cov(data, mdata) / (σ_data · σ_mdata)

wherein Cov(data, mdata) is the covariance between the data to be segmented and the manually segmented motion data, σ_data is the standard deviation of the data to be segmented, and σ_mdata is the standard deviation of the manually segmented motion data. The manually segmented motion data are the speed information of the user for different motion types, e.g. acceleration data collected during the abdominal trainer, seated chest press, lat pulldown and shoulder press exercises, manually segmented into template data. By calculating the Pearson coefficient, the correlation between the data to be segmented and the template data can be determined.
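The Pearson coefficient formula above can be sketched directly in Python (a minimal illustration, not the application's implementation):

```python
import math

def pearson(data, mdata):
    """Pearson coefficient S = Cov(data, mdata) / (sigma_data * sigma_mdata)."""
    n = len(data)
    mx = sum(data) / n
    my = sum(mdata) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(data, mdata)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in data) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in mdata) / n)
    return cov / (sx * sy)
```

Two perfectly linearly related segments give S = 1, inversely related segments give S = -1, so comparing S against a threshold measures how closely a candidate window matches a motion template.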
And step S122, window segmentation is carried out on the speed information according to the first Pearson coefficient, and the effective data are generated.
Window segmentation divides the speed information using a segmentation window of fixed length. Specifically, segmentation can be performed by calculating the similarity between the data segment to be segmented and a pre-stored, manually segmented motion data template, so that the valid data corresponding to a specific motion type are cut out of the speed information. This similarity between the data segment and the template is measured by the first Pearson coefficient, and the speed information is window-segmented accordingly to generate the valid data.
Referring to fig. 3, in one embodiment, performing window segmentation on the speed information according to the first Pearson coefficient to generate the valid data includes:
Step S1221, if the first Pearson coefficient is greater than a first preset threshold, the segmentation window is slid with the overlap ratio equal to a first overlap value.
Step S1222, if the first Pearson coefficient is less than or equal to the first preset threshold, the segmentation window is slid with the overlap ratio equal to a second overlap value, where the first overlap value is smaller than the second overlap value.
In application, the length of the segmentation window may be 65 samples, i.e. 65 acceleration data points are contained within the segmentation window. The first overlap value may be 0 and the second overlap value may be 70%.
Setting different overlap ratios allows the speed information to be fully matched against the manually segmented data without missing any data. A first Pearson coefficient greater than the first preset threshold indicates that the speed information in the current segmentation window was already intercepted in the previous judgment period and determined to be valid data; sliding the segmentation window with an overlap ratio of 0 therefore causes no valid data to be missed. When the first Pearson coefficient is less than or equal to the first preset threshold, the window is slid with an overlap ratio of 70%, so that 70% of the speed information in the pre-sliding window is retained in the post-sliding window, which helps ensure that valid data are not missed.
Sliding the segmentation window with a given overlap ratio means that the windows before and after sliding overlap by that ratio. Taking an overlap ratio of 0 as an example: suppose the window before sliding intercepts the 1st to 65th speed samples; after sliding with overlap 0 it intercepts the 66th to 130th samples, i.e. the windows before and after sliding share no samples. Taking an overlap ratio of 70% as an example: suppose the window before sliding intercepts the 1st to 65th samples; after sliding with overlap 70% it intercepts roughly the 20th to 85th samples, i.e. about 70% of the samples are shared between the two windows.
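A small sketch of the sliding step (the rounding of the step size is an assumption; the text only fixes the window length of 65 and the two overlap values):

```python
WINDOW_LEN = 65  # segmentation-window length in samples

def next_window_start(cur_start, overlap):
    """Slide the segmentation window so the old and new windows share
    `overlap` (0.0 or 0.7 in the text) of their samples."""
    step = round(WINDOW_LEN * (1 - overlap))
    return cur_start + step

# overlap 0.0: windows are disjoint (step of 65 samples)
# overlap 0.7: roughly 70% of the previous window is retained
```

With overlap 0 the window jumps a full 65 samples; with overlap 0.7 it advances only about 20 samples, matching the "20th to 85th" example above.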
Step S1223, calculating a second pearson coefficient of the data within the segmentation window.
The second Pearson coefficient is calculated in the same way as the first Pearson coefficient; refer to the method described above for details.
Step S1224, if the second pearson coefficient is less than or equal to the second preset threshold, determining the second pearson coefficient as the new first pearson coefficient.
Step S1225, if the second pearson coefficient is greater than the second preset threshold, determining the valid data according to the data in the segmentation window.
It should be noted that if the second Pearson coefficient is less than or equal to the second preset threshold, it is taken as the new first Pearson coefficient and the process returns to step S1221; if the second Pearson coefficient is greater than the second preset threshold, the data within the segmentation window may be determined to be valid data.
In the above embodiment, calculating the Pearson coefficient determines whether the data intercepted by the current segmentation window are potential motion data.
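Putting steps S1221 to S1225 together, a minimal sketch of the segmentation loop (the threshold values 0.5 and 0.8 and the exact update of the first coefficient are assumptions for illustration):

```python
import math

def pearson(a, b):
    """Pearson coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sa = math.sqrt(sum((x - ma) ** 2 for x in a) / n)
    sb = math.sqrt(sum((y - mb) ** 2 for y in b) / n)
    return cov / (sa * sb) if sa and sb else 0.0

def segment(speed_info, template, win=65, th1=0.5, th2=0.8):
    """Sketch of steps S1221-S1225: slide a window over `speed_info` and
    keep windows whose similarity to the manually segmented `template`
    exceeds the second threshold th2."""
    valid, start, s1 = [], 0, 0.0
    while start + win <= len(speed_info):
        window = speed_info[start:start + win]
        s2 = pearson(window, template)   # step S1223
        if s2 > th2:
            valid.append(window)         # step S1225: valid data
        s1 = s2                          # step S1224: s2 becomes the new s1
        # steps S1221/S1222: choose the overlap from the current coefficient
        overlap = 0.0 if s1 > th1 else 0.7
        start += max(1, round(win * (1 - overlap)))
    return valid
```

A signal consisting of three repetitions of the template yields three valid windows, while a flat signal yields none.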
In one embodiment, after the extracting the valid data in the speed information, the method further includes:
and carrying out normalization processing on the effective data.
Different feature data often have different dimensions and units, which can affect the results of data analysis. To eliminate these dimensional effects and make the data indicators comparable, normalization is required. After normalization, the adverse effect of singular samples is eliminated, gradient descent converges to the optimal solution faster, and accuracy may improve.
In application, the data may be normalized using the Z-score algorithm. Specifically, normalizing the valid data includes: for each data point X_i within the segmentation window, normalization is performed using the following formula to generate the preprocessed data Y_i:

Y_i = (X_i - mean(X)) / std(X)

wherein X_i is a data point within the segmentation window, mean(X) is the average of the data within the segmentation window, and std(X) is the standard deviation of the data within the segmentation window.
After this transformation, the data follow a distribution with mean 0 and standard deviation 1.
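A minimal sketch of the Z-score normalization described above:

```python
import math

def zscore(window):
    """Normalize a segmentation window: Y_i = (X_i - mean(X)) / std(X)."""
    n = len(window)
    mean = sum(window) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
    return [(x - mean) / std for x in window]
```

Applied to any non-constant window, the result has mean 0 and standard deviation 1, so all sensor axes become comparable regardless of their original scale.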
And S13, inputting the effective data into a preset model to obtain the output motion type of the preset model.
The preset model may be a deep learning model that recognizes the valid data at each moment and outputs the probability of each recognized motion type. Different deep learning models, such as convolutional neural network models, may be selected as needed. Before the preset model is used for inference, it must be trained with sample data: at each training iteration, the probability output by the model is compared with a preset condition, and the model parameters are adjusted until the output probability satisfies the condition, after which the preset model can be used directly for inference.
Step S14, determining the action times of the user in the preset period according to the motion types corresponding to each moment.
Referring to fig. 4, in one embodiment, the determining the number of actions of the user in the preset period according to the motion type corresponding to each moment includes:
step S141, determining the times of the preset action types according to the motion types corresponding to each moment.
In step S142, the number of times of the preset action type is used as the number of times of the user action in the preset period.
The user may create a list in a database to record the counts of different motion types. Each time the preset model recognizes a motion type, such as the abdominal trainer, seated chest press, lat pulldown or shoulder press, the count of the corresponding motion type in the list is incremented by one, and the action count for that type is output in time.
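A minimal sketch of the per-type counter (the motion-type labels are illustrative English renderings, and a real system would persist the counts in a database):

```python
from collections import Counter

# example labels for the exercises named in the text
ACTION_TYPES = {"abdominal trainer", "seated chest press",
                "lat pulldown", "shoulder press"}

counts = Counter()

def record(motion_type):
    """Increment the count each time the model recognizes one repetition
    of a known motion type; return the current count for that type."""
    if motion_type in ACTION_TYPES:
        counts[motion_type] += 1
    return counts[motion_type]
```

Each model output simply drives one `record` call, so querying `counts` at the end of the preset period yields the number of actions per type.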
In application, besides using data collected by the smart wearable device to identify action counts per action type, parameters of the fitness equipment, such as its load weight and running speed, can be combined to comprehensively analyze the quality of the user's workout for health management. For example, for the seated chest press, the number of repetitions can be combined with the load weight and running speed of the chest press machine: a workout intensity value is computed by weighting these factors differently, and a threshold is set. When the intensity value reaches the threshold, a prompt tone notifies the user to rest or switch to another exercise, improving the workout effect.
In one embodiment, the method further comprises: and outputting the current action times.
In application, the user may configure several output scenarios for the action counts as required. For example, during an ongoing exercise, the repetition count of the current motion type may be counted and displayed in real time. Alternatively, the counts for each motion type may be tallied after several exercises, and the user may query the count for a specific motion type. The energy consumed by the user may also be output, based on the set of motions performed and the energy cost corresponding to each motion.
The embodiment of the application can identify and count different types of body-building actions and improve body-building experience.
Specifically, speed information at each moment in a preset period is first acquired, where the speed information is data collected by a sensor on a wearable device worn by the user, providing the data needed to recognize exercise actions. Valid data are then extracted from the speed information to determine the data corresponding to the moments when the user is in a motion state. The valid data are input into a preset model to obtain the motion type output by the model, so that different exercise motion types are recognized. Finally, the number of the user's actions in the preset period is determined according to the motion type corresponding to each moment. Through this process, the number of fitness movements is counted automatically without manual counting, improving the user's fitness experience.
Example two
This embodiment provides an action counting method that includes steps S11 to S14 of the first embodiment and elaborates further on it. For parts identical or similar to the first embodiment, refer to the description of the first embodiment; they are not repeated here.
In one embodiment, the predetermined model is an LSTM-CNN model.
The waveform of acceleration data during fitness exercise follows a strong spatio-temporal pattern. A long short-term memory recurrent neural network (Long Short-Term Memory, LSTM) can capture the temporal pattern of the motion and achieves satisfactory results in the prediction and classification of time-series data. Convolutional neural networks (Convolutional Neural Network, CNN) are widely applied to feature extraction, avoiding the inaccuracy of subjective, hand-crafted features. Since the LSTM network handles time series well and the CNN network handles multi-class classification well, the collected posture data are classified and recognized by a combined LSTM-CNN: the LSTM extracts the temporal features of the data and the CNN extracts the local spatial features.
Fig. 5 shows the structure of an LSTM cell. At time t, x_t denotes the input of the neuron, C_t the memory content, and h_t the output; the output of the forget gate is f_t. The input gate computes a new candidate memory cell C̃_t, which is combined with the old memory passed by the forget gate to generate the current memory C_t. The specific formulas are as follows:
f_t = σ(W_f·[x_t, h_(t-1)] + b_f)
i_t = σ(W_i·[x_t, h_(t-1)] + b_i)
C̃_t = tanh(W_c·[x_t, h_(t-1)] + b_c)
C_t = f_t * C_(t-1) + i_t * C̃_t
where σ denotes the Sigmoid activation function; f_t and i_t are the activation values of the forget gate and the input gate; W_f, W_i and W_c are weight matrices; and b_f, b_i and b_c are the corresponding bias terms.
The output of the LSTM cell is determined by the output gate:
O_t = σ(W_o·[x_t, h_(t-1)] + b_o)
h_t = O_t * tanh(C_t)
where O_t is the activation value of the output gate, and h_t is the output value of the neuron at time t.
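As a concrete illustration, the gate equations above can be sketched for a scalar state in plain Python. This is a toy sketch with assumed weight values, not the patent's implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM cell step for a scalar state, following the gate equations
    above. W maps each gate to its (input weight, recurrent weight) pair."""
    f_t = sigmoid(W["f"][0] * x_t + W["f"][1] * h_prev + b["f"])      # forget gate
    i_t = sigmoid(W["i"][0] * x_t + W["i"][1] * h_prev + b["i"])      # input gate
    c_hat = math.tanh(W["c"][0] * x_t + W["c"][1] * h_prev + b["c"])  # candidate memory
    c_t = f_t * c_prev + i_t * c_hat                                  # new memory content
    o_t = sigmoid(W["o"][0] * x_t + W["o"][1] * h_prev + b["o"])      # output gate
    h_t = o_t * math.tanh(c_t)                                        # cell output
    return h_t, c_t

# Toy weights chosen only for illustration.
W = {g: (0.5, 0.5) for g in "fico"}
b = {g: 0.0 for g in "fico"}
h_t, c_t = lstm_step(1.0, 0.0, 0.0, W, b)
print(h_t, c_t)
```

In a real model each gate's weights are matrices learned from the training data; the scalar form is used here only to make the data flow through the gates explicit.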
The LSTM-CNN network combines an LSTM network and a CNN network: two LSTM layers of 64 neurons, two convolutional layers and two pooling layers, followed by a fully connected layer and a softmax classifier. Normalized 6-axis acceleration data of size 65 × 6 are fed into the two LSTM layers, L1 and L2. The LSTM output has size 32 × 65; the L2 layer is followed by a two-layer CNN whose convolutional layers C1 and C2 have 64 and 128 kernels respectively, with kernel size 1 × 3 and stride 1. Each convolution is followed by a max-pooling layer, which reduces the dimensionality of the feature map and compresses it. Finally, the fully connected layer feeds the softmax classifier, which outputs a probability value for each motion type; the motion type with the largest probability is taken as the recognized type. In this way, the LSTM-CNN network can recognize and count movements such as the abdominal-muscle training chair, seated chest press, lat pulldown, and shoulder/arm raise.
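The feature-map sizes quoted above can be sanity-checked with simple shape arithmetic along the time axis. The 'same' padding of the convolutions and the width-2 max pooling are assumptions, since the text does not state them:

```python
def conv_out_len(n, kernel=3, stride=1, padding=1):
    """Output length of a 1-D convolution along the time axis."""
    return (n + 2 * padding - kernel) // stride + 1

def pool_out_len(n, size=2, stride=2):
    """Output length after max pooling."""
    return (n - size) // stride + 1

t = 65                                   # time steps in one normalized window
t = conv_out_len(t, kernel=3, stride=1)  # C1 with 'same' padding keeps 65
t = pool_out_len(t)                      # first max-pool halves it to 32
t = conv_out_len(t, kernel=3, stride=1)  # C2 keeps 32
t = pool_out_len(t)                      # second max-pool halves it again
print(t)
```

Under these assumptions, the two pooling layers reduce the 65-step window to 16 steps before the fully connected layer and softmax classifier.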
In another embodiment, the preset model may instead be a feed-forward back-propagation neural network (BP) model. Before the BP network is used, the nodes at which the wearable devices are worn are divided into an upper-body group and a lower-body group. For the lower-body group, the features (mean, variance, energy) of the valid data from the ankle and thigh nodes are extracted to train a BP network that classifies lower-body states. For the upper-body group, the features (mean, variance, energy) of the valid data from the wrist, arm, and waist nodes are extracted, and a separate upper-body BP network is designed for each lower-body state to recognize the upper-body action; the final fitness action is then inferred from the two.
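The three node features named above (mean, variance, energy) can be computed directly. This is a minimal sketch in which 'energy' is taken as the sum of squared samples, a common convention the patent does not spell out, and the sample values are hypothetical:

```python
from statistics import mean, pvariance

def node_features(samples):
    """Mean, variance, and energy of one node's valid data, the three
    features the feed-forward network is described as using."""
    return {
        "mean": mean(samples),
        "variance": pvariance(samples),       # population variance
        "energy": sum(s * s for s in samples),  # sum of squared samples
    }

wrist = [0.2, 0.4, 0.6, 0.4]  # hypothetical wrist-node samples
features = node_features(wrist)
print(features)
```

In the scheme described above, one such feature triple would be computed per node (ankle, thigh, wrist, arm, waist) and concatenated into the input vector of the corresponding BP network.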
Embodiments of the present application can recognize and count different types of fitness actions, improving the user's fitness experience.
Specifically, speed information is first acquired for each moment in a preset period. The speed information is data collected by a sensor on a wearable device worn by the user, and provides the data needed to identify exercise actions. Valid data, that is, the data corresponding to moments when the user is in a motion state, are then extracted from the speed information and input into a preset model, and the motion type output by the model identifies the different types of exercise motion. Finally, the number of actions performed by the user within the preset period is determined from the motion type corresponding to each moment. Through this process, the number of exercise repetitions is counted automatically, with no need for manual counting, which improves the user's fitness experience.
In addition to the above beneficial effects, this embodiment classifies and recognizes the collected valid data with a combined LSTM-CNN. Because the LSTM network handles time series well and the CNN network handles multi-class classification well, using the LSTM to extract the temporal features of the gait data and the CNN to extract the local spatial features achieves a good recognition and classification effect.
Example III
Fig. 6 shows a block diagram of an action counting device 60 according to an embodiment of the present application, corresponding to the action counting method described in the above embodiments. The device may be a virtual appliance in the terminal device, executed by the processor of the terminal device, or integrated in the terminal device itself. For convenience of explanation, only the portions relevant to the embodiments of the present application are shown.
The action counting device 60 according to the embodiment of the present application includes:
the acquiring module 61 is configured to acquire speed information at each moment in a preset period, where the speed information is data acquired by a sensor on a wearable device worn by a user;
the effective data extracting module 62 is configured to extract effective data in the speed information, where the effective data is data corresponding to a moment when the user is in a motion state;
the identification module 63 is configured to input the valid data into a preset model, and obtain an output motion type of the preset model;
the number determining module 64 is configured to determine the number of actions of the user in the preset period according to the motion type corresponding to each moment.
In one embodiment, the valid data extraction module comprises:
a first coefficient calculation unit, configured to calculate a first Pearson coefficient of the speed information;
and a window segmentation unit, configured to perform window segmentation on the speed information according to the first Pearson coefficient to generate the valid data.
In one embodiment, the window segmentation unit includes:
the first sliding subunit is used for sliding the division window according to the coincidence degree as a first coincidence value if the first pearson coefficient is larger than a first preset threshold value;
the second sliding subunit is configured to slide the split window according to a second overlap ratio, where the overlap ratio is a second overlap value, if the first pearson coefficient is smaller than or equal to the first preset threshold, and the first overlap value is smaller than the second overlap value;
a second coefficient calculating subunit, configured to calculate a second pearson coefficient of the data in the partition window;
a loop subunit, configured to determine the second pearson coefficient as a new first pearson coefficient if the second pearson coefficient is less than or equal to a second preset threshold;
and the effective data determining subunit is used for determining the effective data according to the data in the dividing window if the second pearson coefficient is larger than the second preset threshold value.
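The adaptive-overlap idea implemented by these subunits can be sketched loosely as follows. The patent does not specify between which signals the Pearson coefficients are computed, so this sketch assumes correlation between the current window and the previous one, and omits the second-coefficient loop:

```python
from statistics import mean

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def segment(speed, win=4, threshold=0.8, low_overlap=0.25, high_overlap=0.5):
    """Slide a window over the signal; high correlation with the previous
    window (periodic motion) allows a larger slide, i.e. a smaller overlap."""
    segments, i = [], 0
    prev = speed[:win]
    while i + win <= len(speed):
        cur = speed[i:i + win]
        overlap = low_overlap if pearson(prev, cur) > threshold else high_overlap
        segments.append(cur)
        prev = cur
        i += max(1, int(win * (1 - overlap)))
    return segments

sig = [0, 1, 0, -1] * 4  # a toy periodic signal
segs = segment(sig)
print(len(segs))
```

The window length, thresholds, and overlap values here are illustrative; in the described device they would be the first/second preset thresholds and the first/second overlap values.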
In one embodiment, following the valid data extraction module, the device further includes:
a normalization module, configured to perform normalization processing on the valid data.
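A minimal sketch of the normalization step, assuming min-max scaling (the patent does not name the exact scheme):

```python
def min_max_normalize(data, lo=0.0, hi=1.0):
    """Scale the valid data into [lo, hi]; constant data maps to lo."""
    d_min, d_max = min(data), max(data)
    if d_max == d_min:
        return [lo for _ in data]
    scale = (hi - lo) / (d_max - d_min)
    return [lo + (x - d_min) * scale for x in data]

print(min_max_normalize([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```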
In one embodiment, the preset model is an LSTM-CNN model.
In one embodiment, the number determining module includes:
a preset action count determining unit, configured to determine the number of occurrences of a preset action type according to the motion type corresponding to each moment;
and an action count setting unit, configured to take the number of occurrences of the preset action type as the number of actions of the user in the preset period.
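A minimal sketch of this counting logic, assuming that a run of consecutive identical labels counts as one action (the patent only states that the occurrence count of the preset type is used); the label names are hypothetical:

```python
def count_preset_actions(motion_types, preset):
    """Count occurrences of the preset action type in the per-moment
    classification sequence; a run of consecutive identical labels is
    treated as a single action."""
    count, prev = 0, None
    for label in motion_types:
        if label == preset and prev != preset:  # rising edge into the preset type
            count += 1
        prev = label
    return count

# Hypothetical per-moment labels from the recognition module.
labels = ["idle", "squat", "squat", "idle", "squat", "push", "squat"]
print(count_preset_actions(labels, "squat"))
```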
In one embodiment, the device further comprises:
a count output module, configured to output the current number of actions.
Embodiments of the present application can recognize and count different types of fitness actions, improving the user's fitness experience.
Specifically, the acquisition module first acquires speed information for each moment in a preset period; the speed information is data collected by a sensor on a wearable device worn by the user, and provides the data needed to identify fitness actions. The valid data extraction module then extracts the valid data, that is, the data corresponding to moments when the user is in a motion state, and the identification module inputs the valid data into a preset model to obtain the motion type output by the model, identifying the different types of fitness motion. Finally, the number determining module determines the number of actions of the user in the preset period according to the motion type corresponding to each moment. Through this process, the number of exercise repetitions is counted automatically, with no need for manual counting, which improves the user's fitness experience.
Example IV
As shown in fig. 7, the present application further provides a terminal device 70 comprising a memory 71, a processor 72, and a computer program 73 stored in the memory and executable on the processor. When executing the computer program 73, the processor 72 implements the steps of the action counting method embodiments described above, such as the method steps of embodiment one and/or embodiment two, and implements the functions of the modules in the above device embodiments, for example the functions of the modules and units in embodiment three.
Illustratively, the computer program 73 may be partitioned into one or more modules that are stored in the memory 71 and executed by the processor 72 to perform the first, second, and/or third embodiments of the present application. The one or more modules may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 73 in the terminal device 70. For example, the computer program 73 may be divided into an acquisition module, a valid data extraction module, an identification module, a number determination module, and the like, and specific functions of each module are described in the third embodiment, and are not described herein.
The terminal device 70 may be an access terminal device. The terminal device may include, but is not limited to, a memory 71, a processor 72. It will be appreciated by those skilled in the art that fig. 7 is merely an example of a terminal device 70 and is not intended to limit the terminal device 70, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the terminal device may further include an input-output device, a network access device, a bus, etc.
The memory 71 may be an internal storage unit of the terminal device 70, such as a hard disk or a memory of the terminal device 70. The memory 71 may be an external storage device of the terminal device 70, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 70. Further, the memory 71 may also include both an internal storage unit and an external storage device of the terminal device 70. The memory 71 is used for storing the computer program as well as other programs and data required by the terminal device. The memory 71 may also be used for temporarily storing data that has been output or is to be output.
The processor 72 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or recorded in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium may be appropriately adjusted according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A method of action counting, the method comprising:
acquiring speed information of each moment in a preset period, wherein the speed information is data acquired by a sensor on wearing equipment worn by a user;
extracting valid data from the speed information, wherein the valid data are the data corresponding to moments when the user is in a motion state;
inputting the valid data into a preset model to obtain a motion type output by the preset model;
and determining the number of actions of the user in the preset period according to the motion type corresponding to each moment.
2. The method of claim 1, wherein the extracting valid data from the speed information comprises:
calculating a first Pearson coefficient of the speed information;
and performing window segmentation on the speed information according to the first Pearson coefficient to generate the valid data.
3. The method of claim 2, wherein the performing window segmentation on the speed information according to the first Pearson coefficient to generate the valid data comprises:
if the first Pearson coefficient is greater than a first preset threshold, sliding the segmentation window with an overlap ratio equal to a first overlap value;
if the first Pearson coefficient is less than or equal to the first preset threshold, sliding the segmentation window with an overlap ratio equal to a second overlap value, wherein the first overlap value is smaller than the second overlap value;
calculating a second Pearson coefficient of the data in the segmentation window;
if the second Pearson coefficient is less than or equal to a second preset threshold, taking the second Pearson coefficient as a new first Pearson coefficient;
and if the second Pearson coefficient is greater than the second preset threshold, determining the valid data from the data in the segmentation window.
4. A method according to claim 3, further comprising, after extracting valid data in the speed information:
and carrying out normalization processing on the effective data.
5. The method of claim 1, wherein the predetermined model is an LSTM-CNN model.
6. The method according to claim 1, wherein the determining the number of actions of the user in the preset period according to the motion type corresponding to each moment includes:
determining the number of occurrences of a preset action type according to the motion type corresponding to each moment;
and taking the number of occurrences of the preset action type as the number of actions of the user in the preset period.
7. The method according to any one of claims 1 to 6, further comprising:
and outputting the current number of actions.
8. An action counting device, comprising:
the acquisition module is used for acquiring speed information of each moment in a preset period, wherein the speed information is data acquired by a sensor on wearing equipment worn by a user;
the effective data extraction module is used for extracting effective data in the speed information, wherein the effective data is data corresponding to the moment when the user is in a motion state;
the identification module is used for inputting the effective data into a preset model to obtain the output motion type of the preset model;
the frequency determining module is used for determining the action frequency of the user in the preset period according to the motion type corresponding to each moment.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and operable on the processor; the processor, when executing the computer program, implements the action counting method of any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the action counting method of any one of claims 1 to 7.
CN202310716784.7A 2023-06-15 2023-06-15 Action counting method and device, terminal equipment and readable storage medium Pending CN116747495A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310716784.7A CN116747495A (en) 2023-06-15 2023-06-15 Action counting method and device, terminal equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310716784.7A CN116747495A (en) 2023-06-15 2023-06-15 Action counting method and device, terminal equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116747495A true CN116747495A (en) 2023-09-15

Family

ID=87956610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310716784.7A Pending CN116747495A (en) 2023-06-15 2023-06-15 Action counting method and device, terminal equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116747495A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117194927A (en) * 2023-11-02 2023-12-08 深圳市微克科技有限公司 Indoor rope skipping counting method, system and medium based on triaxial acceleration sensor
CN117194927B (en) * 2023-11-02 2024-03-22 深圳市微克科技股份有限公司 Indoor rope skipping counting method, system and medium based on triaxial acceleration sensor

Similar Documents

Publication Publication Date Title
CN111160139B (en) Electrocardiosignal processing method and device and terminal equipment
Lester et al. A hybrid discriminative/generative approach for modeling human activities
CN108446733A (en) A kind of human body behavior monitoring and intelligent identification Method based on multi-sensor data
CN110659677A (en) Human body falling detection method based on movable sensor combination equipment
CN110478883B (en) Body-building action teaching and correcting system and method
Bu Human motion gesture recognition algorithm in video based on convolutional neural features of training images
CN110575663A (en) physical education auxiliary training method based on artificial intelligence
CN116747495A (en) Action counting method and device, terminal equipment and readable storage medium
Koçer et al. Classifying neuromuscular diseases using artificial neural networks with applied Autoregressive and Cepstral analysis
CN108717548B (en) Behavior recognition model updating method and system for dynamic increase of sensors
CN111401435B (en) Human body motion mode identification method based on motion bracelet
Sayed Biometric Gait Recognition Based on Machine Learning Algorithms.
CN111967361A (en) Emotion detection method based on baby expression recognition and crying
CN111652138A (en) Face recognition method, device and equipment for wearing mask and storage medium
CN115238796A (en) Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM
CN113116363A (en) Method for judging hand fatigue degree based on surface electromyographic signals
Qi et al. A hybrid hierarchical framework for free weight exercise recognition and intensity measurement with accelerometer and ecg data fusion
CN115410267A (en) Statistical algorithm based on interaction action analysis data of human skeleton and muscle
CN114913547A (en) Fall detection method based on improved Transformer network
CN114613015A (en) Body-building action image identification method, device, equipment and storage medium
Kishore et al. A hybrid method for activity monitoring using principal component analysis and back-propagation neural network
Jeong et al. Physical workout classification using wrist accelerometer data by deep convolutional neural networks
Wei Individualized wrist motion models for detecting eating episodes using deep learning
Kumar Heart disease detection using radial basis function classifier
CN117010971B (en) Intelligent health risk providing method and system based on portrait identification

Legal Events

Date Code Title Description
PB01 Publication