CN114563012A - Step counting method, step counting device, step counting equipment and storage medium - Google Patents


Info

Publication number
CN114563012A
CN114563012A (application CN202011367658.8A)
Authority
CN
China
Prior art keywords
data
sample
input data
step counting
single step
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011367658.8A
Other languages
Chinese (zh)
Other versions
CN114563012B (en)
Inventor
彭聪 (Peng Cong)
高文俊 (Gao Wenjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority: CN202011367658.8A
Publication of CN114563012A
Application granted
Publication of CN114563012B
Legal status: Active

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G01C22/006Pedometers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present disclosure relates to a step counting method, apparatus, device and storage medium. The method includes: acquiring acceleration data of a terminal device in the current time period; intercepting data from the acceleration data based on at least two data sampling lengths to obtain a plurality of sub-data segments corresponding to each data sampling length; taking one sub-data segment intercepted with each of the at least two data sampling lengths as primary input data (that is, the input data for one inference pass) and inputting it into a pre-trained deep neural network to obtain a single step counting result corresponding to the primary input data; and accumulating the single step counting results corresponding to each primary input data to obtain the total step result for the current time period. The method can adapt to the data characteristics of various gaits of the user, such as running and walking, improving the accuracy of each single step counting result and, in turn, the accuracy of the total step result determined from those results.

Description

Step counting method, step counting device, step counting equipment and storage medium
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a step counting method, apparatus, device, and storage medium.
Background
In the related art, the pedometer function of a terminal device records the number of steps a user takes, allowing the user to track his or her amount of exercise, and current health apps frequently access this data to maintain health records. However, the pedometer in a terminal device is strongly limited: it can only recognize the most typical steps and cannot count steps accurately.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide a step counting method, apparatus, device and storage medium to solve the drawbacks in the related art.
According to a first aspect of embodiments of the present disclosure, there is provided a step counting method, the method including:
acquiring acceleration data of the terminal equipment in the current time period;
respectively intercepting data from the acceleration data based on at least two data sampling lengths to obtain a plurality of sub-data segments corresponding to each data sampling length;
taking one sub-data segment intercepted with each of the at least two data sampling lengths as primary input data, and inputting the primary input data into a pre-trained deep neural network to obtain a single step counting result corresponding to the primary input data;
and accumulating the single step counting results corresponding to each input data to obtain the total step result in the current time period.
In an embodiment, the method further comprises:
determining the average pace speed of the user based on the single step counting result corresponding to each input data;
and correcting the single step counting result corresponding to each input data based on the average step speed.
In one embodiment, the pre-constructed deep neural network comprises two convolutional layers, two fully-connected layers and one output layer which are connected in sequence;
the step counting method comprises the following steps of taking a subdata fragment respectively intercepted based on the at least two data sampling lengths as primary input data, inputting the primary input data into a pre-trained deep neural network, and obtaining a single step counting result corresponding to the primary input data, wherein the step counting method comprises the following steps:
inputting the primary input data into the two convolution layers to carry out two-layer convolution and data series connection;
and inputting the data after series connection into the two fully-connected layers for classification judgment, and obtaining a single step counting result corresponding to the primary input data through the output layer.
In an embodiment, the method further comprises training the deep neural network based on the steps comprising:
acquiring sample acceleration data;
respectively intercepting sample data from the sample acceleration data based on at least two data sampling lengths to obtain a plurality of sample sub-data segments corresponding to each data sampling length;
taking one sample sub-data segment intercepted with each of the at least two data sampling lengths as primary sample input data, and determining a single step counting result corresponding to the primary sample input data;
and taking each primary sample input data and the corresponding single step counting result as a training set to train the pre-constructed deep neural network.
In one embodiment, the acquiring sample acceleration data comprises:
acquiring historical acceleration data of the terminal equipment in a historical time period;
determining sample acceleration data based on the historical acceleration data and preset noise data.
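One common reading of this sample-construction step is noise augmentation: perturbing historical acceleration readings with preset noise to produce sample data. A minimal pure-Python sketch of that reading; the function name, noise scale, and readings below are illustrative assumptions, not values from the patent:

```python
import random

random.seed(42)

def make_sample_acceleration(historical, noise_scale=0.1):
    """Combine historical acceleration data with preset uniform noise."""
    return [x + random.uniform(-noise_scale, noise_scale) for x in historical]

historical = [0.0, 0.5, 1.0, 0.5, 0.0]   # hypothetical historical readings
samples = make_sample_acceleration(historical)
```

Each sample stays within the preset noise bound of its historical reading, so the augmented data keeps the original step shape while adding variability.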
According to a second aspect of embodiments of the present disclosure, there is provided a step counting apparatus, the apparatus including:
the data acquisition module is used for acquiring the acceleration data of the terminal equipment in the current time period;
the segment acquisition module is used for respectively intercepting data from the acceleration data based on at least two data sampling lengths to obtain a plurality of sub-data segments corresponding to each data sampling length;
the single step counting module is used for taking one sub-data segment intercepted with each of the at least two data sampling lengths as primary input data and inputting it into a pre-trained deep neural network to obtain a single step counting result corresponding to the primary input data;
and the step counting module is used for accumulating the single step counting results corresponding to each input data to obtain the total step result in the current time period.
In one embodiment, the apparatus further comprises: a result correction module;
the result correction module includes:
the speed determining unit is used for determining the average pace speed of the user based on the single step counting result corresponding to the input data;
and the result correction unit is used for correcting the single step counting result corresponding to the input data based on the average step speed.
In one embodiment, the pre-constructed deep neural network comprises two convolutional layers, two fully-connected layers and one output layer which are connected in sequence;
the single step counting module comprises:
the convolution unit is used for inputting the primary input data into the two convolutional layers for two layers of convolution and data concatenation;
and the classification unit is used for inputting the concatenated data into the two fully-connected layers for classification and obtaining a single step counting result corresponding to the primary input data through the output layer.
In one embodiment, the apparatus further comprises: a neural network training module;
the neural network training module comprises:
the sample data acquisition unit is used for acquiring sample acceleration data;
a sample segment obtaining unit, configured to respectively intercept sample data from the sample acceleration data based on at least two data sampling lengths to obtain a plurality of sample sub-data segments corresponding to each data sampling length;
a single step result determining unit, configured to take one sample sub-data segment intercepted with each of the at least two data sampling lengths as primary sample input data and determine a single step counting result corresponding to the primary sample input data;
and the neural network training unit is used for training the pre-constructed deep neural network by taking each primary sample input data and the corresponding single step counting result as a training set.
In an embodiment, the sample data obtaining unit is further configured to:
acquiring historical acceleration data of the terminal equipment in a historical time period;
sample acceleration data is determined based on the historical acceleration data and preset noise data.
According to a third aspect of embodiments of the present disclosure, there is provided a step counting apparatus, the apparatus including:
a processor, and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring acceleration data of the terminal equipment in the current time period;
respectively intercepting data from the acceleration data based on at least two data sampling lengths to obtain a plurality of sub-data segments corresponding to each data sampling length;
taking one sub-data segment intercepted with each of the at least two data sampling lengths as primary input data, and inputting the primary input data into a pre-trained deep neural network to obtain a single step counting result corresponding to the primary input data;
and accumulating the single step counting results corresponding to each input data to obtain the total step result in the current time period.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring acceleration data of the terminal equipment in the current time period;
respectively intercepting data from the acceleration data based on at least two data sampling lengths to obtain a plurality of sub-data segments corresponding to each data sampling length;
taking one sub-data segment intercepted with each of the at least two data sampling lengths as primary input data, and inputting the primary input data into a pre-trained deep neural network to obtain a single step counting result corresponding to the primary input data;
and accumulating the single step counting results corresponding to each input data to obtain the total step result in the current time period.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method comprises the steps of obtaining acceleration data of terminal equipment in the current time period, respectively intercepting data from the acceleration data based on at least two data sampling lengths to obtain a plurality of subdata fragments corresponding to each data sampling length, then taking one subdata fragment respectively intercepted based on the at least two data sampling lengths as primary input data, inputting the primary input data into a pre-trained deep neural network to obtain a single-step counting result corresponding to the primary input data, and further accumulating the single-step counting results corresponding to the input data to obtain a total step result in the current time period. Because data are respectively intercepted from acceleration data based on at least two data sampling lengths, and a subdata segment respectively intercepted by the at least two data sampling lengths is input into a pre-trained deep neural network as primary input data to obtain a corresponding single step counting result, compared with the method that data with the same length are adopted to detect steps (namely, whether each segment of data contains complete steps is detected to obtain a single step counting result) in the related technology, the method can be more suitable for the data characteristics of various steps of a user such as running, walking and the like in a mode of obtaining a total step result based on the single step counting result of each segment of data, the accuracy of determining the single step counting result is improved, and the accuracy of determining the total step result based on the single step counting result in the follow-up process can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a step counting method according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating a step counting method according to yet another exemplary embodiment;
FIG. 3A is a flow diagram illustrating how a single step result corresponding to a single input datum is obtained in accordance with an illustrative embodiment;
FIG. 3B is a schematic diagram illustrating the structure of a deep neural network in accordance with an exemplary embodiment;
FIG. 4 is a flow diagram illustrating how the deep neural network is trained in accordance with an exemplary embodiment;
FIG. 5 is a flow chart illustrating how sample acceleration data is acquired in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating a step-counting device in accordance with an exemplary embodiment;
FIG. 7 is a block diagram of a step counting device according to yet another exemplary embodiment;
FIG. 8 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flow chart illustrating a step counting method according to an exemplary embodiment; the method of the embodiment can be applied to terminal devices (such as smart phones, tablet computers, notebook computers or wearable devices).
As shown in fig. 1, the method comprises the following steps S101-S104:
in step S101, acceleration data of the terminal device in the current time period is acquired.
In this embodiment, the terminal device may obtain acceleration data of its own current time period through a built-in acceleration sensor.
The length of the time period may be freely set by a developer based on business needs; for example, it may be set to the interval at which the step counting data is updated, such as 1 minute or 2 minutes, which is not limited in this embodiment.
For example, the acceleration data may be acceleration data along multiple axes collected by a multi-axis acceleration sensor. Taking a three-axis acceleration sensor as an example, the acquired acceleration data may include acceleration data in the x-axis direction, the y-axis direction, and the z-axis direction.
In step S102, data is respectively intercepted from the acceleration data based on at least two data sampling lengths, so as to obtain a plurality of sub-data segments corresponding to each data sampling length.
In this embodiment, after the acceleration data of the terminal device in the current time period is obtained, the sub-data segments may be respectively intercepted from the acceleration data based on at least two pre-designed data sampling lengths, so as to obtain a plurality of sub-data segments corresponding to each data sampling length.
In this embodiment, the at least two data sampling lengths are different. Taking two data sampling lengths as an example, one may intercept segments with a data length of 12 and the other segments with a data length of 18, where the data length indicates the number of data points contained in a sub-data segment. In addition, to make full use of the acquired acceleration data, adjacent sub-data segments intercepted with the same data sampling length may overlap. For example, after the acceleration data of the terminal device in the current time period is obtained, sub-data segments with a data length of 12 may be intercepted based on the first data sampling length, that is, the segments formed by the 1st to 12th, 2nd to 13th, ..., and i-th to (i+11)-th data points; and sub-data segments with a data length of 18 may be intercepted based on the second data sampling length, that is, the segments formed by the 1st to 18th, 2nd to 19th, ..., and i-th to (i+17)-th data points.
It should be noted that the data lengths intercepted with the at least two data sampling lengths may be set by a developer based on actual needs, which is not limited in this embodiment. Normally, the time consumed by each step differs between running and walking. To adapt to the step counting characteristics of both states, this embodiment therefore uses at least two data sampling lengths simultaneously to intercept sub-data segments of different lengths from the acceleration data for subsequent processing, so that the step counting scheme can adapt to the data characteristics of various gaits of the user, such as running and walking.
In the related art, steps are detected on data of a single fixed length (that is, whether each piece of data contains a complete step is detected to obtain a single step counting result), and the single step counting results of the pieces are then accumulated into a total step result. However, the duration of a user's steps differs across exercise states, and fixed-length data cannot adapt to the step characteristics of different states, so the step counting result is inaccurate. In the present scheme, the data is divided into sub-segments of different lengths, and each single step detection is performed on two data sub-segments of different lengths, so the scheme can adapt to the step characteristics of the user in different motion states, improve the accuracy of single step detection, and in turn improve the accuracy of the total step result accumulated from the single step counting results.
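The multi-length windowing described above can be sketched with stride-1 sliding windows; the helper name and the lengths 12 and 18 follow the example in the text, and the stand-in data is an assumption:

```python
def sliding_windows(data, length, stride=1):
    """Return every contiguous sub-data segment of the given length."""
    return [data[i:i + length] for i in range(0, len(data) - length + 1, stride)]

# Stand-in for one axis of acceleration samples in the current time period.
accel = list(range(30))

# Two sampling lengths cut overlapping segments from the same stream:
short_segments = sliding_windows(accel, 12)   # 1st-12th, 2nd-13th, ...
long_segments = sliding_windows(accel, 18)    # 1st-18th, 2nd-19th, ...

# The i-th short and i-th long segment together form the i-th input data.
paired_inputs = list(zip(short_segments, long_segments))
```

Adjacent windows of the same length overlap by length - 1 samples, which matches the 1st-12th / 2nd-13th enumeration in the text.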
In step S103, a sub-data segment respectively intercepted based on the at least two data sampling lengths is used as primary input data and is input into a pre-trained deep neural network, so as to obtain a single step counting result corresponding to the primary input data.
In this embodiment, after data is intercepted from the acceleration data based on at least two data sampling lengths to obtain a plurality of sub-data segments corresponding to each data sampling length, one sub-data segment intercepted with each of the at least two data sampling lengths may be taken as primary input data and input into the pre-trained deep neural network to obtain the single step counting result corresponding to that primary input data.
For example, suppose sub-data segments with a data length of 12 are intercepted based on the first data sampling length (that is, the segments formed by the 1st to 12th, 2nd to 13th, ..., and i-th to (i+11)-th data points), and sub-data segments with a data length of 18 are intercepted based on the second data sampling length (that is, the segments formed by the 1st to 18th, 2nd to 19th, ..., and i-th to (i+17)-th data points). The first sub-data segment intercepted with the first data sampling length (the segment formed by the 1st to 12th data points) and the first sub-data segment intercepted with the second data sampling length (the segment formed by the 1st to 18th data points) can then be input together into the pre-trained deep neural network as the first input data for single step recognition. Similarly, the i-th sub-data segment intercepted with the first data sampling length (the segment formed by the i-th to (i+11)-th data points, i = 2, 3, ..., n, where n is the total number of inputs) and the i-th sub-data segment intercepted with the second data sampling length (the segment formed by the i-th to (i+17)-th data points) can be input into the pre-trained deep neural network as the i-th input data for single step recognition.
In an embodiment, the deep neural network may be trained in advance based on sample acceleration data, and then after obtaining acceleration data of the terminal device in a current time period and intercepting a plurality of subdata segments from the acceleration data based on at least two data sampling lengths, a single-step counting result corresponding to each input data may be obtained based on the pre-trained deep neural network.
For example, the single step counting result may characterize whether the user has taken one step: a result of "1" may indicate that the user has taken one step, and a result of "0" may indicate that the user has not.
In step S104, the single step counting results corresponding to each input data are accumulated to obtain the total step result in the current time period.
In this embodiment, when one subdata segment respectively intercepted based on the at least two data sampling lengths is used as primary input data and is input to a pre-trained deep neural network to obtain a single-step counting result corresponding to the primary input data, the single-step counting results corresponding to each input data may be accumulated to obtain a total step result in the current time period.
For example, if the obtained single step counting results corresponding to the input data are 1, 0, 1, ..., 1, 0, 1, these single step counting results may be accumulated to obtain the user's total step result, that is, the total step result in the current time period.
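Since each single step counting result is a 0/1 label, the accumulation in step S104 reduces to a sum; a minimal sketch with made-up results:

```python
# Hypothetical per-input single step counting results (1 = one step taken).
single_step_results = [1, 0, 1, 1, 0, 1]

# Total step result for the current time period.
total_steps = sum(single_step_results)  # 4 steps
```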
As can be seen from the above description, in this embodiment acceleration data of the terminal device in the current time period is acquired; data is intercepted from the acceleration data based on at least two data sampling lengths to obtain a plurality of sub-data segments corresponding to each data sampling length; one sub-data segment intercepted with each of the at least two data sampling lengths is then taken as primary input data and input into a pre-trained deep neural network to obtain a single step counting result corresponding to the primary input data; and the single step counting results corresponding to each input data are accumulated to obtain the total step result in the current time period. Because data is intercepted with at least two different sampling lengths and one sub-data segment of each length is input into the pre-trained deep neural network as primary input data to obtain the corresponding single step counting result, the method, compared with the related art that uses data of a single fixed length, can adapt to the data characteristics of various gaits of the user, such as running and walking, improving the accuracy of determining the single step counting result and, in turn, the accuracy of the total step result subsequently determined from it.
FIG. 2 is a flow chart illustrating a step counting method according to yet another exemplary embodiment. The method of the embodiment can be applied to terminal devices (such as smart phones, tablet computers, notebook computers or wearable devices).
As shown in fig. 2, the method comprises the following steps S201-S206:
in step S201, acquiring acceleration data of the terminal device in the current time period;
in step S202, respectively intercepting data from the acceleration data based on at least two data sampling lengths to obtain a plurality of subdata segments corresponding to each data sampling length;
in step S203, a sub-data segment respectively intercepted based on the at least two data sampling lengths is used as primary input data and input into a pre-trained deep neural network, so as to obtain a single step counting result corresponding to the primary input data.
In step S204, an average step speed of the user is determined based on the single step counting result corresponding to the input data.
In this embodiment, when one subdata segment respectively intercepted based on the at least two data sampling lengths is used as primary input data and is input to a pre-trained deep neural network to obtain a single-step counting result corresponding to the primary input data, the average step speed of the user may be determined based on the single-step counting result corresponding to each input data.
Wherein the average pace speed can be used to characterize the time required for the user to walk each step.
For example, if the single step counting results corresponding to the input data are 0, 1, 0, 1, 0, 1, ..., the time corresponding to each step (that is, each "1") can be determined: say the first "1" occurs at second 1, the second "1" at second 2, the third "1" at second 3, and the fourth "1" at second 3.25. The average pace of the user in the current time period can then be determined from the times of the steps, for example 4 seconds / 5 steps = 0.8 seconds per step.
In step S205, the single step count result corresponding to each input data is corrected based on the average step speed.
In this embodiment, after the average step speed of the user is determined based on the single step result corresponding to each input data, the single step result corresponding to each input data may be corrected based on the average step speed.
For example, when the average pace of the user in the current time period is determined to be 4 seconds / 5 steps = 0.8 seconds per step, the time taken by each step (that is, each "1") in the step counting results, namely the pace of each step, can be examined: say the pace of the first "1" is 1 second per step, the pace of the second "1" is 1 second per step, and the pace of the third "1" is 1 second per step, while the pace of the fourth "1" is only 0.25 seconds per step, which deviates greatly from the average pace of 0.8 seconds per step. That step can therefore be regarded as a misjudgment and its single step counting result ignored, so that the corrected step counting results no longer contain the fourth "1".
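The pace-based correction of steps S204 and S205 can be sketched as follows; the step times mirror the example above, while the deviation threshold is an assumption, since the text does not specify one:

```python
def correct_steps(step_times, rel_tolerance=0.5):
    """Drop detected steps whose pace deviates too far from the average pace.

    step_times: times (seconds) at which each "1" was detected.
    rel_tolerance: assumed maximum relative deviation from the average pace.
    """
    if len(step_times) < 2:
        return list(step_times)
    intervals = [b - a for a, b in zip(step_times, step_times[1:])]
    avg_pace = sum(intervals) / len(intervals)  # seconds per step
    kept = [step_times[0]]
    for t, dt in zip(step_times[1:], intervals):
        if abs(dt - avg_pace) <= rel_tolerance * avg_pace:
            kept.append(t)  # pace close enough to the average: keep the step
    return kept

# Steps at seconds 1, 2, 3 and a suspicious one only 0.25 s later.
corrected = correct_steps([1.0, 2.0, 3.0, 3.25])  # the 3.25 s step is dropped
```

With these inputs the intervals are [1, 1, 0.25] and the average pace 0.75 s/step, so the 0.25 s step is rejected as a misjudgment while the others are kept.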
In step S206, the single step counting results corresponding to each input data are accumulated to obtain the total step result in the current time period.
In this embodiment, after the single step counting results corresponding to the input data of each time are corrected based on the average pace speed, the corrected single step counting results may be accumulated to obtain the total step result in the current time period.
For explanation and explanation of steps S201 to S203 and step S206, reference may be made to the above embodiments, which are not described herein again.
As can be seen from the above description, in this embodiment, the average step speed of the user is determined based on the single step result corresponding to each input data, and the single step result corresponding to each input data is corrected based on the average step speed, so that the accuracy of determining the single step result corresponding to each input data can be improved, and further, the accuracy of determining the total step result can be improved based on the corrected single step result.
FIG. 3A is a flow diagram illustrating how a single step result corresponding to a single input datum is obtained according to an example embodiment. FIG. 3B is a schematic diagram illustrating the structure of a deep neural network, according to an example embodiment.
The present embodiment is exemplarily illustrated on the basis of the above embodiments by taking an example of how to obtain a single step counting result corresponding to one-time input data. As shown in fig. 3B, the pre-constructed deep neural network in this embodiment may include two convolutional layers, two fully-connected layers, and one output layer, which are connected in sequence.
As shown in fig. 3A, the step S103 (taking a sub-data segment respectively intercepted based on the at least two data sampling lengths as primary input data and inputting the primary input data into a pre-trained deep neural network to obtain a single step counting result corresponding to the primary input data) may include the following steps S301 to S302:
in step S301, the primary input data is input to the two convolutional layers for two-layer convolution and data concatenation.
In step S302, the serially connected data is input to the two fully-connected layers for classification and judgment, and a single step counting result corresponding to the primary input data is obtained through the output layer.
In this embodiment, after data is respectively intercepted from the acceleration data based on at least two data sampling lengths to obtain a plurality of sub-data segments corresponding to each data sampling length, each input data may be input to the two convolutional layers of the deep neural network, a convolution operation is performed based on the convolution kernels of the two convolutional layers, and the resulting data are connected in series; the serially connected data are then input to the two fully-connected layers for classification judgment, and a single step counting result corresponding to the primary input data is obtained through the output layer. Experimental verification shows that this arrangement of layers performs best among networks with the same computational power.
As can be seen from the above description, in this embodiment, a deep neural network including two convolutional layers, two fully-connected layers, and an output layer that are sequentially connected is pre-constructed, the primary input data is input to the two convolutional layers to perform two-layer convolution and data series connection, the data after series connection is input to the two fully-connected layers to perform classification judgment, and a single-step counting result corresponding to the primary input data is obtained through the output layer, so that the accuracy of determining a single-step counting result can be improved, and the accuracy of determining a total step result based on a subsequent single-step counting result can be improved.
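As a rough illustration of the forward pass described above, the following sketch branches the two sub-segments (lengths 12 and 18, matching the sampling lengths used elsewhere in the disclosure) through two convolutional layers, concatenates ("connects in series") the results, and applies two fully connected layers plus a two-way output layer. The kernel size, layer widths, ReLU activations, random weight initialization, and the choice to share the convolution kernels between the two branches are all illustrative assumptions — the patent fixes only the layer sequence:

```python
import random

random.seed(0)  # deterministic illustrative weights

def conv1d(x, kernel):
    """Valid (no-padding) 1-D convolution of signal x with a kernel."""
    k = len(kernel)
    return [sum(kernel[j] * x[i + j] for j in range(k))
            for i in range(len(x) - k + 1)]

def relu(v):
    return [max(0.0, a) for a in v]

def dense(x, weights, biases):
    return [sum(w * a for w, a in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

# Kernel size 3 shrinks a 12-sample segment to 10 then 8 values,
# and an 18-sample segment to 16 then 14 values: 8 + 14 = 22 features.
params = {
    "k1": [random.uniform(-0.5, 0.5) for _ in range(3)],
    "k2": [random.uniform(-0.5, 0.5) for _ in range(3)],
    "w1": rand_matrix(8, 22), "b1": [0.0] * 8,
    "w2": rand_matrix(4, 8),  "b2": [0.0] * 4,
    "w_out": rand_matrix(2, 4), "b_out": [0.0] * 2,
}

def forward(seg12, seg18, p):
    feats = []
    for seg in (seg12, seg18):          # each sub-segment through both conv layers
        h = relu(conv1d(seg, p["k1"]))
        h = relu(conv1d(h, p["k2"]))
        feats.extend(h)                 # concatenation ("data series connection")
    h = relu(dense(feats, p["w1"], p["b1"]))
    h = relu(dense(h, p["w2"], p["b2"]))
    logits = dense(h, p["w_out"], p["b_out"])
    return 1 if logits[1] > logits[0] else 0  # "1" = one step taken

result = forward([0.1 * i for i in range(12)], [0.1 * i for i in range(18)], params)
```

In a real implementation the weights would of course come from the training procedure of fig. 4 rather than from random initialization.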
FIG. 4 is a flow diagram illustrating how the deep neural network is trained in accordance with an exemplary embodiment. The present embodiment is exemplified by how to train the deep neural network based on the above embodiments. As shown in fig. 4, the method of this embodiment further includes training the deep neural network based on the following steps S401-S404:
in step S401, sample acceleration data is acquired.
In this embodiment, sample acceleration data may be acquired in order to train a deep neural network for determining single step measurements based on single input data.
Wherein the sample acceleration data may comprise historical acceleration data of the terminal device over a historical period of time.
The length of the history time period may be freely set by a developer based on the service requirement, which is not limited in this embodiment.
For example, the historical acceleration data may be multiple axial historical acceleration data acquired by two or more axial acceleration sensors in multiple historical time periods. Taking a three-axis acceleration sensor as an example, the acquired historical acceleration data may include historical acceleration data of the three-axis acceleration sensor in the x-axis direction, historical acceleration data of the three-axis acceleration sensor in the y-axis direction, historical acceleration data of the three-axis acceleration sensor in the z-axis direction, and the like.
In another embodiment, the above-mentioned manner of acquiring the sample acceleration data can also be referred to the following embodiment shown in fig. 5, which is not described in detail herein.
In step S402, sample data is respectively intercepted from the sample acceleration data based on at least two data sampling lengths, so as to obtain a plurality of sample subdata segments corresponding to each data sampling length.
In this embodiment, after the sample acceleration data is obtained, sample data may be respectively intercepted from the sample acceleration data based on at least two data sampling lengths, so as to obtain a plurality of sample sub-data fragments corresponding to each data sampling length.
In this embodiment, the sample data lengths intercepted by the at least two data sampling lengths are different. Taking two data sampling lengths as an example, the length of the sample data intercepted by one data sampling length may be 12, and the length of the sample data intercepted by the other data sampling length may be 18. The sample data length may be used to indicate the number of sample data contained in a sample sub-data segment. In order to fully utilize the acquired sample acceleration data, adjacent sample sub-data segments intercepted with the same data sampling length may have an overlapping part. For example, after the sample acceleration data is obtained, sample sub-data segments with a sample data length of 12 may be intercepted based on the first data sampling length, i.e., sample sub-data segments composed of the 1st to 12th, 2nd to 13th, ..., and i-th to (i+11)-th sample data are obtained respectively; and sample sub-data segments with a sample data length of 18 may be intercepted based on the second data sampling length, i.e., sample sub-data segments composed of the 1st to 18th, 2nd to 19th, ..., and i-th to (i+17)-th sample data are obtained.
It should be noted that the sample data lengths intercepted by the at least two data sampling lengths may be set by a developer based on actual needs, which is not limited in this embodiment. For example, considering that the time consumed by each step usually differs between running and walking, this embodiment simultaneously adopts at least two data sampling lengths to intercept sample sub-data segments of different lengths from the sample acceleration data for the subsequent deep neural network training, so that the trained deep neural network can adapt to the data characteristics of multiple gaits of the user, such as running and walking, and the accuracy of the deep neural network in determining the single step counting result is improved.
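The overlapping truncation described above amounts to a stride-1 sliding window over the acceleration samples; a short sketch (the `sliding_windows` name and the 24-sample toy signal are illustrative assumptions):

```python
def sliding_windows(data, length, stride=1):
    """Overlapping sub-segments of `length` samples; stride 1 yields the
    1st-12th, 2nd-13th, ... pattern described in the embodiment."""
    return [data[i:i + length] for i in range(0, len(data) - length + 1, stride)]

acc = list(range(1, 25))            # 24 toy acceleration samples numbered 1..24
segs12 = sliding_windows(acc, 12)   # segments 1-12, 2-13, ..., 13-24
segs18 = sliding_windows(acc, 18)   # segments 1-18, 2-19, ..., 7-24
# The i-th sample input pairs the i-th segment of each length; the number of
# usable pairs is limited by the shorter list.
inputs = list(zip(segs12, segs18))
```

Here 24 samples yield 13 segments of length 12 and 7 of length 18, hence 7 paired sample inputs.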
In step S403, one sample sub-data segment respectively cut based on the at least two data sampling lengths is used as primary sample input data, and a single step counting result corresponding to the primary sample input data is determined.
In this embodiment, after sample data is respectively intercepted from the sample acceleration data based on at least two data sampling lengths to obtain a plurality of sample sub-data segments corresponding to each data sampling length, one sample sub-data segment respectively intercepted based on the at least two data sampling lengths may be used as primary sample input data, and a single step counting result corresponding to the primary sample input data is determined.
For example, after sample sub-data segments with a sample data length of 12 are intercepted based on the first data sampling length (i.e., sample sub-data segments composed of the 1st to 12th, 2nd to 13th, ..., and i-th to (i+11)-th sample data are obtained) and sample sub-data segments with a sample data length of 18 are intercepted based on the second data sampling length (i.e., sample sub-data segments composed of the 1st to 18th, 2nd to 19th, ..., and i-th to (i+17)-th sample data are obtained), the first sample sub-data segment intercepted with the first data sampling length (i.e., the sample sub-data segment composed of the 1st to 12th sample data) and the first sample sub-data segment intercepted with the second data sampling length (i.e., the sample sub-data segment composed of the 1st to 18th sample data) may be used as the first sample input data; similarly, the i-th sample sub-data segment intercepted with the first data sampling length (i.e., the sample sub-data segment composed of the i-th to (i+11)-th sample data, i = 2, 3, ..., n, where n is the total number of sample data) and the i-th sample sub-data segment intercepted with the second data sampling length (i.e., the sample sub-data segment composed of the i-th to (i+17)-th sample data, i = 2, 3, ..., n) may be used as the i-th sample input data.
On the basis, single step counting results corresponding to the sample input data of each time can be determined by adopting modes such as manual calibration and the like.
The single step counting result can be used to characterize whether the user takes one step. For example, a single step counting result may be a "1" or a "0", where a "1" may indicate that the user has walked one step and a "0" may indicate that the user has not.
In step S404, each of the primary sample input data and the corresponding single step counting result is used as a training set to train a pre-constructed deep neural network.
In this embodiment, after one sample sub-data segment respectively intercepted based on the at least two data sampling lengths is used as primary sample input data and a single step counting result corresponding to the primary sample input data is determined, each of the primary sample input data and the corresponding single step counting result may be used as a training set to train a pre-constructed deep neural network.
For example, after a training set composed of each of the primary sample input data and the corresponding single step counting result is obtained, a pre-constructed deep neural network may be trained based on the training set, and then the network training process is ended after a set training termination condition is reached, so as to obtain a trained deep neural network.
It should be noted that the type of the deep neural network may be set by a developer based on actual business needs, and this embodiment does not limit this.
As can be seen from the above description, in this embodiment, sample acceleration data is obtained, and sample data is respectively intercepted from the sample acceleration data based on at least two data sampling lengths to obtain a plurality of sample sub-data segments corresponding to each data sampling length; one sample sub-data segment respectively intercepted based on the at least two data sampling lengths is then used as primary sample input data, a single step counting result corresponding to the primary sample input data is determined, and each primary sample input data together with its corresponding single step counting result is used as a training set to train a pre-constructed deep neural network. A deep neural network can thus be trained based on the sample acceleration data, the single step counting result of the current time period can be determined based on the trained deep neural network, and the single step counting results can be accumulated to obtain the total step counting result of the current time period. The method can adapt to the data characteristics of multiple gaits of the user, such as running and walking, improving the accuracy of determining the single step counting result and, in turn, the accuracy of subsequently determining the total step counting result based on the single step counting results.
FIG. 5 is a flow chart illustrating how sample acceleration data is acquired in accordance with an exemplary embodiment. The present embodiment is exemplified by how to acquire sample acceleration data based on the above-described embodiments. As shown in fig. 5, the acquiring of the sample acceleration data in step S401 may include the following steps S501 to S502:
in step S501, historical acceleration data of the terminal device within a historical period is acquired.
To train a deep neural network for determining single-step measurements based on single-input data, historical acceleration data of the terminal device over a historical period of time may be obtained.
The length of the history time period may be freely set by a developer based on the service requirement, which is not limited in this embodiment.
For example, the historical acceleration data may be multiple axial historical acceleration data acquired by two or more axial acceleration sensors in multiple historical time periods. Taking a three-axis acceleration sensor as an example, the acquired historical acceleration data may include historical acceleration data of the three-axis acceleration sensor in the x-axis direction, historical acceleration data of the three-axis acceleration sensor in the y-axis direction, historical acceleration data of the three-axis acceleration sensor in the z-axis direction, and the like.
In step S502, sample acceleration data is determined based on the historical acceleration data and preset noise data.
In this embodiment, after obtaining the historical acceleration data of the terminal device in the historical time period, the sample acceleration data may be determined based on the historical acceleration data and the preset noise data.
It should be noted that the type of the preset noise data may be set by a developer based on actual needs, for example, set as Gaussian noise with the corresponding SNR level set to 5-20, which is not limited in this embodiment.
It can be understood that specific noise is added to the original historical acceleration data in the deep neural network training process, so that the nonlinear recognition capability of the network can be greatly improved, and the training data are more accurate.
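A sketch of mixing Gaussian noise into the historical acceleration data at a target SNR, assuming the 5-20 figure is an SNR expressed in decibels (the `add_gaussian_noise` name and the fixed seed are illustrative assumptions):

```python
import math
import random

def add_gaussian_noise(samples, snr_db, seed=42):
    """Return samples plus zero-mean Gaussian noise, scaled so that
    signal power / noise power = 10 ** (snr_db / 10)."""
    rng = random.Random(seed)
    signal_power = sum(s * s for s in samples) / len(samples)
    sigma = math.sqrt(signal_power / (10 ** (snr_db / 10)))
    return [s + rng.gauss(0.0, sigma) for s in samples]

clean = [math.sin(0.3 * i) for i in range(200)]   # toy acceleration trace
noisy = add_gaussian_noise(clean, snr_db=10)      # any SNR in the 5-20 range
```

The noisy trace, paired with the original labels, then serves as sample acceleration data for training.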
As can be seen from the above description, in this embodiment, by acquiring historical acceleration data of the terminal device in a historical time period and determining sample acceleration data based on the historical acceleration data and preset noise data, the nonlinear identification capability of the network can be greatly improved, so that the training data is more accurate, the accuracy of determining a single-step result based on a trained deep neural network can be improved, and the accuracy of determining a total-step result based on the single-step result can be improved.
FIG. 6 is a block diagram illustrating a step-counting device in accordance with an exemplary embodiment; the apparatus of the embodiment may be applied to a terminal device (e.g., a smart phone, a tablet computer, a notebook computer, or a wearable device). As shown in fig. 6, the apparatus includes: a data acquisition module 110, a fragment acquisition module 120, a single step count module 130, and a total step count module 140, wherein:
a data obtaining module 110, configured to obtain acceleration data of the terminal device in a current time period;
a segment obtaining module 120, configured to respectively intercept data from the acceleration data based on at least two data sampling lengths, to obtain a plurality of subdata segments corresponding to each data sampling length;
the single step counting module 130 is configured to take one subdata segment respectively intercepted based on the at least two data sampling lengths as primary input data, and input the primary input data into a pre-trained deep neural network to obtain a single step counting result corresponding to the primary input data;
and the step counting total module 140 is configured to accumulate the step counting single step results corresponding to each input data to obtain a step counting total result in the current time period.
As can be seen from the above description, in this embodiment, acceleration data of a terminal device in a current time period is obtained, data is respectively intercepted from the acceleration data based on at least two data sampling lengths, so as to obtain a plurality of sub data segments corresponding to each data sampling length, then one sub data segment respectively intercepted based on the at least two data sampling lengths is used as primary input data and is input into a pre-trained deep neural network, so as to obtain a single step counting result corresponding to the primary input data, and then the single step counting results corresponding to each input data are accumulated, so as to obtain a total step result in the current time period. Because data are respectively intercepted from the acceleration data based on at least two data sampling lengths, and a subdata segment respectively intercepted by the at least two data sampling lengths is input into the pre-trained deep neural network as one-time input data to obtain a corresponding single-step counting result, compared with a method of adopting data with the same length in the related technology, the method can adapt to the data characteristics of various steps of a user such as running, walking and the like, the accuracy of determining the single-step counting result is improved, and the accuracy of subsequently determining the total step result based on the single-step counting result is improved.
FIG. 7 is a block diagram illustrating a step-counting device according to yet another exemplary embodiment; the apparatus of the embodiment may be applied to a terminal device (e.g., a smart phone, a tablet computer, a notebook computer, or a wearable device). The data obtaining module 210, the fragment obtaining module 220, the single step counting module 230, and the total step counting module 240 have the same functions as the data obtaining module 110, the fragment obtaining module 120, the single step counting module 130, and the total step counting module 140 in the embodiment shown in fig. 6, and are not described herein again.
As shown in fig. 7, the apparatus may further include: a result correction module 250;
the result correction module 250 may include:
a speed determining unit 251, configured to determine an average pace speed of the user based on the single step counting result corresponding to the respective input data;
a result correcting unit 252, configured to correct a single step counting result corresponding to the input data based on the average step speed.
In an embodiment, the pre-constructed deep neural network includes two convolutional layers, two fully-connected layers, and an output layer, which are connected in sequence;
the single step counting module 230 may include:
a convolution unit 231 for inputting the primary input data to the two convolution layers for two-layer convolution and data concatenation;
and the classifying unit 232 is configured to input the serially connected data to the two fully connected layers for classification and judgment, and obtain a single step counting result corresponding to the primary input data through the output layer.
In an embodiment, the apparatus may further include: a neural network training module 260;
the neural network training module 260 may include:
a sample data obtaining unit 261, configured to obtain sample acceleration data;
a sample fragment obtaining unit 262, configured to respectively intercept sample data from the sample acceleration data based on at least two data sampling lengths, to obtain a plurality of sample sub-data fragments corresponding to each data sampling length;
a single step result determining unit 263, configured to use one sample sub-data segment respectively intercepted based on the at least two data sampling lengths as primary sample input data, and determine a single step counting result corresponding to the primary sample input data;
and the neural network training unit 264 is configured to train the pre-constructed deep neural network by using each of the primary sample input data and the corresponding single step counting result as a training set.
In an embodiment, the sample data obtaining unit 261 may be further configured to:
acquiring historical acceleration data of the terminal equipment in a historical time period;
sample acceleration data is determined based on the historical acceleration data and preset noise data.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 8 is a block diagram of an electronic device shown in accordance with an example embodiment. For example, the apparatus 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, apparatus 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing element 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 906 provides power to the various components of device 900. Power components 906 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 900.
The multimedia components 908 include a screen that provides an output interface between the device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when apparatus 900 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessment of various aspects of the apparatus 900. For example, sensor assembly 914 may detect an open/closed state of device 900, the relative positioning of components, such as a display and keypad of device 900, the change in position of device 900 or a component of device 900, the presence or absence of user contact with device 900, the orientation or acceleration/deceleration of device 900, and the change in temperature of device 900. The sensor assembly 914 may also include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communications between the apparatus 900 and other devices in a wired or wireless manner. The apparatus 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G or 5G or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 904 comprising instructions, executable by the processor 920 of the apparatus 900 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method of step counting, the method comprising:
acquiring acceleration data of the terminal equipment in the current time period;
respectively intercepting data from the acceleration data based on at least two data sampling lengths to obtain a plurality of subdata segments corresponding to each data sampling length;
taking a subdata fragment respectively intercepted based on the at least two data sampling lengths as primary input data, and inputting the primary input data into a pre-trained deep neural network to obtain a single step counting result corresponding to the primary input data;
and accumulating the single step counting results corresponding to each input data to obtain the total step result in the current time period.
2. The method of claim 1, further comprising:
determining the average pace speed of the user based on the single step counting result corresponding to each input data;
and correcting the single step counting result corresponding to each input data based on the average step speed.
3. The method of claim 1, wherein the pre-constructed deep neural network comprises two convolutional layers, two fully-connected layers, and one output layer connected in sequence;
the step counting method comprises the following steps of taking a subdata fragment respectively intercepted based on the at least two data sampling lengths as primary input data, inputting the primary input data into a pre-trained deep neural network, and obtaining a single step counting result corresponding to the primary input data, wherein the step counting method comprises the following steps:
inputting the primary input data into the two convolution layers to carry out two-layer convolution and data series connection;
and inputting the data after series connection into the two fully-connected layers for classification judgment, and obtaining a single step counting result corresponding to the primary input data through the output layer.
4. The method of claim 1, further comprising training the deep neural network based on steps comprising:
acquiring sample acceleration data;
respectively intercepting sample data from the sample acceleration data based on at least two data sampling lengths to obtain a plurality of sample subdata fragments corresponding to each data sampling length;
taking one sample subdata fragment respectively intercepted based on the at least two data sampling lengths as primary sample input data, and determining a single step counting result corresponding to the primary sample input data;
and taking each primary sample input data and the corresponding single step counting result as a training set to train the pre-constructed deep neural network.
5. The method of claim 4, wherein said obtaining sample acceleration data comprises:
acquiring historical acceleration data of the terminal equipment in a historical time period;
sample acceleration data is determined based on the historical acceleration data and preset noise data.
6. A step counting device, characterized in that the device comprises:
a data acquisition module, configured to acquire acceleration data of a terminal device in a current time period;
a segment acquisition module, configured to intercept data from the acceleration data at each of at least two data sampling lengths to obtain a plurality of sub-data segments for each data sampling length;
a single-step counting module, configured to take one sub-data segment intercepted at each of the at least two data sampling lengths as one set of input data and input it into a pre-trained deep neural network to obtain a single-step counting result corresponding to the set of input data;
and a step counting module, configured to accumulate the single-step counting results corresponding to the sets of input data to obtain a total step count for the current time period.
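The final accumulation step is a plain sum over the per-window single-step results; a trivial sketch:

```python
def total_steps(single_step_results):
    """Accumulate per-window single-step counting results into a total step count."""
    return sum(single_step_results)
```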
7. The apparatus of claim 6, further comprising: a result correction module;
the result correction module comprises:
a pace determining unit, configured to determine the user's average pace based on the single-step counting results corresponding to the sets of input data;
and a result correction unit, configured to correct the single-step counting results corresponding to the sets of input data based on the average pace.
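One way to read claim 7's correction: estimate the user's average pace from all per-window results, then clamp windows that deviate too far from it. The window duration, tolerance, and the clamping rule itself are illustrative assumptions, not taken from the patent.

```python
def correct_single_step_results(step_results, window_seconds=2.0, tolerance=0.5):
    """Clamp per-window step counts that deviate too far from the average pace."""
    if not step_results:
        return []
    avg_pace = sum(step_results) / (len(step_results) * window_seconds)  # steps/s
    expected = avg_pace * window_seconds           # expected steps per window
    lo, hi = expected * (1 - tolerance), expected * (1 + tolerance)
    return [min(max(s, lo), hi) for s in step_results]
```

For example, `[2, 2, 2, 10]` averages 4 steps per window, so the outlier window of 10 is clamped to the upper bound of 6.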
8. The apparatus of claim 6, wherein the pre-constructed deep neural network comprises two convolutional layers, two fully-connected layers, and one output layer connected in sequence;
the single-step counting module comprises:
a convolution unit, configured to input the set of input data into the two convolutional layers for two layers of convolution followed by data concatenation;
and a classification unit, configured to input the concatenated data into the two fully-connected layers for classification, and obtain, through the output layer, the single-step counting result corresponding to the set of input data.
9. The apparatus of claim 6, further comprising: a neural network training module;
the neural network training module comprises:
a sample data acquisition unit, configured to acquire sample acceleration data;
a sample segment acquisition unit, configured to intercept sample data from the sample acceleration data at each of at least two data sampling lengths to obtain a plurality of sample sub-data segments for each data sampling length;
a single-step result determining unit, configured to take one sample sub-data segment intercepted at each of the at least two data sampling lengths as one set of sample input data, and determine a single-step counting result corresponding to the set of sample input data;
and a neural network training unit, configured to train the pre-constructed deep neural network with each set of sample input data and its corresponding single-step counting result as a training set.
10. The apparatus according to claim 9, wherein the sample data acquisition unit is further configured to:
acquire historical acceleration data of the terminal device over a historical time period;
and determine the sample acceleration data based on the historical acceleration data and preset noise data.
11. A step counting device, characterized in that the device comprises:
a processor, and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquire acceleration data of a terminal device in a current time period;
intercept data from the acceleration data at each of at least two data sampling lengths to obtain a plurality of sub-data segments for each data sampling length;
take one sub-data segment intercepted at each of the at least two data sampling lengths as one set of input data, and input it into a pre-trained deep neural network to obtain a single-step counting result corresponding to the set of input data;
and accumulate the single-step counting results corresponding to the sets of input data to obtain a total step count for the current time period.
12. A computer-readable storage medium storing a computer program which, when executed by a processor, implements:
acquiring acceleration data of a terminal device in a current time period;
intercepting data from the acceleration data at each of at least two data sampling lengths to obtain a plurality of sub-data segments for each data sampling length;
taking one sub-data segment intercepted at each of the at least two data sampling lengths as one set of input data, and inputting it into a pre-trained deep neural network to obtain a single-step counting result corresponding to the set of input data;
and accumulating the single-step counting results corresponding to the sets of input data to obtain a total step count for the current time period.
CN202011367658.8A 2020-11-27 2020-11-27 Step counting method, device, equipment and storage medium Active CN114563012B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011367658.8A CN114563012B (en) 2020-11-27 2020-11-27 Step counting method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011367658.8A CN114563012B (en) 2020-11-27 2020-11-27 Step counting method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114563012A true CN114563012A (en) 2022-05-31
CN114563012B CN114563012B (en) 2024-06-04

Family

ID=81711714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011367658.8A Active CN114563012B (en) 2020-11-27 2020-11-27 Step counting method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114563012B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104406604A (en) * 2014-11-21 2015-03-11 中国科学院计算技术研究所 Step counting method
US20160089080A1 (en) * 2014-09-30 2016-03-31 Mophie, Inc. System and method for activity determination
CN106289306A (en) * 2016-07-29 2017-01-04 广东欧珀移动通信有限公司 Step-recording method and device
CN106500717A (en) * 2015-09-08 2017-03-15 中兴通讯股份有限公司 A kind of method and device for realizing counting step
CN106525068A (en) * 2016-11-08 2017-03-22 深圳市金立通信设备有限公司 Step-counting method and terminal
CN106725512A (en) * 2017-02-22 2017-05-31 安徽华米信息科技有限公司 Motion monitoring method, device and wearable device
US20180181860A1 (en) * 2015-06-26 2018-06-28 Sentiance Nv Deriving movement behaviour from sensor data
CN108969980A (en) * 2018-06-28 2018-12-11 广州视源电子科技股份有限公司 Treadmill and step counting method, device and storage medium thereof
CN108981745A (en) * 2018-09-30 2018-12-11 深圳个人数据管理服务有限公司 A kind of step-recording method, device, equipment and storage medium
CN109870172A (en) * 2019-02-25 2019-06-11 广州市香港科大霍英东研究院 Step counting detection method, device, equipment and storage medium
CN110057380A (en) * 2019-04-30 2019-07-26 北京卡路里信息技术有限公司 Step-recording method, device, terminal and storage medium
CN110811578A (en) * 2019-11-27 2020-02-21 青岛歌尔智能传感器有限公司 Step counting device and step counting method thereof, controller and readable storage medium
CN111879334A (en) * 2020-07-31 2020-11-03 歌尔科技有限公司 Step counting method, step counting device and computer readable storage medium

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160089080A1 (en) * 2014-09-30 2016-03-31 Mophie, Inc. System and method for activity determination
CN104406604A (en) * 2014-11-21 2015-03-11 中国科学院计算技术研究所 Step counting method
US20180181860A1 (en) * 2015-06-26 2018-06-28 Sentiance Nv Deriving movement behaviour from sensor data
CN106500717A (en) * 2015-09-08 2017-03-15 中兴通讯股份有限公司 A kind of method and device for realizing counting step
US20180252549A1 (en) * 2015-09-08 2018-09-06 Zte Corporation Method and apparatus for realizing step counting
CN106289306A (en) * 2016-07-29 2017-01-04 广东欧珀移动通信有限公司 Step-recording method and device
CN106525068A (en) * 2016-11-08 2017-03-22 深圳市金立通信设备有限公司 Step-counting method and terminal
CN106725512A (en) * 2017-02-22 2017-05-31 安徽华米信息科技有限公司 Motion monitoring method, device and wearable device
CN108969980A (en) * 2018-06-28 2018-12-11 广州视源电子科技股份有限公司 Treadmill and step counting method, device and storage medium thereof
CN108981745A (en) * 2018-09-30 2018-12-11 深圳个人数据管理服务有限公司 A kind of step-recording method, device, equipment and storage medium
CN109870172A (en) * 2019-02-25 2019-06-11 广州市香港科大霍英东研究院 Step counting detection method, device, equipment and storage medium
CN110057380A (en) * 2019-04-30 2019-07-26 北京卡路里信息技术有限公司 Step-recording method, device, terminal and storage medium
CN110811578A (en) * 2019-11-27 2020-02-21 青岛歌尔智能传感器有限公司 Step counting device and step counting method thereof, controller and readable storage medium
CN111879334A (en) * 2020-07-31 2020-11-03 歌尔科技有限公司 Step counting method, step counting device and computer readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YIWEN JIANG; WEI TANG; NENG GAO; ET AL.: "Your Pedometer Tells You: Attribute Inference via Daily Walking Step Count", 2019 IEEE SMARTWORLD, UBIQUITOUS INTELLIGENCE & COMPUTING, ADVANCED & TRUSTED COMPUTING, SCALABLE COMPUTING & COMMUNICATIONS, CLOUD & BIG DATA COMPUTING, INTERNET OF PEOPLE AND SMART CITY INNOVATION, 9 April 2020 (2020-04-09) *
SHI WENMEI: "Design of an Intelligent Pedometer Based on Neural Network Correction Technology", INSTRUMENTATION TECHNOLOGY, no. 9, 30 September 2016 (2016-09-30) *
LIANG JIUZHEN; ZHU XIANGJUN; CHEN: "Design of a High-Accuracy, Low-Sampling-Rate Step Counting Algorithm Based on a Smartphone Acceleration Sensor", JOURNAL OF NORTHWEST UNIVERSITY (NATURAL SCIENCE EDITION), vol. 45, no. 05, 31 October 2015 (2015-10-31) *

Also Published As

Publication number Publication date
CN114563012B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN107582028B (en) Sleep monitoring method and device
RU2656694C1 (en) Method and device for analysis of social relations
EP2919165A2 (en) Method and device for clustering
CN111160448B (en) Training method and device for image classification model
CN107202574B (en) Motion trail information correction method and device
CN111539443A (en) Image recognition model training method and device and storage medium
CN110751659B (en) Image segmentation method and device, terminal and storage medium
CN107480785B (en) Convolutional neural network training method and device
EP3312702A1 (en) Method and device for identifying gesture
CN111553464A (en) Image processing method and device based on hyper network and intelligent equipment
CN107025421B (en) Fingerprint identification method and device
CN109246184B (en) Time information acquisition method and device and readable storage medium
CN108020374B (en) Air pressure value determination method and device
CN105242837A (en) Application page acquisition method and terminal
CN108984628B (en) Loss value obtaining method and device of content description generation model
CN111539617B (en) Data processing method and device, electronic equipment, interaction system and storage medium
CN111177521A (en) Method and device for determining query term classification model
CN107158685B (en) Exercise verification method and apparatus
CN109145151B (en) Video emotion classification acquisition method and device
CN104991644B (en) Determine the method and apparatus that mobile terminal uses object
CN114563012B (en) Step counting method, device, equipment and storage medium
CN113190725B (en) Object recommendation and model training method and device, equipment, medium and product
CN104954683B (en) Determine the method and device of photographic device
CN109711386B (en) Method and device for obtaining recognition model, electronic equipment and storage medium
CN111650554A (en) Positioning method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant