CN106653058B - Dual-track-based step detection method


Info

Publication number
CN106653058B
Authority
CN
China
Prior art keywords
probability
audio frame
audio
sound
foot
Prior art date
Legal status
Active
Application number
CN201610971951.2A
Other languages
Chinese (zh)
Other versions
CN106653058A (en)
Inventor
王成
龙舟
钱跃良
王向东
Current Assignee
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN201610971951.2A priority Critical patent/CN106653058B/en
Publication of CN106653058A publication Critical patent/CN106653058A/en
Application granted granted Critical
Publication of CN106653058B publication Critical patent/CN106653058B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03: Speech or voice analysis techniques characterised by the type of extracted parameters
    • G10L25/45: Speech or voice analysis techniques characterised by the type of analysis window
    • G10L25/48: Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques specially adapted for particular use for comparison or discrimination

Abstract

The invention provides a dual-channel step detection method, comprising the following steps: 1) acquiring the dual-channel audio data to be detected and dividing it into frames; 2) extracting a feature vector from each audio frame and obtaining, from a step detection model, the probability that each audio frame belongs to a footstep; the step detection model is a machine learning model that takes the feature vector of an audio frame as input and outputs the probability that the frame belongs to a footstep; it is trained with audio frames labeled as heel-strike sounds and audio frames labeled as forefoot-strike sounds as positive samples, and with several audio frames lying between the forefoot-strike sound of one step and the heel-strike sound of the next step as negative samples; 3) deriving the time interval corresponding to each step from the obtained per-frame probabilities. The invention can detect steps from dual-channel audio alone, with high precision and recall; it adapts to a wide range of application scenarios and has strong generality.

Description

Dual-track-based step detection method
Technical Field
The invention relates to the technical field of computer applications, and in particular to gait analysis.
Background
Gait analysis is a technique that obtains and analyzes gait parameters by observing or recording the posture of the human body while walking. Common gait parameters include spatial parameters (stride length, step width, etc.), temporal parameters (cadence, walking speed, etc.), the left-right symmetry of these parameters, and the stability of long-term data. Gait analysis plays an important role in sports, medical rehabilitation, and related fields, and is widely applied and studied.
Most existing gait analysis technologies are based on video images, pressure sensors, electromyography, and the like. These devices are highly intrusive for the subject, which limits their range of application. Wang Cheng et al., in Chinese patent application 201610519761.7, proposed a method and apparatus for acquiring gait parameters based on two sound channels. Specifically, the acquisition device consists of a left-foot part and a right-foot part, each containing a sound acquisition sensor, worn on the two ankles. During walking, the footstep sounds of both feet are collected simultaneously, and the two devices are synchronized over Bluetooth, satisfying the requirement that the two-foot data be complete and symmetric. That scheme was the first to propose acquiring gait parameters from a dual-channel sound signal, together with a corresponding step identification scheme. However, step identification based on sound signals faces very complex application scenarios: different types of shoes, different ground surfaces, different walking directions, and different load conditions can all affect its accuracy. A dual-channel step detection solution with improved recognition accuracy is therefore needed.
Disclosure of Invention
The object of the invention is to provide a dual-channel step detection solution with improved recognition accuracy.
The invention provides a dual-track-based step detection method, comprising the following steps:
1) acquiring the dual-channel audio data to be detected and dividing it into frames to obtain the corresponding audio frames; the dual-channel audio data comprises left-foot-channel audio data collected by a left-foot acquisition device and right-foot-channel audio data collected by a right-foot acquisition device;
2) extracting a feature vector from each audio frame and obtaining, from the step detection model, the probability that each audio frame belongs to a footstep; the step detection model is a machine learning model that takes the feature vector of an audio frame as input and outputs the probability that the frame belongs to a footstep; it is trained with audio frames labeled as heel-strike sounds and audio frames labeled as forefoot-strike sounds as positive samples, and with several audio frames between the forefoot-strike sound of one step and the heel-strike sound of the next step as negative samples;
3) determining from the obtained probabilities whether each audio frame contains footstep sound, and thereby obtaining the time interval corresponding to each step.
In step 2), the audio frames of the left-foot channel together with their footstep probabilities form a left-foot-channel probability curve, and the audio frames of the right-foot channel together with their footstep probabilities form a right-foot-channel probability curve;
step 3) further comprises: fusing the left-foot-channel and right-foot-channel probability curves into a comprehensive probability curve, smoothing the comprehensive probability curve, and determining, based on a preset probability threshold, whether each audio frame contains footstep sound, thereby obtaining the time intervals corresponding to the steps.
In step 1), a Hamming window is used when framing the dual-channel audio data.
In step 1), the length of each audio frame is 10-30 ms, and adjacent frames overlap by 20%-30% of the Hamming window length.
In step 2), the features making up the feature vector of an audio frame include: autocorrelation coefficients, sub-band energy features, zero-crossing rate, linear prediction coefficient features, and mel-frequency cepstral coefficient features.
In step 2), the machine learning model is an SVM classifier. The positive samples comprise three audio frames centered at each labeled heel-strike position and three audio frames centered at each labeled forefoot-strike position in known left-foot-channel audio data, and likewise in known right-foot-channel audio data. The negative samples comprise nine audio frames between the forefoot-strike sound of one step and the heel-strike sound of the next step, in both the left-foot-channel and the right-foot-channel audio data.
In step 3), the comprehensive probability curve is smoothed with a low-pass filter.
In step 3), the relative cut-off frequency of the low-pass filter does not exceed 0.1.
In step 3), the comprehensive probability curve may be the summed superposition of the left-foot-channel and right-foot-channel probability curves.
Alternatively, in step 3), the comprehensive probability curve may be a fused probability curve obtained by taking, at each frame, the larger value of the left-foot-channel and right-foot-channel probability curves.
Compared with the prior art, the invention has the following technical effects:
1. The invention can detect steps from dual-channel audio alone, with high precision and recall.
2. The invention adapts to a wide variety of application scenarios and has strong generality.
Drawings
Embodiments of the invention are described in detail below with reference to the attached drawing figures, wherein:
FIG. 1 shows an example of the dual-channel audio data to which the invention relates;
FIG. 2 illustrates a framing approach in one embodiment of the invention;
FIG. 3 illustrates an example of dual-channel data labeling in one embodiment of the invention. The peaks marked as heel (vertical lines) represent heel-strike sounds; to keep the drawing uncluttered, the forefoot strikes are not labeled directly in this figure, but in fact each light-colored peak following a heel peak could be labeled as forefoot, representing the forefoot-strike sound;
FIG. 4 illustrates the respective probability curves of the left-foot and right-foot channels in one embodiment of the invention;
FIG. 5 illustrates the comprehensive probability curve obtained by summing the left-foot-channel and right-foot-channel probability curves in one embodiment of the invention;
FIG. 6 illustrates the comprehensive probability curve obtained by taking the larger of the left-channel and right-channel probabilities in one embodiment of the invention;
FIG. 7 shows the smoothed comprehensive probability curve obtained after summation;
FIG. 8 shows the smoothed comprehensive probability curve obtained after taking the larger value;
FIG. 9 shows a flow diagram of a step detection method according to an embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
According to one embodiment of the invention, a dual-channel step detection method is provided. The method collects gait data using only wearable acoustic sensors, then processes and analyzes the collected acoustic gait data with the algorithm described below to detect the corresponding steps.
In this embodiment, the input of the dual-channel step detection method is dual-channel audio data, obtained by deploying one acoustic sensor on each of the subject's feet and capturing the corresponding acoustic signals in real time while the subject walks. The two acoustic sensors on the left and right feet constitute the two channels. A method of acquiring the dual-channel audio data is described further below.
In this embodiment, the dual-channel audio data is first divided into frames and features are extracted from each audio frame. A classifier is trained on manually labeled positive and negative samples; the trained classifier then judges whether a given audio frame belongs to a footstep sound, yielding the probability that the frame belongs to a footstep. Computing this probability for all audio frames of the dual-channel audio yields the corresponding probability curves. After the results for the two channels are fused and smoothed, contiguous intervals of high footstep probability can be identified, and the step intervals thereby determined.
Specifically, the step detection method of the present embodiment comprises the following steps (see fig. 9):
Step 1: divide the dual-channel audio data into frames and apply a window, obtaining a series of audio frames. Fig. 2 shows an example of frame windowing. In this example, at an audio sampling rate of 8 kHz, each audio frame contains 200 samples, adjacent frames overlap by 120 samples, and a window function is applied to each frame. In other words, a sliding window is moved across the audio data, and the audio frame it selects serves as the basic unit of analysis in this embodiment. The sliding step of the window is chosen so that adjacent audio frames overlap: the Hamming window length is generally 10-30 ms, and about 20%-30% of the window length is taken as the sliding step.
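The framing and windowing just described can be sketched as follows. This is only an illustrative sketch assuming NumPy; the function name and defaults are mine, chosen to match the 8 kHz / 200-sample / 120-sample-overlap example above.

```python
import numpy as np

def frame_signal(x, frame_len=200, hop=80):
    """Split a 1-D signal into overlapping, Hamming-windowed frames.
    At 8 kHz, 200 samples = 25 ms; a 120-sample overlap means the
    window advances by 200 - 120 = 80 samples per frame."""
    n_frames = 1 + (len(x) - frame_len) // hop
    window = np.hamming(frame_len)
    return np.stack([x[i * hop : i * hop + frame_len] * window
                     for i in range(n_frames)])

x = np.random.randn(8000)          # 1 s of audio at 8 kHz
frames = frame_signal(x)
print(frames.shape)                # (98, 200)
```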
Step 2: extract features from each audio frame to obtain its feature vector. In this embodiment the feature vector includes: autocorrelation coefficients, sub-band energy (0-4 kHz) features, zero-crossing rate, linear prediction coefficient (LPCC) features, and mel-frequency cepstral coefficient (MFCC) features. In one embodiment, the feature vector has 36 dimensions in total: 10-dimensional sub-band energy features, 12-dimensional mel cepstral coefficient features, 12-dimensional linear prediction coefficients, plus the zero-crossing rate and the autocorrelation coefficient. Table 1 shows the dimensions of the feature vector.
TABLE 1
Feature                               Dimensions
Sub-band energy (0-4 kHz)             10
Mel-frequency cepstral coefficients   12
Linear prediction coefficients        12
Zero-crossing rate                    1
Autocorrelation coefficient           1
Total                                 36
It should be noted that neither the dimensionality of the feature vector nor the specific combination of features is exclusive. In other embodiments the feature vector may be any combination of some or all of the following: autocorrelation coefficients, sub-band energy (0-4 kHz), zero-crossing rate, LPCC, and MFCC, as long as the combination characterizes the information carried by the audio frame well.
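As an illustration of per-frame feature extraction, the sketch below computes a subset of the listed features (zero-crossing rate, a lag-1 autocorrelation coefficient, and 10 sub-band energies); the MFCC and linear-prediction features would normally come from a signal-processing library and are omitted to keep the sketch minimal. All names are hypothetical.

```python
import numpy as np

def simple_features(frame, n_bands=10):
    """Compute an illustrative subset of the per-frame features:
    zero-crossing rate, lag-1 autocorrelation coefficient, and
    n_bands sub-band energies covering 0 to Nyquist (0-4 kHz at 8 kHz)."""
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    ac = np.correlate(frame, frame, mode="full")
    mid = len(ac) // 2
    ac1 = ac[mid + 1] / (ac[mid] + 1e-12)          # normalized lag-1 autocorrelation
    power = np.abs(np.fft.rfft(frame)) ** 2        # power spectrum of the frame
    band_energy = [b.sum() for b in np.array_split(power, n_bands)]
    return np.concatenate(([zcr, ac1], band_energy))

feat = simple_features(np.random.randn(200))
print(feat.shape)                                  # (12,)
```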
Step 3: select the positive and negative samples for training.
The inventors observed that a footstep typically consists of two sounds, the heel and then the forefoot striking the ground, and that the devices on both feet pick up both strike sounds, although the signal from the same-side foot is comparatively strong. Therefore, during manual labeling, the positions of the two sounds of each step (heel strike and forefoot strike) are labeled in sequence on the audio channel of the corresponding foot (see fig. 3).
In this embodiment, 3 frames centered on each labeled position are taken as positive samples in each of the two channels, so each step yields 6 positive samples in a single channel and the two channels yield 12 positive samples in total. Then, 9 consecutive frames centered midway between two adjacent steps (between the second sound of the previous foot and the first sound of the next foot) are taken as negative samples, so there are 9 negative samples between every two steps in a single channel (i.e., between an adjacent left-foot landing sound and right-foot landing sound, or vice versa), and 18 between every two steps across the two channels.
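The sample-selection scheme above can be sketched as follows for a single channel. The helper name and its argument convention are hypothetical: `mark_frames` holds the manually labeled frame indices in order, alternating heel strike and forefoot strike.

```python
def sample_indices(mark_frames, n_pos=3, n_neg=9):
    """For one audio channel, return positive frame indices (3 frames
    centred on every labeled strike) and negative frame indices
    (9 consecutive frames centred midway between the second sound of
    one step and the first sound of the next)."""
    pos = []
    for m in mark_frames:
        pos.extend(range(m - n_pos // 2, m + n_pos // 2 + 1))
    neg = []
    # marks alternate heel, forefoot, heel, forefoot, ...; pair each
    # forefoot strike with the next step's heel strike
    for a, b in zip(mark_frames[1::2], mark_frames[2::2]):
        mid = (a + b) // 2                 # midpoint between two steps
        neg.extend(range(mid - n_neg // 2, mid + n_neg // 2 + 1))
    return pos, neg

# two steps: heel/forefoot at frames 100/140, then at 300/340
pos, neg = sample_indices([100, 140, 300, 340])
print(len(pos), len(neg))                  # 12 9
```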
Step 4: combine the positive and negative samples into a sample library and train the step detection classifier. The classifier may be an SVM classifier. Its input is the feature vector representing an audio frame, and its output is the probability that the frame belongs to a footstep; positive samples are assigned probability 1 and negative samples probability 0.
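A minimal training sketch using scikit-learn's SVC is shown below. The feature matrices are randomly generated stand-in data, not real footstep audio, and the hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in data: 36-dimensional feature vectors, as in Table 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.5, (60, 36)),     # frames at marked strikes
               rng.normal(-1.0, 0.5, (90, 36))])   # frames between steps
y = np.concatenate([np.ones(60), np.zeros(90)])    # 1 = step, 0 = non-step

# probability=True enables Platt scaling, so predict_proba yields the
# per-frame "belongs to a footstep" probability used for the curves.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
p = clf.predict_proba(X[:5])[:, 1]                 # P(frame belongs to a step)
print(p.shape)                                     # (5,)
```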
Step 5: apply the trained step detection classifier to each audio frame of the dual-channel audio data under test, obtaining the probability that each frame belongs to a footstep, and build the corresponding probability curves. A probability curve plots, against the audio frame index (or the time the frame represents), the probability that the corresponding frame belongs to a footstep. During detection, the dual-channel audio data under test is framed continuously, a feature vector is extracted from each frame, and the trained classifier computes the footstep probability, yielding a probability curve over the consecutive audio frames; the left-foot and right-foot audio channels give two such curves (see fig. 4). In this step, the audio frames are obtained as in step 1 and the feature vectors as in step 2, which are not repeated here.
Step 6: smooth the probability curves of the left and right feet, and find the intervals in the smoothed curve whose values stay above a preset threshold; these are the intervals belonging to steps.
In one embodiment (see fig. 5), the left-foot and right-foot probability curves are fused by summation. To suppress the considerable jitter and noise in the curves, the result is then smoothed with a low-pass filter (relative cut-off frequency 0.1). The smoothed curve has clearly visible high-probability intervals, so given a preset threshold (for example 0.8 or 1), the intervals that stay above the threshold can be found and taken as step intervals. The rationale is that both channels show elevated probability at the position of a footstep sound, so summation makes each step stand out. Fig. 7 shows the smoothed comprehensive probability curve obtained after summation.
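The summation fusion and low-pass smoothing can be sketched as below, assuming SciPy's Butterworth filter. Interpreting "relative cut-off frequency 0.1" as the cutoff normalized to the Nyquist frequency is my assumption, as is the filter order.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def fuse_and_smooth(p_left, p_right, rel_cutoff=0.1, order=4):
    """Sum the two per-channel probability curves, then low-pass
    filter the result.  filtfilt applies the filter forward and
    backward, so peak positions are not shifted in time."""
    combined = np.asarray(p_left) + np.asarray(p_right)
    b, a = butter(order, rel_cutoff)   # cutoff is relative to Nyquist
    return filtfilt(b, a, combined)

# two noisy, slightly offset synthetic probability curves
p_l = np.clip(np.sin(np.linspace(0.0, 20.0, 400)), 0.0, None)
p_r = np.clip(np.sin(np.linspace(0.5, 20.5, 400)), 0.0, None)
smooth = fuse_and_smooth(p_l, p_r)
print(smooth.shape)                    # (400,)
```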
In another embodiment (see fig. 6), the step intervals are determined by taking the per-frame maximum of the two channels. The channel on the same side as the step generally assigns it a higher probability, so that channel can be relied upon more, while the other channel's probability plays a complementary role; in this case the preset threshold may be 0.5. For each pair of candidate audio frames (the left-channel and right-channel frames at the same time index), the one with the higher probability is selected, and its value becomes the value of the comprehensive probability curve at that frame position, yielding a probability curve that combines the left-foot and right-foot audio data. The intervals of this curve that stay above the preset probability threshold are then found and taken as step intervals. Fig. 8 shows the smoothed comprehensive probability curve obtained after taking the larger value.
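The larger-value fusion and the threshold-interval search can be sketched as follows (illustrative only). The edge-detection trick pads the boolean mask so that runs touching either end of the curve are still closed properly.

```python
import numpy as np

def step_intervals(p_left, p_right, threshold=0.5):
    """Fuse two probability curves by taking the per-frame maximum and
    return [start, end) frame-index intervals where the fused curve
    exceeds the threshold."""
    fused = np.maximum(p_left, p_right)
    above = np.concatenate(([False], fused > threshold, [False]))
    edges = np.flatnonzero(np.diff(above.astype(int)))
    # edges alternate rising, falling, rising, falling, ...
    return [(int(s), int(e)) for s, e in zip(edges[::2], edges[1::2])]

p_l = np.array([0.1, 0.2, 0.9, 0.8, 0.1, 0.1, 0.7, 0.2])
p_r = np.array([0.1, 0.7, 0.6, 0.2, 0.1, 0.1, 0.9, 0.9])
print(step_intervals(p_l, p_r))        # [(1, 4), (6, 8)]
```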
This sound-based detection method does not miss real footstep sounds, and achieves a high recall rate and accuracy.
The inventors have conducted tests according to the above method, and the test data are shown in table 2.
TABLE 2
[Table 2 (test data) appears in the original document as an image and is not reproduced here.]
Test results: on test data covering sports shoes, leather shoes, wooden floors, cement floors, different walking directions, and different loads, the average precision and recall of the invention were 90.89% and 97.29%, respectively.
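For reference, precision and recall over detected step intervals could be computed along these lines. The centre-distance matching rule and its tolerance are hypothetical assumptions, not the patent's evaluation protocol.

```python
def precision_recall(detected, labeled, tol=5):
    """Greedily match detected step intervals to labeled ones when
    their centres lie within `tol` frames, then report precision
    (matched / detected) and recall (matched / labeled)."""
    centre = lambda iv: (iv[0] + iv[1]) / 2.0
    used, tp = set(), 0
    for d in detected:
        for j, l in enumerate(labeled):
            if j not in used and abs(centre(d) - centre(l)) <= tol:
                used.add(j)
                tp += 1
                break
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(labeled) if labeled else 0.0
    return precision, recall

p, r = precision_recall([(10, 20), (50, 60), (90, 100)],
                        [(11, 19), (52, 58), (200, 210)])
print(round(p, 2), round(r, 2))        # 0.67 0.67
```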
A wearable gait data acquisition device and an acquisition method for obtaining the dual-channel audio data are described below by way of example. Illustratively, the wearable gait data acquisition device comprises a microphone unit capable of capturing acoustic signals, and can transmit the acquired data to an intelligent terminal for signal processing. The device consists of a left-foot gait data acquisition node and a right-foot gait data acquisition node. Each node comprises a storage unit, a microprocessor (MCU), a power supply unit, a wireless transceiver unit (mainly used to connect to the gait data analysis end and transmit the node's data to it, e.g., over Bluetooth, WiFi, or a telecommunications network), a signal collector (e.g., a microphone that can receive both ordinary sound waves and ultrasound), and a signal transmitter (e.g., a transducer with an ultrasonic transmission function). During data collection, the signal collector (e.g., a microphone) captures the sound signals and sends them to the MCU for processing. The MCU also schedules the data sent and received by the wireless transceiver unit, access to the data in the storage unit, and so on.
As described above, dual-channel step detection requires dual-channel audio data collected while the subject walks. Illustratively, the method for acquiring the dual-channel audio data comprises the following steps:
Step a: fix the left-foot gait data acquisition node and the right-foot gait data acquisition node on the subject's left and right foot, respectively. Using two acquisition nodes simultaneously, one on each foot, and analyzing and fusing the left-foot and right-foot data yields more accurate information than single-foot measurement. Specifically, the nodes may be worn at various positions on the foot, or pre-installed in the shoe during manufacture; the pre-installed position may be the front, outer, or rear side of the upper, or a position in the sole near the forefoot, midfoot, or heel. Preferably, the left-foot and right-foot nodes are worn at symmetrical positions on the two feet.
In another embodiment, the gait data acquisition node may be a device independent of the shoe, worn at the ankle of each foot and fixed to the outer, rear, or front side of the ankle with an elastic bandage. Preferably, the left-foot and right-foot nodes are worn at symmetrical positions on the two feet. After both nodes are worn, the elastic bandage can be adjusted as needed so that each node is fixed firmly to the foot and does not shift. This wearing style is comfortable for the user while keeping the node well fixed to the foot, which in turn benefits the accuracy of step detection. To collect gait data, the acquisition nodes are powered on, the two-foot node program is started, and once the subject begins to walk, the subject's gait data is recorded.
Step b: while the subject walks, the left-foot and right-foot gait data acquisition nodes each record the sound signals produced by the feet, yielding the required dual-channel audio data.
Finally, it should be noted that the above examples are intended only to describe the technical solutions of the invention, not to limit them. The invention can be extended to other modifications, variations, applications, and embodiments, and all such modifications, variations, applications, and embodiments are considered to be within the spirit and teaching scope of the invention.

Claims (9)

1. A dual-track-based step detection method, comprising the following steps:
1) acquiring the dual-channel audio data to be detected and dividing it into frames to obtain the corresponding audio frames, the dual-channel audio data comprising left-foot-channel audio data collected by a left-foot acquisition device and right-foot-channel audio data collected by a right-foot acquisition device;
2) extracting a feature vector from each audio frame and obtaining, from a step detection model, the probability that each audio frame belongs to a footstep, the step detection model being a machine learning model that takes the feature vector of an audio frame as input and outputs the probability that the frame belongs to a footstep, trained with audio frames labeled as heel-strike sounds and audio frames labeled as forefoot-strike sounds as positive samples and with several audio frames between the forefoot-strike sound of one step and the heel-strike sound of the next step as negative samples;
3) determining from the obtained probabilities whether each audio frame contains footstep sound, and thereby obtaining the time interval corresponding to each step;
wherein in step 2) the audio frames of the left-foot channel together with their footstep probabilities form a left-foot-channel probability curve, and the audio frames of the right-foot channel together with their footstep probabilities form a right-foot-channel probability curve;
and step 3) further comprises: fusing the left-foot-channel and right-foot-channel probability curves into a comprehensive probability curve, smoothing the comprehensive probability curve, and determining, based on a preset probability threshold, whether each audio frame contains footstep sound, thereby obtaining the time intervals corresponding to the steps.
2. The dual-channel step detection method according to claim 1, wherein in step 1) the dual-channel audio data is framed using a Hamming window.
3. The dual-channel step detection method according to claim 2, wherein in step 1) the length of each audio frame is 10-30 ms and adjacent frames overlap by 20%-30% of the Hamming window length.
4. The dual-channel step detection method according to claim 3, wherein in step 2) the features making up the feature vector of an audio frame include: autocorrelation coefficients, sub-band energy features, zero-crossing rate, linear prediction coefficient features, and mel-frequency cepstral coefficient features.
5. The dual-channel step detection method according to claim 3, wherein in step 2) the machine learning model is an SVM classifier; the positive samples comprise three audio frames centered at each labeled heel-strike position and three audio frames centered at each labeled forefoot-strike position in known left-foot-channel audio data, and three audio frames centered at each labeled heel-strike position and three audio frames centered at each labeled forefoot-strike position in known right-foot-channel audio data; and the negative samples comprise nine audio frames between the forefoot-strike sound of one step and the heel-strike sound of the next step in the left-foot-channel audio data, and nine audio frames between the forefoot-strike sound of one step and the heel-strike sound of the next step in the right-foot-channel audio data.
6. The dual-channel step detection method according to claim 1, wherein in step 3) the comprehensive probability curve is smoothed with a low-pass filter.
7. The dual-channel step detection method according to claim 6, wherein in step 3) the relative cut-off frequency of the low-pass filter does not exceed 0.1.
8. The dual-channel step detection method according to claim 6, wherein in step 3) the comprehensive probability curve is the summed superposition of the left-foot-channel and right-foot-channel probability curves.
9. The dual-channel step detection method according to claim 6, wherein in step 3) the comprehensive probability curve is a fused probability curve obtained by taking, at each frame, the larger value of the left-foot-channel and right-foot-channel probability curves.
CN201610971951.2A 2016-10-28 2016-10-28 Dual-track-based step detection method Active CN106653058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610971951.2A CN106653058B (en) 2016-10-28 2016-10-28 Dual-track-based step detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610971951.2A CN106653058B (en) 2016-10-28 2016-10-28 Dual-track-based step detection method

Publications (2)

Publication Number Publication Date
CN106653058A CN106653058A (en) 2017-05-10
CN106653058B true CN106653058B (en) 2020-03-17

Family

ID=58821880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610971951.2A Active CN106653058B (en) 2016-10-28 2016-10-28 Dual-track-based step detection method

Country Status (1)

Country Link
CN (1) CN106653058B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109147771B (en) * 2017-06-28 2021-07-06 广州视源电子科技股份有限公司 Audio segmentation method and system
CN108388594A (en) * 2018-01-31 2018-08-10 上海乐愚智能科技有限公司 Dressing reminder method and smart appliance
CN109473113A (en) * 2018-11-13 2019-03-15 北京物灵智能科技有限公司 Sound recognition method and device
CN110189767B (en) * 2019-04-30 2022-05-03 上海大学 Recording mobile equipment detection method based on dual-channel audio
CN113689861B (en) * 2021-08-10 2024-02-27 上海淇玥信息技术有限公司 Intelligent track dividing method, device and system for mono call recording

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1787076A (en) * 2005-12-13 2006-06-14 浙江大学 Speaker recognition method based on hybrid support vector machine
CN1940570A (en) * 2005-09-16 2007-04-04 三星电子株式会社 Apparatus and method for detecting steps in a personal navigation system
CN101057273A (en) * 2004-10-18 2007-10-17 索尼株式会社 Content reproducing method and content reproducing device
CN102890930A (en) * 2011-07-19 2013-01-23 上海上大海润信息系统有限公司 Speech emotion recognizing method based on hidden Markov model (HMM) / self-organizing feature map neural network (SOFMNN) hybrid model
CN104729507A (en) * 2015-04-13 2015-06-24 大连理工大学 Gait recognition method based on inertial sensor
CN105912142A (en) * 2016-02-05 2016-08-31 深圳市爱康伟达智能医疗科技有限公司 Step recording and behavior identification method based on acceleration sensor

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2877350B2 (en) * 1989-05-24 1999-03-31 株式会社東芝 Voice recognition device with environmental monitor
JP2002197437A (en) * 2000-12-27 2002-07-12 Sony Corp Walking detection system, walking detector, device and walking detecting method
JP2013113746A (en) * 2011-11-29 2013-06-10 Panasonic Corp Noise monitoring device
JP5768021B2 (en) * 2012-08-07 2015-08-26 日本電信電話株式会社 Gait measuring device, method and program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101057273A (en) * 2004-10-18 2007-10-17 索尼株式会社 Content reproducing method and content reproducing device
CN1940570A (en) * 2005-09-16 2007-04-04 三星电子株式会社 Apparatus and method for detecting steps in a personal navigation system
CN1787076A (en) * 2005-12-13 2006-06-14 浙江大学 Speaker recognition method based on hybrid support vector machine
CN102890930A (en) * 2011-07-19 2013-01-23 上海上大海润信息系统有限公司 Speech emotion recognizing method based on hidden Markov model (HMM) / self-organizing feature map neural network (SOFMNN) hybrid model
CN104729507A (en) * 2015-04-13 2015-06-24 大连理工大学 Gait recognition method based on inertial sensor
CN105912142A (en) * 2016-02-05 2016-08-31 深圳市爱康伟达智能医疗科技有限公司 Step recording and behavior identification method based on acceleration sensor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Estimate spatial-temporal parameters of human gait using inertial sensors";Zhelong Wang;《2015 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems》;20151005;full text *
"Real-time gait cycle parameters recognition using a wearable motion detector";Che-Chang Yang;《Proceedings 2011 International Conference on System Science and Engineering》;20110725;full text *

Also Published As

Publication number Publication date
CN106653058A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
CN106653058B (en) Dual-track-based step detection method
CN106166071B (en) Gait parameter acquisition method and device
CN106531186B (en) Step detection method fusing acceleration and audio information
Mazilu et al. Online detection of freezing of gait with smartphones and machine learning techniques
US11047706B2 (en) Pedometer with accelerometer and foot motion distinguishing method
Wang et al. Spiderwalk: Circumstance-aware transportation activity detection using a novel contact vibration sensor
US20140066816A1 (en) Method, apparatus, and system for characterizing gait
CN109480857B (en) Device and method for detecting frozen gait of Parkinson disease patient
Song et al. Speed estimation from a tri-axial accelerometer using neural networks
Barth et al. Subsequence dynamic time warping as a method for robust step segmentation using gyroscope signals of daily life activities
TW201642803A (en) Wearable pulse sensing device signal quality estimation
JP2002197437A (en) Walking detection system, walking detector, device and walking detecting method
CN107170466B (en) Mopping sound detection method based on audio
CN108958482B (en) Similarity action recognition device and method based on convolutional neural network
CN107194193A (en) Ankle pump motion monitoring method and device
KR102304300B1 (en) A method and apparatus for detecting walking factor with portion acceleration sensor
KR101998465B1 (en) A gait analysis device and method for neurological diseases using ankle-worn accelerometer
CN107247974B (en) Body-building exercise identification method and system based on multi-source data fusion
CN109743667A (en) Earphone wears detection method and earphone
CN105496371A (en) Method for emotion monitoring of call center service staff
CN108919962B (en) Auxiliary piano training method based on brain-computer data centralized processing
CN111079651A (en) Re-optimization of gait classification method
GB2600126A (en) Improvements in or relating to wearable sensor apparatus
CN103996405B (en) Music interaction method and system
CN110141266B (en) Bowel sound detection method based on wearable body sound capture technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20170510

Assignee: Beijing Zhongke Huicheng Technology Co., Ltd.

Assignor: Institute of Computing Technology, Chinese Academy of Sciences

Contract record no.: 2018110000005

Denomination of invention: Double-channel step detection method

License type: Common License

Record date: 20180222

EC01 Cancellation of recordation of patent licensing contract

Assignee: Beijing Zhongke Huicheng Technology Co., Ltd.

Assignor: Institute of Computing Technology, Chinese Academy of Sciences

Contract record no.: 2018110000005

Date of cancellation: 20180309

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20170510

Assignee: Luoyang Zhongke Huicheng Technology Co., Ltd.

Assignor: Institute of Computing Technology, Chinese Academy of Sciences

Contract record no.: 2018110000009

Denomination of invention: Double-channel step detection method

License type: Common License

Record date: 20180319

GR01 Patent grant