CN107862276B - Behavior identification method and terminal - Google Patents

Behavior identification method and terminal

Info

Publication number
CN107862276B
CN107862276B (application CN201711060185.5A)
Authority
CN
China
Prior art keywords
signal
segmented
segmentation
window
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711060185.5A
Other languages
Chinese (zh)
Other versions
CN107862276A (en)
Inventor
凌茵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Watertek Information Technology Co Ltd
Original Assignee
Beijing Watertek Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Watertek Information Technology Co Ltd filed Critical Beijing Watertek Information Technology Co Ltd
Priority to CN201711060185.5A priority Critical patent/CN107862276B/en
Publication of CN107862276A publication Critical patent/CN107862276A/en
Application granted granted Critical
Publication of CN107862276B publication Critical patent/CN107862276B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08: Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12: Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Complex Calculations (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a behavior identification method and a terminal. The behavior identification method comprises the following steps: collecting sensing data of a behavior to be recognized, segmenting the signal generated from the sensing data to generate segmented signals, and selecting at least one segment of the segmented signals for behavior recognition. The behavior recognition method and the terminal provided by the invention automatically segment the motion signal of the behavior to be recognized, so that the different types of behaviors to be recognized in the sensing data are separated; only the segment of interest generated by the segmentation then needs to be classified to identify the behavior type. This avoids frequent classification and behavior recognition operations, reduces the overhead of frequent automatic recognition, and achieves high recognition accuracy with short processing time.

Description

Behavior identification method and terminal
Technical Field
The present invention relates to computer technologies, and in particular, to a behavior recognition method and a terminal.
Background
Human behavior analysis based on intelligent motion sensor technology has been applied in fields such as health care, social networks, games, education and transportation. With the assistance of intelligent computation, human behavior patterns, abnormal events or diseases, behavior habits, motion statistics and other behavior information can be identified.
At present, behavior recognition methods mainly classify the collected behavior data and confirm a behavior only after the same type of behavior has been discriminated repeatedly a certain number of times. However, to ensure the accuracy of behavior recognition, current methods have to perform frequent classification and recognition operations, which is time-consuming and resource-consuming.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a behavior identification method and a terminal, which reduce the overhead of frequent automatic recognition operations while achieving high recognition accuracy and short processing time.
In one aspect, the present invention provides a behavior recognition method, including:
acquiring sensing data of behaviors to be recognized;
segmenting the signal generated by the sensing data to generate segmented signals;
and selecting at least one section of the segmented signals to perform behavior recognition.
In another aspect, the present invention provides a terminal, including:
the acquisition module, used for acquiring sensing data of the behavior to be identified;
the segmentation module, used for segmenting the signal generated by the sensing data to generate segmented signals;
and the identification module is used for selecting at least one section of the segmented signals to perform behavior identification.
In yet another aspect, the present invention provides a terminal, including: a processor and a memory for storing execution instructions; the processor calls the execution instruction to execute the operation of the behavior recognition method embodiment.
According to the behavior recognition method and the terminal, sensing data of the behavior to be recognized are collected, the signal generated from the sensing data is segmented to generate segmented signals, and at least one segment of the segmented signals is selected for behavior recognition. In the embodiment of the invention, the signal generated from the sensing data is segmented so that each segment of the segmented signals corresponds to one motion behavior of the user. The motion signal of the behavior to be recognized is therefore segmented automatically and the different behaviors to be recognized in the sensing data are separated; only a selected segment generated by the segmentation needs to be classified to identify the behavior type. This avoids frequent classification and behavior recognition operations, reduces the overhead of frequent automatic recognition, and achieves high recognition accuracy with short processing time.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention, and are not intended to limit the invention.
Fig. 1 is a flowchart of a behavior recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first quadrant of normalized auto-correlation of a reference window and a test window before a first zero crossing provided by an embodiment of the present invention;
fig. 3 is a flowchart of a behavior recognition method according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal according to a second embodiment of the present invention;
FIG. 6 shows the segmentation result of walking, running and sitting in the measured behavior data set 1 to which the present invention is applied;
FIG. 7 shows the noise separated at the walking-to-running switching phase by the segmentation algorithm provided by an embodiment of the present invention;
fig. 8 shows the segmentation result of walking, sitting, running and standing in the measured behavior data set 2 to which the present invention is applied.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The steps illustrated in the flowcharts of the figures may be performed in a computer system, for example as a set of computer-executable instructions. Also, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the order shown here.
Fig. 1 is a flowchart of a behavior recognition method according to an embodiment of the present invention, and as shown in fig. 1, the behavior recognition method specifically includes the following steps:
s101: and acquiring sensing data of the behavior to be identified.
The execution subject of the embodiment of the invention may be an intelligent terminal with a sensor function, such as a smartphone, or a wearable device with a sensor function, such as a smart bracelet.
Specifically, the intelligent terminal or the wearable device collects the sensing data of the behavior to be identified in real time through its sensor. The behavior to be recognized may refer to an activity state of the user, such as walking, running, sitting or standing; it may also refer to the scene where the user is located, such as outdoors, indoors or in a public place; it may also refer to a combination of the user's activity state and the scene where the user is located, such as standing outdoors or sitting indoors.
S102: and segmenting the signal generated by the sensing data to generate a segmented signal.
Specifically, each segment of the segmented signals corresponds to one motion behavior of the user. For example, if the signal generated by the sensing data includes two behaviors to be recognized, namely a walking behavior and a running behavior of the user, the segmented signals include two segments, one corresponding to the walking behavior and the other corresponding to the running behavior. By segmenting the signal generated by the sensing data, the embodiment of the invention automatically segments the motion signal of the behaviors to be recognized, separates the different types of behaviors to be recognized in the sensing data, and divides the sensing data of the behaviors to be recognized into data subsequences of different types. The signal generated by the sensing data can be obtained by filtering the acquired sensing data.
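The embodiment only states that the signal to be segmented can be obtained by filtering the acquired sensing data, without naming a particular filter. As a hedged illustration only, the sketch below converts hypothetical 3-axis accelerometer samples into a filtered magnitude signal with a SciPy low-pass Butterworth filter; the sampling rate, cutoff frequency and the function name sensing_data_to_signal are assumptions for illustration, not values taken from the patent.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def sensing_data_to_signal(acc_xyz, fs=50.0, cutoff=10.0):
    """Turn raw 3-axis accelerometer samples (N x 3) into the 1-D signal to segment.

    The magnitude removes orientation dependence; the low-pass filter suppresses
    high-frequency noise. fs and cutoff are assumed values, not taken from the patent.
    """
    magnitude = np.linalg.norm(np.asarray(acc_xyz, dtype=float), axis=1)
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, magnitude)
```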
Specifically, the segmenting of the signal generated by the sensing data in the embodiment of the present invention may be performed as follows: performing segmentation using statistical analysis based on a Gaussian model to generate segmented signals that change according to the mean and the variance; or performing segmentation using the time sequence processing method of a sliding window autocorrelation algorithm to generate segmented signals that change according to frequency and amplitude; or performing segmentation using both the statistical analysis based on a Gaussian model and the time sequence processing method of the sliding window autocorrelation algorithm to generate segmented signals that change according to the mean, the variance, the frequency and the amplitude. The specific segmentation methods based on the statistical analysis of the Gaussian model and on the sliding window autocorrelation algorithm are described in detail in the following embodiments.
S103: and selecting at least one segment of segmented signals to perform behavior recognition.
Specifically, one or more segments of the segmented signals can be selected for recognition, depending on the service that the intelligent terminal or the wearable device needs to provide to the user. For example, if the intelligent terminal or the wearable device needs to provide the user with a service related to walking, only the segmented signal corresponding to the user's walking behavior needs to be selected for recognition. That is, in the embodiment of the present invention, only the motion type of the data subsequence corresponding to the segmented signal generated in S102 needs to be identified. The embodiment of the present invention may apply feature extraction and classification techniques to perform behavior recognition on each segment of the segmented signals; the implementation principle of performing behavior recognition with feature extraction and classification techniques is the same as in the prior art and is not described here again.
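The patent leaves the concrete feature extraction and classification technique to the prior art, so the following is only a minimal sketch of step S103 under stated assumptions: simple statistical and spectral features per segment and a scikit-learn random forest stand in for whatever technique is actually used, and the sampling rate and the function names extract_features, train_recognizer and recognize are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(segment, fs=50.0):
    """Compute simple statistical and spectral features for one segmented signal.

    segment: 1-D array of signal samples belonging to one segment.
    fs: sampling rate in Hz (assumed value).
    """
    seg = np.asarray(segment, dtype=float)
    spectrum = np.abs(np.fft.rfft(seg - seg.mean()))
    freqs = np.fft.rfftfreq(seg.size, d=1.0 / fs)
    dominant_freq = freqs[np.argmax(spectrum)] if seg.size > 1 else 0.0
    return np.array([seg.mean(), seg.std(), np.mean(seg ** 2), dominant_freq])

def train_recognizer(labelled_segments):
    """labelled_segments: list of (segment, label) pairs, e.g. walking/running/sitting."""
    X = np.vstack([extract_features(seg) for seg, _ in labelled_segments])
    y = [label for _, label in labelled_segments]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

def recognize(clf, segment):
    """Recognize the behavior type of a single selected segment (step S103)."""
    return clf.predict(extract_features(segment).reshape(1, -1))[0]
```

In this sketch the classifier runs once per selected segment, which mirrors the idea that classification is applied to a segment of interest rather than to every raw window of sensing data.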
According to the behavior recognition method provided by the embodiment of the invention, sensing data of the behavior to be recognized are collected, the signal generated from the sensing data is segmented to generate segmented signals, and at least one segment of the segmented signals is selected for behavior recognition. Since the signal generated from the sensing data is segmented and each segment of the segmented signals corresponds to one motion behavior of the user, the motion signal of the behavior to be recognized is segmented automatically and the behaviors to be recognized in the sensing data are separated; only a selected segment generated by the segmentation needs to be classified to identify the behavior type. This avoids frequent classification and behavior recognition operations, reduces the overhead of frequent automatic recognition, and achieves high recognition accuracy with short processing time.
Further, in the above embodiments, segmenting the signal generated by the sensing data to generate segmented signals includes: generating a reference window and a test window corresponding to the signal generated by the sensing data; sliding the test window from the signal starting end to the signal tail end, generating a segmentation change point when the signal autocorrelation difference value of the reference window and the test window is greater than a set threshold, and generating a first segmented signal segmented at the segmentation change point, wherein the first segmented signal is a segmented signal determined according to frequency and amplitude changes; and taking the first segmented signal as the segmented signal.
Specifically, the embodiment of the present invention performs segmentation using the time sequence processing method of a sliding window autocorrelation algorithm. Fig. 2 is a schematic diagram of the first quadrant of the normalized autocorrelation of the reference window and the test window before the first zero-crossing point provided by the embodiment of the present invention; as shown in fig. 2, the sliding window autocorrelation algorithm measures the normalized autocorrelation difference between the reference window, taken before the signal frequency and amplitude change, and the test window, taken after the change. The reference window is fixed and the test window slides. The window length of the test window should be long enough to reflect the slowest frequency of the signal and shorter than the smallest expected segment, where the smallest expected segment refers to the shortest segment length among the separated segments; if the test window were longer than the shortest segment length, the segment with the shortest length could not be detected, so this constraint ensures that segmented signals of all segment lengths can be detected. When the autocorrelation difference between the reference window and the test window exceeds the set threshold, a new segment is generated and the reference window is moved to the boundary of the current segment. The set threshold is obtained by applying a segmentation test to training data and adjusting the threshold according to the accuracy of the comparison results. Typically, the boundary point of the test window at which segmentation occurs is not an ideal estimate of the change point, but it can be used to locate the initial position of the change point; within the test window, the change point is determined by sliding the test window and finding the maximum autocorrelation difference between the reference window and the test window.
Specifically, the sliding window autocorrelation algorithm of the embodiment of the invention calculates the signal autocorrelation difference of the reference window and the test window from the amplitude difference ADIFF and the frequency difference FDIFF.
Wherein ATHR and FTHR are respective threshold values of the set amplitude difference and the set frequency difference. The amplitude difference ADIFF is calculated from p(0)_R and p(0)_T, where p(0)_R is the autocorrelation value of the reference window when the test window slides to the beginning of the signal and p(0)_T is the autocorrelation value of the test window when the test window slides to the beginning of the signal. The frequency difference FDIFF is calculated from the formula FDIFF = B/C, where B is the difference part of the autocorrelation signals of the reference window and the test window and C is the same part of the autocorrelation signals of the reference window and the test window.
The signal autocorrelation p(h) of the reference window and the test window is calculated over the window samples, where h is the time lag in the autocorrelation operation of the reference window and the test window (when h = 0, the test window is positioned at the beginning of the signal), i is the index of a sequence point in the reference window or the test window, x_i is a sequence point in the reference window or the test window, and T is the window length of the reference window or the test window.
The embodiment of the invention performs segmentation by the time sequence processing method of the sliding window autocorrelation algorithm and generates a new segment when the autocorrelation difference between the reference window and the test window exceeds the set threshold, thereby obtaining the first segmented signal that changes according to frequency and amplitude.
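The exact expressions for the autocorrelation p(h), the amplitude difference ADIFF and the overall autocorrelation difference appear only as formula images in the published text, so the sketch below is an assumed reading rather than the patent's actual computation: ADIFF is taken here as the normalized difference of the lag-0 autocorrelation (energy) of the two windows, FDIFF as the ratio of the differing part B to the common part C of their normalized autocorrelation sequences, and the window length, step size and thresholds ATHR/FTHR are placeholder values.

```python
import numpy as np

def autocorr(x):
    """Normalized autocorrelation p(h) of one window, for lags h = 0 .. len(x)-1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    full = np.correlate(x, x, mode="full")[len(x) - 1:]
    return full / full[0] if full[0] != 0 else full

def window_difference(ref, test, athr=0.3, fthr=0.3):
    """Decide whether the test window differs enough from the reference window.

    ADIFF compares the lag-0 (amplitude) energy of both windows; FDIFF compares
    the differing part B of the two autocorrelation sequences with their common
    part C (FDIFF = B / C). athr (ATHR) and fthr (FTHR) are assumed thresholds.
    """
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    p_ref, p_test = autocorr(ref), autocorr(test)
    e_ref, e_test = np.sum(ref ** 2), np.sum(test ** 2)
    adiff = abs(e_ref - e_test) / max(e_ref, e_test, 1e-12)
    n = min(len(p_ref), len(p_test))
    b = np.sum(np.abs(p_ref[:n] - p_test[:n]))                              # differing part B
    c = np.sum(np.minimum(np.abs(p_ref[:n]), np.abs(p_test[:n]))) + 1e-12   # common part C
    fdiff = b / c
    return adiff > athr or fdiff > fthr

def sliding_window_segmentation(signal, win=128, step=32):
    """Slide a test window from the start to the end of the signal and emit
    change points where the reference/test autocorrelation difference is large."""
    change_points = []
    ref_start = 0
    pos = win
    while pos + win <= len(signal):
        ref = signal[ref_start:ref_start + win]
        test = signal[pos:pos + win]
        if window_difference(ref, test):
            change_points.append(pos)   # boundary of the new segment
            ref_start = pos             # move the reference window to the segment boundary
        pos += step
    return change_points
```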
Further, in the above embodiments, segmenting the signal generated by sensing data to generate a segmented signal includes: generating a comparison function corresponding to the sensing data; the minimization of the comparison function generates a second segmented signal, which is a segmented signal that varies by mean and variance.
Specifically, the embodiment of the invention adopts a statistical analysis method based on a Gaussian model to perform segmentation. The statistical analysis method based on the Gaussian model calculates, according to the calculated number of segments M, the comparison function generated by Gaussian models with different means and variances, and minimizes the comparison function to generate the second segmented signal, which changes according to the mean and the variance.
Specifically, generating the comparison function corresponding to the sensing data includes:
generating observation data X_i using the formula X_i = μ_i + σ_i ε_i;
calculating a difference value J(i, x) between the observed data X_i and the preset Gaussian model;
generating the comparison function H(i) using the formula H(i) = J(i, x) + β·M(i);
where i is the index of the observed variable; μ_i and σ_i are respectively the mean value and the standard deviation of the observed variable in the preset Gaussian model of the preset segment; ε_i is a random variable of the observed variable in the preset segment; M(i) is the sequence of segment numbers, determined by the dimension of K(i) at index i; K(i) is the sequence of change points resulting from the minimization of the comparison function; M is the value of the segment-number sequence M(i) for which the maximum of the quadratic deviation in the normalized comparison is greater than 0.75; T is the total length of the sequence of segment numbers M(i); j is the index of the change point sequence, j = 1, 2, ..., M; k_j is the j-th change point in the sequence of change points and k_{j-1} is the (j-1)-th change point in the sequence of change points; k_{j-1}+1 : k_j are the sequence points from the change point k_{j-1} to the next change point k_j; and x̄(k_{j-1}+1 : k_j) is the mean value of the observed data corresponding to the sequence points from the change point k_{j-1} to the next change point k_j.
The embodiment of the invention performs segmentation by the statistical analysis method based on the Gaussian model: it calculates the comparison function generated by Gaussian models with different means and variances and minimizes the comparison function, thereby obtaining the second segmented signal that changes according to the mean and the variance.
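The detailed expressions for J(i, x) and β likewise appear only as formula images, so the following is a minimal sketch of the idea under assumptions: J is taken here as a per-segment Gaussian negative log-likelihood, the penalty term β·M(i) becomes a fixed cost per change point, and the minimization is done by dynamic programming; the penalty value, the minimum segment length and the function names are illustrative rather than the patent's.

```python
import numpy as np

def gaussian_cost(x):
    """Negative log-likelihood-style cost J of modelling segment x with one Gaussian
    (its own mean and variance)."""
    x = np.asarray(x, dtype=float)
    var = np.var(x) + 1e-12
    return 0.5 * len(x) * np.log(2.0 * np.pi * var) + 0.5 * len(x)

def segment_by_gaussian(x, beta=10.0, min_len=16):
    """Minimize H = J(segmentation) + beta * (number of change points) by dynamic
    programming and return the list of change points K.

    beta plays the role of the penalty weight in H(i) = J(i, x) + beta * M(i);
    its value here is only an assumed default.
    """
    n = len(x)
    best = np.full(n + 1, np.inf)        # best[i] = minimal cost of x[:i]
    best[0] = 0.0
    prev = np.zeros(n + 1, dtype=int)
    for i in range(min_len, n + 1):
        for j in range(0, i - min_len + 1):
            c = best[j] + gaussian_cost(x[j:i]) + beta
            if c < best[i]:
                best[i], prev[i] = c, j
    # Backtrack the change points k_1 < k_2 < ... < k_M.
    change_points, i = [], n
    while i > 0:
        j = prev[i]
        if j > 0:
            change_points.append(j)
        i = j
    return sorted(change_points)
```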
Fig. 3 is a flowchart of a behavior recognition method according to a second embodiment of the present invention, and as shown in fig. 3, the behavior recognition method specifically includes the following steps:
s301: and acquiring sensing data of the behavior to be identified.
It should be noted that the implementation manners of S301 and S101 are the same; refer to the description of S101 for details, which are not repeated here.
S302: and generating a reference window and a test window corresponding to the signal generated by the sensing data.
S303: and sliding the test window from the signal starting end to the signal tail end, generating a segment change point when the signal autocorrelation difference value of the reference window and the test window is greater than a set threshold value, and generating a first segment signal segmented at the change point.
Wherein the first segmented signal is a segmented signal determined by frequency and amplitude variations.
It should be noted that, S302 and S303 are the same as the implementation manner of performing segmentation by using the timing processing method of the sliding window autocorrelation algorithm in the foregoing embodiment, and refer to the description of performing segmentation by using the timing processing method of the sliding window autocorrelation algorithm for details, which is not described herein again.
S304: and generating a comparison function corresponding to the sensing data, and minimizing the comparison function to generate a second segmented signal.
Wherein the second segmented signal is a segmented signal determined by mean and variance variations.
It should be noted that, the implementation manner of S304 is the same as that of the segmentation performed by using the statistical analysis method based on the gaussian model in the foregoing embodiment, and details of the segmentation performed by using the statistical analysis method based on the gaussian model are described in the foregoing, and are not described herein again.
It should be noted that the order of S304 and S302 may be exchanged, that is, S304 may be executed first and then S302 and S303; the embodiment of the present invention is described by taking the case of executing S302 first as an example, but is not limited thereto.
S305: and fusing the first segmented signal and the second segmented signal to generate a final segmented signal.
Specifically, since segmentation that only considers changes of the mean and the variance is not sufficient, the embodiment of the present invention fuses the first segmented signal and the second segmented signal to generate a final segmented signal that incorporates changes of the mean, the variance, the amplitude and the frequency, and uses the frequency- and amplitude-based sliding window autocorrelation segments to correct the segments generated by the changes of the mean and the variance, so as to determine a reasonable segmentation result. By fusing the first segmented signal and the second segmented signal, more effective statistical characteristics of the data, such as mean, variance, amplitude and frequency, can be observed; the changes of these signal attributes effectively divide the motion signal into data subsequences of different motion types, thereby realizing automatic segmentation of the motion signal.
Specifically, the fusion of the first segmented signal and the second segmented signal may be performed by comparing both with a golden criterion and retaining and superposing the signals that are closest to the golden criterion. The golden criterion is a standard criterion known to those skilled in the art that mainly defines the standard forms of signals of different motion types, and is not described here again.
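The fusion step and the golden criterion are described only at this level of generality, so the sketch below shows one assumed interpretation rather than the patented procedure: change points of the mean/variance segmentation that lie close to a frequency/amplitude change point are replaced by the latter (the correction described above), while genuinely new boundaries are kept; the tolerance and the example change-point positions are hypothetical.

```python
def fuse_change_points(mean_var_points, freq_amp_points, tolerance=32):
    """Fuse the change points of the second (mean/variance) segmentation with those
    of the first (frequency/amplitude) segmentation.

    Where a mean/variance change point lies within `tolerance` samples of a
    frequency/amplitude change point, the latter is kept as the corrected position;
    otherwise the point is carried over unchanged. `tolerance` is an assumed value.
    """
    fused = list(freq_amp_points)
    for p in mean_var_points:
        if all(abs(p - q) > tolerance for q in freq_amp_points):
            fused.append(p)   # genuinely new boundary from the Gaussian analysis
    return sorted(fused)

# Example (hypothetical change points): the fused segmentation keeps the
# frequency/amplitude boundary at 510 instead of the nearby mean/variance
# boundary at 500, and adds the extra boundary at 1500.
print(fuse_change_points([500, 1500], [510, 980]))   # -> [510, 980, 1500]
```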
S306: and selecting at least one section of final segmented signal to perform behavior recognition.
Specifically, the implementation manner of S306 is the same as that of S103; see the description of S103 for details, which is not repeated here. The only difference from S103 is that the final segmented signal of the embodiment of the present invention is a segmented signal that incorporates changes of the mean, the variance, the amplitude and the frequency.
According to the behavior recognition method provided by the embodiment of the invention, sensing data of the behavior to be recognized are collected, a first segmented signal is generated by segmentation using the time sequence processing method of the sliding window autocorrelation algorithm, a second segmented signal is generated by segmentation using the statistical analysis method based on the Gaussian model, the first segmented signal and the second segmented signal are fused to generate a final segmented signal, and at least one segment of the final segmented signal is selected for behavior recognition. Since the signal generated from the sensing data is segmented and each segment of the segmented signals corresponds to one motion behavior of the user, the motion signal of the behavior to be recognized is segmented automatically and the behaviors to be recognized in the sensing data are separated; only a selected segment generated by the segmentation needs to be classified to identify the behavior type, which avoids frequent classification and behavior recognition operations, reduces the overhead of frequent automatic recognition, and achieves high recognition accuracy with short processing time. Meanwhile, the automatic segmentation algorithm designed by the embodiment of the invention mainly applies a statistical model and a time sequence processing method, and realizes unsupervised automatic segmentation by observing the inherent statistical characteristics of the signal, without the guidance of additional label data. Finally, the changes of the mean, the variance, the amplitude and the frequency are fused in the segmented signals, so that the generated segmented signals are more reasonable and accurate, and the accuracy of behavior recognition is improved.
Fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention, and as shown in fig. 4, the terminal according to the embodiment of the present invention includes: an acquisition module 41, a segmentation module 42 and an identification module 43.
The acquisition module 41 is used for acquiring sensing data of the behavior to be identified; a segmenting module 42, configured to segment the signal generated by the sensing data to generate a segmented signal; and the identification module 43 is configured to select at least one segment of the segmented signal for behavior identification.
The terminal provided in this embodiment is configured to execute the technical solution of the method embodiment shown in fig. 1, and the implementation principle and the implementation effect are similar, which are not described herein again.
Further, the segmenting module 42 is configured to segment the signal generated by the sensing data to generate a segmented signal, including:
generating a reference window and a test window corresponding to the signal generated by the sensing data; sliding the test window from a signal starting end to a signal tail end, generating a segmentation change point when the signal autocorrelation difference value of the reference window and the test window is greater than a set threshold value, and generating a first segmented signal segmented at the segmentation change point, wherein the first segmented signal is a segmented signal determined according to frequency and amplitude changes; taking the first segmented signal as the segmented signal;
further, the segmentation module 42 is configured to:
generating a reference window and a test window corresponding to the signal generated by the sensing data; sliding the test window from a signal starting end to a signal tail end, generating a segmentation change point when the signal autocorrelation difference value of the reference window and the test window is greater than a set threshold value, and generating a first segmentation signal segmented at the segmentation change point, wherein the first segmentation signal is a segmentation signal determined according to frequency and amplitude change;
generating a comparison function corresponding to the sensing data; minimizing the comparison function to generate a second segmented signal, the second segmented signal being a mean and variance varying segmented signal;
fusing the first segmented signal and the second segmented signal to generate a final segmented signal;
the identification module 43 is configured to select at least one segment of the segmented signal for behavior identification, and includes:
and selecting at least one section of the final segmented signal to perform behavior recognition.
Further, the terminal further includes a calculation module.
The calculation module is configured to calculate the signal autocorrelation difference of the reference window and the test window from the amplitude difference ADIFF and the frequency difference FDIFF;
wherein ATHR and FTHR are respective threshold values of the set amplitude difference and the set frequency difference; the amplitude difference ADIFF is calculated from p(0)_R and p(0)_T, where p(0)_R is the autocorrelation value of the reference window when the test window slides to the beginning of the signal and p(0)_T is the autocorrelation value of the test window when the test window slides to the beginning of the signal; the frequency difference FDIFF is calculated from the formula FDIFF = B/C, where B is the difference part of the autocorrelation signals of the reference window and the test window and C is the same part of the autocorrelation signals of the reference window and the test window.
Further, the segmentation module 42 is configured to generate the comparison function corresponding to the sensing data, including:
generating observation data X_i using the formula X_i = μ_i + σ_i ε_i;
calculating a difference value J(i, x) between the observed data X_i and the preset Gaussian model;
generating the comparison function H(i) using the formula H(i) = J(i, x) + β·M(i);
where i is the index of the observed variable; μ_i and σ_i are respectively the mean value and the standard deviation of the observed variable in the preset Gaussian model of the preset segment; ε_i is a random variable of the observed variable in the preset segment; M(i) is the sequence of segment numbers, determined by the dimension of K(i) at index i; K(i) is the sequence of change points resulting from the minimization of the comparison function; M is the value of the segment-number sequence M(i) for which the maximum of the quadratic deviation in the normalized comparison is greater than 0.75; T is the total length of the sequence of segment numbers M(i); j is the index of the change point sequence, j = 1, 2, ..., M; k_j is the j-th change point in the sequence of change points and k_{j-1} is the (j-1)-th change point in the sequence of change points; k_{j-1}+1 : k_j are the sequence points from the change point k_{j-1} to the next change point k_j; and x̄(k_{j-1}+1 : k_j) is the mean value of the observed data corresponding to the sequence points from the change point k_{j-1} to the next change point k_j.
Fig. 5 is a schematic structural diagram of a terminal according to a second embodiment of the present invention, and as shown in fig. 5, the terminal according to the second embodiment of the present invention includes: a memory 51 and a processor 52.
The memory 51 is used for storing execution instructions, and the processor 52 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits that implement the embodiments of the present invention. When the terminal is running, the processor 52 communicates with the memory 51, and the processor 52 invokes the execution instructions for performing the following operations:
acquiring sensing data of behaviors to be recognized;
segmenting the signal generated by the sensing data to generate segmented signals;
and selecting at least one section of the segmented signals to perform behavior recognition.
Further, the processor 52 segments the signal generated by the sensing data to generate a segmented signal, which includes:
generating a reference window and a test window corresponding to the signal generated by the sensing data;
sliding the test window from a signal starting end to a signal tail end, generating a segmentation change point when the signal autocorrelation difference value of the reference window and the test window is larger than a set threshold value, and generating a first segmentation signal segmented at the segmentation change point, wherein the first segmentation signal is a segmentation signal determined according to frequency and amplitude change;
and taking the first segmentation signal as the segmentation signal.
Further, processor 52 is also configured to:
generating a reference window and a test window corresponding to the signal generated by the sensing data;
sliding the test window from a signal starting end to a signal tail end, generating a segmentation change point when the signal autocorrelation difference value of the reference window and the test window is larger than a set threshold value, and generating a first segmentation signal segmented at the segmentation change point, wherein the first segmentation signal is a segmentation signal determined according to frequency and amplitude change;
generating a comparison function corresponding to the sensing data; minimizing the comparison function to generate a second segmented signal, the second segmented signal being a mean and variance varying segmented signal;
fusing the first segmented signal and the second segmented signal to generate a final segmented signal;
and selecting at least one section of the final segmented signal to perform behavior recognition.
Further, processor 52 is also configured to:
calculate the signal autocorrelation difference of the reference window and the test window from the amplitude difference ADIFF and the frequency difference FDIFF;
wherein ATHR and FTHR are respective threshold values of the set amplitude difference and the set frequency difference; the amplitude difference ADIFF is calculated from p(0)_R and p(0)_T, where p(0)_R is the autocorrelation value of the reference window when the test window slides to the beginning of the signal and p(0)_T is the autocorrelation value of the test window when the test window slides to the beginning of the signal; the frequency difference FDIFF is calculated from the formula FDIFF = B/C, where B is the difference part of the autocorrelation signals of the reference window and the test window and C is the same part of the autocorrelation signals of the reference window and the test window.
Further, processor 52 generates the comparison function corresponding to the sensing data, including:
generating observation data X_i using the formula X_i = μ_i + σ_i ε_i;
calculating a difference value J(i, x) between the observed data X_i and the preset Gaussian model;
generating the comparison function H(i) using the formula H(i) = J(i, x) + β·M(i);
where i is the index of the observed variable; μ_i and σ_i are respectively the mean value and the standard deviation of the observed variable in the preset Gaussian model of the preset segment; ε_i is a random variable of the observed variable in the preset segment; M(i) is the sequence of segment numbers, determined by the dimension of K(i) at index i; K(i) is the sequence of change points resulting from the minimization of the comparison function; M is the value of the segment-number sequence M(i) for which the maximum of the quadratic deviation in the normalized comparison is greater than 0.75; T is the total length of the sequence of segment numbers M(i); j is the index of the change point sequence, j = 1, 2, ..., M; k_j is the j-th change point in the sequence of change points and k_{j-1} is the (j-1)-th change point in the sequence of change points; k_{j-1}+1 : k_j are the sequence points from the change point k_{j-1} to the next change point k_j; and x̄(k_{j-1}+1 : k_j) is the mean value of the observed data corresponding to the sequence points from the change point k_{j-1} to the next change point k_j.
Experiment 1:
A smart phone with an embedded inertial motion sensor is attached to a human body and acceleration signals are collected, generating a human behavior data set 1 including walking, running and sitting. Fig. 6 shows the segmentation result of walking, running and sitting in the measured behavior data set 1 to which the method is applied, and fig. 7 shows the noise separated at the switching phase between walking and running by the segmentation algorithm provided by the embodiment of the invention. As shown in fig. 6 and fig. 7, not only do the different movements successfully form segments, but the sensor drift at the beginning of the measurement, the noise at the walking and running switching phase, and the noise at the end of the measurement are also successfully separated.
Experiment 2:
The experimentally obtained human behavior data set 2 includes walking, sitting, running and standing. Fig. 8 shows the segmentation result of walking, sitting, running and standing in the measured behavior data set 2 to which the present invention is applied; as shown in fig. 8, the four daily behaviors are successfully separated. Based on the segmented results, behavior recognition is carried out by applying feature extraction and classification techniques; the test results are shown in Table 1, and the accuracy rate exceeds 98%.
TABLE 1
Although the embodiments of the present invention have been described above, the above description is only for the convenience of understanding the present invention, and is not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A behavior recognition method, comprising:
acquiring sensing data of behaviors to be recognized;
segmenting the signal generated by the sensing data to generate segmented signals;
selecting at least one section of segmented signal to perform behavior recognition;
the segmenting the signal generated by the sensing data to generate a segmented signal includes:
generating a reference window and a test window corresponding to the signal generated by the sensing data;
sliding the test window from a signal starting end to a signal tail end, generating a segmentation change point when the signal autocorrelation difference value of the reference window and the test window is larger than a set threshold value, and generating a first segmentation signal segmented at the segmentation change point, wherein the first segmentation signal is a segmentation signal determined according to frequency and amplitude change;
generating a comparison function corresponding to the sensing data; minimizing the comparison function to generate a second segmented signal, the second segmented signal being a segmented signal determined by mean and variance variations;
fusing the first segmented signal and the second segmented signal to generate a final segmented signal;
the selecting at least one segment of the segmented signals for behavior recognition comprises: selecting at least one section of the final segmented signal to perform behavior recognition;
the generating of the comparison function corresponding to the sensing data includes:
generating observation data X_i using the formula X_i = μ_i + σ_i ε_i;
calculating a difference value J(i, x) between the observed data X_i and the preset Gaussian model;
generating the comparison function H(i) using the formula H(i) = J(i, x) + β·M(i);
where i is the index of the observed variable; μ_i and σ_i are respectively the mean value and the standard deviation of the observed variable in the preset Gaussian model of the preset segment; ε_i is a random variable of the observed variable in the preset segment; M(i) is the sequence of segment numbers, determined by the dimension of K(i) at index i; K(i) is the sequence of change points resulting from the minimization of the comparison function; M is the value of the segment-number sequence M(i) for which the maximum of the quadratic deviation in the normalized comparison is greater than 0.75; T is the total length of the sequence of segment numbers M(i); j is the index of the change point sequence, j = 1, 2, ..., M; k_j is the j-th change point in the sequence of change points and k_{j-1} is the (j-1)-th change point in the sequence of change points; k_{j-1}+1 : k_j are the sequence points from the change point k_{j-1} to the next change point k_j; and x̄(k_{j-1}+1 : k_j) is the mean value of the observed data corresponding to the sequence points from the change point k_{j-1} to the next change point k_j.
2. The behavior recognition method of claim 1, wherein segmenting the signal generated by the sensing data to generate a segmented signal further comprises:
generating a reference window and a test window corresponding to the signal generated by the sensing data;
sliding the test window from a signal starting end to a signal tail end, generating a segmentation change point when the signal autocorrelation difference value of the reference window and the test window is larger than a set threshold value, and generating a first segmentation signal segmented at the segmentation change point, wherein the first segmentation signal is a segmentation signal determined according to frequency and amplitude change;
and taking the first segmentation signal as the segmentation signal.
3. The behavior recognition method according to claim 1 or 2, wherein the signal autocorrelation difference of the reference window and the test window is obtained by:
calculating the signal autocorrelation difference of the reference window and the test window from the amplitude difference ADIFF and the frequency difference FDIFF;
wherein ATHR and FTHR are respective threshold values of the set amplitude difference and the set frequency difference; the amplitude difference ADIFF is calculated from p(0)_R and p(0)_T, where p(0)_R is the autocorrelation value of the reference window when the test window slides to the beginning of the signal and p(0)_T is the autocorrelation value of the test window when the test window slides to the beginning of the signal; the frequency difference FDIFF is calculated from the formula FDIFF = B/C, where B is the difference part of the autocorrelation signals of the reference window and the test window and C is the same part of the autocorrelation signals of the reference window and the test window.
4. A terminal, comprising:
the acquisition module, used for acquiring sensing data of the behavior to be identified;
the segmentation module, used for segmenting the signal generated by the sensing data to generate segmented signals;
the identification module is used for selecting at least one section of the segmented signals to perform behavior identification;
the segmenting module is used for segmenting the signal generated by the sensing data to generate a segmented signal, and comprises:
generating a reference window and a test window corresponding to the signal generated by the sensing data;
sliding the test window from a signal starting end to a signal tail end, generating a segmentation change point when the signal autocorrelation difference value of the reference window and the test window is larger than a set threshold value, and generating a first segmentation signal segmented at the segmentation change point, wherein the first segmentation signal is a segmentation signal determined according to frequency and amplitude change;
generating a comparison function corresponding to the sensing data; minimizing the comparison function to generate a second segmented signal, the second segmented signal being a mean and variance varying segmented signal;
fusing the first segmented signal and the second segmented signal to generate a final segmented signal;
the identification module is used for selecting at least one section of the segmented signals to perform behavior identification, and comprises:
selecting at least one section of the final segmented signal to perform behavior recognition;
the segmentation module is used for generating the comparison function corresponding to the sensing data, and comprises:
generating observation data X_i using the formula X_i = μ_i + σ_i ε_i;
calculating a difference value J(i, x) between the observed data X_i and the preset Gaussian model;
generating the comparison function H(i) using the formula H(i) = J(i, x) + β·M(i);
where i is the index of the observed variable; μ_i and σ_i are respectively the mean value and the standard deviation of the observed variable in the preset Gaussian model of the preset segment; ε_i is a random variable of the observed variable in the preset segment; M(i) is the sequence of segment numbers, determined by the dimension of K(i) at index i; K(i) is the sequence of change points resulting from the minimization of the comparison function; M is the value of the segment-number sequence M(i) for which the maximum of the quadratic deviation in the normalized comparison is greater than 0.75; T is the total length of the sequence of segment numbers M(i); j is the index of the change point sequence, j = 1, 2, ..., M; k_j is the j-th change point in the sequence of change points and k_{j-1} is the (j-1)-th change point in the sequence of change points; k_{j-1}+1 : k_j are the sequence points from the change point k_{j-1} to the next change point k_j; and x̄(k_{j-1}+1 : k_j) is the mean value of the observed data corresponding to the sequence points from the change point k_{j-1} to the next change point k_j.
5. The terminal of claim 4, wherein the segmenting module is configured to segment the signal generated by the sensing data to generate a segmented signal, and further comprising:
generating a reference window and a test window corresponding to the signal generated by the sensing data;
sliding the test window from a signal starting end to a signal tail end, generating a segment change point when the signal autocorrelation difference value of the reference window and the test window is larger than a set threshold value, and generating a first segment signal segmented at the segment change point, wherein the first segment signal is a segment signal determined according to frequency and amplitude changes;
and taking the first segmentation signal as the segmentation signal.
6. The terminal according to claim 4 or 5, characterized in that the terminal further comprises:
a calculation module, configured to calculate the signal autocorrelation difference of the reference window and the test window from the amplitude difference ADIFF and the frequency difference FDIFF;
wherein ATHR and FTHR are respective threshold values of the set amplitude difference and the set frequency difference; the amplitude difference ADIFF is calculated from p(0)_R and p(0)_T, where p(0)_R is the autocorrelation value of the reference window when the test window slides to the beginning of the signal and p(0)_T is the autocorrelation value of the test window when the test window slides to the beginning of the signal; the frequency difference FDIFF is calculated from the formula FDIFF = B/C, where B is the difference part of the autocorrelation signals of the reference window and the test window and C is the same part of the autocorrelation signals of the reference window and the test window.
7. A terminal, comprising: a processor and a memory for storing execution instructions; the processor calls the execution instruction for executing the behavior recognition method according to any one of claims 1 to 3.
CN201711060185.5A 2017-11-01 2017-11-01 Behavior identification method and terminal Active CN107862276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711060185.5A CN107862276B (en) 2017-11-01 2017-11-01 Behavior identification method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711060185.5A CN107862276B (en) 2017-11-01 2017-11-01 Behavior identification method and terminal

Publications (2)

Publication Number Publication Date
CN107862276A CN107862276A (en) 2018-03-30
CN107862276B true CN107862276B (en) 2021-05-18

Family

ID=61696620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711060185.5A Active CN107862276B (en) 2017-11-01 2017-11-01 Behavior identification method and terminal

Country Status (1)

Country Link
CN (1) CN107862276B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151593B (en) * 2018-09-30 2021-07-02 广州酷狗计算机科技有限公司 Anchor recommendation method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268577A (en) * 2014-06-27 2015-01-07 大连理工大学 Human body behavior identification method based on inertial sensor
CN106339071A (en) * 2015-07-08 2017-01-18 中兴通讯股份有限公司 Method and device for identifying behaviors
CN106503673A (en) * 2016-11-03 2017-03-15 北京文安智能技术股份有限公司 A kind of recognition methodss of traffic driving behavior, device and a kind of video acquisition device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268577A (en) * 2014-06-27 2015-01-07 大连理工大学 Human body behavior identification method based on inertial sensor
CN106339071A (en) * 2015-07-08 2017-01-18 中兴通讯股份有限公司 Method and device for identifying behaviors
CN106503673A (en) * 2016-11-03 2017-03-15 北京文安智能技术股份有限公司 A kind of recognition methodss of traffic driving behavior, device and a kind of video acquisition device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
The sliding window correlation procedure for detecting hidden correlations: existence of behavioral subgroups illustrated with aged rats;Daniela Schulz et al.;《Journal of Neuroscience Methods》;20021016;第121卷(第2期);第129-137页 *
Research on human behavior recognition and state monitoring methods based on wearable devices (基于可穿戴设备的人体行为识别与状态监测方法研究); 杨伟笃 (Yang Weidu); China Excellent Master's Theses Full-text Database, Information Science and Technology Series (《中国优秀硕士学位论文全文数据库 信息科技辑》); 20170215; Vol. 2017, No. 2; main text pp. 9-27 *

Also Published As

Publication number Publication date
CN107862276A (en) 2018-03-30

Similar Documents

Publication Publication Date Title
CN107506684B (en) Gait recognition method and device
CN109145937A (en) A kind of method and device of model training
CN105912142B (en) A kind of note step and Activity recognition method based on acceleration sensor
CN108197592B (en) Information acquisition method and device
WO2012013711A2 (en) Semantic parsing of objects in video
WO2012119126A2 (en) Apparatus, system, and method for automatic identification of sensor placement
Kobayashi et al. Three-way auto-correlation approach to motion recognition
CN108814618A (en) A kind of recognition methods of motion state, device and terminal device
KR20120052610A (en) Apparatus and method for recognizing motion using neural network learning algorithm
CN111208508A (en) Motion quantity measuring method and device and electronic equipment
Avgerinakis et al. Activity detection using sequential statistical boundary detection (ssbd)
Zeng et al. Gait recognition across different walking speeds via deterministic learning
CN110348492A (en) A kind of correlation filtering method for tracking target based on contextual information and multiple features fusion
CN114048773A (en) Behavior identification method and system based on transfer learning and WiFi
KR20220098312A (en) Method, apparatus, device and recording medium for detecting related objects in an image
CN107862276B (en) Behavior identification method and terminal
CN113837268B (en) Method, device, equipment and medium for determining track point state
Liu et al. Automatic fall risk detection based on imbalanced data
Khalifa et al. Feature selection for floor-changing activity recognition in multi-floor pedestrian navigation
Bobkov et al. Activity recognition on handheld devices for pedestrian indoor navigation
CN113205045A (en) Pedestrian re-identification method and device, electronic equipment and readable storage medium
Vrânceanu et al. NLP EAC recognition by component separation in the eye region
Albu et al. Generic temporal segmentation of cyclic human motion
Chen An LSTM recurrent network for step counting
Abdulghani et al. Discover human poses similarity and action recognition based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant