CN114495279A - Active learning behavior recognition model training method, terminal device and storage medium - Google Patents


Info

Publication number
CN114495279A
CN114495279A
Authority
CN
China
Prior art keywords
behavior recognition
recognition model
sample
behavior
channel state
Prior art date
Legal status
Pending
Application number
CN202210110983.9A
Other languages
Chinese (zh)
Inventor
赵广智
龚伟
Current Assignee
Institute of Advanced Technology, University of Science and Technology of China
Original Assignee
Institute of Advanced Technology, University of Science and Technology of China
Priority date
Filing date
Publication date
Application filed by Institute of Advanced Technology, University of Science and Technology of China
Priority to CN202210110983.9A
Publication of CN114495279A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an active learning behavior recognition model training method, a terminal device and a storage medium. The method comprises the following steps: inputting a sample to be trained into the behavior recognition model to determine a behavior recognition result of the sample to be trained, and determining a target training sample according to the behavior recognition result; acquiring a behavior label corresponding to the target training sample; and inputting the target training sample and the behavior label into the behavior recognition model for training so as to update the behavior recognition model. By inputting the samples to be trained into the behavior recognition model to determine their behavior recognition results, and then determining the target training samples according to those results, the method selects from the samples to be trained the "difficult samples" of the current behavior recognition model. This reduces the participation of invalid or inefficient samples in training and improves the training efficiency of the behavior recognition model.

Description

Active learning behavior recognition model training method, terminal device and storage medium
Technical Field
The invention relates to the technical field of behavior recognition, and in particular to an active learning behavior recognition model training method, a terminal device and a storage medium.
Background
In recent years, behavior recognition technology has developed rapidly and is gradually being applied in fields such as motion tracking, security monitoring and medical health. Current behavior recognition approaches can be divided into three types: vision-based, wearable-sensor-based and wireless-signal-based. Behavior recognition based on wireless signals such as RFID, ZigBee and WiFi distinguishes actions according to how different user behaviors change parameters of the wireless signal, such as signal strength and channel quality. However, current signal-based behavior recognition models are usually trained with fully supervised deep learning, so a large number of labeled training samples is required to guarantee model performance, which makes deployment costly and model training inefficient.
The above is intended only to assist in understanding the technical solution of the present invention, and does not constitute an admission that it is prior art.
Disclosure of Invention
The main purpose of the invention is to provide an active learning behavior recognition model training method, a terminal device and a storage medium, aiming to reduce the number of labeled training samples required while preserving the performance of a recognition model built with deep learning, thereby improving model training efficiency and reducing deployment cost.
To achieve the above object, the invention provides an active learning behavior recognition model training method, which includes:
inputting a sample to be trained into the behavior recognition model to determine a behavior recognition result of the sample to be trained, and determining a target training sample according to the behavior recognition result;
acquiring a behavior label corresponding to the target training sample;
and inputting the target training sample and the behavior label into a behavior recognition model for training so as to update the behavior recognition model.
Optionally, the step of determining a target training sample according to the behavior recognition result includes:
when the behavior recognition result is difficult to judge, determining the sample to be trained as the target training sample; or, when the probability values corresponding to the limb action behavior types in the behavior recognition result are all smaller than or equal to a preset probability value, determining the sample to be trained as the target training sample.
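The selection rule above can be sketched as follows. This is a minimal illustration, assuming the model exposes one probability per behavior class (e.g. via softmax); `predict_proba` and the default threshold of 0.6 are hypothetical stand-ins, since the patent only specifies "a preset probability value".

```python
def select_target_samples(candidates, predict_proba, threshold=0.6):
    """Return the candidate samples the current model finds hard.

    candidates    -- iterable of unlabeled samples (e.g. CSI spectrograms)
    predict_proba -- hypothetical callable: sample -> list of class probabilities
    threshold     -- the preset probability value from the method
    """
    targets = []
    for sample in candidates:
        probs = predict_proba(sample)
        # "difficult to judge" also covers an empty/degenerate prediction;
        # otherwise keep the sample when no class probability exceeds the threshold
        if not probs or max(probs) <= threshold:
            targets.append(sample)
    return targets
```

A sample with one confident class (e.g. probability 0.7) is skipped, while a sample whose probabilities are all at or below the threshold is kept as a "difficult sample".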
Optionally, the step of obtaining a behavior label corresponding to the target training sample includes:
acquiring channel state information data and video information when a target object acts;
determining the sample to be trained according to the channel state information data, wherein the sample to be trained comprises the target training sample;
determining the corresponding relation between the sample to be trained and the video information according to the time information;
and acquiring a behavior label corresponding to the target training sample according to the corresponding relation.
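The time-based correspondence described above can be sketched as a simple lookup from a sample's timestamp into labeled video segments. The timestamps, segment boundaries and label strings below are illustrative assumptions; the patent only states that the correspondence is established through time information.

```python
def label_from_video(sample_time, video_segments):
    """Find the behavior label of the video segment covering sample_time.

    video_segments -- list of (start, end, label) tuples in seconds,
                      obtained by annotating the recorded video
    """
    for start, end, label in video_segments:
        if start <= sample_time <= end:
            return label
    return None  # no video segment covers this sample's time

# hypothetical annotated segments for two recorded behaviors
segments = [(0.0, 2.0, "walking"), (2.0, 4.5, "jumping")]
```

A CSI sample captured at t = 3.0 s would then receive the label of the segment spanning that instant.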
Optionally, the step of determining the to-be-trained sample according to the channel state information data includes:
preprocessing the channel state information data to obtain the preprocessed channel state information data;
and generating a channel state information spectrogram according to the preprocessed channel state information data and a Fourier transform algorithm so as to determine the sample to be trained.
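The spectrogram generation step can be sketched with a short-time Fourier transform over the denoised CSI amplitude stream. The window length, hop size and Hann window below are assumptions for illustration; the patent specifies only that a Fourier transform algorithm is applied to the preprocessed data.

```python
import numpy as np

def csi_spectrogram(csi_amplitude, win_len=64, hop=32):
    """Turn a 1-D CSI amplitude series into a time-frequency spectrogram."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(csi_amplitude) - win_len + 1, hop):
        seg = csi_amplitude[start:start + win_len] * window
        # magnitude of the one-sided FFT of each windowed frame
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.array(frames).T  # shape: (freq_bins, time_frames)
```

For a 256-sample stream with these parameters the result has 33 frequency bins (win_len // 2 + 1) and 7 time frames; the resulting 2-D image is what the recognition model consumes as a training sample.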
Optionally, the step of preprocessing the channel state information data to obtain the preprocessed channel state information data includes:
performing first denoising processing on the channel state information data through a preset filter to obtain denoised channel state information data;
and carrying out second denoising processing on the denoised channel state information data according to a principal component analysis method to obtain the preprocessed channel state information data.
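The two-stage preprocessing can be sketched as below. The moving-average smoother stands in for the unspecified "preset filter", and keeping 3 leading principal components is an illustrative choice; the patent names only a preset filter followed by principal component analysis.

```python
import numpy as np

def preprocess_csi(csi, win=5, n_components=3):
    """csi: array of shape (time, subcarriers)."""
    # first denoising: moving-average smoothing applied per subcarrier
    kernel = np.ones(win) / win
    filtered = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, csi)
    # second denoising: PCA -- project onto the leading principal
    # components and discard the remaining (noise-dominated) subspace
    centered = filtered - filtered.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T @ vt[:n_components]
```

The output keeps the original (time, subcarriers) shape but lies in a low-rank subspace spanned by the retained components.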
Optionally, after the step of inputting the target training sample and the behavior label into a behavior recognition model for training to update the behavior recognition model, the method further includes:
acquiring channel state information data when a target object acts;
and determining the body behavior action of the target object according to the channel state information data and the updated behavior recognition model.
Optionally, the sample to be trained includes data samples corresponding to the same limb action behavior in multiple application environments.
Optionally, the network model corresponding to the behavior recognition model adopts an AlexNet model.
In addition, to achieve the above object, the present invention also provides a terminal device, including: a memory, a processor, and an active learning behavior recognition model training program stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the active learning behavior recognition model training method described above.
In addition, to achieve the above object, the present invention also provides a storage medium on which an active learning behavior recognition model training program is stored; when executed by a processor, the program implements the steps of the active learning behavior recognition model training method described above.
The invention provides an active learning behavior recognition model training method, a terminal device and a storage medium. A sample to be trained is input into a behavior recognition model to determine its behavior recognition result, and a target training sample is then determined according to that result: the target training sample is selected from the samples to be trained as a "difficult sample" for the current behavior recognition model. This reduces the participation of invalid or inefficient samples in training, improves the training efficiency of the behavior recognition model, and at the same time enriches the model's training sample set and gives it diversity. The behavior recognition model is then trained with the target training sample and its corresponding behavior label to update the model, which improves the recognition performance and accuracy of the updated model and enhances its robustness, universality and practicability.
Drawings
Fig. 1 is a schematic structural diagram of a terminal device related to each embodiment of the active learning behavior recognition model training method of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of the active learning behavior recognition model training method according to the present invention;
FIG. 3 is a schematic diagram of a simple construction of a wireless signal sensing area;
FIG. 4 is a schematic flow chart illustrating the process of determining the target training samples according to the first embodiment of the active learning behavior recognition model training method of the present invention;
FIG. 5 is a graph comparing performance of training with a target training sample and training with all samples for behavior recognition based on deep learning according to the present invention;
FIG. 6 is a network architecture diagram of the AlexNet model;
FIG. 7 is a flowchart illustrating a second embodiment of the active learning behavior recognition model training method according to the present invention;
FIG. 8 is a schematic diagram of video information corresponding to two body behaviors;
fig. 9 is a simple flow chart of the active learning behavior recognition model training method of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a method for training an active learning behavior recognition model, which comprises the following steps:
inputting a sample to be trained into a behavior recognition model to determine a behavior recognition result of the sample to be trained, and determining a target training sample according to the behavior recognition result;
acquiring a behavior label corresponding to the target training sample;
and inputting the target training sample and the behavior label into a behavior recognition model for training so as to update the behavior recognition model.
The invention discloses an active learning behavior recognition model training method. A sample to be trained is input into a behavior recognition model to determine its behavior recognition result, and a target training sample is then determined according to that result: the target training sample is selected from the samples to be trained as a "difficult sample" for the current behavior recognition model. This reduces the participation of invalid or inefficient samples in training, improves the training efficiency of the behavior recognition model, reduces deployment cost, and enriches the model's training sample set, giving it diversity. The behavior recognition model is then trained with the target training sample and its corresponding behavior label to update the model, which improves the recognition performance and accuracy of the updated model and enhances its robustness, universality and practicability.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal device according to various embodiments of the active learning behavior recognition model training method of the present invention. The terminal device involved in the method can be implemented in various forms. For example, the terminal devices described in the present invention may include a server, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), and the like.
As shown in fig. 1, the terminal device may include: a memory 101 and a processor 102. Those skilled in the art will appreciate that the block diagram of the terminal shown in fig. 1 does not constitute a limitation of the terminal; the terminal may include more or fewer components than those shown, combine certain components, or arrange components differently. The memory 101 stores an operating system and an active learning behavior recognition model training program. The processor 102 is the control center of the terminal device; it executes the training program stored in the memory 101 to implement the steps of the embodiments of the active learning behavior recognition model training method of the present invention.
Optionally, the terminal device may further include a communication unit 103. The communication unit 103 establishes data communication with another terminal device through a network protocol (the data communication may be IP communication, WiFi communication, or a Bluetooth channel), for example an auxiliary vision device. The auxiliary vision device captures video information of a target object, such as a person, while it is moving, and can send that video information to the terminal device.
Optionally, the terminal device is a wireless signal receiving terminal device.
Based on the structural block diagram of the terminal device, the invention provides various embodiments of the active learning behavior recognition model training method.
In a first embodiment, the present invention provides a method for training an active learning behavior recognition model, please refer to fig. 2, and fig. 2 is a flowchart illustrating the method for training the active learning behavior recognition model according to the first embodiment of the present invention. In this embodiment, the active learning behavior recognition model training method includes the following steps:
step S10, determining a target training sample according to the sample to be trained;
step S20, acquiring a behavior label corresponding to the target training sample;
step S30, inputting the target training sample and the behavior label to a behavior recognition model for training, so as to update the behavior recognition model.
The sample to be trained is a data sample corresponding to the channel state information data acquired while a target object in the wireless signal sensing area acts. The behavior label refers to the limb behavior action of the target object in the wireless signal sensing area. For example, behavior labels include but are not limited to identifications of the limb behavior actions walking, jumping, turning, squatting, standing up and running; the limb behavior action performed by the target object can be determined from the behavior label.
Optionally, before step S10, the method includes: acquiring channel state information data when a target object acts; and determining the sample to be trained according to the channel state information data.
The target object refers to the object to be monitored. Optionally, the target object may be a person. Channel State Information (CSI) data refers to the channel state information of the physical layer of a wireless signal, mainly comprising frequency-domain information of the signal such as amplitude and phase.
In practical applications, wireless networks are gradually achieving full coverage as they develop. Wireless signals are reflected and scattered when passing obstacles such as a moving human body, forming multipath superimposed signals; since different body motions affect the wireless channel state differently, the corresponding changes in the wireless signal caused by human behavior can be analyzed to distinguish those motions. Using a high-performance commercial wireless network card, the terminal device can collect the information by which the target object's motion in the wireless signal sensing area affects the wireless channel state, thereby obtaining channel state information data while the target object acts.
Illustratively, the information by which human motion in the wireless signal sensing area affects the wireless channel state is collected with a high-performance commercial wireless network card, so as to obtain channel state information data during human motion.
The specific implementation of determining the sample to be trained according to the channel state information data is described in the second embodiment and is not detailed in this embodiment.
Optionally, the wireless signal channel state may be a Wi-Fi channel state.
It should be noted that the wireless signal sensing area may be determined by a sensing area formed by the wireless signal transmitting terminal device and the wireless signal receiving terminal device.
Alternatively, the number of the transmitting antennas configured in the wireless signal transmitting terminal device may be one, or may be at least two.
Alternatively, the number of the receiving antennas configured in the wireless signal receiving terminal device may be one, or may be at least two.
Optionally, the wireless signal transmitting terminal device is a WiFi transmitting device, and the wireless signal receiving terminal device is a WiFi receiving device.
For example, a WiFi transmitting terminal device (configured with 1 omnidirectional antenna) and a WiFi receiving terminal device (configured with 3 omnidirectional antennas) may form a sensing area, thereby determining the wireless signal sensing area; see fig. 3, which is a schematic diagram of a simple wireless signal sensing area configuration.
As an alternative embodiment, please refer to fig. 4, wherein fig. 4 is a schematic flowchart illustrating a process of determining the target training sample in the first embodiment of the active learning behavior recognition model training method of the present invention, and step S10 includes:
step S11, inputting a sample to be trained into the behavior recognition model to determine a behavior recognition result of the sample to be trained;
and step S12, determining the target training sample according to the behavior recognition result.
The behavior recognition result includes at least one of: difficult to judge, recognized, a limb action behavior type, and a probability value corresponding to the limb action behavior type. "Difficult to judge" can be understood as unrecognizable. The limb action behavior type refers to the limb action, which includes but is not limited to walking, jumping, turning, squatting, standing up and running. The probability value corresponding to a limb action behavior type is the probability of that limb action behavior obtained by recognizing the sample to be trained. Exemplarily, a sample to be trained is input into the behavior recognition model, and the probability values for walking, jumping, turning, squatting, standing up and running in its behavior recognition result are 0.1, 0.1, 0.1, 0, 0.5 and 0.2 respectively, indicating that the behavior label (the limb action behavior type) corresponding to the sample is standing up.
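The worked example above amounts to an argmax over the per-class probabilities. The class names follow the behaviors listed in the text; the winning class here is "standing up", with probability 0.5:

```python
behaviors = ["walking", "jumping", "turning", "squatting", "standing up", "running"]
probs = [0.1, 0.1, 0.1, 0.0, 0.5, 0.2]
# index of the largest probability selects the predicted behavior label
predicted = behaviors[max(range(len(probs)), key=probs.__getitem__)]
```

Whether 0.5 is high enough to trust the prediction is exactly what the preset probability threshold in the selection rule decides.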
Optionally, step S12 includes:
when the behavior recognition result is difficult to judge, determining the sample to be trained as the target training sample; or, when the probability values corresponding to the limb action behavior types in the behavior recognition result are all smaller than or equal to a preset probability value, determining the sample to be trained as the target training sample.
When the behavior recognition result is difficult to judge, the sample to be trained is a "difficult sample" for the behavior recognition model. Similarly, when the probability values corresponding to the limb action behavior types in the behavior recognition result are all smaller than or equal to the preset probability value, the current model recognizes the sample with low accuracy, which also indicates that the sample is a "difficult sample" for the model. Such a sample is determined to be a target training sample, so that after the behavior recognition model is trained on it, the model can recognize the target training sample, or recognize it with improved accuracy.
It is easy to understand that when a probability value corresponding to a limb action behavior type in the behavior recognition result is greater than the preset probability value, the limb action corresponding to the sample to be trained can already be recognized accurately by the behavior recognition model; such samples can be used less, or not at all, for training the model.
It should be noted that this embodiment uses active learning to select "difficult samples": samples that the current behavior recognition model finds hard to distinguish, or recognizes with low accuracy, are preferentially selected from the samples to be trained as target training samples, and the model is trained with these information-rich samples. This reduces the participation of invalid or inefficient samples in training, lowers the demand for labeled samples, and thus meets the needs of practical cross-environment application.
Optionally, the samples to be trained include data samples corresponding to the same limb action behavior in multiple application environments. Limb action behaviors include but are not limited to walking, jumping, turning, squatting, standing up and running. Exemplarily, suppose the application environments include 6 daily environments: a corridor, a conference room, a laboratory, a hall, an elevator entrance and an open platform, each provided with a wireless signal sensing area. To acquire samples to be trained, volunteers can be recruited to perform 4 limb action behaviors (walking, jumping, turning and standing up) in each environment; the channel state information data corresponding to these 4 behaviors in each environment is collected, and the samples to be trained are determined from it. This enriches the training sample set of the behavior recognition model and gives it diversity, meeting the needs of practical cross-environment application.
Illustratively, referring to fig. 5, fig. 5 compares the performance of deep-learning-based behavior recognition trained with target training samples against training with all samples, where training covers the data sets of the 6 environments. With 15% of the samples labeled, the recognition accuracy reaches 58.97%, while a deep-learning behavior recognition system trained with 100% of the behavior-labeled samples reaches 62.19%. To achieve recognition accuracy similar to the fully supervised system, the active learning adopted in this embodiment, which selects target training samples from the samples to be trained, can reduce the number of required behavior-labeled samples by more than 80%.
For example, samples that the current behavior recognition model finds hard to distinguish, or recognizes with low accuracy, are preferentially selected from the samples to be trained as target training samples; such a target training sample can be regarded as a "difficult sample". Take an animal recognition model that distinguishes cats from dogs as an example: when the model can already accurately distinguish common cats and dogs, such as a husky, a golden retriever, a ragdoll cat and a tabby cat, there is no need to add more such data samples; but a robot cat or a cartoon dog is hard for the model to distinguish, or is distinguished inaccurately, so the data samples corresponding to the robot cat and the cartoon dog are "difficult samples" relative to the animal recognition model.
Optionally, in step S20, to obtain the behavior label corresponding to a target training sample, the channel state information data and video information recorded while the target object acts may be acquired; the sample to be trained is determined according to the channel state information data, the correspondence between the sample to be trained and the video information is determined according to time information, and the behavior label corresponding to the target training sample is obtained according to that correspondence. The samples to be trained include the target training sample; for the specific implementation, refer to the second embodiment.
Acquiring the channel state information data and video information while the target object acts, and determining the sample to be trained from the channel state information data, can be applied in a real scenario or environment: when the behavior recognition model finds the data sample corresponding to the acquired channel state information hard to distinguish, or recognizes it with low accuracy, that data sample can be determined to be a target training sample. This enriches the training sample set of the behavior recognition model and gives it diversity; training the model with the target training sample updates the model and improves the recognition performance and accuracy of the updated model. It should be noted that the samples to be trained include the target training sample.
Optionally, in step S20, to obtain the behavior label corresponding to the target training sample, a target data sample matching the target training sample may be searched for in a preset data sample set, and the behavior label corresponding to the target data sample determined based on that set, thereby obtaining the behavior label for the target training sample. The preset data sample set contains a large number of data samples, each associated with a behavior label in advance. The data samples can be obtained directly from a search engine.
In step S30, the target training sample and the behavior label are input into the behavior recognition model for training so as to update the model, thereby improving the recognition performance and accuracy of the updated behavior recognition model.
Optionally, when there are multiple target training samples, the target training samples and behavior labels are input into the behavior recognition model for training so as to update it; the model recognizes the target training samples, and training stops when the probability value of the limb action behavior type in the resulting behavior recognition result is greater than the preset probability value. Training of the behavior recognition model can also be completed over the multiple target training samples.
As an alternative implementation, after step S30, the method further includes:
acquiring channel state information data when a target object acts;
and determining the body behavior action of the target object according to the channel state information data and the updated behavior recognition model.
To acquire channel state information data while the target object acts, the terminal device may use a high-performance commercial wireless network card to collect the information by which the motion of the target object within the wireless signal sensing area affects the wireless channel state, thereby obtaining the channel state information data.
To determine the body behavior action of the target object from the channel state information data and the updated behavior recognition model, a channel state information spectrogram is first determined from the channel state information data, and the body behavior action of the target object is then determined from the spectrogram and the updated behavior recognition model.
The step of determining the channel state information spectrogram from the channel state information data comprises preprocessing the channel state information data to obtain preprocessed channel state information data, and generating the channel state information spectrogram from the preprocessed data with a Fourier transform algorithm. The specific implementation of this step is described in the second embodiment and is not repeated here.
Optionally, the network model corresponding to the behavior recognition model adopts an AlexNet model, please refer to fig. 6, and fig. 6 is a network structure diagram of the AlexNet model.
In the technical scheme disclosed in this embodiment, a sample to be trained is input into the behavior recognition model to determine its behavior recognition result, and a target training sample is then determined from that result. The target training sample is selected from the samples to be trained as a "difficult sample" for the current behavior recognition model, which reduces the participation of invalid or inefficient samples in training, improves training efficiency, and lowers deployment cost; it also enriches the training sample set of the behavior recognition model and gives it diversity. Training the behavior recognition model with the target training sample and its corresponding behavior label then updates the model, improving the recognition performance and recognition accuracy of the updated model and enhancing its robustness, universality, and practicability.
In a second embodiment based on the first embodiment, please refer to fig. 7, and fig. 7 is a flowchart illustrating a second embodiment of the active learning behavior recognition model training method according to the present invention. In this embodiment, step S20 includes:
step S21, acquiring channel state information data and video information when the target object acts;
step S22, determining the sample to be trained according to the channel state information data, wherein the sample to be trained comprises the target training sample;
step S23, determining the corresponding relation between the sample to be trained and the video information according to the time information;
and step S24, acquiring a behavior label corresponding to the target training sample according to the corresponding relation.
On the basis of the first embodiment, a high-performance commercial wireless network card can be used to collect the information by which target-object motion within the wireless signal sensing area affects the wireless channel state, thereby obtaining channel state information data while the target object acts.
Optionally, an auxiliary vision device, such as a video camera, may be provided within the wireless signal sensing area to capture video information of a target object, such as a person, as it moves.
Optionally, the channel state information data and the video information when the target object acts are obtained in real time.
Optionally, the channel state information data and video information obtained while the target object moves are synchronized according to the ordering of their time information; that is, channel state information data and video information sharing the same time point or time period have a synchronized correspondence.
In practice, a wireless-signal-receiving terminal device receives a number of data packets, each of which may include time information and channel state information data; the channel state information data in these packets can be arranged by time information to form a CSI time sequence. Likewise, the auxiliary vision device collects video information of the target object in action, and this video information also carries time information. Channel state information data and video information can therefore be matched on the same time information, such as a time point or a time period, to obtain synchronized pairs, establishing the correspondence between them; once the sample to be trained has been determined from the channel state information data, the correspondence between the sample to be trained and the video information follows.
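The timestamp matching just described can be sketched as a nearest-frame alignment between the CSI time sequence and the video stream. The `(timestamp, payload)` tuple layouts are assumptions made for illustration.

```python
import bisect

def align_by_time(csi_packets, video_frames):
    """Pair each CSI packet with the video frame nearest in time.

    csi_packets: list of (timestamp, csi_data) tuples;
    video_frames: list of (timestamp, frame) tuples.
    Both layouts are assumptions for illustration.
    """
    csi_packets = sorted(csi_packets)    # the CSI time sequence
    video_frames = sorted(video_frames)
    frame_times = [t for t, _ in video_frames]
    pairs = []
    for t, csi in csi_packets:
        i = bisect.bisect_left(frame_times, t)
        # choose the nearer of the two neighbouring frames
        best = min((j for j in (i - 1, i) if 0 <= j < len(frame_times)),
                   key=lambda j: abs(frame_times[j] - t))
        pairs.append((csi, video_frames[best][1]))
    return pairs
```

A real deployment would also need clock synchronization between the network card and the camera, which this sketch takes for granted.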
As an alternative embodiment, step S22 includes:
preprocessing the channel state information data to obtain the preprocessed channel state information data;
and generating a channel state information spectrogram according to the preprocessed channel state information data and a Fourier transform algorithm so as to determine the sample to be trained.
Optionally, the step of preprocessing the channel state information data to obtain the preprocessed channel state information data includes:
performing first denoising processing on the channel state information data through a preset filter to obtain denoised channel state information data;
and carrying out second denoising processing on the denoised channel state information data according to a principal component analysis method to obtain the preprocessed channel state information data.
In practical applications, problems with the wireless-signal-receiving terminal device or the external environment introduce abnormal values into the channel state information data of the collected CSI time sequence, which degrades the accuracy of extracting the target object's behavior features. To obtain those features accurately, the collected channel state information data can first be denoised with a preset filter, yielding denoised channel state information data.
Optionally, the preset filter may be a Hampel filter, which performs the first denoising on the channel state information data to obtain the denoised channel state information data. Illustratively, abnormal-data detection proceeds as follows: the median of the collected channel state information data is determined, and the difference between each data point and that median is computed; when the absolute value of the difference is greater than or equal to a preset value, the corresponding point is judged abnormal and replaced with the median, which avoids loss of or gaps in the channel state information data. Alternatively, the Hampel filter can identify the positions in the CSI time sequence at which abnormal values occur, and a least-squares support vector machine regression model can detect abnormal values in the CSI time sequence by recursive prediction, monitoring the analysis and handling of those abnormal values. Note that, in practice, deleting an abnormal value at some position in the CSI time sequence leaves the sequence partially missing, disturbs the correspondence between channel state information data and time information, and can affect the entire sequence; interpolation can therefore be performed at the deleted positions to fill in the missing data and preserve the integrity of the CSI time sequence.
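The median-replacement variant described above can be sketched in a few lines. The threshold `preset_value` is an assumed stand-in for the unspecified "preset value", and operating on a 1-D series is an illustrative simplification of per-subcarrier CSI filtering.

```python
import numpy as np

def median_replace_denoise(csi_series, preset_value=3.0):
    """First denoising as described above: any point whose absolute
    deviation from the series median reaches preset_value (an assumed
    threshold) is treated as abnormal and replaced by the median.
    """
    x = np.asarray(csi_series, dtype=float).copy()
    med = np.median(x)
    x[np.abs(x - med) >= preset_value] = med  # replace, never delete
    return x
```

Because outliers are replaced rather than deleted, the length of the CSI time sequence and its correspondence with the time information are preserved, as the text requires.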
The second denoising is then performed on the denoised channel state information data with principal component analysis, yielding the preprocessed channel state information data. The denoised channel state information data can be regarded as first CSI data; each first CSI data item is divided into several sub-CSI segments of equal duration, for example 8-second segments, principal component analysis is performed on each segment, and the top 20 principal components of each first CSI data item are selected and normalized. This suppresses noise, extracts the principal features of the data, and reduces the dimensionality of the data space.
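The per-segment PCA step can be sketched as follows. The `(time, subcarrier)` matrix layout, the segment length in samples, and max-absolute normalization are assumptions made for illustration; the text only fixes the 8-second segmentation and the choice of the top 20 components.

```python
import numpy as np

def pca_denoise(csi, segment_len, n_components=20):
    """Second denoising: split a (time, subcarrier) CSI matrix into
    equal-length segments, run PCA on each, and keep the normalized
    scores of the top n_components principal components.
    """
    segments = []
    for start in range(0, csi.shape[0] - segment_len + 1, segment_len):
        seg = csi[start:start + segment_len]
        centered = seg - seg.mean(axis=0)
        # principal directions via SVD; rows of vt are ordered by variance
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        scores = centered @ vt[:n_components].T   # project onto top components
        scores = scores / (np.abs(scores).max() + 1e-12)  # normalize
        segments.append(scores)
    return segments
```

Each returned segment has shape `(segment_len, n_components)`, i.e. the data space's dimensionality is reduced from the subcarrier count to 20, matching the stated goal.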
Optionally, the specific implementation of performing the second denoising processing on the denoised channel state information data according to the principal component analysis method may also refer to other implementable manners in the prior art, and this step is not specifically limited in this embodiment.
A channel state information spectrogram is then generated from the preprocessed channel state information data with a Fourier transform algorithm, so as to determine the sample to be trained: the preprocessed channel state information data is converted into a CSI spectrogram by a preset Fourier transform algorithm, such as the short-time Fourier transform:
X(e^{jw}, n) = Σ_m x[m] · w[n − m] · e^{−jwm}
where x[n] is the discrete time series of the signal and w[m] is the short-time Fourier transform window; the squared magnitude of X produces the spectrogram:
Spectrogram{x[n]} = |X(e^{jw}, n)|^2
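The spectrogram formula above can be computed with a sliding windowed FFT. The window length, hop size, and the use of a Hann window for w[m] are illustrative assumptions; the text only specifies a short-time Fourier transform.

```python
import numpy as np

def stft_spectrogram(x, win_len=64, hop=16):
    """Spectrogram{x[n]} = |X(e^{jw}, n)|^2 via a sliding windowed FFT.

    win_len and hop are assumed parameters; a Hann window stands in
    for the window w[m].
    """
    w = np.hanning(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len] * w
        frames.append(np.abs(np.fft.rfft(seg)) ** 2)  # squared magnitude
    return np.array(frames)  # shape: (num_frames, win_len // 2 + 1)
```

Each row of the result is one time slice of the CSI spectrogram; stacking the rows gives the time-frequency image fed to the behavior recognition model.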
As an optional implementation, in step S24 the behavior label corresponding to the target training sample is determined from the correspondence. The correspondence between the samples to be trained and the video information is established by ordering on time information, and the samples to be trained include the target training sample; therefore, given a determined target training sample, the sample to be trained with the same time information can be looked up, the video information corresponding to that time information obtained, and the behavior label of the target training sample determined from that video information.
Optionally, the behavior label corresponding to the target training sample is determined from the video information by manual recognition: the video information is identified manually, and once the behavior label has been determined from the video, the correspondence between the target training sample and the behavior label is established. For example, referring to fig. 8, fig. 8 is a schematic diagram of the video information corresponding to two body behaviors.
For example, referring to fig. 9, fig. 9 is a simple flow chart of the active learning behavior recognition model training method of the present invention.
Optionally, the behavior label corresponding to the target training sample may instead be determined automatically: a video information recognition algorithm analyzes the video information to obtain the behavior label corresponding to it, and thus the behavior label corresponding to the target training sample.
Alternatively, the time information may be a time point or a time period.
The invention also proposes a terminal device, comprising: a memory, a processor, and an active-learning behavior recognition model training program stored in the memory and executable on the processor; when the program is executed by the processor, the steps of the active-learning behavior recognition model training method of any of the above embodiments are implemented.
The present invention further provides a storage medium, on which an actively-learned behavior recognition model training program is stored, and the steps of the actively-learned behavior recognition model training method according to any one of the above embodiments are implemented when the actively-learned behavior recognition model training program is executed by a processor.
In the embodiments of the terminal device and the storage medium provided by the present invention, all technical features of the embodiments of the behavior recognition model training method for active learning are included, and the contents of the expansion and the explanation of the specification are basically the same as those of the embodiments of the behavior recognition model training method for active learning, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An active learning behavior recognition model training method, comprising:
inputting a sample to be trained into a behavior recognition model to determine a behavior recognition result of the sample to be trained, and determining a target training sample according to the behavior recognition result;
acquiring a behavior label corresponding to the target training sample;
inputting the target training sample and the behavior label into the behavior recognition model for training so as to update the behavior recognition model.
2. The active learning behavior recognition model training method according to claim 1, wherein the step of determining target training samples according to the behavior recognition result comprises:
and when the behavior recognition result is difficult to judge, determining the sample to be trained as the target training sample, or determining the sample to be trained as the target training sample when the probability values corresponding to the limb action behavior types in the behavior recognition result are all smaller than or equal to preset probability values.
3. The active learning behavior recognition model training method according to claim 1, wherein the step of obtaining the behavior labels corresponding to the target training samples comprises:
acquiring channel state information data and video information when a target object acts;
determining the sample to be trained according to the channel state information data, wherein the sample to be trained comprises the target training sample;
determining the corresponding relation between the sample to be trained and the video information according to the time information;
and acquiring a behavior label corresponding to the target training sample according to the corresponding relation.
4. The active learning behavior recognition model training method of claim 3, wherein the step of determining the sample to be trained from the channel state information data comprises:
preprocessing the channel state information data to obtain the preprocessed channel state information data;
and generating a channel state information spectrogram according to the preprocessed channel state information data and a Fourier transform algorithm so as to determine the sample to be trained.
5. The method as claimed in claim 4, wherein the step of preprocessing the channel state information data to obtain the preprocessed channel state information data comprises:
performing first denoising processing on the channel state information data through a preset filter to obtain denoised channel state information data;
and carrying out second denoising processing on the denoised channel state information data according to a principal component analysis method to obtain the preprocessed channel state information data.
6. The method for training an active learning behavior recognition model according to claim 1, wherein after the step of inputting the target training samples and the behavior labels into the behavior recognition model for training to update the behavior recognition model, the method further comprises:
acquiring channel state information data when a target object acts;
and determining the body behavior action of the target object according to the channel state information data and the updated behavior recognition model.
7. The active learning behavior recognition model training method according to claim 1, wherein the to-be-trained samples comprise data samples corresponding to the same body motion behavior in a plurality of application environments.
8. The active learning behavior recognition model training method according to claim 1, wherein the network model corresponding to the behavior recognition model adopts an AlexNet model.
9. An actively-learned behavior recognition model training terminal device, comprising: a memory, a processor, and an actively-learned behavior recognition model training program stored in the memory and executable on the processor, the actively-learned behavior recognition model training program when executed by the processor implementing the steps of the actively-learned behavior recognition model training method of any one of claims 1-8.
10. A storage medium having stored thereon an actively-learned behavior recognition model training program, which when executed by a processor implements the steps of the actively-learned behavior recognition model training method according to any one of claims 1-8.
CN202210110983.9A 2022-01-28 2022-01-28 Active learning behavior recognition model training method, terminal device and storage medium Pending CN114495279A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210110983.9A CN114495279A (en) 2022-01-28 2022-01-28 Active learning behavior recognition model training method, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN114495279A true CN114495279A (en) 2022-05-13

Family

ID=81479578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210110983.9A Pending CN114495279A (en) 2022-01-28 2022-01-28 Active learning behavior recognition model training method, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN114495279A (en)

Similar Documents

Publication Publication Date Title
US20210352441A1 (en) Handling concept drift in wi-fi-based localization
US20220070633A1 (en) Proximity-based model for indoor localization using wireless signals
US20210104021A1 (en) Method and apparatus for processing image noise
CN108509896B (en) Trajectory tracking method and device and storage medium
US10058076B2 (en) Method of monitoring infectious disease, system using the same, and recording medium for performing the same
US11082109B2 (en) Self-learning based on Wi-Fi-based monitoring and augmentation
US20200341114A1 (en) Identification system for subject or activity identification using range and velocity data
US10999705B2 (en) Motion vector identification in a Wi-Fi motion detection system
CN111178331B (en) Radar image recognition system, method, apparatus, and computer-readable storage medium
US20120275690A1 (en) Distributed artificial intelligence services on a cell phone
CN104820488A (en) User-directed personal information assistant
CN111033445B (en) System and method for gesture recognition
Shi et al. Human activity recognition using deep learning networks with enhanced channel state information
Yang et al. Door-monitor: Counting in-and-out visitors with COTS WiFi devices
Pires et al. Identification of activities of daily living using sensors available in off-the-shelf mobile devices: Research and hypothesis
CN110929242B (en) Method and system for carrying out attitude-independent continuous user authentication based on wireless signals
Mo et al. A deep learning-based human identification system with wi-fi csi data augmentation
Sangavi et al. Human Activity Recognition for Ambient Assisted Living
CN114495279A (en) Active learning behavior recognition model training method, terminal device and storage medium
KR101340287B1 (en) Intrusion detection system using mining based pattern analysis in smart home
Gao et al. Multi-scale Convolution Transformer for Human Activity Detection
CN112347834A (en) Remote nursing method and device based on personnel category attributes and readable storage medium
CN114463776A (en) Fall identification method, device, equipment and storage medium
CN109740559A (en) Personal identification method, apparatus and system
JP2019200535A (en) Movement information utilization apparatus and movement information utilization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination