WO2013159282A1 - Personalized self-learning identification system and method - Google Patents

Personalized self-learning identification system and method

Info

Publication number
WO2013159282A1
WO2013159282A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
sample
input
processing unit
detected object
Prior art date
Application number
PCT/CN2012/074584
Other languages
English (en)
Chinese (zh)
Inventor
陈澎
Original Assignee
北京英福生科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京英福生科技有限公司
Priority to PCT/CN2012/074584
Publication of WO2013159282A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 - Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2178 - Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor
    • G06F2218/00 - Aspects of pattern recognition specially adapted for signal processing

Definitions

  • the invention belongs to the field of pattern recognition, and in particular relates to a personalized self-learning recognition system and method.
  • Pattern recognition technology has been used to identify various types of physiological state information of the human body for health monitoring and medical diagnosis.
  • Various types of sensors can be used to detect physiological parameters of the human body; for example, motion sensors, electrocardiogram (ECG) sensors, electromyogram (EMG) sensors, electroencephalogram (EEG) sensors, and blood oxygen sensors can detect the body's motion, ECG, EMG, EEG, and blood oxygen signals, among others.
  • The detected signal is then pre-processed, feature extraction and selection are performed, and a recognition algorithm classifies and recognizes the signal according to a previously trained model.
  • By using this technique to record the types of physiological states of the human body, it is possible to analyze the body's physiological health status.
  • A large body of literature documents the technical principles of pattern recognition.
  • However, existing pattern recognition technology performs classification and recognition based on models established by collecting and training on the physiological state signals of many different people; the models are not built from the physiological state signals of the individual user. Due to individual differences, many existing health monitoring systems and tools cannot accurately identify the physiological health status of a particular user.
  • the object of the present invention is to provide a personalized self-learning recognition system and method, which can collect and perform self-learning training on personalized data of a specific object, thereby greatly improving the recognition rate of the state of a specific object.
  • the technical solution of the present invention is specifically a personalized self-learning recognition system, including
  • One or more sensors for detecting a signal of the detected object are provided.
  • a storage unit for storing a sample/model library, including a model and a training sample set
  • a processing unit configured to receive a signal of the detected object detected by the one or more sensors, and identify a state type of the detected object according to a corresponding model in the sample/model library;
  • Input/output means for outputting a recognition result and receiving feedback information input based on the recognition result
  • the processing unit also trains the model using the corresponding training sample set and updates the sample/model library based on the feedback information.
  • the set of training samples includes characteristics of signals and/or signals of the detected object.
  • the sample/model library includes corresponding models in one or more scenarios.
  • the feedback information includes an identification of a model to be established.
  • the feedback information includes a scene identifier of a model to be established and a corresponding model identifier.
  • When the input identifier of the model to be established is the same as the identifier of a model in the sample/model library, the processing unit adds the corresponding training sample to the training sample set of that model for training, so as to establish a model corresponding to the identifier.
  • The processing unit is further configured to select, according to a selection instruction received by the input/output device, which of the one or more sensors to communicate with.
  • The input/output device is also used to receive an input operation-mode selection.
  • the operation mode includes one of a training model mode, a monitoring mode, a recording mode, or a combination thereof;
  • the processing unit separately performs model training, state recognition, and stores the recognition result and/or the corresponding detection signal into the storage unit according to the input operation mode.
  • the present invention further provides a personalized self-learning recognition method, including
  • the input feedback information is received, and the model is trained based on the feedback information and the sample/model library is updated.
  • the feedback information includes an identification of a model to be established.
  • the corresponding training sample is added to the training sample set of the corresponding model for training to establish a model corresponding to the identifier.
  • the invention also provides a personalized self-learning recognition system, comprising
  • a server for storing a sample/model library, including a training sample set and a model
  • a client connected to the server via a network, the client further including:
  • One or more sensors for detecting a signal of the detected object are provided.
  • a processing unit configured to receive a signal of the detected object detected by the sensor, and identify a state type of the detected object according to the sample/model library stored by the server;
  • Input/output means for outputting a recognition result and receiving feedback information input based on the recognition result
  • the processing unit also trains the model using the corresponding training sample set and updates the sample/model library based on the feedback information.
  • The server is further configured to instruct the processing unit of the client which of the one or more sensors to communicate with.
  • the invention further provides a personalized self-learning recognition system, including a client and a server:
  • the server includes a memory for storing a sample/model library
  • the client further includes
  • One or more sensors for detecting a signal of the detected object are provided.
  • a processing unit configured to receive a signal of the detected object detected by the sensor, and identify a state type of the detected object according to the sample/model library stored by the server;
  • Input/output means for outputting a recognition result and receiving feedback information input based on the recognition result
  • the processing unit further transmits the feedback information and training samples of the model to be established to the server through a network;
  • the server is further configured to receive the feedback information sent by the client and the training samples of the model to be established, and use the corresponding training sample set to train the model and update the model library.
  • FIG. 1 is a structural diagram of a personalized learning recognition system according to an embodiment of the present invention.
  • FIG. 2 is a structural diagram of a processing unit in the system of Figure 1;
  • FIG. 3 is a flowchart of a personalized learning and recognition method according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a personalized learning recognition system according to another embodiment of the present invention.
  • the personalized self-learning recognition system 100 shown in FIG. 1 includes one or more sensors 1021-102n, a processing unit 101, a storage unit 103, and an input/output device 104.
  • The sensors 1021-102n may be used to detect signals indicating the state of the detected object.
  • the signal indicating the state of the detected object may include a physiological state signal.
  • The sensors may include a motion sensor (e.g., an acceleration sensor, a gyroscope, or an angular-velocity sensor), a pulse sensor, an electrocardiogram (ECG) sensor, an electromyography (EMG) sensor, an electroencephalogram (EEG) sensor, a blood oxygen (SpO2) sensor, and the like.
  • The sensors 1021-102n may further include an environmental sensor for detecting a state of the environment affecting the detected object, for example one of a temperature sensor, a humidity sensor, and the like, or a combination thereof.
  • Depending on the type of sensor, it can be placed on the hand of the detected object to detect blood oxygen, myoelectricity, and pulse; on the leg to detect motion signals; on the chest to detect ECG signals; on the head to detect EEG signals; or at a position close to the detected object and its surroundings, for example to detect the detected object's sound and video signals and the ambient temperature, humidity, and the like.
  • The detected object may be the user himself or another person whom the user needs to monitor, such as an elderly person, a patient, a child, or an athlete in training.
  • The storage unit 103 is configured to store models, such as a physiological state model library, which may further include an action model library, an ECG model library, a pulse model library, an EMG model library, an EEG model library, a blood oxygen model library, a sound model library, a video model library, a temperature model library, a humidity model library, and so on.
  • the storage unit 103 may be integrated inside the processing unit 101 or may be disposed outside the processing unit 101.
  • The storage unit 103 may store initial reference models, where a reference model may be a model trained on a large sample set. For example, the motion model library may include a walking model, a running model, a jumping model, a sleep behavior model, and so on; the ECG model library may include a normal model, a myocardial infarction model, an arrhythmia model, and so on; the EEG model library may include a normal model, an epilepsy model, a sleep disorder model, and so on; the pulse model library may include a normal model, an abnormal model, and so on; the EMG model library may include a muscle fatigue model and a muscle excitation model; the blood oxygen model library may include a hypoxemia model and a normal oxygen-carrying model; the sound model library may include a quiet model, a sleep model, a working model, and a hybrid model; the video model library may include a normal behavior model and an abnormal behavior model; and the temperature and humidity model libraries may include a normal model and a disease-prone model.
  • Alternatively, at initialization the storage unit 103 may store no reference model, and new models of various types may be continuously established by training on the input detection signals of the detected object, updating the model library in the storage unit 103.
  • The processing unit 101 can be connected to the sensors 1021-102n in various ways, for example by wired methods such as an I2C bus, a UART, an SPI bus, USB, or a network interface, or wirelessly via Bluetooth, ZigBee, Wi-Fi, infrared, and the like, in order to communicate with the sensors 1021-102n.
  • The processing unit 101 may further include an identification module 101a for identifying the state type of the detected object represented by the detection signals input by the sensors 1021-102n, and a training module 101b for training models using the training samples.
  • the storage unit 103 is further configured to store training samples.
  • The training samples may be the detection signals input by the sensors 1021-102n, or the features obtained after feature extraction by the identification module 101a.
  • the processing unit 101 receives the detection signals of the sensors 1021-102n;
  • the identification module 101a in the processing unit 101 performs recognition processing in accordance with the corresponding model.
  • the identification processing process further includes a pre-processing step 200, a feature extraction step 201, a classification identification step 202, and the like.
  • In step 200, the identification module 101a in the processing unit 101 performs corresponding pre-processing on the detection signals from the sensors 1021-102n, which may include removing noise from the signal using an algorithm such as filtering.
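The pre-processing step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: a centered moving-average low-pass filter that suppresses high-frequency noise in a raw sensor trace. The window size and the synthetic signal are illustrative assumptions.

```python
def moving_average(signal, window=5):
    """Smooth a 1-D signal with a centered moving-average window."""
    if window < 1 or window > len(signal):
        raise ValueError("window must be in [1, len(signal)]")
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo = max(0, i - half)          # clamp window at the edges
        hi = min(len(signal), i + half + 1)
        smoothed.append(sum(signal[lo:hi]) / (hi - lo))
    return smoothed

# Alternating noise around a flat baseline is strongly attenuated.
raw = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
clean = moving_average(raw, window=3)
```

A real system would more likely use a proper band-pass or notch filter matched to the sensor, but the structure (raw signal in, denoised signal out) is the same.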
  • In step 201, the identification module 101a in the processing unit 101 performs feature extraction on the pre-processed signal.
  • The features extracted from the motion signals transmitted by the acceleration sensor may include time-domain and frequency-domain features. The time-domain features include, for example, the mean, variance, short-term energy, autocorrelation coefficients, cross-correlation, and signal period of the motion signal's amplitude; the frequency-domain features include cross-correlation coefficients in the frequency domain obtained by applying an FFT (Fast Fourier Transform) to the motion signals, Mel-frequency cepstral coefficients (MFCC), and the like.
  • The features extracted from the electrocardiographic signal transmitted by the ECG sensor may include the QT period of the QRS wave, the QRS slope, the ST-segment slope, and so on. The features extracted from the myoelectric signal of the EMG sensor may include time-domain features such as the mean absolute value, the mean absolute slope, and the zero-crossing rate, and frequency-domain features such as the mean power frequency, the median frequency, and the peak frequency. A temperature-change rate and the like can be extracted from the signal transmitted by the temperature sensor. Features are then selected by assigning weight values to the extracted features.
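The feature-extraction step can be sketched as follows. This is a hedged illustration under simplifying assumptions: it computes three of the time-domain features named in the text (mean, variance, short-term energy) plus one coarse frequency-domain feature (the dominant spectral bin) via a naive DFT; the feature set, window length, and test signal are not from the patent.

```python
import cmath
import math

def extract_features(signal):
    """Time-domain features plus a coarse frequency-domain feature."""
    n = len(signal)
    mean = sum(signal) / n
    variance = sum((x - mean) ** 2 for x in signal) / n
    energy = sum(x * x for x in signal) / n  # short-term energy
    # Naive DFT magnitude spectrum (O(n^2); an FFT would be used in practice).
    spectrum = [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n)))
                for k in range(n // 2)]
    # Index of the dominant non-DC frequency bin.
    dominant_bin = max(range(1, len(spectrum)), key=spectrum.__getitem__)
    return {"mean": mean, "variance": variance,
            "energy": energy, "dominant_bin": dominant_bin}

# A pure sinusoid whose energy sits in bin 2 of an 8-sample window.
sig = [math.sin(2 * math.pi * 2 * t / 8) for t in range(8)]
feats = extract_features(sig)
```

The resulting feature dictionary (or vector) is what the classification and training steps below operate on.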
  • In step 202, the identification module 101a of the processing unit 101 classifies and recognizes the extracted features according to the corresponding models in the storage unit 103; the classification and recognition algorithm may be k-nearest neighbor, Gaussian, Bayesian, an artificial neural network, or the like.
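One of the candidate algorithms named above, k-nearest neighbor, can be realized as a short sketch. The feature vectors, labels, and value of k below are illustrative assumptions, not data from the patent.

```python
def knn_classify(sample, training_set, k=3):
    """Majority vote among the k nearest (feature_vector, label) pairs."""
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearest = sorted(training_set, key=lambda pair: dist(sample, pair[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Toy motion-feature vectors labelled with an action state.
train = [([0.10, 0.20], "walking"), ([0.20, 0.10], "walking"),
         ([0.90, 1.10], "running"), ([1.00, 0.90], "running"),
         ([0.15, 0.15], "walking")]
state = knn_classify([0.95, 1.00], train, k=3)
```

In the system described here, `training_set` would come from the sample/model library for the currently selected scene, and `state` is the recognized state type output via the input/output device.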
  • the processing unit 101 outputs the recognition result to the user via the input/output device 104.
  • the output device may be an audio output device, a liquid crystal display or the like for providing an audio output or a user interface.
  • the input device may include a button, a keyboard, a touch screen, an audio or video sensor, and the like.
  • In step 203, when the output recognition result is a rejection type, that is, it does not belong to any model type in the stored sample/model library, or when the recognition result output by the processing unit is wrong (for example, the ECG recognition outputs the result "normal" through the input/output device 104 but the detected object is actually abnormal, or the motion recognition module 101a outputs the result "running" through the input/output device 104 but the user's actual action is "walking"), the processing unit 101 can receive feedback information input by the user through the input/output device 104, such as an instruction indicating that the system recognized incorrectly, and determine from the feedback information whether to establish a new model. If so, it proceeds to step 204 and receives the identifier of the model to be established, input by the user through the input/output device 104; if not, it receives the next detection signal(s) and returns to step 200.
  • the feedback information may further include an identifier of a scene corresponding to each model.
  • For example, the user may input the scene identifier "golf scene" and the model identifier "swing action".
  • the user can enter the model identification as "angina.”
  • A model identifier library may be pre-stored in the storage unit 103, and the user may select the identifier of the model to be established from this library through the input/output device 104; if the identifier of the model to be established is not stored in the existing identifier library, the user can enter the model identifier directly.
  • the processing unit 101 takes the corresponding signal/feature as a training sample and trains the model through the training module 101b.
  • The training module 101b in the processing unit 101 can train using a machine learning algorithm well known in the art, such as Gaussian mixture models, support vector machines, Bayesian methods, or other well-known algorithms.
  • the processing unit 101 determines, according to the existing sample/model library, whether the model identifier is identical to a certain model identifier in the sample/model library;
  • In step 205, the training module 101b in the processing unit 101 adds the extracted and selected features of the current input signal to the existing sample set having the same model identifier for training. For example, if the signals received by the processing unit 101 are an ECG signal and a motion signal, and the identifier of the model to be established input by the user is "abnormal", the ECG signal and the motion signal can each be added to the training sample set of the model identified as "abnormal" in the existing sample/model library, and a new "abnormal" model is trained and established.
  • In step 206, the existing model with the same identifier is updated, thereby updating the existing sample/model library.
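The self-learning loop of steps 204 to 206 can be sketched as follows. This is a minimal illustration under stated assumptions: the "retraining" is stubbed as recomputing a per-identifier centroid, and all names (`SampleModelLibrary`, `feedback`) are hypothetical, not from the patent.

```python
class SampleModelLibrary:
    """Toy sample/model library updated from user feedback."""

    def __init__(self):
        self.samples = {}   # model identifier -> list of feature vectors
        self.models = {}    # model identifier -> trained model (here: centroid)

    def feedback(self, model_id, features):
        # Step 205: add the sample to the set with the same identifier
        # (creating a new set if the identifier is new), then retrain.
        self.samples.setdefault(model_id, []).append(features)
        self._train(model_id)

    def _train(self, model_id):
        # Step 206: "retrain" the model for this identifier. A real system
        # would run GMM/SVM/Bayesian training; a centroid stands in here.
        rows = self.samples[model_id]
        dim = len(rows[0])
        self.models[model_id] = [sum(r[i] for r in rows) / len(rows)
                                 for i in range(dim)]

lib = SampleModelLibrary()
lib.feedback("abnormal", [1.0, 3.0])   # user flags a misrecognized ECG sample
lib.feedback("abnormal", [3.0, 5.0])   # second corrective sample
```

Each call to `feedback` both grows the training sample set and refreshes the model, which is the essence of the personalized self-learning described in the text.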
  • The processing unit 101 may also, according to an instruction input by the user via the input/output device 104, choose to communicate with only one or more of the sensors, or to process only their input detection signals. For example, an instruction can be input to instruct the processing unit 101 to communicate only with the EEG and motion sensors, or to process only the detection signals input by those two sensors.
  • the storage unit 103 can also store a corresponding sample/model library based on various different scenarios, such as a fitness scene, an office scene, a home scene, and the like. Each scene may also include multiple sub-scenes.
  • For example, the fitness scene may include a yoga sub-scene, a tennis sub-scene, and the like.
  • Each scene or sub-scene may include a sample/model library corresponding to each detection signal.
  • For example, the yoga sub-scene may include a sample/model library corresponding to the motion signal, such as leg-lift and bend samples/models; the tennis sub-scene may include a tennis action sample/model library, such as serve and swing samples/models; the action sample/model library corresponding to the office scene may include work, rest, and other action samples/models; the stored action sample/model library of the home scene may include action samples/models such as housework, watching TV, and eating; the EMG sample/model library corresponding to the EMG signal may include a muscle fatigue model and a muscle excitation model; the EEG sample/model library corresponding to the EEG signal may include mental stress and mental relaxation samples/models; and the ECG sample/model library corresponding to the ECG signal may include samples/models such as "normal" and "abnormal", where the abnormal sample/model library may further include "arrhythmia" and "myocardial infarction" samples/models.
  • The input/output device 104 can output scene identifiers for the user to select, and the user can select a scene or sub-scene through the input/output device 104. The processing unit 101 can then identify the state type of the detected object according to the models corresponding to each scene, for example the type of action performed, the EEG type, the ECG type, and so on, and output the recognition results respectively.
  • The user can also input, through the input/output device 104, the scene identifier corresponding to the model to be established.
  • For example, if the model to be trained for the detected object is the "swing" action model in a golf scene, together with the corresponding blood oxygen, EEG, ECG, humidity, and temperature models, the user can, following the prompt of the input/output device 104, input the scene identifier "golf scene" and then the model identifier "swing". The processing unit 101 receives the corresponding scene and model identifiers input by the user, marks the newly trained model as "swing", and then updates the corresponding sample/model library.
  • The processing unit 101 may further extract features from the signals detected by the plurality of sensors 1021-102n and train models on them. The classification algorithm of the identification module 101a can perform feature extraction according to a preset maximum feature dimension for the different sensors and perform feature selection according to the sensors actually connected; similarly, the training module 101b can extract and select features for training according to the preset maximum feature dimension.
  • For example, suppose the sensors that the system 100 can connect are a motion sensor, an ECG sensor, and an EMG sensor. A 3-dimensional feature vector of the mean, variance, and short-time energy of the amplitude is extracted from the motion signal detected by the motion sensor; the features extracted from the ECG sensor's signal are the QT period of the QRS wave, the QRS slope, and the ST-segment slope; and the 2-dimensional feature vector extracted from the EMG sensor's signal is the mean absolute value and the mean absolute slope. The corresponding classification/training algorithm then classifies/trains on the 8-dimensional feature vector. If the ECG sensor is not connected, the weight values of the ECG-signal features in the 8-dimensional feature vector are set to zero in the recognition module 101a, and classification/training proceeds.
  • Features can be extracted separately from the signals detected by the motion, ECG, EEG, blood oxygen, temperature, and humidity sensors, and models trained accordingly. In this way, the user can monitor the status type of the detected object in different environments.
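The fixed-dimension feature vector with per-sensor weights described above can be sketched as follows. Features from a sensor that is not connected get weight 0, so the same 8-dimensional classifier can run on whatever subset of sensors is attached. The slot layout and sensor names are illustrative assumptions.

```python
# Slots 0-2: motion (mean, variance, short-time energy)
# Slots 3-5: ECG (QT period, QRS slope, ST-segment slope)
# Slots 6-7: EMG (mean absolute value, mean absolute slope)
SENSOR_SLOTS = {"motion": [0, 1, 2], "ecg": [3, 4, 5], "emg": [6, 7]}

def weighted_features(raw, connected):
    """Zero out the feature slots of sensors that are not connected."""
    weights = [0.0] * len(raw)
    for sensor in connected:
        for slot in SENSOR_SLOTS[sensor]:
            weights[slot] = 1.0
    return [w * x for w, x in zip(weights, raw)]

raw8 = [0.5, 0.2, 1.1, 0.4, 0.3, 0.1, 0.8, 0.6]
vec = weighted_features(raw8, connected={"motion", "emg"})  # no ECG attached
```

The classifier and trainer always see a vector of the preset maximum dimension, which is the design choice the text describes for handling a variable set of connected sensors.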
  • The sensors 1021-102n continue to collect signals from the detected object. Based on the updated state models in the storage unit 103, the processing unit 101 continues to identify the state type of the detected object and to train, continuously updating and refining the state samples/models.
  • The recognition rate of the initial model may not be high; as the number of training samples increases, training samples of the same state are combined for training, and adaptive learning is performed on the basis of the existing model, so the recognition rate of the trained model can improve.
  • the storage unit 103 is further configured to store a detection signal and/or a recognition result of the detected object.
  • the input/output device 104 of the system 100 can also provide a user interface for the user to select different modes of operation, for example, a training mode, a monitoring mode, a recording mode, and the like can be provided.
  • The user can select different modes through the input/output device 104 as needed. In the training mode, the user can actively perform model training and build models; in the monitoring mode, the processing unit 101 monitors and identifies the state of the detected object according to the recognition algorithm and the sample/model library; in the recording mode, the processing unit 101 records the signals and/or recognition results of the detected object.
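The three operation modes can be sketched as a simple dispatch. The handlers below are illustrative stubs (names assumed, not from the patent) that only record which action each mode would trigger.

```python
log = []  # records what the processing unit would do per incoming signal

def handle(signal, mode):
    """Dispatch an incoming signal according to the selected mode."""
    if mode == "training":
        log.append(("train", signal))       # build/refine a model
    elif mode == "monitoring":
        log.append(("recognize", signal))   # classify against the library
    elif mode == "recording":
        log.append(("store", signal))       # persist signal/result
    else:
        raise ValueError("unknown mode: " + mode)

for m in ("training", "monitoring", "recording"):
    handle([0.1, 0.2], m)
```

Modes can also be combined, as the text notes; a combined mode would simply invoke more than one branch per signal.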
  • FIG. 4 shows a personalized self-learning recognition system 300 of the present invention.
  • It includes a client 301 and a server 302.
  • Client 301 and server 302 perform data transmission over network 303, such as Wi-Fi, GSM, LAN, USB, Bluetooth, WLAN, etc., as is known in the art.
  • the server 302 includes a storage unit 3023 for storing a sample/model library
  • the components of the client 301 that have the same or similar functions as the system 100 shown in FIG. 1 are not described again.
  • the sensors 30121-3012n are used to detect a signal of the detected object
  • the processing unit 3011 is configured to receive the signal of the detected object detected by the sensors 30121-3012n, and identify the current state type of the user according to the sample/model library downloaded from the server 302 and output the recognition result through the input/output device 3013;
  • The user may input feedback information through the input/output device 3013, for example information indicating a recognition error, and the input/output device 3013 transmits the feedback information to the processing unit 3011;
  • The processing unit 3011 trains the model using the corresponding training samples, and then sends the trained model and the corresponding training samples to the server 302;
  • the server 302 then updates the sample/model library.
  • The server 302 is further configured to store the detected signals and/or the recognition results.
  • The input/output device 3013 may also prompt the user to input the identifier of the model to be trained; if the model identifier input by the user is the same as a model identifier in the existing sample/model library, the corresponding training samples are added to that model's training sample set for training, and the sample/model library is updated.
  • the server 302 may further include a processing unit 3021 and an input/output device 3022.
  • The user may send the various detection signals of the detected object to other remote monitoring terminals through the processing unit 3021 in the server 302, for example to a remote monitoring center for health analysis. The user can also use the input/output device 3022 in the server 302 to set, according to the personalized condition of the detected object and its status, which sensor signal types the client's processing unit 3011 should process.
  • For example, the configuration information may specify that the ECG signal and the motion signal are to be detected; it is sent to the client's processing unit 3011, and the processing unit 3011 then processes only the ECG and motion signals of the detected object according to the configuration information.
  • The system 300 may be further configured such that the processing unit 3011 of the client 301 receives the signal of the detected object detected by the sensors 30121-3012n and identifies the user's state type according to the sample/model library downloaded from the server 302;
  • The user can input feedback information including model identification information through the input/output device 3013; the processing unit 3011 receives the feedback information and sends the corresponding training samples to the server 302 through the network 303;
  • The processing unit 3021 in the server 302 is configured to receive the feedback information and the training samples of the model to be established sent by the client 301, train the model using the corresponding training sample set, label the newly established model, and then update the sample/model library in the storage unit 3023.
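The client-server variant just described can be sketched as a minimal message exchange: the client packages the feedback (model identifier) with the training samples, and the server merges them into its library and retrains. The message format and the in-memory "server" are illustrative assumptions; a real system would use a network transport such as the Wi-Fi/GSM/LAN links named earlier, and real training in place of the centroid stub.

```python
import json

server_library = {"samples": {}, "models": {}}

def client_send_feedback(model_id, samples):
    """Client side: serialize feedback plus training samples."""
    return json.dumps({"model_id": model_id, "samples": samples})

def server_receive(message_json):
    """Server side: merge the client's samples and 'retrain' (centroid stub)."""
    msg = json.loads(message_json)
    rows = server_library["samples"].setdefault(msg["model_id"], [])
    rows.extend(msg["samples"])
    dim = len(rows[0])
    server_library["models"][msg["model_id"]] = [
        sum(r[i] for r in rows) / len(rows) for i in range(dim)]

# Client flags two feature vectors as belonging to the "swing" model.
server_receive(client_send_feedback("swing", [[1.0, 2.0], [3.0, 4.0]]))
```

Keeping training on the server, as in this configuration, lets resource-constrained clients contribute samples while the server maintains the authoritative sample/model library.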
  • The system of the present invention can also be used to monitor the condition of an object, for example whether an instrument in operation is working normally. Different types of sensors, such as an audio sensor, a vibration sensor, or a pressure sensor, can be placed at corresponding positions to detect sound, vibration, and pressure signals, and the working condition of the instrument is identified according to normal or abnormal models. The user trains and builds new models based on the recognition results and continuously updates the sample/model library, so as to monitor the working state of the instrument more accurately.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention relates to a personalized self-learning identification system and method, the system comprising: one or more sensors for detecting the signal of an object to be detected; a storage unit for storing a sample/model library comprising a model and a training sample set; a processing unit for receiving the signal of the detected object detected by the one or more sensors and identifying the state type of the detected object according to a corresponding model in the sample/model library; and an input/output device for outputting a recognition result and receiving feedback information input based on the recognition result. The processing unit also uses a corresponding training sample set to train a model and update the sample/model library according to the feedback information. The personalized self-learning identification method of the present invention allows a user to establish a personalized model, which greatly improves the identification rate for a personalized state type.
PCT/CN2012/074584 2012-04-24 2012-04-24 Personalized self-learning identification system and method WO2013159282A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/074584 WO2013159282A1 (fr) 2012-04-24 2012-04-24 Personalized self-learning identification system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/074584 WO2013159282A1 (fr) 2012-04-24 2012-04-24 Personalized self-learning identification system and method

Publications (1)

Publication Number Publication Date
WO2013159282A1 (fr) 2013-10-31

Family

ID=49482113

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/074584 WO2013159282A1 (fr) 2012-04-24 2012-04-24 Personalized self-learning identification system and method

Country Status (1)

Country Link
WO (1) WO2013159282A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5078152A (en) * 1985-06-23 1992-01-07 Loredan Biomedical, Inc. Method for diagnosis and/or training of proprioceptor feedback capabilities in a muscle and joint system of a human patient
CN101662986A (zh) * 2007-04-20 2010-03-03 皇家飞利浦电子股份有限公司 System and method for assessing motion patterns
CN102087712A (zh) * 2010-02-22 2011-06-08 艾利维公司 System and method for personalized motion control
CN102368297A (zh) * 2011-09-14 2012-03-07 北京英福生科技有限公司 Device, system and method for identifying the motions of a detected object

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114625076A (zh) * 2016-05-09 2022-06-14 强力物联网投资组合2016有限公司 Methods and systems for the industrial internet of things
CN111796980A (zh) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 Data processing method and apparatus, electronic device, and storage medium
CN111796980B (zh) * 2019-04-09 2023-02-28 Oppo广东移动通信有限公司 Data processing method and apparatus, electronic device, and storage medium
CN110473306A (zh) * 2019-08-15 2019-11-19 优估(上海)信息科技有限公司 Facial-recognition-based attendance method, apparatus, and system
CN111310658A (zh) * 2020-02-14 2020-06-19 北京海益同展信息科技有限公司 Method and apparatus for updating a motion pattern recognition model
CN113255748A (zh) * 2021-05-14 2021-08-13 广州织点智能科技有限公司 Method and apparatus for updating the feature base library of a commodity recognition model
CN114841201A (zh) * 2022-04-23 2022-08-02 中国人民解放军32802部队 Dynamic knowledge base design method and apparatus for intelligent radar countermeasures

Similar Documents

Publication Publication Date Title
WO2013159282A1 (fr) Personalized self-learning identification system and method
EP3843617B1 (fr) Camera-guided interpretation of neuromuscular signals
Esposito et al. A piezoresistive array armband with reduced number of sensors for hand gesture recognition
Wu et al. Fuzzy integral with particle swarm optimization for a motor-imagery-based brain–computer interface
CN106255449A (zh) Portable device with multiple integrated sensors for vital sign scanning
AU2013239151A1 (en) System for the acquisition and analysis of muscle activity and operation method thereof
US9770179B2 (en) System, method and device for detecting heart rate
WO2017146519A1 (fr) Sensor-based detection of health variations and ventilatory thresholds
EP2839774B1 (fr) Biosignal interface apparatus and operation method thereof
CN107440695A (zh) Physiological signal sensing device
WO2013141419A1 (fr) Two-handed biometric system for assessing blood vessel and cardiopulmonary function
WO2017188099A1 (fr) Biometric information device, terminal, and system
JP7330507B2 (ja) Information processing device, program, and method
WO2020122792A1 (fr) Control of an active orthotic device
KR20190080598A (ko) Emotion detection system and method using biosignals
JP7303544B2 (ja) Information processing device, program, and method
Miyake et al. Heel-contact gait phase detection based on specific poses with muscle deformation
JP2021089635A (ja) Information processing device and program
US10691218B2 (en) Gesture recognition apparatus and components thereof
JP7343168B2 (ja) Information processing device, program, and method
JP2003244780A (ja) Remote controller using biological signals
CN210052319U (zh) Handheld Mandarin pronunciation trainer
WO2016149830A1 (fr) System, method and device for detecting heart rate
KR101603148B1 (ko) Surface-EMG-based gait phase recognition method with adaptive selection of features and channels
TW201617027A (zh) System and method for reducing motion artifacts using surface electromyography

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12874957

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12874957

Country of ref document: EP

Kind code of ref document: A1