WO2021045354A1 - Method for determining a motion detection function using dimension reduction of a plurality of sensor data, method for detecting a user's motion, and apparatus therefor - Google Patents


Info

Publication number
WO2021045354A1
WO2021045354A1 (PCT/KR2020/007435)
Authority
WO
WIPO (PCT)
Prior art keywords
motion
motion detection
user
sensor data
sensor
Prior art date
Application number
PCT/KR2020/007435
Other languages
English (en)
Korean (ko)
Inventor
신민용
신성준
유흥종
전진홍
최윤철
Original Assignee
주식회사 바딧
Priority date
Filing date
Publication date
Application filed by 주식회사 바딧
Publication of WO2021045354A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/038Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Definitions

  • The present invention relates to a method for determining a motion detection function, a method for detecting a user's motion, and an apparatus therefor.
  • More particularly, it relates to a method and apparatus that efficiently process multidimensional sensor data using dimension reduction and a deep-learning-based learning model, thereby determining in advance a motion detection function capable of detecting the user's motion and confirming the user's motion.
  • the electronic device may recognize a user's voice and a user's motion in real time.
  • An IMU (Inertial Measurement Unit) sensor is typically used for this purpose.
  • This IMU sensor may be composed of an acceleration sensor, a gyro sensor, and a geomagnetic sensor: angle information can be extracted in real time from the gyro sensor values, the direction of velocity can be tracked using the 3-axis acceleration sensor, and the direction of motion can be tracked by measuring the earth's magnetic field with the geomagnetic sensor.
  • However, because the sensor data detected by these IMU sensors are multidimensional data (different sensors and axes) carrying different kinds of information, using them to analyze the user's voice and motion makes the computation complex and lowers the accuracy of the analysis.
  • the present invention has been conceived to solve the above problems, and an object of the present invention is to provide a method and apparatus for efficiently analyzing user voices and motions using a motion confirmation learning model based on dimension reduction and deep learning.
  • a method of determining a motion detection function using dimension reduction of a plurality of sensor data includes: acquiring the plurality of sensor data related to the user's motion as learning data; Reducing the dimension of the acquired plurality of sensor data and deriving a plurality of motion detection functions; And machine learning the derived plurality of motion detection functions through a preset motion confirmation learning model, and determining at least one motion detection function derived as a result of the machine learning as a function for detecting the user's motion.
  • a method for detecting a user motion using dimension reduction of a plurality of sensor data includes: detecting the plurality of sensor data related to the user motion; Reducing a dimension of a plurality of sensor data detected by the sensor unit and deriving a plurality of motion detection functions; Machine learning the derived plurality of motion detection functions through a preset motion confirmation learning model, and outputting at least one motion detection function derived as a result of the machine learning; And checking a user's motion based on the outputted at least one motion detection function.
  • an apparatus for detecting a user motion using a dimension reduction of a plurality of sensor data includes: a sensor unit configured to detect the plurality of sensor data related to the user motion; A dimension reduction unit for reducing a dimension of a plurality of sensor data detected by the sensor unit and for deriving a plurality of motion detection functions; A deep learning unit for machine learning the derived plurality of motion detection functions through a preset motion verification learning model and outputting at least one motion detection function derived as a result of the machine learning; And a motion classification unit that checks a user's motion based on the outputted at least one motion detection function.
  • According to an aspect of the present invention, a method may be provided that includes determining, using a learning model machine-learned based on sensor data related to motion and a plurality of motion detection functions, at least one motion detection function for detecting the user's motion, and detecting the user's motion using the determined at least one motion detection function.
  • At least some of the sensor data may be reduced in dimension.
  • the plurality of motion detection functions may be specified by combining a time series function corresponding to the sensor data.
  • the learning model may be updated based on a result of detecting the user's motion using the determined at least one motion detection function.
  • a non-transitory computer-readable recording medium for recording a computer program for executing the above method may be provided.
  • According to another aspect of the present invention, a system may be provided that includes a first module that determines, using a learning model machine-learned based on sensor data related to motion and a plurality of motion detection functions, at least one motion detection function for detecting the user's motion, and a second module that detects the user's motion using the determined at least one motion detection function.
  • At least some of the sensor data may be reduced in dimension.
  • the plurality of motion detection functions may be specified by combining a time series function corresponding to the sensor data.
  • the learning model may be updated based on a result of detecting the user's motion using the determined at least one motion detection function.
  • According to the present invention, the dimensionality of a plurality of sensor data can be reduced, thereby increasing the processing speed of user motion confirmation.
  • In addition, the present invention can increase the accuracy of the user's motion confirmation by applying a pre-built motion confirmation learning model to the dimension-reduced motion detection functions.
  • FIG. 1 is a block diagram showing an apparatus for detecting user motion according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a processing flow of a control unit according to an embodiment of the present invention.
  • FIG. 3 is a flowchart illustrating a method of determining a motion detection function for confirming a user's motion by using a dimension reduction and learning model of a plurality of sensor data according to an embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a method of detecting a user's motion by using a dimension reduction and learning model of a plurality of sensor data according to an embodiment of the present invention.
  • 110: control unit
  • Various embodiments of the present document may be implemented as software (eg, programs) including instructions stored in a machine-readable storage medium (eg, a computer).
  • the device is a device capable of calling a stored command from a storage medium and operating according to the called command, and may include an electronic device (eg, a server) according to the disclosed embodiments. Instructions may include code generated or executed by a compiler or interpreter.
  • a storage medium that can be read by a device may be provided in the form of a non-transitory storage medium.
  • “non-transitory” means that the storage medium does not contain a signal and is tangible, but does not distinguish between semi-permanent or temporary storage of data in the storage medium.
  • the method according to various embodiments disclosed in the present document may be provided by being included in a computer program product.
  • Computer program products can be traded between sellers and buyers as commodities.
  • the computer program product may be distributed online in the form of a device-readable storage medium (eg, compact disc read only memory (CD-ROM)) or through an application store (eg, Play StoreTM).
  • at least some of the computer program products may be temporarily stored or temporarily generated in a storage medium such as a server of a manufacturer, a server of an application store, or a memory of a relay server.
  • Each of the constituent elements (e.g., a module or a program) may be composed of a single entity or a plurality of entities, and in various embodiments some of the above-described sub-elements may be omitted, or other sub-elements may be further included.
  • FIG. 1 is a block diagram showing an apparatus for detecting user motion according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a processing flow of a control unit according to an embodiment of the present invention.
  • the electronic device 100 may be communicatively connected with an external electronic device 200 through a network.
  • the network may include a wireless network and a wired network.
  • the network may be a short-range communication network (e.g., Bluetooth, WiFi direct, or IrDA (infrared data association)) or a telecommunication network (e.g., a cellular network, the Internet, or a computer network (e.g., LAN or WAN)).
  • the electronic device 100 may be a wearable device worn on at least a portion of a user's body or an electronic device (eg, a smartphone) carried by the user.
  • The external device 200 may be a management device capable of receiving the user's motion confirmation results, converting data related to the user's motion into big data, and analyzing and storing the user's state.
  • The external device 200 may include, for example, at least one of a smartphone, a server, a tablet personal computer (tablet PC), a desktop PC, a laptop PC, a netbook computer, a workstation, a personal digital assistant (PDA), a portable multimedia player (PMP), an MP3 player, a mobile medical device, or a camera.
  • the electronic device 100 may detect a motion of a user wearing the electronic device 100 by using sensor data collected by the sensor unit 130.
  • the electronic device 100 may include a control unit 110, a database 120, and a sensor unit 130.
  • The database 120 may store, as big data, the plurality of sensor data detected by the sensor unit 130, the learning results of the deep learning unit 112, the motion detection functions optimized for detecting specific motions of the user, and the motion confirmation results of the motion classification unit 113.
  • the sensor unit 130 may include an acceleration sensor, a gyro sensor, and a geomagnetic sensor.
  • For example, the sensor unit 130 may be an IMU (inertial measurement unit) module composed of a 3-axis acceleration sensor, a gyro sensor, and a geomagnetic sensor.
  • The acceleration sensor, the gyro sensor, and the geomagnetic sensor may each provide 3-axis data, so the plurality of sensor data may be 9-dimensional; herein, "the plurality of sensor data" and "the multidimensional sensor data" are used interchangeably with the same meaning.
  • the acceleration sensor may measure 3-axis (x, y, z-axis) acceleration.
  • the acceleration sensor may measure acceleration based on three axes of the electronic device 100.
  • the acceleration sensor can measure the acceleration for each axis direction over time and output the measured value.
  • the acceleration sensor can output the acceleration in each axis direction measured at a predetermined time interval.
  • For example, the acceleration sensor may measure and output the accelerations axi, ayi, and azi in the x, y, and z-axis directions at the i-th time index.
  • the z-axis direction may be defined as a gravitational direction
  • the x and y-axis directions may be defined as axial directions that are orthogonal to the z-axis direction and orthogonal to each other.
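As a concrete illustration of the multidimensional sensor data described above, the sketch below represents a 9-axis IMU stream as a time-indexed array. The channel ordering, sample count, and random values are assumptions for illustration, not taken from the patent.

```python
import numpy as np

# Hypothetical 9-axis IMU buffer: T time indices x 9 channels
# (ax, ay, az, gx, gy, gz, mx, my, mz) -- accelerometer, gyro, magnetometer.
T = 100
rng = np.random.default_rng(0)
samples = rng.normal(size=(T, 9))

# As described in the text, the i-th time index yields per-axis
# acceleration values axi, ayi, azi (here i = 0).
ax_0, ay_0, az_0 = samples[0, 0], samples[0, 1], samples[0, 2]

print(samples.shape)  # (100, 9): 9-dimensional (multidimensional) sensor data
```

Omitting one 3-axis sensor would shrink the second dimension to 6, matching the 6-axis case discussed below.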
  • the gyro sensor may measure a 3-axis angular velocity.
  • the gyro sensor may measure an angular velocity of three axes representing rotation based on the three axes of the electronic device 100.
  • The geomagnetic sensor may measure a 3-axis azimuth; that is, the geomagnetic sensor may measure the azimuth angle of the electronic device 100 with respect to the earth's magnetic field.
  • the sensor unit 130 may further include other sensors such as a gravity sensor, and may collect various sensor data.
  • the controller 110 may control overall operations of components included in the electronic device 100. To this end, the controller 110 may perform data communication through respective components and an internal network.
  • control unit 110 may include a dimension reduction unit 111, a deep learning unit 112, and a motion classification unit 113.
  • the control unit 110 may receive a plurality of sensor data from the sensor unit 130.
  • the plurality of sensor data may include at least one of 3-axis sensor data of an acceleration sensor, 3-axis sensor data of a gyro sensor, or 3-axis sensor data of a geomagnetic sensor.
  • Accordingly, if a sensor such as a gravity sensor is added, the plurality of sensor data may number more than nine; conversely, if any one sensor is omitted, the plurality of sensor data may number fewer than nine (e.g., 6-axis data).
  • the dimension reduction unit 111 may reduce a dimension of a plurality of sensor data detected by the sensor unit 130 and derive a plurality of motion detection functions. This may be a process performed to increase the processing speed of sensor data.
  • the plurality of sensor data detected by the sensor unit 130 may be, for example, 6-dimensional data when the sensor is a 6-axis sensor, and 9-dimensional data when the sensor is a 9-axis sensor. Therefore, when the sensor data to be analyzed in order to check the user's motion is high-dimensional data, it takes a lot of time to analyze. Accordingly, the dimension reduction unit 111 may reduce the dimensions of the plurality of sensor data.
  • each of the plurality of sensor data may be a time series function
  • the plurality of time series functions may be a polynomial consisting of one or more variables.
  • the plurality of motion detection functions may be functions derived from a polynomial combination of the plurality of time series functions.
  • For example, as one form of polynomial combination, the plurality of motion detection functions may be calculated by applying either multiple regression analysis or polynomial regression analysis to the graphs of the plurality of sensor data.
  • Specifically, a plurality of regression equations corresponding to the plurality of time series functions may be derived by applying multiple regression analysis to each time series function of the plurality of sensor data, and the plurality of motion detection functions may be derived by combining the derived regression equations in various ways.
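The per-channel regression step described above can be sketched as follows. The channel signals, the first-degree fit, and the weighted combination are illustrative assumptions, not the patent's actual formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(50, dtype=float)

# Hypothetical sensor channels as time series functions (illustrative only).
channels = [3.0 * t + rng.normal(size=50),    # e.g. x-axis acceleration
            -1.5 * t + rng.normal(size=50),   # e.g. y-axis acceleration
            0.5 * t + rng.normal(size=50)]    # e.g. z-axis acceleration

# Fit one first-degree regression (slope + intercept) per channel.
# np.polyfit returns coefficients highest-degree first: [slope, intercept].
regressions = [np.polyfit(t, ch, deg=1) for ch in channels]

# Combine the per-channel regression equations into one candidate motion
# detection function by weighting them (one of many possible combinations).
def detection_function(weights, time):
    return sum(w * np.polyval(coef, time)
               for w, coef in zip(weights, regressions))

print(round(regressions[0][0], 1))  # estimated slope of the first channel
```

Varying the weights yields different candidate functions, which is the raw material the deep learning unit later selects from.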
  • the dimension reduction unit 111 may reduce the dimension of the data by using Principal Component Analysis on a plurality of sensor data in addition to the above analysis method.
  • Principal component analysis is a technique that reduces dimensionality while minimizing the difference from the original data, by extracting eigenvectors and eigenvalues that represent the features of the data well and keeping only the eigenvectors with high feature content.
  • a principal component axis that can express data well is obtained based on an eigenvector and an eigenvalue, and the dimension is reduced by projecting the data on the principal component axis.
  • The deep learning unit 112 may generate and update a motion confirmation learning model, and may use the motion confirmation learning model to output a motion detection function suitable for predicting the user's motion.
  • In particular, multicollinearity, which can exist in a multiple regression prediction equation when the independent variables for the dependent variable are strongly correlated with one another, causes errors in the prediction model.
  • Accordingly, the dimension reduction unit 111 derives as many motion detection functions as possible through polynomial combination, and the deep learning unit 112 machine-learns them to output at least one motion detection function suitable for predicting a specific motion of the user.
  • The deep learning unit 112 may build a motion confirmation learning model optimized for predicting user motion, using machine learning techniques based on the database 120.
  • For example, the deep learning unit 112 may build the motion confirmation learning model by mixing statistical analysis techniques, such as time series prediction, with machine learning techniques, such as random forests.
  • For example, the deep learning unit 112 may perform a statistical technique through time series analysis based on the characteristics of the plurality of sensor data and the plurality of motion detection functions to be analyzed, apply a machine learning technique that receives the results of the time series analysis for user motion analysis, and integrate the prediction results of the user motion analysis, thereby constructing an optimized motion confirmation learning model.
  • The motion confirmation learning model may be at least one of a decision tree, a Bayesian network, a support vector machine (SVM), K-nearest neighbors (KNN), an artificial neural network (ANN), or a combination of two or more of these learning models.
  • the decision tree is an analysis method for performing classification and prediction by charting a decision rule in a tree structure.
  • the Bayesian network is a model that expresses a probabilistic relationship (conditional independence) between a number of variables in a graph structure. Bayesian networks are suitable for data mining through unsupervised learning.
  • the support vector machine is a model of supervised learning for pattern recognition and data analysis, and is mainly used for classification and regression analysis.
  • An artificial neural network is an information processing system in which a number of neurons, called nodes or processing elements, are connected in a layered structure, modeling the operating principle of biological neurons and the connection relationships between neurons.
  • Artificial neural networks are models used in machine learning, and are statistical learning algorithms inspired by biological neural networks (especially the brain among animals' central nervous systems) in machine learning and cognitive science.
  • the artificial neural network may refer to an overall model having problem-solving ability by changing the strength of synaptic bonding through learning by artificial neurons (nodes) forming a network by combining synapses.
  • the artificial neural network may include a plurality of layers, and each of the layers may include a plurality of neurons.
  • artificial neural networks may include synapses that connect neurons and neurons.
  • In general, an artificial neural network is defined by three factors: (1) the connection pattern between neurons in different layers, (2) the learning process that updates the weights of the connections, and (3) the activation function that generates an output value from the weighted sum of the inputs received from the previous layer.
  • The artificial neural network may include network models such as a DNN (Deep Neural Network), an RNN (Recurrent Neural Network), a BRDNN (Bidirectional Recurrent Deep Neural Network), an MLP (Multilayer Perceptron), and a CNN (Convolutional Neural Network), but is not limited thereto.
  • Artificial neural networks are divided into single-layer neural networks and multi-layer neural networks according to the number of layers.
  • a typical single-layer neural network consists of an input layer and an output layer.
  • a general multilayer neural network is composed of an input layer, one or more hidden layers, and an output layer.
  • the input layer is a layer that receives external data
  • the number of neurons in the input layer is the same as the number of input variables
  • The hidden layer is located between the input layer and the output layer; it receives signals from the input layer, extracts features, and transfers them to the output layer.
  • the output layer receives a signal from the hidden layer and outputs an output value based on the received signal.
  • The input signals between neurons are each multiplied by their connection strength (weight) and then summed; if the sum is greater than the neuron's threshold, the neuron is activated and outputs the value obtained through the activation function.
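The weighted-sum-and-threshold behavior just described can be sketched as a single artificial neuron. The sigmoid activation and all numeric values are illustrative assumptions, not prescribed by the text.

```python
import numpy as np

def neuron(inputs, weights, bias, threshold=0.0):
    """Weighted sum of inputs plus bias; if the sum exceeds the threshold,
    the neuron activates and outputs the activation function's value
    (sigmoid here, as one common choice)."""
    s = np.dot(inputs, weights) + bias
    if s <= threshold:
        return 0.0                      # neuron not activated
    return 1.0 / (1.0 + np.exp(-s))    # sigmoid activation of the sum

x = np.array([0.5, -0.2, 0.8])         # signals from the previous layer
w = np.array([0.4, 0.7, -0.1])         # connection strengths (weights)
print(neuron(x, w, bias=0.1))
```

Stacking many such neurons into input, hidden, and output layers gives the multilayer network described in the surrounding paragraphs.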
  • a deep neural network including a plurality of hidden layers between an input layer and an output layer may be a representative artificial neural network implementing deep learning, a type of machine learning technology.
  • The motion confirmation learning model may be trained using training data.
  • learning means a process of determining parameters of an artificial neural network using training data in order to achieve the purpose of classifying, regressing, or clustering input data.
  • parameters of an artificial neural network include weights applied to synapses or biases applied to neurons.
  • The deep learning unit 112 may evaluate the motion confirmation learning model based on the user's motion confirmation results, and may gradually optimize the model by reflecting the evaluation results in the motion confirmation learning model.
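Such a feedback loop might be sketched as below. The perceptron-style correction rule is purely illustrative; the patent does not specify the actual update rule.

```python
# Hypothetical feedback step: fold one evaluated confirmation result
# back into the model's weights.
def update_model(weights, sample, label, lr=0.1):
    """One correction step: predict, compare with the confirmed label,
    and nudge the weights toward misclassified samples (illustrative)."""
    pred = 1.0 if sum(w * x for w, x in zip(weights, sample)) > 0 else 0.0
    error = label - pred
    return [w + lr * error * x for w, x in zip(weights, sample)]

weights = [0.0, 0.0, 0.0]
# A misclassified positive sample pushes the weights toward it.
weights = update_model(weights, [1.0, 2.0, -1.0], label=1)
print(weights)  # [0.1, 0.2, -0.1]
```

A correctly classified sample leaves the weights unchanged, so repeated confirmation results gradually refine the model, as the text describes.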
  • the motion classification unit 113 may check user motion based on at least one motion detection function output from the deep learning unit 112.
  • the electronic device 100 may further include a display unit, and the control unit 110 may display a screen for monitoring a user's operation situation through the display unit.
  • Meanwhile, the electronic device 100 of the present invention is a device for efficiently detecting a user's motion and is a product produced through a manufacturing process. Accordingly, the present invention determines, through dimension reduction and deep learning, a motion detection function for efficiently detecting the user's motion before the electronic device 100 is manufactured and shipped, and after the electronic device 100 is manufactured and shipped, it can continuously confirm the user's motion while optimizing the learning model through dimension reduction and deep learning.
  • In FIG. 3 below, the method of determining the motion detection function before the electronic device 100 is manufactured and shipped will be described in detail, and in FIG. 4 below, the method of detecting the user's motion after the electronic device 100 is manufactured and shipped will be described in detail.
  • FIG. 3 is a flowchart illustrating a method of determining a motion detection function for confirming a user's motion by using dimension reduction of a plurality of sensor data and a learning model, according to an embodiment of the present invention.
  • The operations of FIG. 3 may be performed by the dimension reduction unit 111, the deep learning unit 112, and the motion classification unit 113 of FIG. 2. Of course, the operations of FIG. 3 may also be performed collectively by the control unit 110 of FIG. 1.
  • the sensor unit 130 may acquire a plurality of sensor data 10 related to a user's motion as learning data in operation 31.
  • the plurality of sensor data 10 may include at least one of 3-axis sensor data of an acceleration sensor, 3-axis sensor data of a gyro sensor, or 3-axis sensor data of a geomagnetic sensor.
  • the plurality of sensor data 10 may range from 3-axis data to 9-axis data.
  • In addition, other sensors such as a gravity sensor may be added, so that the plurality of sensor data 10 may be high-dimensional sensor data of more than 9 dimensions.
  • the dimension reduction unit 111 may reduce the dimensions of a plurality of sensor data detected by the sensor unit 130 and derive a plurality of motion detection functions capable of confirming a user's motion.
  • a user's motion includes a wide variety of motions such as a walking motion, a running motion, and a sitting motion
  • the motion confirmation learning model may store a motion detection function corresponding to each motion of the user through continuous learning.
  • Here, each result value of the plurality of motion detection functions may be a probability value indicating whether the same specific user motion has occurred.
  • The plurality of motion detection functions may be n in number, and may include F1(t), F2(t), ..., Fn(t).
  • The plurality of motion detection functions F1(t) to Fn(t) may all be functions for detecting the same motion (e.g., a walking motion) of the user. That is, although the plurality of motion detection functions are all functions for detecting the user's walking motion, their sensitivity and accuracy in detecting that specific motion (e.g., the walking motion) may differ from one another.
  • the dimension reduction unit 111 may derive a plurality of motion detection functions by combining a polynomial of a plurality of time series functions corresponding to each of a plurality of sensor data.
  • the plurality of time series functions may be polynomials composed of one or more variables
  • the plurality of motion detection functions may be polynomials including variables included in the plurality of time series functions.
  • polynomial combination may be performed using at least one method of multiple regression analysis or polynomial regression analysis.
  • For example, the plurality of motion detection functions can be composed of polynomials consisting of simple formulas, by applying a time series multiple regression method that sets the result of each time series function of the plurality of sensor data as an independent variable and the probability of a specific motion of the user as a dependent variable.
  • a plurality of motion detection functions may be configured with the following equation having only an initial value and a slope.
  • Equation 1 will be described as an example:

    Y = b0 + b1·X1 + b2·Y1 + b3·Z1 + b4·X2 + b5·Y2 + b6·Z2 + ... + b3m-2·Xm + b3m-1·Ym + b3m·Zm (Equation 1)

  • Here, n is the number of motion detection functions, m is the number of sensors providing the plurality of sensor data, Xm, Ym, and Zm are the 3-axis data results of the m-th sensor, ti is the detection time of the sensor data, and Y is the probability of a specific motion of the user.
  • For example, m may be 3, in which case there are 9 variables (X1, Y1, Z1, X2, Y2, Z2, X3, Y3, Z3).
  • n may mean the number of training data sets for deep learning. That is, the n motion detection functions may be a set of learning data for determining a motion detection function for detecting a specific motion (eg, walking motion) of a user through machine learning.
  • For example, F1(t) to Fn(t) may each have the same variables (X1, Y1, Z1, X2, Y2, Z2, ..., Xm, Ym, Zm) as shown in Equation 1 but different coefficients (b1, b2, b3, b4, b5, b6, ..., b3m-2, b3m-1, b3m). That is, the more sets of learning data available for machine learning, the more accurately a specific motion of the user can be detected, so the n motion detection functions can be derived by varying the coefficients as widely as possible. Here, the coefficients may be adjusted in a variety of known ways so as to detect the specific motion of the user.
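The coefficient-varying derivation of the n candidate functions can be sketched as follows. Random coefficient sets stand in for the unspecified "variety of known methods"; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 3                       # number of sensors (the text's example)
n = 20                      # number of candidate motion detection functions
num_vars = 3 * m            # X1, Y1, Z1, ..., Xm, Ym, Zm -> 9 variables

# Each candidate Fk is a linear form in the same 9 variables with its own
# coefficient set (b1 .. b3m) plus an initial value (intercept) b0,
# mirroring the "initial value and slope" form of Equation 1.
coeff_sets = rng.normal(size=(n, num_vars))
intercepts = rng.normal(size=n)

def F(k, sensor_values):
    """Evaluate candidate detection function Fk on one 9-D sensor sample."""
    return intercepts[k] + coeff_sets[k] @ sensor_values

sample = rng.normal(size=num_vars)
scores = [F(k, sample) for k in range(n)]
print(len(scores))  # one probability-like score per candidate function
```

These n candidates then form the learning-data set that the deep learning unit machine-learns over, as the following paragraphs describe.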
  • the plurality of motion detection functions may be formed of polynomials having different variables and coefficients, respectively.
  • F1(t) to Fn(t) each have at least three or more of the variables of Equation 1 (X1, Y1, Z1, X2, Y2, Z2, Xm, Ym, Zm), and have different coefficients.
  • It may be a motion detection function having That is, in order to more accurately detect a specific motion of a user, the more sets of learning data to be used for machine learning, the more effective it can be. Therefore, n motion detection functions can be derived by variously adjusting variables and coefficients as much as possible.
  • the variables and coefficients may be adjusted by any of various known methods so as to detect the specific motion of the user.
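The linear form described above (an intercept plus one slope per variable) can be sketched as follows. The coefficient values and the sigmoid mapping of the raw score to a probability are illustrative assumptions, not part of the disclosure.

```python
import math

def motion_detection_function(intercept, coeffs, sample):
    """Evaluate a linear detection function on one flattened 3m-dim sensor sample."""
    score = intercept + sum(b * x for b, x in zip(coeffs, sample))
    # Squash the raw score to a probability of the specific motion in (0, 1).
    return 1.0 / (1.0 + math.exp(-score))

# m = 3 sensors -> 9 variables (X1, Y1, Z1, ..., X3, Y3, Z3), as in the example.
coeffs = [0.1] * 9        # illustrative slopes b1 .. b9
sample = [0.5] * 9        # one flattened 9-axis sensor reading
p = motion_detection_function(0.0, coeffs, sample)
```

Each of the n candidate functions F1(t) to Fn(t) would be one such evaluation with its own coefficient set.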
  • the deep learning unit 112 machine-learns the derived plurality of motion detection functions through a preset motion verification learning model, and may determine at least one motion detection function derived as a result of the machine learning as the function for detecting the user's motion. For example, as described above, even when all of the plurality of motion detection functions are functions for detecting the user's specific motion (e.g., walking motion), their sensitivity, accuracy, etc. in detecting that specific motion may differ from each other.
  • the deep learning unit 112 may, through machine learning, determine at least one motion detection function with high sensitivity or accuracy so that the user's specific motion (e.g., walking motion) can be identified more accurately among the plurality of motion detection functions. That is, as shown in FIG. 2, the deep learning unit 112 can output at least one motion detection function Fk(t) suitable for the user's specific motion (e.g., walking motion) among the n functions F1(t) to Fn(t).
  • the at least one motion detection function may be F1(t), F2(t), F3(t), F4(t), F5(t).
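The selection of high-accuracy functions described above can be sketched as ranking the n candidates on held-out labelled samples and keeping the best k. The function names, the 0.5 decision cut-off, and the toy data are assumptions for illustration only.

```python
def select_top_functions(functions, samples, labels, k):
    """Rank candidate detection functions by accuracy and keep the best k."""
    def accuracy(f):
        preds = [1 if f(s) >= 0.5 else 0 for s in samples]
        return sum(int(p == y) for p, y in zip(preds, labels)) / len(labels)
    return sorted(functions, key=accuracy, reverse=True)[:k]

# Two toy candidates: one that matches the labels, one that inverts them.
samples = [[1.0], [0.0], [1.0]]
labels = [1, 0, 1]
good = lambda s: s[0]
bad = lambda s: 1.0 - s[0]
best = select_top_functions([bad, good], samples, labels, k=1)
```

In the disclosure the ranking would be performed by the motion verification learning model rather than a plain accuracy score; the sketch only shows the keep-the-best-k shape of the step.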
  • the deep learning unit 112 feeds back the result of determining the at least one motion detection function using the motion confirmation learning model, and reflects that result in the construction of the motion confirmation learning model, so that the motion confirmation learning model can be gradually optimized. For example, since a learning model based on deep learning may become more sophisticated as machine learning is repeated, the deep learning unit 112 can continuously update the motion confirmation learning model whenever a result of determining a motion detection function is derived.
  • FIG. 4 is a flowchart illustrating a method of detecting a user's motion by using a dimension reduction and learning model of a plurality of sensor data according to an embodiment of the present invention.
  • the operations of FIG. 4 may be performed by the dimension reduction unit 111, the deep learning unit 112, and the motion classification unit 113 of FIG. 2. Of course, the operation of FIG. 4 may be performed collectively by the control unit 100 of FIG. 1.
  • in operation 41, the deep learning unit 112 may build in advance a motion confirmation learning model optimized for predicting the user's motion, using a machine learning technique based on a plurality of previously collected sensor data.
  • the motion confirmation learning model can be built in advance through the operations of FIG. 3, and a motion detection function that can determine the user's motion more accurately is continuously learned by repeating operations 42 to 46 shown in FIG. 4. Accordingly, a motion verification learning model capable of extracting a high-accuracy motion detection function from among a plurality of motion detection functions can be constructed as an increasingly sophisticated model.
  • a user's motion includes a wide variety of motions such as a walking motion, a running motion, and a sitting motion
  • the motion confirmation learning model may store a motion detection function corresponding to each motion of the user through continuous learning.
  • the sensor unit 130 may detect a plurality of sensor data 10 related to a user's motion.
  • the plurality of sensor data 10 may include at least one of 3-axis sensor data of an acceleration sensor, 3-axis sensor data of a gyro sensor, or 3-axis sensor data of a geomagnetic sensor.
  • the plurality of sensor data 10 may range from 3-axis data to 9-axis data. Accordingly, as described above, the plurality of sensor data 10 may be 3D to 9D sensor data.
  • sensors such as a gravity sensor may be further added, so that the plurality of sensor data 10 may be high-dimensional sensor data of more than 9 dimensions.
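The assembly of one high-dimensional sample from several 3-axis sensors, as described above, can be sketched as a simple concatenation. The function name and example readings are assumptions for illustration.

```python
def flatten_sample(accel, gyro, mag):
    """Concatenate the 3-axis readings of each sensor into one flat vector."""
    for axes in (accel, gyro, mag):
        if len(axes) != 3:
            raise ValueError("each sensor contributes 3-axis data")
    return list(accel) + list(gyro) + list(mag)

# One 9-dimensional sample from accelerometer, gyroscope, and geomagnetic data.
sample = flatten_sample((0.1, 0.2, 9.8), (0.0, 0.1, 0.0), (30.0, 5.0, -12.0))
```

Adding a fourth sensor (e.g., a gravity sensor) would extend the vector to 12 dimensions in the same way.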
  • the dimension reduction unit 111 may reduce the dimensions of a plurality of sensor data detected by the sensor unit 130 and derive a plurality of motion detection functions.
  • each result value of the plurality of motion detection functions may be a probability value of whether the same specific user motion is determined by all of the plurality of motion detection functions.
  • the plurality of motion detection functions may be n functions, F1(t), F2(t), ..., Fn(t).
  • the plurality of motion detection functions F1(t) to Fn(t) may be functions for detecting the same motion (e.g., walking motion) of the user. That is, although the plurality of motion detection functions are all functions for detecting the user's walking motion, their sensitivity and accuracy in detecting the specific motion (e.g., walking motion) may differ from each other.
  • the dimension reduction unit 111 may derive a plurality of motion detection functions through a polynomial combination of a plurality of time series functions corresponding to each of the plurality of sensor data.
  • the plurality of time series functions may be polynomials composed of one or more variables
  • the plurality of motion detection functions may be polynomials including variables included in the plurality of time series functions.
  • polynomial combination may be performed using at least one method of multiple regression analysis or polynomial regression analysis.
  • a plurality of motion detection functions can be composed of polynomials of simple form by applying a time-series multiple regression method that sets the result of each time series function of the plurality of sensor data as an independent variable and the probability of the user's specific motion as the dependent variable.
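The time-series multiple regression described above — sensor-derived values as independent variables, motion probability as the dependent variable — can be sketched as an ordinary least-squares fit via the normal equations. The solver and the synthetic data below are assumptions about one possible realisation, not the disclosed implementation.

```python
def fit_multiple_regression(X, y):
    """Least-squares fit: solve (A^T A) b = A^T y for intercept + slopes."""
    n, d = len(X), len(X[0])
    A = [[1.0] + list(row) for row in X]          # prepend intercept column
    # Build the normal equations.
    ATA = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(d + 1)]
           for i in range(d + 1)]
    ATy = [sum(A[r][i] * y[r] for r in range(n)) for i in range(d + 1)]
    # Gaussian elimination with partial pivoting.
    for col in range(d + 1):
        piv = max(range(col, d + 1), key=lambda r: abs(ATA[r][col]))
        ATA[col], ATA[piv] = ATA[piv], ATA[col]
        ATy[col], ATy[piv] = ATy[piv], ATy[col]
        for r in range(col + 1, d + 1):
            f = ATA[r][col] / ATA[col][col]
            for c in range(col, d + 1):
                ATA[r][c] -= f * ATA[col][c]
            ATy[r] -= f * ATy[col]
    # Back substitution.
    b = [0.0] * (d + 1)
    for r in range(d, -1, -1):
        b[r] = (ATy[r] - sum(ATA[r][c] * b[c] for c in range(r + 1, d + 1))) / ATA[r][r]
    return b  # [intercept, slope_1, ..., slope_d]

# Synthetic data with exact relation y = 0.2 + 0.5*x; the fit should recover it.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.2, 0.7, 1.2, 1.7]
b = fit_multiple_regression(X, y)
```

With real sensor data, each row of X would be the flattened time-series values and each y the labelled probability of the specific motion.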
  • a plurality of motion detection functions may be configured with the following equation having only an initial value and a slope.
  • Equation 1 will be described as an example.
  • n is the number of motion detection functions
  • m is the number of a plurality of sensor data
  • Xm, Ym, Zm may be the result values of the 3-axis data of the m-th sensor
  • ti may be the detection time of the sensor data
  • Y may be the probability of a specific action of the user.
  • m may be 3, in which case there are nine variables (X1, Y1, Z1, X2, Y2, Z2, X3, Y3, Z3).
  • n may mean the number of training data sets for deep learning. That is, the n motion detection functions may be a set of learning data for determining a motion detection function for detecting a specific motion (eg, walking motion) of a user through machine learning.
  • F1(t) to Fn(t) each have the same variables (X1, Y1, Z1, X2, Y2, Z2, ..., Xm, Ym, Zm) as shown in Equation 1, but have different coefficients (b1, b2, b3, b4, b5, b6, ..., b3m-2, b3m-1, b3m). That is, to detect a specific motion of the user more accurately, the more sets of learning data used for machine learning, the more effective the detection can be. Therefore, n motion detection functions can be derived by varying the coefficients as widely as possible. Here, the coefficients may be adjusted by any of various known methods so as to detect the specific motion of the user.
  • the plurality of motion detection functions may be formed of polynomials having different variables and coefficients, respectively.
  • F1(t) to Fn(t) may each be a motion detection function that has at least three of the variables of Equation 1 (X1, Y1, Z1, X2, Y2, Z2, ..., Xm, Ym, Zm) and has different coefficients. That is, to detect a specific motion of the user more accurately, the more sets of learning data used for machine learning, the more effective the detection can be. Therefore, n motion detection functions can be derived by varying the variables and coefficients as widely as possible.
  • the variables and coefficients may be adjusted by any of various known methods so as to detect the specific motion of the user.
  • in operation 44, the deep learning unit 112 machine-learns the derived plurality of motion detection functions through a preset motion verification learning model, and may output at least one motion detection function derived as a result of the machine learning.
  • even when the plurality of motion detection functions are all functions for detecting the user's specific motion (e.g., walking motion), their sensitivity, accuracy, etc. in detecting that specific motion may differ from each other. Therefore, the deep learning unit 112 may, through machine learning, output at least one motion detection function with high sensitivity or accuracy so that the user's specific motion (e.g., walking motion) can be identified more accurately among the plurality of motion detection functions.
  • the deep learning unit 112 may output at least one motion detection function Fk(t) suitable for the user's specific motion (e.g., walking motion) among the n functions F1(t) to Fn(t).
  • n and k denote numbers of functions, with n > k; for example, k may be an integer of 1 or more.
  • the at least one motion detection function may be F1(t), F2(t), F3(t), F4(t), F5(t).
  • the motion classification unit 113 may check the user's motion based on the outputted at least one motion detection function. For example, when the at least one motion detection function is F1(t), F2(t), F3(t), F4(t), F5(t), the motion classification unit 113 can check the result values of F1(t), F2(t), F3(t), F4(t), and F5(t). Each result value may be a probability value that the user actually performed the specific motion at the measurement time of the plurality of sensor data 10.
  • the motion classification unit 113 may confirm the user's motion when the result value of the at least one motion detection function is greater than or equal to a threshold value. For example, if the threshold value is 0.7 and the result value (probability value) of the at least one motion detection function is 0.7 or more, the motion classification unit 113 can confirm that the user performed the specific motion at the corresponding time.
  • the numerical values of the threshold values described herein are only examples and may be variously changed.
  • an additional condition may be added to the threshold condition, requiring that at least half of the at least one motion detection function meet the threshold. For example, when the at least one motion detection function is F1(t), F2(t), F3(t), F4(t), F5(t), the motion classification unit 113 may determine that the user performed the specific motion at the corresponding time only when at least three of those functions have a result value of 0.7 or more.
  • when the result value of the at least one motion detection function is less than the threshold value, the motion classification unit 113 stops the motion check and may transmit a repetition signal to at least one of the dimension reduction unit 111 or the deep learning unit 112. That is, for example, if the result value of the at least one motion detection function is less than 0.7, the motion classification unit 113 does not determine that the user performed the specific motion, and transmits the repetition signal so that operations 42 to 45 may be performed again in sequence. Of course, since the operations are repeated, some of operations 42 to 45 may be omitted.
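The threshold check with the at-least-half condition described above can be sketched as a simple vote over the function outputs. The threshold of 0.7 and the five-function example follow the text; the function name is an assumption.

```python
def confirm_motion(probabilities, threshold=0.7):
    """Return True when at least half of the function outputs meet the threshold."""
    hits = sum(1 for p in probabilities if p >= threshold)
    return hits * 2 >= len(probabilities)

confirmed = confirm_motion([0.9, 0.8, 0.75, 0.4, 0.3])  # 3 of 5 meet 0.7
rejected = confirm_motion([0.9, 0.6, 0.5, 0.4, 0.3])    # only 1 of 5 meets 0.7
```

In the rejected case the motion classification unit would emit the repetition signal instead of confirming the motion.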
  • in operation 46, the deep learning unit 112 feeds back the user motion check result derived using the motion check learning model, and reflects the result in the construction of the motion check learning model, so that the motion check learning model can be gradually optimized. For example, since a learning model based on deep learning may become more sophisticated as machine learning is repeated, the deep learning unit 112 may continuously update the motion check learning model whenever a user motion check result is derived.
  • operations 41 to 46 may be repeated, returning to the beginning after operation 46 is completed. As the operations of FIG. 4 are repeated, the user motion detection apparatus 100 can detect the user's motion more precisely.
  • the operations of FIG. 4 may be performed after the electronic device 100 is manufactured and shipped. That is, even after being sold to a user, the electronic device 100 of the present invention may continuously optimize the motion confirmation learning model and the motion detection function, and precisely detect the user's motion.
  • a method of determining a motion detection function using dimension reduction of a plurality of sensor data includes: acquiring the plurality of sensor data related to the user's motion as learning data; Reducing the dimension of the acquired plurality of sensor data and deriving a plurality of motion detection functions; And machine learning the derived plurality of motion detection functions through a preset motion confirmation learning model, and determining at least one motion detection function derived as a result of the machine learning as a function for detecting the user's motion.
  • the plurality of motion detection functions may be derived by reducing dimensions of a plurality of time series functions corresponding to each of the plurality of sensor data through a polynomial combination.
  • the method may further include the step of feeding back the result of determining the at least one motion detection function using the motion confirmation learning model, reflecting that result in the construction of the motion confirmation learning model, and gradually optimizing the motion confirmation learning model.
  • a method for detecting a user motion using dimension reduction of a plurality of sensor data includes: detecting the plurality of sensor data related to the user motion; Reducing a dimension of a plurality of sensor data detected by the sensor unit and deriving a plurality of motion detection functions; Machine learning the derived plurality of motion detection functions through a preset motion confirmation learning model and outputting at least one motion detection function derived as a result of the machine learning; And checking a user's motion based on the outputted at least one motion detection function.
  • the plurality of motion detection functions may be derived by reducing dimensions of a plurality of time series functions corresponding to each of the plurality of sensor data through a polynomial combination.
  • the step of constructing an optimized motion verification learning model for predicting a user's motion using a machine learning technique based on a plurality of previously collected sensor data may be further included.
  • the method may further include the step of gradually optimizing the motion verification learning model by feeding back a user motion verification result derived using the motion verification learning model and reflecting it in the construction of the motion verification learning model.
  • the checking of the user's motion may include: checking the user's motion when a result value of the at least one motion detection function is greater than or equal to a threshold value; And when the result value of the at least one motion detection function is less than a threshold value, stopping the motion check and transmitting a repetition signal to at least one of the dimension reduction unit or the deep learning unit.
  • an apparatus for detecting a user motion using a dimension reduction of a plurality of sensor data includes: a sensor unit configured to detect the plurality of sensor data related to the user motion; A dimension reduction unit for reducing a dimension of a plurality of sensor data detected by the sensor unit and for deriving a plurality of motion detection functions; A deep learning unit for machine learning the derived plurality of motion detection functions through a preset motion verification learning model and outputting at least one motion detection function derived as a result of the machine learning; And a motion classification unit that checks a user's motion based on the outputted at least one motion detection function.
  • the plurality of sensor data may include at least one of 3-axis sensor data of an acceleration sensor, 3-axis sensor data of a gyro sensor, or 3-axis sensor data of a geomagnetic sensor.
  • the dimension reduction unit may derive the plurality of motion detection functions through a polynomial combination of a plurality of time series functions corresponding to each of the plurality of sensor data.
  • the deep learning unit constructs a motion verification learning model optimized for predicting the user's motion using a machine learning technique based on a plurality of previously collected sensor data, determines the at least one motion detection function suitable for user motion detection based on the motion verification learning model, feeds back the user motion verification result derived using the motion verification learning model, and reflects the user motion verification result in the construction of the motion verification learning model, so that the motion verification learning model can be gradually optimized.
  • the motion classification unit when the result value of the at least one motion detection function is greater than or equal to a threshold value, checks the user's motion, and when the result value of the at least one motion detection function is less than the threshold value, The operation check may be stopped and a repetition signal may be transmitted to at least one of the dimension reduction unit or the deep learning unit.
  • a process of building a learning model and a process of detecting a user's motion using the learning model may be performed in parallel or independently of each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Various embodiments of the present invention relate to a user motion detection method using dimension reduction of a plurality of sensor data. According to various embodiments of the present invention, the user motion detection method using dimension reduction of a plurality of sensor data may comprise the steps of: detecting the plurality of sensor data related to a user motion; reducing the dimension of the plurality of sensor data detected by a sensor unit and deriving a plurality of motion detection functions; performing machine learning on the derived plurality of motion detection functions through a predefined motion identification learning model and outputting at least one motion detection function derived from the result of the machine learning; and identifying the user motion on the basis of the outputted at least one motion detection function. Other embodiments are also possible.
PCT/KR2020/007435 2019-09-05 2020-06-09 Procédé de détermination de fonction de détection de mouvement utilisant une réduction de dimension d'une pluralité d'éléments de données de capteur, procédé de détection de mouvement d'utilisateur et appareil associé WO2021045354A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0109907 2019-09-05
KR1020190109907A KR102078765B1 (ko) 2019-09-05 2019-09-05 복수의 센서데이터의 차원 축소를 이용한 동작 검출 함수 결정 방법과 사용자 동작 검출 방법 및 그 장치

Publications (1)

Publication Number Publication Date
WO2021045354A1 true WO2021045354A1 (fr) 2021-03-11

Family

ID=69670088

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/007435 WO2021045354A1 (fr) 2019-09-05 2020-06-09 Procédé de détermination de fonction de détection de mouvement utilisant une réduction de dimension d'une pluralité d'éléments de données de capteur, procédé de détection de mouvement d'utilisateur et appareil associé

Country Status (2)

Country Link
KR (1) KR102078765B1 (fr)
WO (1) WO2021045354A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20220114915A (ko) 2021-02-09 2022-08-17 주식회사 엘지에너지솔루션 배터리 진단 장치 및 방법

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015011690A (ja) * 2013-07-02 2015-01-19 ニフティ株式会社 効果測定プログラム、方法及び装置
KR101528236B1 (ko) * 2014-03-06 2015-06-12 국립대학법인 울산과학기술대학교 산학협력단 단말의 휴대 방법에 관계 없이 사용자의 행동을 인지할 수 있는 방법 및 이를 지원하는 장치, 그리고 상기 방법을 실행하는 프로그램을 기록한 컴퓨터 판독 가능한 기록매체
CN105453070A (zh) * 2013-09-20 2016-03-30 英特尔公司 基于机器学习的用户行为表征
US20170188895A1 (en) * 2014-03-12 2017-07-06 Smart Monitor Corp System and method of body motion analytics recognition and alerting

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190060625A (ko) 2017-11-24 2019-06-03 옥철식 관성 측정 유닛 센서의 위치 보정 장치 및 그 보정 방법

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEE, HYUN-JIN: "Human Activity Recognition Using Multi-temporal Neural Networks", JOURNAL OF DIGITAL CONTENTS SOCIETY, vol. 18, no. 3, June 2017 (2017-06-01), pages 559 - 565, XP055799506 *

Also Published As

Publication number Publication date
KR102078765B1 (ko) 2020-02-19

Similar Documents

Publication Publication Date Title
CN111432989B (zh) 人工增强基于云的机器人智能框架及相关方法
Hakim et al. Smartphone based data mining for fall detection: Analysis and design
Doukas et al. Emergency fall incidents detection in assisted living environments utilizing motion, sound, and visual perceptual components
KR102281590B1 (ko) 음성인식 성능 향상을 위한 비 지도 가중치 적용 학습 시스템 및 방법, 그리고 기록 매체
JP2017524182A (ja) グローバルモデルからの局所化された学習
WO2019216732A1 (fr) Dispositif électronique et procédé de commande associé
WO2021045354A1 (fr) Procédé de détermination de fonction de détection de mouvement utilisant une réduction de dimension d'une pluralité d'éléments de données de capteur, procédé de détection de mouvement d'utilisateur et appareil associé
Toha et al. MLP and Elman recurrent neural network modelling for the TRMS
Lyashenko et al. Analysis of Basic Principles for Sensor System Design Process Mobile Robots
KR20200080418A (ko) 단말기 및 그의 동작 방법
KR101456554B1 (ko) 클래스 확률 출력망에 기초한 불확실성 측도를 이용한 능동학습기능이 구비된 인공인지시스템 및 그 능동학습방법
WO2018092957A1 (fr) Procédé, dispositif et programme de détermination de réapprentissage par rapport à une valeur d'entrée dans un modèle de réseau neuronal
Urresty Sanchez et al. Fall detection using accelerometer on the user’s wrist and artificial neural networks
Artemov et al. Subsystem for simple dynamic gesture recognition using 3DCNNLSTM
WO2023101417A1 (fr) Procédé permettant de prédire une précipitation sur la base d'un apprentissage profond
KR20200121666A (ko) 오차 보정 방법 및 센서 시스템
WO2022177345A1 (fr) Procédé et système pour générer un événement dans un objet sur un écran par reconnaissance d'informations d'écran sur la base de l'intelligence artificielle
Ramakrishnan et al. A novel approach for emotion recognition for pose invariant images using prototypical networks
WO2022181907A1 (fr) Procédé, appareil et système pour la fourniture d'informations nutritionnelles sur la base d'une analyse d'image de selles
WO2021137395A1 (fr) Système et procédé de classification de comportement problématique reposant sur un algorithme de réseau de neurones profond
KR102125349B1 (ko) 행동 패턴 추론 장치 및 방법
KR20220154135A (ko) 제조 공정을 위한 시스템, 방법 및 매체
KR102502387B1 (ko) 음성 인식 기반 물류 처리 방법, 장치 및 시스템
Nursyahrul et al. Convolutional Neural Network (CNN) on Realization of Facial Recognition Devices using Multi Embedded Computers
Ramadhan et al. Comparative analysis of various optimizers on residual network architecture for facial expression identification

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20860916; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20860916; Country of ref document: EP; Kind code of ref document: A1)