CN112241746A - Personnel operation action identification method and system - Google Patents
- Publication number
- CN112241746A (application CN201910638394.6A)
- Authority
- CN
- China
- Prior art keywords
- information
- action
- information acquisition
- acquisition device
- model library
- Prior art date
- Legal status
- Pending
Images
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1116—Determining posture transitions
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
Abstract
The invention provides a method and a system for identifying personnel operation actions. The system comprises an information acquisition device and an information processing device. The information acquisition device is used for acquiring the posture information and the acceleration information of a person's wrist; the information processing device is used for establishing a three-axis coordinate system based on the initial position of the information acquisition device, comparing a set number of acquired posture and acceleration samples in the three-axis coordinate system with a preset action model library, and, when they match an action model in the library, determining the operation behavior of the wrist to be the operation behavior corresponding to that action model. In this technical scheme, the person's motion is judged from the wrist posture information and acceleration information acquired by the information acquisition device, which improves the accuracy of motion judgment.
Description
Technical Field
The invention relates to the technical field of electric power, and in particular to a personnel operation action identification method and system.
Background
Human body action recognition is needed in many fields: for example, the power industry uses action recognition to prevent misoperation by personnel, and the sports training field uses it to judge whether an athlete's movements are standard. With the development of information acquisition device technology, motion recognition based on such devices has advanced rapidly. However, prior-art motion recognition devices suffer from low recognition accuracy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method and a system for identifying personnel operation actions.
The invention is realized by the following technical scheme:
the invention provides a personnel operation behavior recognition system comprising an information acquisition device and an information processing device. The information acquisition device is used for acquiring the posture information and the acceleration information of a person's wrist; the information processing device is used for establishing a three-axis coordinate system based on the initial position of the information acquisition device, comparing a set number of acquired posture and acceleration samples in the three-axis coordinate system with a preset action model library, and, when they match an action model in the library, determining the operation behavior of the wrist to be the operation behavior corresponding to that action model.
In this technical scheme, the person's motion is judged from the wrist posture information and acceleration information acquired by the information acquisition device, which improves the accuracy of motion judgment.
In a specific possible implementation, the acceleration information includes sub-acceleration information in three mutually perpendicular directions of the established three-axis coordinate system, namely αx, αy and αz.
In a specific possible embodiment, the attitude information includes sub-attitude information in three mutually perpendicular directions of the established three-axis coordinate system, namely θx, θy and θz.
In a specific possible implementation, when the information processing device compares the acquired posture information and acceleration information of the set number of samples in the three-axis coordinate system with the preset action model library, it specifically: acquires T frames of data from the information acquisition device as input, and recognizes the posture information and acceleration information contained in those T frames against the preset action model library by means of a recognition algorithm.
In a specific possible embodiment, the recognition algorithm classifies the T frames of data using an LSTM deep neural network.
In a specific implementation, the information processing device is further configured to perform one action recognition after every S frames of motion data are acquired during the recognition process, using the latest T frames of data; wherein S ≥ T.
In a specific embodiment, the system further comprises a model building device, wherein the model building device is used for building and storing a preset action model library.
In a specific possible embodiment, the step of the model building means building and storing the preset action model library specifically comprises: acquiring the posture information and acceleration information from the information acquisition device worn by each experimenter performing a set action, repeating the same action 3-5 times, and recording multiple groups of data for the same movement.
In addition, the method for identifying the operation action of the personnel comprises the following steps:
acquiring posture information and acceleration information of the wrist of a person;
establishing a three-axis coordinate system based on the initial position of the information acquisition device;
comparing a set number of posture and acceleration samples acquired by the information acquisition device, in the three-axis coordinate system, with a preset action model library;
when the acquired information matches one action model in the action model library, the operation behavior of the wrist is determined to be the operation behavior corresponding to that action model.
In this technical scheme, the person's motion is judged from the wrist posture information and acceleration information acquired by the information acquisition device, which improves the accuracy of motion judgment.
In a specific possible implementation, comparing the acquired posture information and acceleration information of the set number of samples in the three-axis coordinate system with the preset action model library specifically comprises:
acquiring T frames of data from the information acquisition device as input, and recognizing the posture information and acceleration information contained in those T frames against the preset action model library by means of a recognition algorithm.
Drawings
FIG. 1 is a schematic structural diagram of a human operation action recognition system according to an embodiment of the present invention;
fig. 2 is a reference diagram for use of the information acquisition apparatus according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
For ease of understanding, the use environment of the personnel operation action recognition system provided by the embodiment of the application is described first. At present, human action recognition is needed in many fields: for example, the power industry uses action recognition to prevent misoperation by personnel, and the sports training field uses it to judge whether an athlete's movements are standard. The embodiment of the application therefore provides a system that performs motion recognition based on an information acquisition device.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a human operation action recognition system according to the present invention.
The embodiment of the invention provides a personnel operation behavior recognition system comprising an information acquisition device and an information processing device, the information acquisition device being used for acquiring the posture information and the acceleration information of a person's wrist. In use, the information acquisition device is worn on the person's wrist, and the information it acquires comprises acceleration information and attitude information, each composed of three mutually perpendicular components in a three-axis coordinate system (the base coordinate system) established from the initial position of the information acquisition device. The acceleration information, denoted by the symbol α, comprises sub-acceleration information in the three perpendicular directions of the established coordinate system, namely αx, αy and αz, representing the magnitude of the acceleration in each of the three directions. The attitude information, denoted θ, represents the rotation angles in three-dimensional space, relative to the base coordinate system, of the body part wearing the device; it comprises sub-attitude information in the three perpendicular directions, namely θx, θy and θz, which are the included angles in the three perpendicular directions when the information acquisition device lies horizontal. The information acquisition device samples both kinds of information several times per second and transmits them to the information processing device over Bluetooth.
The data acquired by the information acquisition device each time consists of the above 6 variables, and an array { α (i, x), α (i, y), α (i, z), θ (i, x), θ (i, y), θ (i, z) } is used for representing the data acquired by the wireless information acquisition device at the time i.
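The six-variable sample described above can be sketched as follows (a hypothetical representation for illustration; the class and field names are not from the patent):

```python
from dataclasses import dataclass

@dataclass
class ImuFrame:
    """One sample from the information acquisition device at time i."""
    ax: float; ay: float; az: float   # sub-accelerations α(i,x), α(i,y), α(i,z)
    tx: float; ty: float; tz: float   # attitude angles θ(i,x), θ(i,y), θ(i,z)

    def as_array(self):
        # the array {α(i,x), α(i,y), α(i,z), θ(i,x), θ(i,y), θ(i,z)}
        return [self.ax, self.ay, self.az, self.tx, self.ty, self.tz]

frame = ImuFrame(0.1, -0.2, 9.8, 5.0, 2.5, 0.0)
print(len(frame.as_array()))  # → 6
```

Each such frame is one row of the stream the information processing device consumes.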
When the data are transmitted to the information processing device, the information processing device may compare the acquired posture information and acceleration information of the set number of information acquisition devices in the three-axis coordinate system with a preset motion model library, and when the posture information and acceleration information are matched with one motion model in the motion model library, determine the operation behavior of the wrist as the operation behavior corresponding to the motion model.
As can be seen from the above description, implementing motion recognition involves two aspects: establishing the action model library (the preset action model library) and the behavior recognition itself. First, a motion recognition model is built from pre-collected and labelled data; second, the model is used to analyse, in real time, the information acquired by the information acquisition device, realizing the behavior recognition function. Both are described below.
First, during recognition, when the information processing device compares the acquired posture and acceleration information of the set number of samples in the three-axis coordinate system with the preset action model library, it specifically: acquires T frames of data from the information acquisition device as input, and recognizes the posture information and acceleration information contained in those T frames against the preset action model library by means of a recognition algorithm. The recognition algorithm classifies the T frames of data using an LSTM deep neural network. In addition, during data collection, the information processing device performs one action recognition after every S frames of motion data have been acquired, using the latest T frames of data, where S ≥ T. For ease of understanding, the information processing flow is described in detail below.
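The S-frame stride with a T-frame window described above can be sketched as follows (a minimal illustration; the values of T and S and the `classify` callable are placeholders, not values from the patent):

```python
def recognize_stream(frames, classify, T=3, S=5):
    """Run one recognition per S newly acquired frames, feeding the
    classifier the latest T frames; the scheme assumes S >= T."""
    assert S >= T, "the scheme assumes S >= T"
    results = []
    for end in range(S, len(frames) + 1, S):
        window = frames[end - T:end]       # latest T frames at this step
        results.append(classify(window))
    return results

# toy demo: with 10 frames and S=5, recognition is triggered twice,
# each time on a 3-frame window
print(recognize_stream(list(range(10)), len, T=3, S=5))  # → [3, 3]
```

With S = T the windows tile the stream without overlap; with S > T some frames between recognitions are skipped, which is the efficiency trade-off the description mentions.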
First, before recognition, the person wears the information acquisition device on the right wrist, ensuring that it fits closely against the wrist and that the positional relationship between its coordinate system and the wrist is consistent with fig. 2.
Before recognition starts, the information acquisition device and the recognition program on the information processing device are started. In the invention, the information processing device may be a notebook computer, a mobile phone, or other portable data processing equipment. In use, a Bluetooth link is established between the information processing device and the wireless information acquisition device, so that the information processing device can obtain the acquired information in real time.
The recognition algorithm adopts an LSTM deep neural network. After the recognition program starts, it loads the pre-trained model parameters stored on the information processing device and initializes the parameter matrix of the LSTM.
The information processing apparatus takes the newly acquired T frames of data as input and recognizes them by the recognition algorithm and the pre-trained model (the preset action model library). The T frames of data are expressed as { α(i,x), α(i,y), α(i,z), θ(i,x), θ(i,y), θ(i,z) }, i ∈ T. The algorithm classifies the T frames with a long short-term memory network (LSTM). The model's output is an array of decimal numbers, one per recognizable action; the array's length is therefore the number of recognized action types, and the position of each number in the array is that action's serial number (the action type corresponding to each position is specified during model training). Each number gives the likelihood of the corresponding action category, and the action whose number has the maximum probability is the recognized current action.
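Decoding the classifier's output array as described above amounts to taking the position of the maximum likelihood. A hedged sketch (the action names are invented examples, not from the patent):

```python
ACTIONS = ["toggle switch", "turn wrench", "press button"]  # hypothetical action list

def decode_output(scores):
    """Return (serial number, action name) of the highest-likelihood entry.
    len(scores) equals the number of recognizable action types."""
    best = max(range(len(scores)), key=lambda k: scores[k])
    return best, ACTIONS[best]

print(decode_output([0.05, 0.85, 0.10]))  # → (1, 'turn wrench')
```

The serial numbers here must match the numbering fixed during model training, as the description requires.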
To improve the execution efficiency of the recognition program, the information processing device performs one action recognition after every S frames of motion data are acquired during recognition, where S ≥ T; each recognition uses the latest T of those S frames as input. The information processing apparatus displays the recognition result on a screen. In the recognition stage, the current action can thus be estimated by the recognition program running on the information processing device from the information collected by the information acquisition device, and the result can be displayed and announced by voice.
After recognition is finished, the recognition program on the information processing device is closed, the Bluetooth link between the information processing device and the information acquisition device is disconnected, and the information acquisition device is switched off.
For establishing the preset action models, the system further comprises a model building device used to build and store the preset action model library. In use, building and storing the library specifically comprises: acquiring the posture information and acceleration information from the information acquisition device worn by each experimenter performing a set action, repeating the same action 3-5 times, and recording multiple groups of data for the same movement. For ease of understanding, the procedure of the model building device is described below.
The preset action model library is obtained by training on pre-collected, labelled data; it is generated as follows:
before identification, a person wears the information acquisition device on the right wrist, and ensures that the information acquisition device is tightly attached to the wrist when being worn, and the coordinate system of the information acquisition device and the position of the wrist are consistent with those in the figure 2.
A data acquisition program is run on the information processing device, the information acquisition device is switched on, and a data connection is established via Bluetooth.
Before collection starts, the user inputs the name of the action to be collected in the collection program of the model building device. The action types are designated in advance, and each action is numbered; these numbers are consistent with the corresponding numbers used during recognition.
When recording starts, the user wearing the information acquisition device begins performing the preset action. The action types are preset, standardized short actions, such as toggling a switch or turning a wrench; the current action is the one whose name was entered.
The model building device acquires the data from the information acquisition device and records them into a CSV file, which stores the six component values of α and θ acquired by the wireless information acquisition device at each sampling, saved frame by frame in acquisition order. The number of the action corresponding to each frame is recorded in the CSV file at the same time; this number is consistent with the number used by the recognition method.
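The frame-by-frame CSV recording step above might look like the following sketch (the column layout is an assumption; the patent only states that the six α/θ components and the action number are stored per frame):

```python
import csv
import io

def record_frames(frames, action_no, out):
    """Write one CSV row per acquired frame: six components plus the action number."""
    writer = csv.writer(out)
    writer.writerow(["ax", "ay", "az", "tx", "ty", "tz", "action"])  # assumed header
    for f in frames:                       # frames stored in acquisition order
        writer.writerow(list(f) + [action_no])

# demo with one frame labelled as action number 2
buf = io.StringIO()
record_frames([(0.1, -0.2, 9.8, 5.0, 2.5, 0.0)], 2, buf)
```

Writing to an in-memory buffer here stands in for the file the acquisition program keeps open until the End button is clicked.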
After the person finishes the action, the End button in the model building device is clicked; the acquisition program stops recording data, and writing to the CSV file stops.
The method requires collecting movement data from different people and for different movement types. Each person repeats the same action 3-5 times, recording multiple groups of data for the same movement, and an equal amount of data is collected for each action. The action type and motion data are saved to the CSV file at each acquisition.
After collection is finished, the person imports the CSV files of stored data into the server side.
A model training program is run on the server; the program reads, from the imported CSV files, the information acquisition device data corresponding to each action and the action type corresponding to each frame.
The preset action model library is established by grouping the data into windows according to a preset T. Starting from the first frame of a piece of data, the T consecutive frames beginning with that frame are used as one model-training input x_i and given the corresponding label l_i. Then, starting from the second frame, the next T consecutive frames are selected as x_(i+1) and given a label l_(i+1).
This is repeated until all CSV files of collected data have been processed according to T, generating the labelled training data D.
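The windowing just described — sliding a T-frame window forward one frame at a time and labelling each window — can be sketched as follows (T here is an illustrative value, not one from the patent):

```python
def make_training_data(frames, labels, T=4):
    """Group frames into overlapping T-frame windows, each paired with the
    label of its first frame, producing the marked training data D."""
    data = []
    for i in range(len(frames) - T + 1):
        x_i = frames[i:i + T]   # T consecutive frames starting at frame i
        l_i = labels[i]         # label given to this window
        data.append((x_i, l_i))
    return data

# 6 frames with T=4 yield 3 overlapping windows
D = make_training_data(list(range(6)), ["a"] * 6, T=4)
print(len(D))  # → 3
```

Each (x_i, l_i) pair is one supervised example for the LSTM training step below.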
A recognition model is then trained on the obtained training data. The loss function used in training the LSTM model is the SoftMax loss function; the label data in D serve as supervision, with each action's label encoded as a OneHot vector. The model is optimized with the ADAM method, updating the LSTM model's parameters until the loss function no longer decreases, which yields the final recognition model M. The model's parameters are initialized with Gaussian noise.
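The OneHot label encoding and the SoftMax normalization mentioned above can be illustrated with a small sketch (the class count is an assumed example):

```python
import math

def one_hot(label, n_classes):
    """Encode an action number as a OneHot supervision vector."""
    v = [0.0] * n_classes
    v[label] = 1.0
    return v

def softmax(scores):
    """Normalize raw model scores into the probability array whose
    maximum the recognition stage selects."""
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

print(one_hot(2, 5))  # → [0.0, 0.0, 1.0, 0.0, 0.0]
```

During training, the SoftMax loss compares the softmax of the model's output against the OneHot target for each window.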
Finally, the model building device is connected to the server, and the trained model M is imported into the information processing device.
From the above description, it can be seen that the above system, when in use, forms a human operation action recognition method, which comprises the steps of:
acquiring posture information and acceleration information of the wrist of a person;
establishing a three-axis coordinate system based on the initial position of the information acquisition device;
comparing a set number of posture and acceleration samples acquired by the information acquisition device, in the three-axis coordinate system, with a preset action model library;
when the acquired information matches one action model in the action model library, the operation behavior of the wrist is determined to be the operation behavior corresponding to that action model.
Comparing the acquired posture information and acceleration information of the set number of samples in the three-axis coordinate system with the preset action model library specifically comprises:
acquiring T frames of data from the information acquisition device as input, and recognizing the posture information and acceleration information contained in those T frames against the preset action model library by means of a recognition algorithm.
According to the above description, the person's motion is judged from the wrist posture information and acceleration information acquired by the information acquisition device, which improves the accuracy of motion judgment.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A human operation action recognition system, comprising:
the information acquisition device is used for acquiring the posture information and the acceleration information of the wrist of the person;
the information processing device is used for establishing a three-axis coordinate system based on the initial position of the information acquisition device; and comparing the acquired attitude information and acceleration information of the set number of the information acquisition devices in the three-axis coordinate system with a preset action model library, and determining the operation behavior of the wrist as the operation behavior corresponding to the action model when the acquired attitude information and acceleration information are matched with one action model in the action model library.
2. The human-operation-action recognition system according to claim 1, wherein the acceleration information includes sub-acceleration information in three mutually perpendicular directions of the established three-axis coordinate system, namely αx, αy and αz.
3. The human operation action recognition system according to claim 1, wherein the attitude information includes sub-attitude information in three mutually perpendicular directions of the established three-axis coordinate system, namely θx, θy and θz.
4. The human operation action recognition system according to claim 3, wherein the information processing device, when comparing the acquired attitude information and acceleration information of the set number of samples in the three-axis coordinate system with the preset action model library, specifically: acquires T frames of data from the information acquisition device as input, and recognizes the attitude information and acceleration information contained therein against the preset action model library by means of a recognition algorithm.
5. The human operation action recognition system of claim 4, wherein the recognition algorithm classifies the T frame data using the LSTM algorithm of a deep neural network.
6. The human operation action recognition system of claim 5, wherein the information processing device is further configured to perform one action recognition after every S frames of motion data are acquired during the recognition process, using the latest T frames of data; wherein S ≥ T.
7. The human-operation-action recognition system according to claim 6, further comprising a model building means for building and storing a library of preset action models.
8. The human-operation-action recognition system according to claim 6, wherein the model building means for building and storing the preset action model library specifically: acquires the posture information and acceleration information from the information acquisition device worn by each experimenter performing a set action, repeats the same action 3-5 times, and records multiple groups of data for the same movement.
9. A personnel operation action recognition method is characterized by comprising the following steps:
acquiring posture information and acceleration information of the wrist of a person;
establishing a three-axis coordinate system based on the initial position of the information acquisition device;
comparing the acquired attitude information and acceleration information of the set number of the information acquisition devices in the three-axis coordinate system with a preset action model library;
when the acquired information matches one action model in the action model library, the operation behavior of the wrist is determined to be the operation behavior corresponding to that action model.
10. The method for recognizing human operation actions according to claim 9, wherein comparing the acquired attitude information and acceleration information of the set number of samples in the three-axis coordinate system with the preset action model library specifically comprises:
acquiring T frames of data from the information acquisition device as input, and recognizing the attitude information and acceleration information contained therein against the preset action model library by means of a recognition algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910638394.6A CN112241746A (en) | 2019-07-16 | 2019-07-16 | Personnel operation action identification method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112241746A true CN112241746A (en) | 2021-01-19 |
Family
ID=74166620
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910638394.6A Pending CN112241746A (en) | 2019-07-16 | 2019-07-16 | Personnel operation action identification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112241746A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106897670A (en) * | 2017-01-19 | 2017-06-27 | 南京邮电大学 | A kind of express delivery violence sorting recognition methods based on computer vision |
CN107016342A (en) * | 2017-03-06 | 2017-08-04 | 武汉拓扑图智能科技有限公司 | A kind of action identification method and system |
CN108433728A (en) * | 2018-03-06 | 2018-08-24 | 大连理工大学 | A method of million accidents of danger are fallen based on smart mobile phone and ANN identification construction personnel |
CN109567814A (en) * | 2018-10-22 | 2019-04-05 | 深圳大学 | The classifying identification method of brushing action calculates equipment, system and storage medium |
CN109919034A (en) * | 2019-01-31 | 2019-06-21 | 厦门大学 | A kind of identification of limb action with correct auxiliary training system and method |
- 2019-07-16: application CN201910638394.6A filed; publication CN112241746A, status Pending
Non-Patent Citations (4)
Title |
---|
Kong Dongrong et al., "Gesture recognition based on fusion of surface EMG and acceleration information", Electronic Measurement Technology (《电子测量技术》), vol. 42, no. 5, 31 March 2019 (2019-03-31), pages 85-89 *
Zhang Longjiao et al., "Research on sEMG gesture recognition based on deep neural networks", Computer Engineering and Applications (《计算机工程与应用》), 5 June 2019 (2019-06-05), pages 1-12 *
Qin Minying et al., "Research on multimedia teaching gesture recognition based on long short-term memory networks", Research and Development (《研究与开发》), vol. 38, no. 6, 30 June 2019 (2019-06-30), pages 80-85 *
Deng Qiaoyin et al., "Design of a smart device control system based on gesture recognition", Computing Technology and Automation (《计算技术与自动化》), vol. 36, no. 2, pages 63-67 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Tran et al. | Human activities recognition in android smartphone using support vector machine | |
CN111027487B (en) | Behavior recognition system, method, medium and equipment based on multi-convolution kernel residual error network | |
Seto et al. | Multivariate time series classification using dynamic time warping template selection for human activity recognition | |
CN107428004B (en) | Automatic collection and tagging of object data | |
Lester et al. | A hybrid discriminative/generative approach for modeling human activities | |
CN107171872B (en) | User behavior prediction method in smart home | |
CN109446927B (en) | Double-person interaction behavior identification method based on priori knowledge | |
Stiefmeier et al. | Gestures are strings: efficient online gesture spotting and classification using string matching | |
CN110503077B (en) | Real-time human body action analysis method based on vision | |
CN113326835B (en) | Action detection method and device, terminal equipment and storage medium | |
CN109308437B (en) | Motion recognition error correction method, electronic device, and storage medium | |
CN110555417A (en) | Video image recognition system and method based on deep learning | |
CN106503631A (en) | A kind of population analysis method and computer equipment | |
Calvo et al. | Human activity recognition using multi-modal data fusion | |
Ali et al. | Human activity recognition system using smart phone based accelerometer and machine learning | |
Khatun et al. | Human activity recognition using smartphone sensor based on selective classifiers | |
CN114155610B (en) | Panel assembly key action identification method based on upper half body posture estimation | |
CN110598599A (en) | Method and device for detecting abnormal gait of human body based on Gabor atomic decomposition | |
Ziaeefard et al. | Hierarchical human action recognition by normalized-polar histogram | |
CN110680337A (en) | Method for identifying action types | |
Dong et al. | Modeling influence between experts | |
CN112241746A (en) | Personnel operation action identification method and system | |
CN107463689A (en) | Generation method, moving state identification method and the terminal in motion characteristic data storehouse | |
Tahir et al. | Recognizing human-object interaction (HOI) using wrist-mounted inertial sensors | |
CN113989943B (en) | Distillation loss-based human body motion increment identification method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||