CN106570482A - Method and device for identifying body motion - Google Patents

Method and device for identifying body motion

Info

Publication number
CN106570482A
CN106570482A (application CN201610973939.5A, grant CN106570482B)
Authority
CN
China
Prior art keywords
acceleration signal
feature
obtains
eigenvector
image sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610973939.5A
Other languages
Chinese (zh)
Other versions
CN106570482B (en)
Inventor
程俊
李懿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201610973939.5A priority Critical patent/CN106570482B/en
Publication of CN106570482A publication Critical patent/CN106570482A/en
Application granted granted Critical
Publication of CN106570482B publication Critical patent/CN106570482B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention, applicable to the field of pattern recognition technology, provides a method and an apparatus for identifying a body motion. The method comprises: obtaining a depth image sequence and an acceleration signal of a body motion; extracting a first feature vector from the depth image sequence and a second feature vector from the acceleration signal; combining the first feature vector and the second feature vector to obtain a feature descriptor; and identifying the body motion according to the feature descriptor. With only one or two fixed depth cameras and several acceleration sensors worn at key joints of the human body, the method and device solve the problem of low body motion identification accuracy caused by occlusion or error accumulation in existing body motion identification methods, and thus improve the accuracy of body motion identification.

Description

Human motion recognition method and device
Technical field
The invention belongs to the field of pattern recognition technology, and more particularly relates to a human motion recognition method and device.
Background technology
With the continuous development of sensor technology, communication technology and data analysis technology, ambient intelligence has been successfully realized in many application scenarios, and human action recognition is a key problem in realizing ambient intelligence. Human action recognition involves automatically detecting and analyzing human actions from information collected by sensors of different modalities.
Existing human motion recognition methods mainly fall into two categories: methods based on computer vision and methods based on inertial sensors. Vision-based methods process and analyze the original images or image sequences captured by a camera to learn and understand the actions and behaviour of the people in them. Inertial-sensor-based methods collect human motion information with inertial sensors fixed at particular body sites, transmit the data to a computer through a wireless transmission module, and then perform data preprocessing, feature extraction and selection, and motion classification.
At present, however, most vision-based human action recognition algorithms work only under specific conditions, such as high resolution, fixed viewpoint, fixed background, fixed camera and no occlusion; there is still no effective way to solve the action recognition problem under severe occlusion. As for inertial-sensor-based methods, insecure mounting of the sensors introduces errors, and these errors accumulate during human motion.
Summary of the invention
In view of this, embodiments of the present invention provide a human motion recognition method and device, to solve the problem in the prior art that the accuracy of human action recognition is low due to occlusion or error accumulation.
A first aspect of an embodiment of the present invention provides a human motion recognition method, including:
obtaining a depth image sequence and an acceleration signal of a human action, the depth image sequence and the acceleration signal being synchronized;
extracting a first feature vector from the depth image sequence, and extracting a second feature vector from the acceleration signal;
combining the first feature vector and the second feature vector to obtain a feature descriptor;
identifying the human action according to the feature descriptor.
A second aspect of an embodiment of the present invention provides a human action recognition device, including:
an acquisition module, configured to obtain a depth image sequence and an acceleration signal of a human action, the depth image sequence and the acceleration signal being synchronized;
an extraction module, configured to extract a first feature vector from the depth image sequence and a second feature vector from the acceleration signal;
a feature description submodule, configured to combine the first feature vector and the second feature vector to obtain a feature descriptor;
an identification module, configured to identify the human action according to the feature descriptor.
Compared with the prior art, the embodiments of the present invention have at least the following beneficial effect: an embodiment of the present invention first collects a depth image sequence and an acceleration signal of a human action, then extracts a first feature vector from the depth image sequence and a second feature vector from the acceleration signal, and finally combines the first feature vector and the second feature vector to obtain a feature descriptor, according to which the human action is identified. Only one or two depth cameras at fixed positions and several acceleration sensors worn at the major joints of the human body are needed, which solves the problem of low recognition accuracy caused by occlusion or error accumulation in existing human action recognition methods and improves the accuracy of human action recognition.
Description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be derived from these drawings without creative effort.
Fig. 1 is a flowchart of the human motion recognition method provided by embodiment one of the present invention;
Fig. 2 is a flowchart of extracting the first feature vector from the depth image sequence, provided by embodiment two of the present invention;
Fig. 3 is a flowchart of extracting the second feature vector from the acceleration signal, provided by embodiment three of the present invention;
Fig. 4 is a simulation diagram of the feature vector extraction of a tri-axial acceleration signal, provided by embodiment three of the present invention;
Fig. 5 is a structural block diagram of the human action recognition device provided by embodiment four of the present invention;
Fig. 6 is a structural block diagram of an extraction module provided by embodiment four of the present invention;
Fig. 7 is a structural block diagram of an extraction module provided by embodiment four of the present invention.
Specific embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present invention. However, it will be clear to those skilled in the art that the present invention may also be practised in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits and methods are omitted, so that unnecessary detail does not obscure the description of the invention.
To illustrate the technical solutions of the present invention, specific embodiments are described below.
Embodiment one:
Fig. 1 shows the implementation flow of the human motion recognition method provided by embodiment one of the present invention, detailed as follows:
In step S101, a depth image sequence and an acceleration signal of a human action are obtained; the depth image sequence and the acceleration signal are synchronized.
In this embodiment, the depth image sequence of the human action may be obtained by a depth camera, and the acceleration signal of the human action may be obtained by an acceleration sensor. For example, the depth image sequence may be captured by one or two depth cameras at fixed positions, and the acceleration signal may be collected by at least one acceleration sensor worn at a major joint of the human body. In view of the accuracy of the collected acceleration signal, the acceleration signal of the human action may be collected by multiple acceleration sensors worn at the major joints of the human body. The collected acceleration signal characterizes the acceleration of the human body along multiple axes or directions, i.e. the motion of the human body along those axes or directions.
It should be noted that the depth image sequence and acceleration signal obtained in this step are a depth image sequence and an acceleration signal that have already been time-synchronized. In this way, the depth image sequence and the acceleration signal correspond in time and jointly reflect the human action, which improves the accuracy of human action recognition.
Furthermore, since the human body moves in three-dimensional space, the corresponding acceleration signal may be a tri-axial acceleration signal, whose three corresponding axes are mutually perpendicular.
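The patent states only that the two streams are time-synchronized, without specifying a mechanism. A minimal sketch of one plausible approach, assuming both streams carry timestamps and matching each depth frame to the temporally nearest acceleration sample (the function name and interface are hypothetical, not from the patent):

```python
import numpy as np

def sync_accel_to_depth(depth_ts, accel_ts, accel_samples):
    """For each depth-frame timestamp, pick the acceleration sample whose
    timestamp is nearest in time (hypothetical nearest-neighbour sync)."""
    depth_ts = np.asarray(depth_ts)
    accel_ts = np.asarray(accel_ts)
    idx = np.searchsorted(accel_ts, depth_ts)       # insertion points
    idx = np.clip(idx, 1, len(accel_ts) - 1)        # keep a left/right pair
    left, right = accel_ts[idx - 1], accel_ts[idx]
    # step back one index where the left neighbour is strictly closer
    idx = idx - ((depth_ts - left) < (right - depth_ts))
    return np.asarray(accel_samples)[idx]
```

In a real system one would also account for clock offset between the camera and the wireless sensor module; this sketch assumes both clocks already agree.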
In step S102, a first feature vector is extracted from the depth image sequence, and a second feature vector is extracted from the acceleration signal.
Specifically, the depth image sequence may be transformed to obtain the corresponding depth motion maps, and feature vectors may be extracted from the depth motion maps to obtain the first feature vector; meanwhile, a time sliding window method may be applied to the acceleration signal to extract the second feature vector. The specific processes are described in embodiment two and embodiment three and are not repeated here.
The first feature vector includes, but is not limited to, HOG (Histogram of Oriented Gradients) features. The second feature vector includes, but is not limited to, FFT (Fast Fourier Transform) coefficients.
In step S103, the first feature vector and the second feature vector are combined to obtain a feature descriptor.
Specifically, the first feature vector and the second feature vector of the two different modalities in step S102 may be spliced into a single feature vector, whose dimension is then reduced by PCA (Principal Component Analysis) to finally obtain the feature descriptor.
It should be noted that this embodiment is not limited to PCA; other dimensionality reduction methods may also be used to obtain the feature descriptor.
In step S104, the human action is identified according to the feature descriptor.
In this embodiment, a linear SVM (Support Vector Machine) method may be used to classify and identify the human action through the feature descriptor, but the embodiment is not limited to this; those skilled in the art may, as needed, use other methods to identify the human action through the feature descriptor.
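Steps S103–S104 can be sketched under stated assumptions. The PCA reduction is implemented directly via SVD so the sketch stays self-contained; in practice a library PCA and a trained linear SVM (e.g. from scikit-learn) would consume the resulting descriptors. The function name and toy dimensions are hypothetical:

```python
import numpy as np

def fuse_descriptors(hog_feats, fft_feats, n_components=2):
    """Splice the depth (HOG) and acceleration (FFT) feature vectors per
    sample, centre them, and reduce dimension with PCA (via SVD).
    Each row of the result is a feature descriptor for the classifier."""
    X = np.hstack([np.asarray(hog_feats, dtype=np.float64),
                   np.asarray(fft_feats, dtype=np.float64)])
    Xc = X - X.mean(axis=0)                  # centre before PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # (n_samples, n_components)
```

A linear SVM would then be fit on these rows against the action labels; any other reducer or classifier could be substituted, as the description notes.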
Embodiment two:
Referring to Fig. 2, in this embodiment, extracting the first feature vector from the depth image sequence in step S102 includes:
Step S201: obtain the i-th depth image frame in the depth image sequence, and project the i-th depth image frame onto each orthogonal plane in the same coordinate system, to obtain the projected images of the i-th depth image frame at each viewing angle corresponding to the orthogonal planes.
Here, i is an integer greater than or equal to 1 and less than or equal to the number of frames in the depth image sequence. The following description takes three orthogonal planes in the same coordinate system as an example, but is not limited to this. In this embodiment, after any i-th depth image frame in the depth image sequence is obtained, the i-th depth image frame may be projected onto the three orthogonal planes of a Cartesian coordinate system, to obtain the front-view, side-view and top-view projected images of the i-th depth image frame.
Step S202: calculate the absolute difference between the projected image of the (i+1)-th depth image frame at each viewing angle and the projected image of the i-th depth image frame at the same viewing angle.
In this embodiment, the absolute differences between the front-view, side-view and top-view projected images of the (i+1)-th depth image frame and the corresponding projected images of the i-th depth image frame are calculated. That is, the absolute difference between the front-view projected images of the (i+1)-th and i-th frames is calculated, as are the absolute difference between their side-view projected images and the absolute difference between their top-view projected images.
Step S203: superimpose the absolute differences corresponding to each depth image frame, to obtain the depth motion maps corresponding to the depth image sequence.
In this embodiment, each depth image frame in the depth image sequence is traversed and the absolute differences corresponding to each frame are superimposed, yielding the depth motion maps corresponding to the depth image sequence.
Step S204: perform feature extraction on the depth motion maps to obtain the first feature vector.
Preferably, features may be extracted from the obtained depth motion maps at each viewing angle respectively, yielding a sub-feature vector of the depth motion maps for each viewing angle; connecting the sub-feature vectors of all viewing angles gives the first feature vector.
For example, features are extracted from the obtained depth motion maps at the front, side and top viewing angles respectively, yielding the feature vectors of the depth motion maps at the three viewing angles; connecting the feature vectors of the three viewing angles gives the first feature vector.
Suppose a depth image sequence {I_1, I_2, ..., I_N} of N frames is given. For any one of the N depth frames, denoted here as the i-th frame for ease of description, the i-th depth image is projected onto the three orthogonal planes of a Cartesian coordinate system to obtain the projected images at three viewing angles, map_v^i with v ∈ {f, s, t}, where f denotes the front view, s the side view and t the top view. After the projected images map_v^i of every frame in the N-frame depth image sequence are obtained, the three view components of the depth motion maps are computed according to the formula DMM_v = Σ_{i=1}^{N−1} |map_v^{i+1} − map_v^i|. For each view component, HOG features are extracted to obtain the feature vector HOG_v of that view; connecting the feature vectors of the three views gives [HOG_f, HOG_s, HOG_t], which is the feature vector of the depth motion maps, i.e. the first feature vector.
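The per-view accumulation of absolute frame differences can be sketched as follows. The patent does not spell out how each projection is formed, so this sketch assumes the front view is the raw depth map and the side/top views are maximum projections along the image axes, a common convention for depth motion maps:

```python
import numpy as np

def depth_motion_maps(frames):
    """Compute DMM_v = sum_i |map_v^{i+1} - map_v^i| for the front, side
    and top views of a depth image sequence of shape (N, H, W).
    Projection choice (front = raw depth, side/top = max along an axis)
    is an assumption, not stated in the patent text."""
    frames = np.asarray(frames, dtype=np.float64)   # (N, H, W)
    front = frames                                  # x-y plane
    side = frames.max(axis=2)                       # collapse columns: (N, H)
    top = frames.max(axis=1)                        # collapse rows:    (N, W)
    dmm = {}
    for name, proj in (("front", front), ("side", side), ("top", top)):
        # absolute difference of consecutive projections, summed over frames
        dmm[name] = np.abs(np.diff(proj, axis=0)).sum(axis=0)
    return dmm
```

HOG features would then be extracted from each of the three maps and concatenated to form the first feature vector, as the passage describes.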
Embodiment three:
Referring to Fig. 3, in this embodiment, extracting the second feature vector from the acceleration signal includes:
Step S301: segment the acceleration signal using a time sliding window, to obtain at least one data window containing an acceleration signal fragment.
In this embodiment, a time sliding window whose overlap ratio is a preset proportion may be used to segment the acceleration signal, with the optimal window length obtained by cross-validation, yielding at least one data window containing an acceleration signal fragment. The preset proportion may take a value between 40% and 60%, for example 40%, 50% or 60%.
Preferably, in this embodiment, a time sliding window with an overlap ratio of 50% is used to segment the acceleration signal, and the optimal window length is obtained by cross-validation, yielding a group of data windows containing tri-axial acceleration signal fragments.
Step S302: in each data window, extract the features of the acceleration signal fragment, to obtain the feature vector of each data window.
In this embodiment, the tri-axial acceleration signal fragment in each data window may be filtered and its features extracted, with the DC component removed, to obtain the feature vectors of the three axes of each data window.
Step S303: obtain the second feature vector according to the feature vectors of all the data windows.
In this embodiment, the feature vectors of the corresponding axis in all the data windows may be connected to obtain the feature vector of each of the three axial acceleration signals; connecting the feature vectors of the three axial acceleration signals gives the final feature vector, i.e. the second feature vector. Fig. 4 shows a simulation diagram of the feature vector extraction of a tri-axial acceleration signal.
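Steps S301–S303 can be sketched under stated assumptions: the 50% overlap follows the preferred embodiment, window length 64 is illustrative (the patent selects it by cross-validation), and FFT magnitudes with the DC bin dropped stand in for the filtered features; the function name is hypothetical:

```python
import numpy as np

def accel_feature_vector(signal, win=64, overlap=0.5):
    """Sliding-window FFT features for a tri-axial acceleration signal
    of shape (T, 3). Windows overlap by `overlap`; per window and axis
    the FFT magnitudes (DC bin removed) are kept, then all windows are
    concatenated into one feature vector."""
    signal = np.asarray(signal, dtype=np.float64)
    step = int(win * (1.0 - overlap))
    feats = []
    for start in range(0, signal.shape[0] - win + 1, step):
        window = signal[start:start + win]            # (win, 3)
        spectrum = np.abs(np.fft.rfft(window, axis=0))
        feats.append(spectrum[1:].ravel())            # drop the DC component
    return np.concatenate(feats)
```

A band-pass or low-pass filter before the FFT, as the description mentions, would slot in just before `np.fft.rfft`; it is omitted here to keep the sketch short.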
In the human motion recognition method described above, the depth image sequence and acceleration signal of a human action are first collected; a first feature vector is then extracted from the depth image sequence and a second feature vector from the acceleration signal; finally, the first feature vector and the second feature vector are combined to obtain a feature descriptor, according to which the human action is identified. The method fuses features across the two modalities of depth image sequence and acceleration signal to recognize human actions: only one or two depth cameras at fixed positions and several acceleration sensors worn at the major joints of the human body are needed, which solves the problem of low recognition accuracy caused by occlusion or error accumulation in existing human action recognition methods and improves the accuracy of human action recognition.
It should be understood that the sequence numbers of the steps above do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Example IV:
Corresponding to the human motion recognition method described in the foregoing embodiments, Fig. 5 shows a structural block diagram of a human action recognition device provided by an embodiment of the present invention. For convenience of description, only the parts related to this embodiment are shown.
Referring to Fig. 5, the device includes an acquisition module 401, an extraction module 402, a feature description submodule 403 and an identification module 404.
The acquisition module 401 is configured to obtain a depth image sequence and an acceleration signal of a human action; the depth image sequence and the acceleration signal are synchronized.
The extraction module 402 is configured to extract a first feature vector from the depth image sequence and a second feature vector from the acceleration signal.
The feature description submodule 403 is configured to combine the first feature vector and the second feature vector to obtain a feature descriptor.
The identification module 404 is configured to identify the human action according to the feature descriptor.
As one implementation, referring to Fig. 6, the extraction module 402 may include a processing unit 501, a computing unit 502, a mapping unit 503 and a first feature vector acquiring unit 504.
The processing unit 501 is configured to obtain the i-th depth image frame in the depth image sequence and project it onto each orthogonal plane in the same coordinate system, obtaining the projected images of the i-th depth image frame at each viewing angle corresponding to the orthogonal planes, where i is an integer greater than or equal to 1 and less than or equal to the number of frames in the depth image sequence.
The computing unit 502 is configured to calculate the absolute difference between the projected image of the (i+1)-th depth image frame at each viewing angle and the projected image of the i-th depth image frame at the same viewing angle.
The mapping unit 503 is configured to traverse each depth image frame in the depth image sequence and superimpose the absolute differences corresponding to each frame, obtaining the depth motion maps corresponding to the depth image sequence.
The first feature vector acquiring unit 504 is configured to perform feature extraction on the depth motion maps to obtain the first feature vector.
Preferably, the first feature vector acquiring unit is specifically configured to: extract features from the obtained depth motion maps at each viewing angle respectively, obtain the sub-feature vector of the depth motion maps for each viewing angle, and connect the sub-feature vectors of all viewing angles to obtain the first feature vector.
As one implementation, referring to Fig. 7, the extraction module 402 may include a splitting unit 601, a feature extraction unit 602 and a second feature vector acquiring unit 603.
The splitting unit 601 is configured to segment the acceleration signal using a time sliding window, obtaining at least one data window containing an acceleration signal fragment.
The feature extraction unit 602 is configured to extract, in each data window, the features of the acceleration signal fragment, obtaining the feature vector of each data window.
The second feature vector acquiring unit 603 is configured to obtain the second feature vector according to the feature vectors of all the data windows.
Preferably, the splitting unit 601 is specifically configured to: segment the acceleration signal using a time sliding window whose overlap ratio is a preset proportion, with the optimal window length obtained by cross-validation, obtaining at least one data window containing an acceleration signal fragment.
The human action recognition device described above first collects the depth image sequence and acceleration signal of a human action, then extracts a first feature vector from the depth image sequence and a second feature vector from the acceleration signal, and finally combines the two to obtain a feature descriptor, according to which the human action is identified. The device fuses features across the two modalities of depth image sequence and acceleration signal to recognize human actions: only one or two depth cameras at fixed positions and several acceleration sensors worn at the major joints of the human body are needed, which solves the problem of low recognition accuracy caused by occlusion or error accumulation in existing human action recognition methods and improves the accuracy of human action recognition.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is illustrated. In practical applications, the above functions may be allocated to different functional units or modules as needed, i.e. the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units or modules in the embodiments may be integrated in one processing unit, may exist physically alone, or two or more of them may be integrated in one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules serve only to distinguish them from one another, and do not limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed device and method may be implemented in other ways. For example, the system embodiments described above are only illustrative: the division of the modules or units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, may exist physically alone, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The embodiments described above are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some of the technical features; such modifications or replacements do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the protection scope of the present invention.

Claims (10)

1. A human motion recognition method, characterized by comprising:
acquiring a depth image sequence and an acceleration signal of a human action, wherein the depth image sequence and the acceleration signal are synchronized;
extracting a first feature vector from the depth image sequence, and extracting a second feature vector from the acceleration signal;
combining the first feature vector and the second feature vector to obtain a feature descriptor; and
recognizing the human action according to the feature descriptor.
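As an illustration only (not part of the claims), the steps of claim 1 can be sketched in Python. The fusion by simple concatenation and the names `build_descriptor`, `first_fv`, and `second_fv` are assumptions; the claim does not fix how the two feature vectors are combined.

```python
import numpy as np

def build_descriptor(first_fv, second_fv):
    """Combine the depth-image feature vector and the acceleration
    feature vector into a single feature descriptor by concatenation
    (one plausible fusion scheme; the claim does not specify one)."""
    return np.concatenate([np.asarray(first_fv, dtype=float),
                           np.asarray(second_fv, dtype=float)])

# Hypothetical feature vectors from the two synchronized modalities
first_fv = [0.2, 0.5, 0.1]   # extracted from the depth image sequence
second_fv = [1.3, 0.7]       # extracted from the acceleration signal
descriptor = build_descriptor(first_fv, second_fv)
print(descriptor.shape)      # (5,)
```

The resulting descriptor would then be fed to any off-the-shelf classifier for the final recognition step.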
2. The human motion recognition method according to claim 1, characterized in that extracting the first feature vector from the depth image sequence comprises:
acquiring the i-th depth frame in the depth image sequence, and projecting the i-th depth frame onto each orthogonal plane of a common coordinate system to obtain a projection image of the i-th depth frame for each view corresponding to the orthogonal planes, wherein i is an integer greater than or equal to 1;
calculating the absolute difference between the projection image of each view of the (i+1)-th depth frame and the projection image of the corresponding view of the i-th depth frame;
traversing each depth frame in the depth image sequence and superimposing the absolute differences corresponding to the frames to obtain a depth motion map corresponding to the depth image sequence; and
performing feature extraction on the depth motion map to obtain the first feature vector.
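A minimal sketch of the depth-motion-map computation described in claim 2, assuming each depth frame is a NumPy array. Representing the front view by the frame itself and the side/top views by maxima along the image axes is a simplifying assumption, not the claim's exact orthogonal-plane projection:

```python
import numpy as np

def depth_motion_maps(depth_seq):
    """Accumulate absolute frame-to-frame differences of the
    projections of each depth frame, yielding one depth motion
    map per view (front/side/top)."""
    projections = {
        "front": [f.astype(float) for f in depth_seq],               # XY plane
        "side":  [f.astype(float).max(axis=1) for f in depth_seq],   # YZ plane
        "top":   [f.astype(float).max(axis=0) for f in depth_seq],   # XZ plane
    }
    dmms = {}
    for view, frames in projections.items():
        acc = np.zeros_like(frames[0])
        for i in range(len(frames) - 1):
            # Superimpose the absolute difference of consecutive projections
            acc += np.abs(frames[i + 1] - frames[i])
        dmms[view] = acc
    return dmms
```

Features (e.g. a flattened or histogrammed version of each map) would then be extracted per view, as claim 3 describes.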
3. The human motion recognition method according to claim 2, characterized in that performing feature extraction on the depth motion map to obtain the first feature vector specifically comprises:
extracting features from the obtained depth motion map of each view respectively to obtain a sub-feature vector for each view, and concatenating the sub-feature vectors of all the views to obtain the first feature vector.
4. The human motion recognition method according to claim 1, characterized in that extracting the second feature vector from the acceleration signal comprises:
segmenting the acceleration signal using a sliding time window to obtain at least one data window containing an acceleration signal segment;
extracting, within each data window, the features of the acceleration signal segment to obtain a feature vector for each data window; and
deriving the second feature vector from the feature vectors of all the data windows.
5. The human motion recognition method according to claim 4, characterized in that segmenting the acceleration signal using a sliding time window to obtain at least one data window containing an acceleration signal segment specifically comprises:
segmenting the acceleration signal using a sliding time window whose overlap ratio is a preset proportion, with the optimal window length obtained by cross-validation, to obtain at least one data window containing an acceleration signal segment.
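The sliding-window segmentation of claims 4 and 5 can be sketched as follows. The 50% default overlap and the particular statistical features are illustrative assumptions; the claims only state that the overlap ratio is preset and that the window length is selected by cross-validation:

```python
import numpy as np

def sliding_windows(signal, window_len, overlap_ratio=0.5):
    """Segment a 1-D acceleration signal into overlapping windows.
    overlap_ratio is the preset proportion of overlap between adjacent
    windows; window_len would in practice be chosen by cross-validation."""
    step = max(1, int(window_len * (1 - overlap_ratio)))
    return [np.asarray(signal[start:start + window_len], dtype=float)
            for start in range(0, len(signal) - window_len + 1, step)]

def window_features(window):
    """Simple per-window statistical features (an illustrative choice)."""
    return np.array([window.mean(), window.std(), window.min(), window.max()])

# Hypothetical acceleration samples: length 10, window 4, 50% overlap
signal = list(range(10))
windows = sliding_windows(signal, window_len=4)
print(len(windows))  # 4 windows, starting at samples 0, 2, 4, 6
```

The per-window feature vectors would then be aggregated (e.g. concatenated or pooled) into the second feature vector of claim 4.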
6. A human motion recognition device, characterized by comprising:
an acquisition module, configured to acquire a depth image sequence and an acceleration signal of a human action, wherein the depth image sequence and the acceleration signal are synchronized;
an extraction module, configured to extract a first feature vector from the depth image sequence and extract a second feature vector from the acceleration signal;
a feature description submodule, configured to combine the first feature vector and the second feature vector to obtain a feature descriptor; and
a recognition module, configured to recognize the human action according to the feature descriptor.
7. The human motion recognition device according to claim 6, characterized in that the extraction module comprises:
a processing unit, configured to acquire the i-th depth frame in the depth image sequence and project the i-th depth frame onto each orthogonal plane of a common coordinate system to obtain a projection image of the i-th depth frame for each view corresponding to the orthogonal planes, wherein i is an integer greater than or equal to 1;
a calculation unit, configured to calculate the absolute difference between the projection image of each view of the (i+1)-th depth frame and the projection image of the corresponding view of the i-th depth frame;
a mapping unit, configured to traverse each depth frame in the depth image sequence and superimpose the absolute differences corresponding to the frames to obtain a depth motion map corresponding to the depth image sequence; and
a first feature vector acquisition unit, configured to perform feature extraction on the depth motion map to obtain the first feature vector.
8. The human motion recognition device according to claim 7, characterized in that the first feature vector acquisition unit is specifically configured to: extract features from the obtained depth motion map of each view respectively to obtain a sub-feature vector for each view, and concatenate the sub-feature vectors of all the views to obtain the first feature vector.
9. The human motion recognition device according to claim 6, characterized in that the extraction module comprises:
a segmentation unit, configured to segment the acceleration signal using a sliding time window to obtain at least one data window containing an acceleration signal segment;
a feature extraction unit, configured to extract, within each data window, the features of the acceleration signal segment to obtain a feature vector for each data window; and
a second feature vector acquisition unit, configured to derive the second feature vector from the feature vectors of all the data windows.
10. The human motion recognition device according to claim 9, characterized in that the segmentation unit is specifically configured to: segment the acceleration signal using a sliding time window whose overlap ratio is a preset proportion, obtain the optimal window length by cross-validation, and obtain at least one data window containing an acceleration signal segment.
CN201610973939.5A 2016-11-03 2016-11-03 Human motion recognition method and device Active CN106570482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610973939.5A CN106570482B (en) 2016-11-03 2016-11-03 Human motion recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610973939.5A CN106570482B (en) 2016-11-03 2016-11-03 Human motion recognition method and device

Publications (2)

Publication Number Publication Date
CN106570482A true CN106570482A (en) 2017-04-19
CN106570482B CN106570482B (en) 2019-12-03

Family

ID=58540016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610973939.5A Active CN106570482B (en) 2016-11-03 2016-11-03 Human motion recognition method and device

Country Status (1)

Country Link
CN (1) CN106570482B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100310157A1 (en) * 2009-06-05 2010-12-09 Samsung Electronics Co., Ltd. Apparatus and method for video sensor-based human activity and facial expression modeling and recognition
CN103839040A (en) * 2012-11-27 2014-06-04 株式会社理光 Gesture identification method and device based on depth images
CN105608421A (en) * 2015-12-18 2016-05-25 中国科学院深圳先进技术研究院 Human movement recognition method and device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704799A (en) * 2017-08-10 2018-02-16 深圳市金立通信设备有限公司 A kind of human motion recognition method and equipment, computer-readable recording medium
CN107590476A (en) * 2017-09-22 2018-01-16 郑州云海信息技术有限公司 A kind of comparison method of action, device and computer-readable storage medium
CN107590476B (en) * 2017-09-22 2020-10-23 苏州浪潮智能科技有限公司 Action comparison method and device and computer storage medium
CN108535714A (en) * 2018-05-25 2018-09-14 加驰(厦门)智能科技有限公司 A kind of millimetre-wave radar detection open space blocks the method and device of object
CN108535714B (en) * 2018-05-25 2021-10-22 厦门精益远达智能科技有限公司 Method and device for detecting object sheltered in open space by millimeter wave radar
CN108985355A (en) * 2018-06-28 2018-12-11 中国空间技术研究院 A kind of data fusion method based on the orthogonal local sensitivity Hash of grouping
CN109460734A (en) * 2018-11-08 2019-03-12 山东大学 The video behavior recognition methods and system shown based on level dynamic depth projection difference image table
CN109460734B (en) * 2018-11-08 2020-07-31 山东大学 Video behavior identification method and system based on hierarchical dynamic depth projection difference image representation
CN115393964A (en) * 2022-10-26 2022-11-25 天津科技大学 Body-building action recognition method and device based on BlazePose

Also Published As

Publication number Publication date
CN106570482B (en) 2019-12-03

Similar Documents

Publication Publication Date Title
EP3961485A1 (en) Image processing method, apparatus and device, and storage medium
CN106570482A (en) Method and device for identifying body motion
US8442307B1 (en) Appearance augmented 3-D point clouds for trajectory and camera localization
CN110807451B (en) Face key point detection method, device, equipment and storage medium
CN108319901B (en) Biopsy method, device, computer equipment and the readable medium of face
CN113874870A (en) Image-based localization
JP7128708B2 (en) Systems and methods using augmented reality for efficient collection of training data for machine learning
US11328481B2 (en) Multi-resolution voxel meshing
EP3906527B1 (en) Image bounding shape using 3d environment representation
CN111243093A (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN108734058B (en) Obstacle type identification method, device, equipment and storage medium
CN112927363B (en) Voxel map construction method and device, computer readable medium and electronic equipment
CN107924571A (en) Three-dimensional reconstruction is carried out to human ear from a cloud
US20220319146A1 (en) Object detection method, object detection device, terminal device, and medium
JP2008033958A (en) Data processing system and method
CN110363817A (en) Object pose estimation method, electronic equipment and medium
JP2022529367A (en) Peripheral estimation from postural monocular video
US20220301277A1 (en) Target detection method, terminal device, and medium
CN111191582A (en) Three-dimensional target detection method, detection device, terminal device and computer-readable storage medium
US11676361B2 (en) Computer-readable recording medium having stored therein training program, training method, and information processing apparatus
CN108734773A (en) A kind of three-dimensional rebuilding method and system for mixing picture
US20220301176A1 (en) Object detection method, object detection device, terminal device, and medium
CN113592015B (en) Method and device for positioning and training feature matching network
CN113570725A (en) Three-dimensional surface reconstruction method and device based on clustering, server and storage medium
WO2021120578A1 (en) Forward calculation method and apparatus for neural network, and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant