CN104035557B - Kinect action identification method based on joint activeness - Google Patents
Abstract
The invention discloses a Kinect action identification method based on joint activeness. The method includes the following steps. First, a standard action template library is set up, yielding a standard action template map and a standard action template difference map. Second, the instant actions of the user are identified: instant action data are collected, an instant action map is constructed and preprocessed, and an instant action difference map is calculated; the absolute differences in each column of the standard action template difference map and of the user's real-time difference map are summed to set the joint activeness weights; finally, the joint activeness weights are multiplied into the Euclidean distance matrix between the user's instant action map and the standard action template map, the products are summed, and the resulting value is used to judge the category of the user's instant action. The method has the advantages of low complexity, high operation speed and a high identification rate.
Description
Technical field
The present invention relates to the field of computer vision, and more particularly to a Kinect action identification method based on joint activeness.
Background technology
Since 2010, the gradual maturation of the Kinect somatosensory device has made people's input modes for electronic systems more diverse, and has also made input operations more convenient. By directly detecting the movements of a person's limbs as input to the system, the Kinect device offers a brand-new mode of human-computer interaction.
The Kinect device differs from a common camera: in addition to a lens that records ordinary 2-D colour images, it is equipped with a CMOS infrared sensor that perceives depth. The infrared sensor emits infrared light covering the whole visual range of the Kinect device, and the camera group receives the reflected light to identify the objects within range, including the user. The image identified by the infrared camera is a "depth field", and the depth information can be mapped into the two-dimensional image, where the distance of an object in the depth field is reflected by the magnitude of the pixel value. Therefore, besides providing us with ordinary 2-D image sequences, the Kinect device also provides the depth information of every object in those images. This is undoubtedly an important information resource for research in the field of computer vision, and it promotes the development of related computer vision techniques.
On the Kinect somatosensory platform, action recognition is the core link of an application. So far, recognition of simple hand motions such as clicking, waving and raising a hand, together with some fixed body poses, has been widely used in all kinds of Kinect applications. For example, Shen Shihong et al. studied a somatosensory gesture recognition algorithm based on Kinect, using an SVM method to recognize three kinds of gestures. But recognizing only a few simple gestures squanders the capability of the Kinect device. Through the Kinect device, a developer can obtain much richer user motion information, including real-time tracking of 20 joint points of the user's body, from the head, shoulders and elbows down to the ankles, together with the recorded 3-D spatial coordinates of each joint. We can therefore use the Kinect device to recognize more complex actions, for example fighting actions such as left and right straight punches, left and right hooks and roundhouse kicks, as well as broadcast callisthenics and all kinds of dance movements. Recognition of these actions can better strengthen the interaction between the user and the related application, and thus serve purposes such as skill teaching and leisure gaming.
Summary of the invention
In order to overcome the above disadvantages and deficiencies of the prior art, the object of the present invention is to provide a Kinect action identification method based on joint activeness whose complexity is relatively low, whose operation speed is fast and whose identification rate is high.
The purpose of the present invention is achieved through the following technical solutions:
A Kinect action identification method based on joint activeness comprises the following steps:
(1) Set up a standard action template library:
(1-1) Determine the major joints of the human body and the reference joint;
(1-2) Gather standard action data: use the Kinect camera device to capture each frame of the standard image while the human body performs a standard action, obtain the continuous three-dimensional position information of the major joints of the human body, and calculate the position coordinates of the major joints relative to the reference joint;
(1-3) Standardize the coordinates of each major joint of the human body in the standard image: divide the relative position coordinates of each major joint by the user's height L to obtain normalized coordinates;
(1-4) Set a time window and obtain the standard action data; the standard action data within one time window, along the two dimensions of major joint and time, are stored as a single-channel image matrix to obtain the standard action map;
(1-5) Smooth and repair the standard action map, removing catastrophe points and continuous breakpoints, to obtain the adjusted standard action map;
(1-6) For the same standard action, average its several corresponding adjusted standard action maps to obtain the standard action template map of that action;
(1-7) Perform a difference calculation on the standard action template map along the time-axis direction to obtain the standard action template difference map;
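The time-axis difference calculation of steps (1-7) and (2-5) can be sketched as follows. This is an illustrative implementation assuming the action map is an N × 3m NumPy array; the zero-padded first row is chosen only to preserve the matrix shape, since the patent does not specify any padding:

```python
import numpy as np

def temporal_difference_map(action_map: np.ndarray) -> np.ndarray:
    """Difference along the time axis (rows): diff[i, j] = map[i+1, j] - map[i, j].

    The first output row is zero-padded so the result keeps the N x 3m shape.
    """
    diff = np.diff(action_map.astype(np.int32), axis=0)
    pad = np.zeros((1, action_map.shape[1]), dtype=np.int32)
    return np.vstack([pad, diff])

# Toy example: a 4-frame, 2-column action map
toy = np.array([[10, 20], [12, 20], [15, 21], [15, 25]])
print(temporal_difference_map(toy))
```

Each row of the result measures how much every joint coordinate moved between consecutive frames, which is exactly the per-column "activeness" signal used later in step (2-6).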
(2) Identify the instant action of the user:
(2-1) Gather instant action data: use the Kinect camera device to capture each frame of the instant image while the human body performs an instant action, obtain the continuous three-dimensional position information of the major joints of the human body, and calculate the position coordinates of the major joints relative to the reference joint;
(2-2) Standardize the coordinates of each major joint of the human body in the instant image: divide the relative position coordinates of each major joint by the user's height L to obtain normalized coordinates;
(2-3) Set a time window and obtain the instant action data; the instant action data within one time window, along the two dimensions of major joint and time, are stored as a single-channel image matrix to obtain the instant action map;
(2-4) Smooth and repair the instant action map, removing catastrophe points and breakpoints, and perform time adjustment to obtain the adjusted instant action map;
(2-5) Perform a difference calculation on the adjusted instant action map along the time-axis direction to obtain the instant action difference map;
(2-6) Calculate the joint activeness weights:
Let wj be the joint activeness weight of the j-th column of the pixel matrix of the instant action difference map; then
wherein diffMaptemplate(i, j) denotes the element in row i, column j of the pixel matrix of the standard action template difference map; diffMaprealtime(i, j) denotes the element in row i, column j of the pixel matrix of the instant action difference map; N is the camera sampling rate; m is the number of major joints of the human body;
(2-7) Calculate the similarity judgement value between the instant action and the standard action:
wherein realTimeMap(i, j) is the element in row i, column j of the pixel matrix of the instant action map, and templateMap(i, j) is the element in row i, column j of the pixel matrix of the standard action template map.
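The similarity judgement of step (2-7) can be sketched as a weighted per-column Euclidean distance followed by a minimum-distance decision. The exact formula appears only as an image in the original, so the per-column distance and the arg-min classification below are assumptions consistent with the abstract:

```python
import numpy as np

def weighted_similarity(realtime_map: np.ndarray,
                        template_map: np.ndarray,
                        weights: np.ndarray) -> float:
    """Weighted sum of per-column Euclidean distances between two N x 3m maps.

    Column j is scaled by the joint activeness weight w_j; a smaller value
    means the instant action is closer to this template (assumed formula).
    """
    diff = realtime_map.astype(float) - template_map.astype(float)
    per_column = np.sqrt((diff ** 2).sum(axis=0))
    return float((weights * per_column).sum())

def classify(realtime_map: np.ndarray, templates: dict) -> str:
    """templates: dict mapping action name -> (template_map, activeness_weights).
    Returns the name of the template with the smallest weighted distance."""
    return min(templates, key=lambda name: weighted_similarity(realtime_map, *templates[name]))
```

The instant action is judged to belong to the category whose template yields the smallest weighted distance value.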
The major joints of the human body include: head, left elbow, right elbow, left hand, right hand, left knee, right knee, left foot and right foot; the reference joint is the waist node.
In step (1-4), the time window is set and the standard action data are obtained; the standard action data within one time window, along the two dimensions of major joint and time, are stored as a single-channel image matrix, specifically:
A time window whose length is an integral multiple of the moving step length is set to obtain continuous standard action data, and the camera sampling rate is set to N. The continuous action sequences of each major joint on its three motion components are obtained; that is, the major joint position information of the i-th frame is all saved into the i-th row of the matrix templateMap, forming a matrix of height N and width 3m. The element templateMap(i, j) in row i, column j of templateMap is calculated by the following formulas:
templateMap(i, j) = (skeletonPos_template(i, k)_x − torsoPos_template(i)_x)·255/L, k = j, j ∈ [0, m−1]
templateMap(i, j) = (skeletonPos_template(i, k)_y − torsoPos_template(i)_y)·255/L, k = j − m, j ∈ [m, 2m−1]
templateMap(i, j) = (skeletonPos_template(i, k)_z − torsoPos_template(i)_z)·255/L, k = j − 2m, j ∈ [2m, 3m−1]
wherein skeletonPos_template(i, k)_x is the absolute x-axis coordinate of the k-th major joint in the i-th frame of the standard action data; skeletonPos_template(i, k)_y is the absolute y-axis coordinate of the k-th major joint in the i-th frame of the standard action data; skeletonPos_template(i, k)_z is the absolute z-axis coordinate of the k-th major joint in the i-th frame of the standard action data; torsoPos_template(i)_x is the absolute x-axis coordinate of the reference joint in the i-th frame of the standard action data; torsoPos_template(i)_y is the absolute y-axis coordinate of the reference joint in the i-th frame of the standard action data; torsoPos_template(i)_z is the absolute z-axis coordinate of the reference joint in the i-th frame of the standard action data.
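The construction of templateMap described above can be sketched as follows, assuming the joint and reference positions are available as NumPy arrays of shapes (N, m, 3) and (N, 3); the array and function names are illustrative, not taken from the patent:

```python
import numpy as np

def build_action_map(skeleton_pos: np.ndarray,
                     torso_pos: np.ndarray,
                     height_L: float) -> np.ndarray:
    """Pack N frames of m joints into an N x 3m single-channel image matrix.

    skeleton_pos: (N, m, 3) absolute joint coordinates (x, y, z) per frame
    torso_pos:    (N, 3) absolute waist (reference joint) coordinates
    Columns [0, m) hold x, [m, 2m) hold y, [2m, 3m) hold z, each as the
    joint position relative to the waist, scaled by 255 / height_L.
    """
    rel = skeleton_pos - torso_pos[:, None, :]   # (N, m, 3) relative coordinates
    scaled = rel * 255.0 / height_L
    n, m, _ = skeleton_pos.shape
    # group columns by component: all x first, then all y, then all z
    return np.transpose(scaled, (0, 2, 1)).reshape(n, 3 * m)
```

The same packing applies to realtimeMap in step (2-3), with the instant action data substituted for the standard action data.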
In step (2-3), the time window is set and the instant action data are obtained; the instant action data within one time window, along the two dimensions of major joint and time, are stored as a single-channel image matrix, specifically:
A time window whose length is an integral multiple of the moving step length is set to obtain continuous instant action data, and the camera sampling rate is set to N. The continuous action sequences of each major joint on its three motion components are obtained; that is, the major joint position information of the i-th frame is all saved into the i-th row of the matrix realtimeMap, forming a matrix of height N and width 3m. The element realtimeMap(i, j) in row i, column j of realtimeMap is calculated by the following formulas:
realtimeMap(i, j) = (skeletonPos_realtime(i, k)_x − torsoPos_realtime(i)_x)·255/L, k = j, j ∈ [0, m−1]
realtimeMap(i, j) = (skeletonPos_realtime(i, k)_y − torsoPos_realtime(i)_y)·255/L, k = j − m, j ∈ [m, 2m−1]
realtimeMap(i, j) = (skeletonPos_realtime(i, k)_z − torsoPos_realtime(i)_z)·255/L, k = j − 2m, j ∈ [2m, 3m−1]
wherein skeletonPos_realtime(i, k)_x is the absolute x-axis coordinate of the k-th major joint in the i-th frame of the instant action data; skeletonPos_realtime(i, k)_y is the absolute y-axis coordinate of the k-th major joint in the i-th frame of the instant action data; skeletonPos_realtime(i, k)_z is the absolute z-axis coordinate of the k-th major joint in the i-th frame of the instant action data; torsoPos_realtime(i)_x is the absolute x-axis coordinate of the reference joint in the i-th frame of the instant action data; torsoPos_realtime(i)_y is the absolute y-axis coordinate of the reference joint in the i-th frame of the instant action data; torsoPos_realtime(i)_z is the absolute z-axis coordinate of the reference joint in the i-th frame of the instant action data.
In step (1-5), the catastrophe points are removed as follows: a difference threshold detection is set along the time-axis direction; if the difference between a pixel and each of its two neighbouring pixels exceeds the set threshold, and the neighbouring pixel values are not 0 or 255, the pixel is marked as a catastrophe point and removed according to the following formula:
In step (2-4), the catastrophe points are removed in the same way: a difference threshold detection is set along the time-axis direction; if the difference between a pixel and each of its two neighbouring pixels exceeds the set threshold, and the neighbouring pixel values are not 0 or 255, the pixel is marked as a catastrophe point and removed according to the following formula:
The removal of continuous breakpoints is specifically as follows:
A continuous breakpoint is a run of consecutive black or white points, where a black point is a point whose pixel value is 0 and a white point is a point whose pixel value is 255. If continuous breakpoints appear along the time axis in some coordinate of some major joint while part of the pixels remain normal, the value of the normal pixel closest to the continuous breakpoints replaces the pixel values of the continuous breakpoints.
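The catastrophe-point and continuous-breakpoint repairs can be sketched column by column. Since the patent's replacement formula is given only as an image, the neighbour-average replacement for spikes and the nearest-normal fill for breakpoints below are assumptions consistent with the description:

```python
import numpy as np

def repair_column(col: np.ndarray, spike_threshold: float = 30.0) -> np.ndarray:
    """Repair one column (one joint coordinate over time) of an action map.

    Assumed repairs:
    - catastrophe points: a sample differing from both neighbours by more
      than the threshold (with normal neighbours) becomes their average;
    - continuous breakpoints: runs of 0 or 255 are overwritten with the
      value of the nearest normal sample.
    """
    col = col.astype(float).copy()
    # catastrophe-point removal
    for i in range(1, len(col) - 1):
        if (abs(col[i] - col[i - 1]) > spike_threshold
                and abs(col[i] - col[i + 1]) > spike_threshold
                and col[i - 1] not in (0, 255) and col[i + 1] not in (0, 255)):
            col[i] = (col[i - 1] + col[i + 1]) / 2
    # breakpoint removal: replace 0/255 samples with the nearest normal value
    normal = np.where((col != 0) & (col != 255))[0]
    if normal.size:
        for i in range(len(col)):
            if col[i] in (0, 255):
                col[i] = col[normal[np.abs(normal - i).argmin()]]
    return col
```

Applying this repair to every column of the N × 3m matrix yields the adjusted action map of steps (1-5) and (2-4).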
The time adjustment of step (2-4) is specifically as follows:
Project the instant action difference map and the standard action template difference map onto the time axis, and calculate the centre-of-gravity time coordinate of each projection.
The centre-of-gravity time coordinate templateTg of the projection of the standard action template difference map and the centre-of-gravity time coordinate realTimeTg of the instant action difference map are calculated as follows:
The offset of the instant action map is then computed as:
Δt = templateTg − realTimeTg
Finally, the instant action map is shifted by Δt rows according to the offset, and the vacated part is filled with the adjacent elements of each column.
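The time adjustment can be sketched as follows. The row-energy centre of gravity and the fill rule for vacated rows are illustrative assumptions, since the patent's centroid formula appears only as an image:

```python
import numpy as np

def centroid_time(diff_map: np.ndarray) -> float:
    """Time coordinate of the centre of gravity of the difference map's
    projection onto the time axis (row index weighted by row energy)."""
    energy = np.abs(diff_map).sum(axis=1).astype(float)
    total = energy.sum()
    if total == 0:
        return 0.0
    return float((np.arange(len(energy)) * energy).sum() / total)

def align_in_time(realtime_map: np.ndarray,
                  template_centroid: float,
                  realtime_centroid: float) -> np.ndarray:
    """Shift the instant action map by delta_t rows; vacated rows are filled
    by repeating the nearest remaining row (an assumed fill choice)."""
    delta = int(round(template_centroid - realtime_centroid))
    shifted = np.roll(realtime_map, delta, axis=0)
    if delta > 0:
        shifted[:delta] = shifted[delta]          # fill the top with the first kept row
    elif delta < 0:
        shifted[delta:] = shifted[delta - 1]      # fill the bottom with the last kept row
    return shifted
```

Aligning the two centroids compensates for the user starting the action earlier or later than the template before the distance comparison.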
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The algorithmic complexity of the present invention is relatively low; the method mainly consists of a weighted calculation of the Euclidean distance between corresponding pixels of two images, and is easy to implement.
2. The operation speed of the present invention is fast: the identification calculation for one action map can be completed before the next action map is built, so actions can be identified in real time.
3. The identification rate of the present invention is high, avoiding or mitigating the impact of many factors on the action recognition process.
4. The present invention is practical and can be widely applied in the fields of gaming, fitness, teaching and criminal investigation.
5. The extensibility of the present invention is strong: since users can make standard action templates by themselves, the recognizable actions are not greatly restricted, and users can expand the standard action template library at any time.
Description of the drawings
Fig. 1 is a flow chart of the process of setting up the standard action template library in an embodiment of the invention.
Fig. 2 is the adjusted standard action map of a right straight punch in an embodiment of the invention.
Fig. 3 is the standard action template difference map of a right straight punch in an embodiment of the invention.
Fig. 4 is a flow chart of the instant action identification process in an embodiment of the invention.
Fig. 5 is an instant action map of an embodiment of the invention.
Fig. 6 is an adjusted instant action map of an embodiment of the invention.
Specific embodiment
The present invention is described in further detail below with reference to an embodiment, but the embodiments of the present invention are not limited thereto.
Embodiment
The Kinect action identification method based on joint activeness of this embodiment comprises the following steps:
(1) As shown in Fig. 1, set up a standard action template library:
(1-1) Determine the major joints of the human body and the reference joint:
To reduce the amount of computation, this embodiment selects as the major joints only the 9 joints that exhibit obvious motion, relative to the other joints, when fighting actions are performed. They are, in order: head, left elbow, right elbow, left hand, right hand, left knee, right knee, left foot, right foot.
To solve the problem caused by changes in the distance between the user and the Kinect device, the present invention sets a reference node and uses relative positions to represent the motion of each major joint; in this embodiment the reference joint is the waist node.
(1-2) Gather standard action data: use the Kinect camera device to capture each frame of the standard image while the human body performs a standard action, obtain the continuous three-dimensional position information of the major joints of the human body, and calculate the position coordinates of the major joints relative to the reference joint.
(1-3) Standardize the coordinates of each major joint of the human body in the standard image: divide the relative position coordinates of each major joint by the user's height L to obtain normalized coordinates;
L is the distance from the user's head to the foot joints, specifically calculated as follows:
L = yHead − yFoot.
(1-4) Set a time window and obtain the standard action data; the action data within one time window, along the two dimensions of joint and time, are stored as a single-channel image matrix to obtain the standard action map, specifically:
A time window whose length is an integral multiple of the moving step length is set to obtain continuous standard action data, and the camera sampling rate is set to N. The continuous action sequences of each major joint on its three motion components are obtained; that is, the major joint position information of the i-th frame is all saved into the i-th row of the matrix templateMap, forming a matrix of height N and width 27. The element templateMap(i, j) in row i, column j of templateMap is calculated by the following formulas:
templateMap(i, j) = (skeletonPos_template(i, k)_x − torsoPos_template(i)_x)·255/L, k = j, j ∈ [0, 8]
templateMap(i, j) = (skeletonPos_template(i, k)_y − torsoPos_template(i)_y)·255/L, k = j − 9, j ∈ [9, 17]
templateMap(i, j) = (skeletonPos_template(i, k)_z − torsoPos_template(i)_z)·255/L, k = j − 18, j ∈ [18, 26]
wherein skeletonPos_template(i, k)_x is the absolute x-axis coordinate of the k-th major joint in the i-th frame of the standard action data; skeletonPos_template(i, k)_y is the absolute y-axis coordinate of the k-th major joint in the i-th frame of the standard action data; skeletonPos_template(i, k)_z is the absolute z-axis coordinate of the k-th major joint in the i-th frame of the standard action data; torsoPos_template(i)_x is the absolute x-axis coordinate of the reference joint in the i-th frame of the standard action data; torsoPos_template(i)_y is the absolute y-axis coordinate of the reference joint in the i-th frame of the standard action data; torsoPos_template(i)_z is the absolute z-axis coordinate of the reference joint in the i-th frame of the standard action data.
(1-5) Smooth and repair the standard action map, removing catastrophe points and continuous breakpoints, to obtain the adjusted standard action map. Taking the right straight punch as an example, its adjusted standard action map is shown in Fig. 2.
The catastrophe points are removed as follows: a difference threshold detection is set along the time-axis direction; if the difference between a pixel and each of its two neighbouring pixels exceeds the set threshold, and the neighbouring pixel values are not 0 or 255, the pixel is marked as a catastrophe point and removed according to the following formula:
The removal of continuous breakpoints is specifically as follows:
A continuous breakpoint is a run of consecutive black or white points, where a black point is a point whose pixel value is 0 and a white point is a point whose pixel value is 255. If continuous breakpoints appear along the time axis in some coordinate of some major joint while part of the pixels remain normal, the value of the normal pixel closest to the continuous breakpoints replaces the pixel values of the continuous breakpoints.
(1-6) For the same standard action, average its several corresponding adjusted standard action maps to obtain the standard action template map of that action.
(1-7) Perform a difference calculation on the standard action template map along the time-axis direction to obtain the standard action template difference map. Taking the right straight punch as an example, its standard action template difference map is shown in Fig. 3.
(2) As shown in Fig. 4, identify the instant action of the user:
(2-1) Gather instant action data: use the Kinect camera device to capture each frame of the instant image while the human body performs an instant action, obtain the continuous three-dimensional position information of the major joints of the human body, and calculate the position coordinates of the major joints relative to the reference joint.
(2-2) Standardize the coordinates of each major joint of the human body in the instant image: divide the relative position coordinates of each major joint by the user's height L to obtain normalized coordinates.
(2-3) Set a time window and obtain the instant action data; the action data within one time window, along the two dimensions of joint and time, are stored as a single-channel image matrix to obtain the instant action map (see Fig. 5), specifically:
A time window whose length is an integral multiple of the moving step length is set to obtain continuous instant action data, and the camera sampling rate is set to N. The continuous action sequences of each major joint on its three motion components are obtained; that is, the major joint position information of the i-th frame is all saved into the i-th row of the matrix realtimeMap, forming a matrix of height N and width 27. To make the matrix visually intuitive and convenient to study, its element values are linearly mapped to [0, 255] and stored in the form of a grey-level image. The element realtimeMap(i, j) in row i, column j of realtimeMap is calculated by the following formulas:
realtimeMap(i, j) = (skeletonPos_realtime(i, k)_x − torsoPos_realtime(i)_x)·255/L, k = j, j ∈ [0, 8]
realtimeMap(i, j) = (skeletonPos_realtime(i, k)_y − torsoPos_realtime(i)_y)·255/L, k = j − 9, j ∈ [9, 17]
realtimeMap(i, j) = (skeletonPos_realtime(i, k)_z − torsoPos_realtime(i)_z)·255/L, k = j − 18, j ∈ [18, 26]
wherein skeletonPos_realtime(i, k)_x is the absolute x-axis coordinate of the k-th major joint in the i-th frame of the instant action data; skeletonPos_realtime(i, k)_y is the absolute y-axis coordinate of the k-th major joint in the i-th frame of the instant action data; skeletonPos_realtime(i, k)_z is the absolute z-axis coordinate of the k-th major joint in the i-th frame of the instant action data; torsoPos_realtime(i)_x is the absolute x-axis coordinate of the reference joint in the i-th frame of the instant action data; torsoPos_realtime(i)_y is the absolute y-axis coordinate of the reference joint in the i-th frame of the instant action data; torsoPos_realtime(i)_z is the absolute z-axis coordinate of the reference joint in the i-th frame of the instant action data.
(2-4) Smooth and repair the instant action map, removing catastrophe points and breakpoints, and perform time adjustment to obtain the adjusted instant action map, as shown in Fig. 6.
The catastrophe points of this step are removed as follows: a difference threshold detection is set along the time-axis direction; if the difference between a pixel and each of its two neighbouring pixels exceeds the set threshold, and the neighbouring pixel values are not 0 or 255, the pixel is marked as a catastrophe point and removed according to the following formula:
The breakpoint removal of this step is the same as in step (1-5).
The time adjustment is specifically as follows:
Project the instant action difference map and the standard action template difference map onto the time axis, and calculate the centre-of-gravity time coordinate of each projection. The centre-of-gravity time coordinate of a projection is the time point that divides the action difference map into two parts whose pixel-value sums differ minimally. The centre-of-gravity time coordinate templateTg of the projection of the standard action template difference map and the centre-of-gravity time coordinate realTimeTg of the instant action difference map are calculated as follows:
The offset of the instant action map is then computed as:
Δt = templateTg − realTimeTg
Finally, the instant action map is shifted by Δt rows according to the offset, and the vacated part is filled with the adjacent elements of each column.
(2-5) Perform a difference calculation on the adjusted instant action map along the time-axis direction to obtain the instant action difference map.
(2-6) Calculate the joint activeness weights:
Let wj be the joint activeness weight of the j-th column of the pixel matrix of the instant action difference map; then
wherein diffMaptemplate(i, j) denotes the element in row i, column j of the pixel matrix of the standard action template difference map; diffMaprealtime(i, j) denotes the element in row i, column j of the pixel matrix of the instant action difference map; N is the camera sampling rate.
(2-7) Calculate the similarity judgement value between the instant action and the standard action:
wherein realTimeMap(i, j) is the element in row i, column j of the pixel matrix of the instant action map, and templateMap(i, j) is the element in row i, column j of the pixel matrix of the standard action template map.
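Steps (2-5) to (2-7) of this embodiment can be combined into one end-to-end sketch. The weight normalization and the minimum-distance decision rule are assumptions consistent with the abstract, and the function and template names are illustrative:

```python
import numpy as np

def recognize(realtime_map: np.ndarray, template_maps: dict) -> str:
    """Given an adjusted N x 27 instant action map and a dict of standard
    action template maps, return the name of the closest template.

    Assumed pipeline: time-axis differencing, column-activeness weighting,
    weighted per-column Euclidean distance, minimum-distance decision.
    """
    best, best_score = None, None
    rt_diff = np.abs(np.diff(realtime_map.astype(float), axis=0))
    for name, tmpl in template_maps.items():
        t_diff = np.abs(np.diff(tmpl.astype(float), axis=0))
        w = rt_diff.sum(axis=0) + t_diff.sum(axis=0)   # column activeness energy
        w = w / w.sum() if w.sum() else np.full(w.shape, 1.0 / w.size)
        dist = np.sqrt(((realtime_map.astype(float) - tmpl.astype(float)) ** 2).sum(axis=0))
        score = float((w * dist).sum())
        if best_score is None or score < best_score:
            best, best_score = name, score
    return best
```

Because the weights emphasize the columns (joint coordinates) that actually move, a punch is compared mostly on the punching arm's trajectory rather than on the idle joints.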
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited by it; any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and shall be included within the protection scope of the present invention.
Claims (8)
1. A Kinect action identification method based on joint activeness, characterised in that it comprises the following steps:
(1) setting up a standard action template library:
(1-1) determining the major joints of the human body and the reference joint;
(1-2) gathering standard action data: using the Kinect camera device to capture each frame of the standard image while the human body performs a standard action, obtaining the continuous three-dimensional position information of the major joints of the human body, and calculating the position coordinates of the major joints relative to the reference joint;
(1-3) standardizing the coordinates of each major joint of the human body in the standard image: dividing the relative position coordinates of each major joint by the user's height L to obtain normalized coordinates;
(1-4) setting a time window and obtaining the standard action data; the standard action data within one time window, along the two dimensions of major joint and time, are stored as a single-channel image matrix to obtain the standard action map;
(1-5) smoothing and repairing the standard action map, removing catastrophe points and continuous breakpoints, to obtain the adjusted standard action map, wherein a continuous breakpoint is a run of consecutive black or white points, a black point being a point whose pixel value is 0 and a white point being a point whose pixel value is 255;
(1-6) for the same standard action, averaging its several corresponding adjusted standard action maps to obtain the standard action template map of that action;
(1-7) performing a difference calculation on the standard action template map along the time-axis direction to obtain the standard action template difference map;
(2) identifying the instant action of the user:
(2-1) gathering instant action data: using the Kinect camera device to capture each frame of the instant image while the human body performs an instant action, obtaining the continuous three-dimensional position information of the major joints of the human body, and calculating the position coordinates of the major joints relative to the reference joint;
(2-2) standardizing the coordinates of each major joint of the human body in the instant image: dividing the relative position coordinates of each major joint by the user's height L to obtain normalized coordinates;
(2-3) setting a time window and obtaining the instant action data; the instant action data within one time window, along the two dimensions of major joint and time, are stored as a single-channel image matrix to obtain the instant action map;
(2-4) smoothing and repairing the instant action map, removing catastrophe points and breakpoints, and performing time adjustment to obtain the adjusted instant action map;
(2-5) performing a difference calculation on the adjusted instant action map along the time-axis direction to obtain the instant action difference map;
(2-6) calculating the joint activeness weights:
letting wj be the joint activeness weight of the j-th column of the pixel matrix of the instant action difference map; then
wherein diffMaptemplate(i, j) denotes the element in row i, column j of the pixel matrix of the standard action template difference map; diffMaprealtime(i, j) denotes the element in row i, column j of the pixel matrix of the instant action difference map; N is the camera sampling rate; m is the number of major joints of the human body;
(2-7) calculating the similarity judgement value between the instant action and the standard action:
wherein realTimeMap(i, j) is the element in row i, column j of the pixel matrix of the instant action map, and templateMap(i, j) is the element in row i, column j of the pixel matrix of the standard action template map.
2. The Kinect action identification method based on joint activeness according to claim 1, characterised in that the major joints of the human body include: head, left elbow, right elbow, left hand, right hand, left knee, right knee, left foot and right foot; and the reference joint is the waist node.
3. The Kinect action recognition method based on joint activeness according to claim 1, wherein in step (1-4), setting the time window and acquiring standard action data, and storing the standard action data within one time window as a single-channel image matrix along the two dimensions of major joints and time, is specifically:
setting a time window whose length is an integral multiple of the moving step to acquire continuous standard action data; setting the capture sampling rate to N; and obtaining a continuous action sequence over the three motion components of each major joint, i.e. the position information of the major joints in the i-th frame is saved into the i-th row of the matrix templateMap, forming a matrix of height N and width 3m; wherein the element templateMap(i, j) in row i, column j of the matrix templateMap is calculated by the following formulas:
templateMap(i, j) = (skeletonPos_template(i, k)_x − torsoPos_template(i)_x) · 255/L, k = j, j ∈ [0, m−1]
templateMap(i, j) = (skeletonPos_template(i, k)_y − torsoPos_template(i)_y) · 255/L, k = j − m, j ∈ [m, 2m−1]
templateMap(i, j) = (skeletonPos_template(i, k)_z − torsoPos_template(i)_z) · 255/L, k = j − 2m, j ∈ [2m, 3m−1]
where skeletonPos_template(i, k)_x is the absolute x-axis coordinate of the k-th major joint in the i-th frame of the standard action data; skeletonPos_template(i, k)_y is the absolute y-axis coordinate of the k-th major joint in the i-th frame of the standard action data; skeletonPos_template(i, k)_z is the absolute z-axis coordinate of the k-th major joint in the i-th frame of the standard action data; torsoPos_template(i)_x is the absolute x-axis coordinate of the reference joint in the i-th frame of the standard action data; torsoPos_template(i)_y is the absolute y-axis coordinate of the reference joint in the i-th frame of the standard action data; and torsoPos_template(i)_z is the absolute z-axis coordinate of the reference joint in the i-th frame of the standard action data.
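Under the formulas above, constructing the action map amounts to subtracting the reference-joint (waist) position from every major-joint coordinate, scaling by 255/L, and laying the x, y, z components out as three consecutive blocks of m columns. A minimal sketch (the array shapes are assumptions):

```python
import numpy as np

def build_action_map(skeleton_pos, torso_pos, L):
    """skeleton_pos: (N, m, 3) absolute joint coordinates per frame;
    torso_pos: (N, 3) reference-joint (waist) coordinates per frame;
    L: normalisation length from the claim."""
    rel = skeleton_pos - torso_pos[:, np.newaxis, :]  # joint minus reference joint
    # Columns [0, m) hold x, [m, 2m) hold y, [2m, 3m) hold z, as in the claim.
    return np.concatenate([rel[..., 0], rel[..., 1], rel[..., 2]], axis=1) * 255.0 / L
```

The same construction applies to the instant action map of claim 4, with real-time data in place of template data.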
4. The Kinect action recognition method based on joint activeness according to claim 3, wherein in step (2-3), setting the time window and acquiring instant action data, and storing the instant action data within one time window as a single-channel image matrix along the two dimensions of major joints and time, is specifically:
setting a time window whose length is an integral multiple of the moving step to acquire continuous instant action data; setting the capture sampling rate to N; and obtaining a continuous action sequence over the three motion components of each major joint, i.e. the position information of the major joints in the i-th frame is saved into the i-th row of the matrix realTimeMap, forming a matrix of height N and width 3m; wherein the element realTimeMap(i, j) in row i, column j of the matrix realTimeMap is calculated by the following formulas:
realTimeMap(i, j) = (skeletonPos_realtime(i, k)_x − torsoPos_realtime(i)_x) · 255/L, k = j, j ∈ [0, m−1]
realTimeMap(i, j) = (skeletonPos_realtime(i, k)_y − torsoPos_realtime(i)_y) · 255/L, k = j − m, j ∈ [m, 2m−1]
realTimeMap(i, j) = (skeletonPos_realtime(i, k)_z − torsoPos_realtime(i)_z) · 255/L, k = j − 2m, j ∈ [2m, 3m−1]
where skeletonPos_realtime(i, k)_x is the absolute x-axis coordinate of the k-th major joint in the i-th frame of the instant action data; skeletonPos_realtime(i, k)_y is the absolute y-axis coordinate of the k-th major joint in the i-th frame of the instant action data; skeletonPos_realtime(i, k)_z is the absolute z-axis coordinate of the k-th major joint in the i-th frame of the instant action data; torsoPos_realtime(i)_x is the absolute x-axis coordinate of the reference joint in the i-th frame of the instant action data; torsoPos_realtime(i)_y is the absolute y-axis coordinate of the reference joint in the i-th frame of the instant action data; and torsoPos_realtime(i)_z is the absolute z-axis coordinate of the reference joint in the i-th frame of the instant action data.
5. The Kinect action recognition method based on joint activeness according to claim 3, wherein the removal of abrupt points in step (1-5) is specifically as follows: a differential threshold detection is applied in the time-axis direction; if the difference between a pixel and each of its two neighbouring pixels exceeds the set threshold, and the neighbouring pixel values are neither 0 nor 255, the pixel is marked as an abrupt point, and abrupt points are removed according to the following formula:
6. The Kinect action recognition method based on joint activeness according to claim 3, wherein the removal of abrupt points in step (2-4) is specifically as follows: a differential threshold detection is applied in the time-axis direction; if the difference between a pixel and each of its two neighbouring pixels exceeds the set threshold, and the neighbouring pixel values are neither 0 nor 255, the pixel is marked as an abrupt point, and abrupt points are removed according to the following formula:
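The abrupt-point detection of claims 5 and 6 can be sketched per column (one joint coordinate over time). The removal formula itself appears in the source only as an image, so replacing the abrupt point with the mean of its two neighbours is an assumption:

```python
import numpy as np

def remove_abrupt_points(column, threshold):
    # A pixel whose difference to BOTH time-axis neighbours exceeds the
    # threshold, and whose neighbours are not saturated at 0 or 255, is an
    # abrupt point; replace it with the mean of its neighbours (the exact
    # replacement rule is an assumption -- the patent's formula is an image).
    col = column.astype(float).copy()
    for i in range(1, len(col) - 1):
        prev_v, next_v = col[i - 1], col[i + 1]
        saturated = prev_v in (0.0, 255.0) or next_v in (0.0, 255.0)
        if (abs(col[i] - prev_v) > threshold
                and abs(col[i] - next_v) > threshold
                and not saturated):
            col[i] = (prev_v + next_v) / 2.0
    return col
```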
7. The Kinect action recognition method based on joint activeness according to claim 3, wherein the removal of continuous breakpoints is specifically: if, on the time axis, a coordinate of a major joint contains continuous breakpoints while some pixels remain normal, the pixel values of the continuous breakpoints are replaced with the value of the normal pixel closest to them.
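The breakpoint repair of claim 7 is a nearest-neighbour fill along the time axis. A minimal sketch, assuming breakpoints are flagged by a boolean mask (how breakpoints are detected is not specified in this claim):

```python
import numpy as np

def fill_breakpoints(column, is_break):
    # Replace each break (missing) sample with the value of the nearest
    # normal sample on the time axis, as described in claim 7.
    col = column.astype(float).copy()
    normal_idx = np.flatnonzero(~is_break)
    for i in np.flatnonzero(is_break):
        nearest = normal_idx[np.argmin(np.abs(normal_idx - i))]
        col[i] = col[nearest]
    return col
```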
8. The Kinect action recognition method based on joint activeness according to claim 3, wherein the time adjustment in step (2-4) is specifically:
projecting the instant action difference map and the standard action template difference map onto the time axis, and calculating the centre-of-gravity time coordinate of each projection:
the centre-of-gravity time coordinate templateT_g of the projection of the standard action template difference map and the centre-of-gravity time coordinate realTimeT_g of the projection of the instant action difference map are calculated as follows:
the offset of the instant action map is then computed as:
Δt = templateT_g − realTimeT_g
finally, the instant action map is shifted along the time axis by the offset Δt, and the vacated part is filled with the adjacent elements.
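The time adjustment of claim 8 can be sketched as follows. The centroid formulas are reproduced in the source only as images, so a standard centre-of-gravity of the time-axis projection is assumed, and the vacated rows after shifting are filled by edge padding (also an assumption):

```python
import numpy as np

def time_centroid(diff_map):
    # Project the difference map onto the time axis (sum the absolute
    # values over the joint columns) and compute the centre-of-gravity
    # time coordinate of that projection.
    proj = np.abs(diff_map).sum(axis=1)
    t = np.arange(len(proj))
    return (t * proj).sum() / proj.sum()

def align_in_time(real_map, dt):
    # Shift the instant action map's rows by the (rounded) offset dt and
    # fill the vacated rows with the nearest remaining row (edge padding).
    shift = int(round(dt))
    shifted = np.roll(real_map, shift, axis=0)
    if shift > 0:
        shifted[:shift] = shifted[shift]
    elif shift < 0:
        shifted[shift:] = shifted[shift - 1]
    return shifted
```

In use, dt would be templateT_g − realTimeT_g computed from the two difference maps, aligning the instant action with the template before the similarity step.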
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410220225.8A CN104035557B (en) | 2014-05-22 | 2014-05-22 | Kinect action identification method based on joint activeness |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410220225.8A CN104035557B (en) | 2014-05-22 | 2014-05-22 | Kinect action identification method based on joint activeness |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104035557A CN104035557A (en) | 2014-09-10 |
CN104035557B true CN104035557B (en) | 2017-04-19 |
Family
ID=51466357
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410220225.8A Active CN104035557B (en) | 2014-05-22 | 2014-05-22 | Kinect action identification method based on joint activeness |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104035557B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106448279A (en) * | 2016-10-27 | 2017-02-22 | 重庆淘亿科技有限公司 | Interactive experience method and system for dance teaching |
CN106774896B (en) * | 2016-12-19 | 2018-03-13 | 吉林大学 | A kind of sitting posture hand assembly line model is worth evaluating system |
CN107080940A (en) * | 2017-03-07 | 2017-08-22 | 中国农业大学 | Body feeling interaction conversion method and device based on depth camera Kinect |
CN107293162A (en) * | 2017-07-31 | 2017-10-24 | 广东欧珀移动通信有限公司 | Motion teaching assistance method and device, and terminal device |
CN107730529A (en) * | 2017-10-10 | 2018-02-23 | 上海魔迅信息科技有限公司 | A kind of video actions methods of marking and system |
CN108153421B (en) * | 2017-12-25 | 2021-10-01 | 深圳Tcl新技术有限公司 | Somatosensory interaction method and device and computer-readable storage medium |
CN108288300A (en) * | 2018-01-12 | 2018-07-17 | 北京蜜枝科技有限公司 | Human action captures and skeleton data mapped system and its method |
CN112950751A (en) * | 2019-12-11 | 2021-06-11 | 阿里巴巴集团控股有限公司 | Gesture action display method and device, storage medium and system |
JP2022065241A (en) * | 2020-10-15 | 2022-04-27 | 株式会社日立ハイテク | Motion visualization system and motion visualization method |
CN116631045A (en) * | 2022-02-10 | 2023-08-22 | 成都拟合未来科技有限公司 | Human body liveness detection method, system and device based on action recognition and medium |
CN114795192B (en) * | 2022-07-01 | 2022-09-16 | 佛山科学技术学院 | Joint mobility intelligent detection method and system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103399637A (en) * | 2013-07-31 | 2013-11-20 | 西北师范大学 | Man-computer interaction method for intelligent human skeleton tracking control robot on basis of kinect |
CN103706106A (en) * | 2013-12-30 | 2014-04-09 | 南京大学 | Self-adaption continuous motion training method based on Kinect |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130198625A1 (en) * | 2012-01-26 | 2013-08-01 | Thomas G Anderson | System For Generating Haptic Feedback and Receiving User Inputs |
- 2014-05-22: Application CN201410220225.8A filed in China; granted as patent CN104035557B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103399637A (en) * | 2013-07-31 | 2013-11-20 | 西北师范大学 | Man-computer interaction method for intelligent human skeleton tracking control robot on basis of kinect |
CN103706106A (en) * | 2013-12-30 | 2014-04-09 | 南京大学 | Self-adaption continuous motion training method based on Kinect |
Non-Patent Citations (1)
Title |
---|
"Human Joint Motion Tracking"; Deng Xuexiong et al.; Journal of Donghua University; Aug. 31, 2013; Vol. 39, No. 4; pp. 448-454 * |
Also Published As
Publication number | Publication date |
---|---|
CN104035557A (en) | 2014-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104035557B (en) | Kinect action identification method based on joint activeness | |
CN109255813B (en) | Man-machine cooperation oriented hand-held object pose real-time detection method | |
CN111460875B (en) | Image processing method and apparatus, image device, and storage medium | |
CN106650687B (en) | Posture correction method based on depth information and skeleton information | |
US9159134B2 (en) | Method and apparatus for estimating a pose | |
US9047507B2 (en) | Upper-body skeleton extraction from depth maps | |
CN108256504A (en) | A kind of Three-Dimensional Dynamic gesture identification method based on deep learning | |
CN107301370A (en) | A kind of body action identification method based on Kinect three-dimensional framework models | |
CN104036488B (en) | Binocular vision-based human body posture and action research method | |
CN107754225A (en) | A kind of intelligent body-building coaching system | |
CN107688391A (en) | A kind of gesture identification method and device based on monocular vision | |
CN104794737B (en) | A kind of depth information Auxiliary Particle Filter tracking | |
CN106650630A (en) | Target tracking method and electronic equipment | |
US9117138B2 (en) | Method and apparatus for object positioning by using depth images | |
CN103006178B (en) | Equipment based on three-dimensional motion following calculation energy expenditure and method | |
CN102622766A (en) | Multi-objective optimization multi-lens human motion tracking method | |
CN109934847A (en) | The method and apparatus of weak texture three-dimension object Attitude estimation | |
CN110503686A (en) | Object pose estimation method and electronic equipment based on deep learning | |
JP2019096113A (en) | Processing device, method and program relating to keypoint data | |
WO2022174594A1 (en) | Multi-camera-based bare hand tracking and display method and system, and apparatus | |
CN113111767A (en) | Fall detection method based on deep learning 3D posture assessment | |
CN108389227A (en) | A kind of dimensional posture method of estimation based on multiple view depth perceptron frame | |
CN114641799A (en) | Object detection device, method and system | |
WO2020147797A1 (en) | Image processing method and apparatus, image device, and storage medium | |
CN115862124B (en) | Line-of-sight estimation method and device, readable storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |