CN109948579B - Human body limb language identification method and system - Google Patents

Human body limb language identification method and system

Info

Publication number
CN109948579B
CN109948579B · CN201910242917.5A
Authority
CN
China
Prior art keywords
limb
semantic
axis
characteristic points
determining
Prior art date
Legal status
Active
Application number
CN201910242917.5A
Other languages
Chinese (zh)
Other versions
CN109948579A (en)
Inventor
伍穗颖
柯茂旭
王筠
Current Assignee
Guangzhou Frontop Digital Originality Technology Co Ltd
Original Assignee
Guangzhou Frontop Digital Originality Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Frontop Digital Originality Technology Co Ltd filed Critical Guangzhou Frontop Digital Originality Technology Co Ltd
Priority claimed from CN201910242917.5A
Publication of CN109948579A
Application granted
Publication of CN109948579B


Abstract

The invention discloses a human body limb language identification method and system. The identification method comprises the following steps: constructing a virtual world environment and acquiring limb feature points of a human body based on a Kinect camera; constructing a limb semantic set according to the limb feature points; when the human body is in a cross state, constructing a three-dimensional coordinate system of the virtual world environment by taking the arms of the human body as the X axis, the body as the Y axis, the intersection of the X axis and the Y axis as the origin O, and the axis perpendicular to the XOY plane as the Z axis; determining a limb semantic direction vector of each limb semantic in the three-dimensional coordinate system; acquiring the direction angles of each limb semantic direction vector with the X axis, the Y axis and the Z axis respectively; determining direction cosines according to the limb semantic direction vectors and the direction angles; establishing a limb semantic trigger mechanism according to the direction cosines; and recognizing human body limb language according to the limb semantic trigger mechanism. The identification method and system provided by the invention have strong universality and can accurately identify human body limb language.

Description

Human body limb language identification method and system
Technical Field
The invention relates to the field of man-machine interaction, in particular to a human body limb language identification method and system.
Background
The technical background of the invention is human-computer interaction technology in a three-dimensional virtual world. In the human-computer interaction of the virtual world, a computer identifies the limb actions of a person through a camera and judges the person's operation intention, i.e. the command the person transmits to the machine; the machine receives the command and gives feedback, completing the interaction between the person in the real world and the machine in the virtual world. In this process, detecting the limb action of the human body and identifying the semantics represented by the limb action is the key to the whole human-computer interaction process.
Generally, most existing semantic design methods based on human body features do not follow a fixed design logic: some are based on changes in the position information of human body feature points, some on the time sequence of feature points, and some on a combination of the two. Because no fixed logic is followed, research in the field of semantic design has produced no systematized theory or application of particular influence; each research group has its own method, the effects vary, the methods cannot be used interchangeably, and their transferability is weak. Moreover, there are few semantic designs involving the whole human body in the virtual world, and the specific operation instructions of a real person cannot be accurately identified from gesture semantics alone.
Disclosure of Invention
The invention aims to provide a human body limb language identification method and system, so as to solve the problems that traditional gesture semantic identification methods are poor in universality, weak in transferability and incapable of accurately identifying operation instructions.
In order to achieve the purpose, the invention provides the following scheme:
a human body limb language identification method comprises the following steps:
constructing a virtual world environment, and acquiring limb characteristic points of a human body based on a Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points;
constructing a limb semantic set according to the limb feature points; the limb semantic set comprises a plurality of limb semantics based on the limb semantic feature points; each limb semantic corresponds to at least two related limb characteristic points;
when the human body is in a cross state, constructing a three-dimensional coordinate system of the virtual world environment by taking the arms of the human body as the X axis, the body as the Y axis, the intersection of the X axis and the Y axis as the point O, and the axis perpendicular to the XOY plane as the Z axis;
determining a limb semantic direction vector of each limb semantic in the three-dimensional coordinate system;
acquiring direction angles of each limb semantic direction vector with an X axis, a Y axis and a Z axis respectively;
determining direction cosine according to the body semantic direction vector and the direction angle;
establishing a body semantic triggering mechanism according to the direction cosine;
and recognizing human body limb language according to the limb semantic trigger mechanism.
Optionally, the constructing a limb semantic set according to the limb feature points specifically includes:
according to the formula
Figure BDA0002010220020000021
Determining the meaning of the limbs; wherein R is1(X1,Y1,Z1),R2(X2,Y2,Z2) Respectively representing the coordinates of the limb characteristic points in the virtual world environment;
and constructing a limb semantic set according to the plurality of limb semantics.
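As an illustration only (the function names and coordinates below are hypothetical, not from the patent), the related-feature-point pair and its distance constraint D ≠ 0 can be sketched as:

```python
import math

def feature_distance(r1, r2):
    """Euclidean distance D between two related limb feature points
    R1(X1, Y1, Z1) and R2(X2, Y2, Z2) in the virtual world environment."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))

def is_valid_pair(r1, r2):
    """A limb semantic requires two distinct related feature points: D != 0."""
    return feature_distance(r1, r2) != 0

# Hypothetical right-elbow and right-wrist coordinates.
r1 = (0.30, 1.10, 0.05)
r2 = (0.55, 1.12, 0.05)
print(round(feature_distance(r1, r2), 4))  # ~0.2508
print(is_valid_pair(r1, r2))               # True
```

A pair of coincident points (D = 0) defines no direction and therefore cannot carry a limb semantic.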
Optionally, the determining a body semantic direction vector of each piece of body semantic in the three-dimensional coordinate system specifically includes:
according to the formula
Figure BDA0002010220020000022
Determining a limb semantic direction vector of each limb semantic in the three-dimensional coordinate system; wherein the content of the first and second substances,
Figure BDA0002010220020000023
is a limb semantic direction vector.
Optionally, the determining the direction cosine according to the body semantic direction vector and the direction angle specifically includes:
according to the formula
Figure BDA0002010220020000024
Anddetermining direction cosine; wherein cos alpha is
Figure BDA0002010220020000027
Direction cosine of said X axis, cos beta being
Figure BDA0002010220020000028
Direction cosine of said Y axis, cos gamma being
Figure BDA0002010220020000029
And the direction cosine of the Z axis.
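A minimal sketch of the direction vector and its direction cosines under the notation above (names and sample points are illustrative, not from the patent):

```python
import math

def direction_vector(r1, r2):
    """Limb semantic direction vector from R1 to R2: (X2-X1, Y2-Y1, Z2-Z1)."""
    return tuple(b - a for a, b in zip(r1, r2))

def direction_cosines(r1, r2):
    """Direction cosines (cos alpha, cos beta, cos gamma) of the vector
    R1 -> R2 with the X, Y and Z axes; the denominator is D = |R1R2|."""
    v = direction_vector(r1, r2)
    d = math.sqrt(sum(c * c for c in v))
    return tuple(c / d for c in v)

r1, r2 = (0.0, 0.0, 0.0), (1.0, 2.0, 2.0)
print(direction_cosines(r1, r2))  # approximately (1/3, 2/3, 2/3)
```

The three cosines always satisfy cos²α + cos²β + cos²γ = 1, which is a useful sanity check on captured data.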
Optionally, the establishing a body semantic triggering mechanism according to the direction cosine specifically includes:
comparing the direction cosines on the X axis, the Y axis and the Z axis with 0 to determine a comparison result; the comparison result is: when cos alpha is larger than 0 or cos alpha is smaller than 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the X axis; when cos beta is larger than 0 or cos beta is smaller than 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the Y axis; when cos gamma is larger than 0 or cos gamma is smaller than 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the Z axis;
determining the movement direction of the limb according to the comparison result;
and establishing a limb semantic trigger mechanism according to the limb movement direction.
A human body limb language recognition system comprising:
the system comprises a limb characteristic point acquisition module, a Kinect camera and a database, wherein the limb characteristic point acquisition module is used for constructing a virtual world environment and acquiring limb characteristic points of a human body based on the Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points;
the limb semantic set construction module is used for constructing a limb semantic set according to the limb characteristic points; the limb semantic set comprises a plurality of limb semantics based on the limb semantic feature points; each limb semantic corresponds to at least two related limb characteristic points;
the three-dimensional coordinate system building module is used for building a three-dimensional coordinate system of the virtual world environment by taking the arm of the human body as an X axis, the body as a Y axis, the intersection point of the X axis and the Y axis as an O point and the axis vertical to the XOY plane as a Z axis when the human body is in a cross state;
a body semantic direction vector determining module, configured to determine a body semantic direction vector of each of the body semantics in the three-dimensional coordinate system;
the direction angle acquisition module is used for acquiring direction angles of each limb semantic direction vector with an X axis, a Y axis and a Z axis respectively;
the direction cosine determining module is used for determining direction cosine according to the body semantic direction vector and the direction angle;
the body semantic trigger mechanism determining module is used for establishing a body semantic trigger mechanism according to the direction cosine;
and the human body limb language identification module is used for identifying the human body limb language according to the limb semantic triggering mechanism.
Optionally, the body semantic set constructing module specifically includes:
a limb semantics determining unit, configured to determine the limb semantics according to the formula
D = √((X1 − X2)² + (Y1 − Y2)² + (Z1 − Z2)²), with D ≠ 0;
wherein R1(X1, Y1, Z1) and R2(X2, Y2, Z2) respectively represent the coordinates of the two related limb feature points in the virtual world environment;
and the limb semantic set construction unit is used for constructing a limb semantic set according to the plurality of limb semantics.
Optionally, the body semantic direction vector determining module specifically includes:
a limb semantic direction vector determining unit, configured to determine a limb semantic direction vector of each limb semantic in the three-dimensional coordinate system according to the formula
r = (X2 − X1, Y2 − Y1, Z2 − Z1);
wherein r is the limb semantic direction vector from R1 to R2.
Optionally, the direction cosine determining module specifically includes:
a direction cosine determining unit, configured to determine the direction cosines according to the formulas
cos α = (X2 − X1)/D, cos β = (Y2 − Y1)/D, cos γ = (Z2 − Z1)/D,
where D = |r|; wherein cos α is the direction cosine of r with the X axis, cos β is the direction cosine of r with the Y axis, and cos γ is the direction cosine of r with the Z axis.
Optionally, the body semantic trigger mechanism establishing module specifically includes:
a comparison result determining unit, configured to compare the direction cosines on the X axis, the Y axis and the Z axis with 0 and determine a comparison result; the comparison result is: when cos alpha is larger than 0 or cos alpha is smaller than 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the X axis; when cos beta is larger than 0 or cos beta is smaller than 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the Y axis; when cos gamma is larger than 0 or cos gamma is smaller than 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the Z axis;
the limb movement direction determining unit is used for determining the limb movement direction according to the comparison result;
and the limb semantic trigger mechanism establishing unit is used for establishing a limb semantic trigger mechanism according to the limb movement direction.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: the invention provides a human body limb language identification method and system which take the direction cosines of direction vectors as the theoretical basis of semantic design and the limb feature points of the whole human body as the foundation, and establish a limb semantic trigger mechanism, thereby identifying the operation instructions of the human body more accurately; by adopting uniform limb semantics, the problems of poor universality and weak transferability in traditional gesture semantic identification are solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of a human body language identification method provided by the present invention;
FIG. 2 is a schematic diagram of the spatial positions of 25 limb feature points in a human limb provided by the invention;
FIG. 3 is a schematic diagram of a human body model in a virtual world environment provided by the present invention;
FIG. 4 is a schematic diagram of nine video sequence frames, FIG. 4(a) through FIG. 4(i), of a user interacting with an object in the virtual world according to the present invention;
fig. 5 is a structural diagram of a human body limb language identification system provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a human body limb language identification method and system that solve the problems of poor universality and weak transferability in traditional gesture semantic identification.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a flowchart of a human body language identification method provided by the present invention, and as shown in fig. 1, the human body language identification method includes:
step 101: constructing a virtual world environment, and acquiring limb characteristic points of a human body based on a Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points.
A virtual world environment is constructed, the Kinect camera is used as visual input, the user's limbs within the camera's viewing angle are captured, and they are stored as 25 feature points in a virtual space with depth information.
The Kinect is a consumer-grade device developed by Microsoft Corporation, mainly composed of an infrared emitter, an infrared camera and an ordinary camera; the main function of the second-generation Kinect is to represent the human body as 25 feature points. Having been trained by machine learning on millions of pictures, the Kinect is relatively stable, which has made it a common camera device in human-computer interaction, used as the video information input of a computer.
The invention takes the 25 feature points of a human body captured by the Kinect device, combined with the human body outline of Leonardo da Vinci's famous drawing "Uomo Vitruviano" (Vitruvian Man), as the background of human body limb semantic design. In general, the basic contour of a human body with opened limbs is roughly circular. The Kinect can capture 25 joint points of the human body, as shown in fig. 2, and each joint point has three-dimensional position information; for example, the three-dimensional spatial information of the right elbow is defined as E(XElbow_Right, YElbow_Right, ZElbow_Right), and the three-dimensional spatial position information of the right wrist is defined as W(XWrist_Right, YWrist_Right, ZWrist_Right).
If two of the limb joint points are defined to form a space vector, for example EW, a vector from spatial position E to spatial position W, i.e. the vector in the direction of the right elbow pointing to the right wrist, then the direction vector EW in the three-dimensional coordinates of the virtual world is expressed as
EW = (XWrist_Right − XElbow_Right, YWrist_Right − YElbow_Right, ZWrist_Right − ZElbow_Right).
The 25 feature points of a human body captured by the Kinect carry depth information, and a simple human body model is constructed in the three-dimensional virtual world from the user feature points captured by the Kinect, as shown in fig. 3. All limb actions of the user are reflected in the model person one by one, and the interaction between the user and the virtual world is presented in real time; for example, when the user opens a refrigerator in the virtual world, the user can grasp and open the handle as long as the hand is placed on the handle of the refrigerator door. The main contribution of the present invention is likewise to enable the user to interact with the objects of the virtual world. The invention may be used for operational training, skill assessment, entertainment games, physical exercise, and the like.
Step 102: constructing a limb semantic set according to the limb feature points; the limb semantic set comprises a plurality of limb semantics based on the limb semantic feature points; each of the limb semantics corresponds to at least two related limb feature points.
A certain limb semantic Si is defined together with its related limb feature points. The related limb feature points refer to the set of all limb feature points participating in a certain limb semantic design; unless otherwise specified, a related pair usually consists of two joint points, represented in the virtual space as R1(X1, Y1, Z1) and R2(X2, Y2, Z2), with
D = √((X1 − X2)² + (Y1 − Y2)² + (Z1 − Z2)²) ≠ 0.
Step 103: and when the human body is in a cross state, constructing a three-dimensional coordinate system of the virtual world environment by taking the arm of the human body as an X axis, the body as a Y axis, the intersection point of the X axis and the Y axis as an O point and an axis vertical to the XOY plane as a Z axis.
Step 104: and determining a limb semantic direction vector of each limb semantic in the three-dimensional coordinate system.
A direction vector is constructed based on the feature point expression. The direction vector of the limb semantic Si in the three-dimensional coordinate system of the virtual world is expressed as
r = (X2 − X1, Y2 − Y1, Z2 − Z1).
Step 105: and acquiring direction angles of each limb semantic direction vector with an X axis, a Y axis and a Z axis respectively.
Step 106: and determining the direction cosine according to the direction vector of the limb semantic meaning and the direction angle.
In the three-dimensional coordinate system of the virtual world, the direction angles between the direction vector r and the axes of the coordinate system are defined as α, β and γ. The direction cosines of r with the three axes of the virtual world are then respectively expressed as
cos α = (X2 − X1)/|r|, cos β = (Y2 − Y1)/|r|, cos γ = (Z2 − Z1)/|r|.
The directional cosine has the following characteristics:
(1) In the directions of the three axes (X, Y and Z), each direction cosine has a sign: cos α > 0 or cos α < 0; cos β > 0 or cos β < 0; cos γ > 0 or cos γ < 0; several of these conditions may hold simultaneously.
(2) In the directions of the three axes, the direction cosine changes incrementally between frames: Δcos α > 0 (moving in the positive X direction) or Δcos α < 0 (moving in the negative X direction); Δcos β > 0 (moving in the positive Y direction) or Δcos β < 0 (moving in the negative Y direction); Δcos γ > 0 (moving in the positive Z direction) or Δcos γ < 0 (moving in the negative Z direction); these may also occur in combination.
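The incremental behavior in (2) can be sketched by differencing the direction cosines of two successive frames (the frame coordinates below are hypothetical, not from the patent):

```python
import math

def cosines(p1, p2):
    """Direction cosines of the vector p1 -> p2 with the X, Y, Z axes."""
    v = [b - a for a, b in zip(p1, p2)]
    d = math.sqrt(sum(c * c for c in v))
    return [c / d for c in v]

def cosine_increments(prev_pair, curr_pair):
    """Delta cos alpha/beta/gamma between two successive frames; a positive
    increment means the moving feature point heads toward the positive axis."""
    prev_c = cosines(*prev_pair)
    curr_c = cosines(*curr_pair)
    return [c - p for p, c in zip(prev_c, curr_c)]

# Elbow fixed, wrist swinging toward the positive X axis between frames.
frame0 = ((0.0, 1.0, 0.0), (0.2, 1.4, 0.0))
frame1 = ((0.0, 1.0, 0.0), (0.4, 1.3, 0.0))
d_cos = cosine_increments(frame0, frame1)
print(d_cos[0] > 0)  # Delta cos alpha > 0: motion toward positive X
```

Tracking these increments frame by frame is what turns the static sign test of feature (1) into a motion test.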
Step 107: and establishing a body semantic triggering mechanism according to the direction cosine.
According to the user's behavior habits in human-computer interaction, one or more logical semantic trigger mechanisms are selected from the direction cosine features; if all the limb actions of the feature combination are completed in sequence within a time t < 1 second, an effective trigger command is formed and the user command is sent to the machine.
Step 108: and recognizing human body languages according to the body semantic triggering mechanism.
Storing all the limb semantic designs and trigger mechanisms based on the feature point expression and the direction cosine thereof into a system database to form a semantic library of a human-computer interaction system so as to facilitate human-computer interaction.
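The semantic library described above can be sketched as a mapping from semantic names to trigger predicates over the observed sequence of Δcos α values (all names, and the single "negation" entry, are illustrative assumptions, not the patent's actual library):

```python
def signs(increments):
    """Reduce a sequence of Delta cos values to their signs (+1 / -1)."""
    return [1 if d > 0 else -1 for d in increments if d != 0]

# Toy semantic library: semantic name -> trigger predicate.
SEMANTIC_LIBRARY = {
    # "negation": hand swings right-left-right (or left-right-left) on X.
    "negation": lambda inc: signs(inc) in ([1, -1, 1], [-1, 1, -1]),
}

def recognize(increments):
    """Return the first limb semantic whose trigger predicate matches."""
    for name, trigger in SEMANTIC_LIBRARY.items():
        if trigger(increments):
            return name
    return None

print(recognize([0.3, -0.4, 0.2]))  # negation
print(recognize([0.3, 0.1, 0.2]))   # None
```

New semantics are added by inserting further name/predicate entries, which mirrors the idea of a shared, extensible semantic library.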
A virtual world is constructed based on the Kinect, and a limb semantic representing "negation" is designed. Two related joint points are required to express this limb semantic, namely the right elbow RE(XElbow_Right, YElbow_Right, ZElbow_Right) and the right wrist RW(XWrist_Right, YWrist_Right, ZWrist_Right). It should be noted that, because the structure of the human body is symmetrical, the present invention only takes the right half of the body as an example; by the same logic, the mechanism applies symmetrically to the left half.
Here only the vector from the right elbow RE to the right wrist RW is taken as the analysis object. The constructed direction vector is expressed as
r = (XWrist_Right − XElbow_Right, YWrist_Right − YElbow_Right, ZWrist_Right − ZElbow_Right).
The direction cosine of its included angle α with the X axis is expressed as cos α, and its cosine increment in the X-axis direction as Δcos α; the direction cosine of its included angle β with the Y axis is expressed as cos β, and its cosine increment in the Y-axis direction as Δcos β. A trigger mechanism for the "negation" limb semantic is defined in consideration of the user behavior habits of human-computer interaction: within a time t < 1 second, if Δcos α > 0 → Δcos α < 0 → Δcos α > 0, or Δcos α < 0 → Δcos α > 0 → Δcos α < 0, the person is sending a "negation" command to the machine. Visually, this is a dynamic limb motion with the hand swinging left and right; the displacement occurs in the X-axis and Y-axis directions, but in this embodiment only the X-axis direction needs to be considered to satisfy the trigger logic, which accords with people's habitual way of expressing negation. By the same technical principle, some limb semantic designs are displaced in the Y-axis or Z-axis direction, or in all three directions simultaneously; any human body limb semantic design performed by means of changes in direction vectors falls within the technical scope of the invention.
According to the above method of designing limb semantics, other limb semantic trigger mechanisms are defined; table 1 is a user limb semantic identification table based on the limb feature points and the direction cosines, as shown in table 1.
TABLE 1
The invention emphasizes a mechanism for semantic design, and all logic that uses this idea for limb semantic design falls within the protection scope of the invention. Since practical examples of semantic design using this logical approach are nearly unlimited and difficult to enumerate one by one, the invention enumerates only some of them, shown in table 1. Meanwhile, experimental demonstration of this limb semantic design method has yielded ideal results; the experimental effects are described below.
Fig. 4 is a schematic view of video sequence frames of a user interacting with an object in the virtual world provided by the present invention. The simple model person in the figure is generated in the virtual world from the feature points captured by the Kinect; the picture shows ghosting because it was captured inside the virtual world, whereas through virtual reality glasses the simple model person has no ghosting, as shown in fig. 3. As can be seen from the sequence frames, the limb semantic design method based on user feature point expression and direction vectors can be effectively applied to the human-computer interaction process of the virtual reality world, and can be used in fields such as product testing, operational drills and skill assessment.
Fig. 5 is a structural diagram of a human body limb language identification system provided by the present invention, and as shown in fig. 5, a human body limb language identification system includes:
a limb feature point obtaining module 501, configured to construct a virtual world environment, and obtain a limb feature point of a human body based on a Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points.
A limb semantic set constructing module 502, configured to construct a limb semantic set according to the limb feature points; the limb semantic set comprises a plurality of limb semantics based on the limb semantic feature points; each of the limb semantics corresponds to at least two related limb feature points.
The limb semantic set constructing module 502 specifically includes: a limb semantics determining unit, configured to determine the limb semantics according to the formula D = √((X1 − X2)² + (Y1 − Y2)² + (Z1 − Z2)²) (D ≠ 0); wherein R1(X1, Y1, Z1) and R2(X2, Y2, Z2) respectively represent the coordinates of the limb feature points in the virtual world environment; and a limb semantic set constructing unit, configured to construct a limb semantic set according to the plurality of limb semantics.
And a three-dimensional coordinate system constructing module 503, configured to construct a three-dimensional coordinate system of the virtual world environment by using a human arm as an X-axis, using a human body as a Y-axis, using an intersection of the X-axis and the Y-axis as an O-point, and using an axis perpendicular to the XOY plane as a Z-axis when the human body is in a cross state.
A body semantic direction vector determining module 504, configured to determine a body semantic direction vector of each of the body semantics in the three-dimensional coordinate system.
The limb semantic direction vector determining module 504 specifically includes: a limb semantic direction vector determination unit, configured to determine the limb semantic direction vector of each limb semantic in the three-dimensional coordinate system according to the formula

vector R1R2 = (X2 - X1, Y2 - Y1, Z2 - Z1),

wherein vector R1R2 is the limb semantic direction vector from limb characteristic point R1(X1, Y1, Z1) to limb characteristic point R2(X2, Y2, Z2).
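To make the direction-vector computation above concrete, here is a minimal Python sketch; the function and variable names (and the example coordinates) are ours for illustration, not from the patent:

```python
# Illustrative sketch (names are ours, not the patent's): the limb semantic
# direction vector from feature point R1(X1, Y1, Z1) to R2(X2, Y2, Z2).

def direction_vector(r1, r2):
    """Return the component-wise difference r2 - r1 as a 3-tuple."""
    (x1, y1, z1), (x2, y2, z2) = r1, r2
    return (x2 - x1, y2 - y1, z2 - z1)

# Example: shoulder feature point to wrist feature point (made-up coordinates).
shoulder = (0.0, 1.4, 0.0)
wrist = (0.5, 1.4, 0.2)
print(direction_vector(shoulder, wrist))  # (0.5, 0.0, 0.2)
```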
A direction angle obtaining module 505, configured to obtain the direction angles of each limb semantic direction vector with the X axis, the Y axis, and the Z axis, respectively.
A direction cosine determining module 506, configured to determine a direction cosine according to the direction vector of the limb semantic meaning and the direction angle.
The direction cosine determining module 506 specifically includes: a direction cosine determination unit, configured to determine the direction cosines according to the formulas

cos alpha = (X2 - X1) / |R1R2|, cos beta = (Y2 - Y1) / |R1R2|, cos gamma = (Z2 - Z1) / |R1R2|,

where |R1R2| = sqrt((X2 - X1)^2 + (Y2 - Y1)^2 + (Z2 - Z1)^2); cos alpha is the direction cosine of the limb semantic direction vector with the X axis, cos beta is its direction cosine with the Y axis, and cos gamma is its direction cosine with the Z axis.
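The direction cosines determined by module 506 follow directly from the direction vector: each component is divided by the vector length. A minimal Python sketch (function name ours, not the patent's):

```python
import math

def direction_cosines(v):
    """Direction cosines (cos alpha, cos beta, cos gamma) of vector v
    with the X, Y and Z axes: each component divided by the vector length."""
    x, y, z = v
    length = math.sqrt(x * x + y * y + z * z)
    if length == 0.0:
        raise ValueError("zero-length vector has no direction cosines")
    return (x / length, y / length, z / length)

print(direction_cosines((3.0, 0.0, 4.0)))  # (0.6, 0.0, 0.8)
```

The three cosines always satisfy cos^2 alpha + cos^2 beta + cos^2 gamma = 1, which can serve as a sanity check on the feature-point data.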
And a body semantic trigger mechanism determining module 507, configured to establish a body semantic trigger mechanism according to the direction cosine.
The limb semantic trigger mechanism establishing module 507 specifically includes: a comparison result determining unit, configured to compare each direction cosine on the X axis, the Y axis, and the Z axis with 0 and determine a comparison result; the comparison result is: when cos alpha is larger than 0 or smaller than 0, the currently moving limb characteristic point is determined to move along the positive or negative direction of the X axis, respectively; when cos beta is larger than 0 or smaller than 0, along the positive or negative direction of the Y axis; when cos gamma is larger than 0 or smaller than 0, along the positive or negative direction of the Z axis; a limb movement direction determining unit, configured to determine the limb movement direction according to the comparison result; and a limb semantic trigger mechanism establishing unit, configured to establish the limb semantic trigger mechanism according to the limb movement direction.
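The sign comparison performed by the comparison result determining unit can be sketched as follows; this is a minimal illustration with our own names, and the small dead-band `eps` is our addition to suppress sensor noise, not part of the patent:

```python
def axis_motion(cosines, eps=1e-6):
    """Map the sign of each direction cosine (cos alpha, cos beta, cos gamma)
    to a per-axis movement label: a positive cosine means movement along the
    positive axis direction, a negative cosine along the negative direction."""
    labels = []
    for axis, c in zip(("X", "Y", "Z"), cosines):
        if c > eps:
            labels.append("+" + axis)
        elif c < -eps:
            labels.append("-" + axis)
    return labels

print(axis_motion((0.6, -0.8, 0.0)))  # ['+X', '-Y']
```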
And the human body limb language identification module 508 is used for identifying the human body limb language according to the limb semantic triggering mechanism.
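The claims below further specify trigger rules in terms of the change in the direction cosines (delta cos) within a time window of under one second. The following Python sketch is one possible reading of those rules; the function names, the sign-collapsing logic, and the sampling assumptions are ours, not the patent's:

```python
def _sign(x):
    return (x > 0) - (x < 0)

def classify_limb_semantic(d_cos_alpha, d_cos_beta, elapsed_s):
    """Classify a short movement from sequences of delta-cos samples.

    Our reading of the trigger rules quoted in the claims:
    - an alternating sign pattern of delta cos alpha within t < 1 s
      means the swinging 'negative' semantic (e.g. a hand shake);
    - delta cos alpha > 0 with delta cos beta < 0 means 'clockwise';
    - delta cos alpha < 0 with delta cos beta > 0 means 'anticlockwise'.
    """
    if elapsed_s >= 1.0:
        return None  # outside the one-second trigger window
    signs = [_sign(d) for d in d_cos_alpha if d != 0]
    # Collapse runs of equal signs, then look for at least +,-,+ or -,+,-.
    collapsed = [s for i, s in enumerate(signs) if i == 0 or s != signs[i - 1]]
    if len(collapsed) >= 3:
        return "negative"
    if d_cos_alpha and d_cos_beta:
        if d_cos_alpha[-1] > 0 and d_cos_beta[-1] < 0:
            return "clockwise"
        if d_cos_alpha[-1] < 0 and d_cos_beta[-1] > 0:
            return "anticlockwise"
    return None

print(classify_limb_semantic([0.1, -0.1, 0.1], [0.0, 0.0, 0.0], 0.5))  # negative
```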
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts the embodiments may be referred to one another. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points can be found in the description of the method.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and core concept of the invention; meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the application range. In summary, the content of this specification should not be construed as limiting the invention.

Claims (8)

1. A human body limb language identification method is characterized by comprising the following steps:
constructing a virtual world environment, and acquiring limb characteristic points of a human body based on a Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points;
constructing a limb semantic set according to the limb characteristic points; the limb semantic set comprises a plurality of limb semantics based on the limb characteristic points; each limb semantic corresponds to at least two related limb characteristic points;
when the human body is in a cross state, constructing a three-dimensional coordinate system of the virtual world environment by taking the arm of the human body as an X axis, the body as a Y axis, the intersection point of the X axis and the Y axis as an O point and an axis vertical to an XOY plane as a Z axis;
determining a limb semantic direction vector of each limb semantic in the three-dimensional coordinate system;
acquiring direction angles of each limb semantic direction vector with an X axis, a Y axis and a Z axis respectively;
determining direction cosine according to the body semantic direction vector and the direction angle;
establishing a body semantic triggering mechanism according to the direction cosine;
identifying the human body limb language according to the limb semantic triggering mechanism, which specifically comprises the following steps:
defining a triggering mechanism for the 'negative' limb semantic: within a time t of less than 1 second, if delta cos alpha > 0 → delta cos alpha < 0 → delta cos alpha > 0, or delta cos alpha < 0 → delta cos alpha > 0 → delta cos alpha < 0, the limb movement is a swinging dynamic limb movement; defining a triggering mechanism for the 'clockwise rotation' limb semantic: within a time t of less than 1 second, if delta cos alpha > 0 and delta cos beta < 0, the limb movement direction is clockwise rotation; defining a triggering mechanism for the 'anticlockwise rotation' limb semantic: within a time t of less than 1 second, if delta cos alpha < 0 and delta cos beta > 0, the limb movement direction is anticlockwise rotation;
the establishing of the body semantic triggering mechanism according to the direction cosine specifically includes:
comparing the direction cosines on the X axis, the Y axis and the Z axis with 0 to determine a comparison result; the comparison result is: when cos alpha is larger than 0 or cos alpha is smaller than 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the X axis; when cos beta is larger than 0 or cos beta is smaller than 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the Y axis; and when cos gamma is larger than 0 or cos gamma is smaller than 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the Z axis.
2. The human body limb language identification method according to claim 1, wherein the constructing a limb semantic set according to the limb feature points specifically comprises:
according to a formula rendered as an embedded image in the original publication (not reproduced in this text), determining the limb semantics; wherein R1(X1, Y1, Z1) and R2(X2, Y2, Z2) respectively represent the coordinates of the limb characteristic points in the virtual world environment;
and constructing a limb semantic set according to the plurality of limb semantics.
3. The method for recognizing human body limb languages according to claim 2, wherein the determining a limb semantic direction vector of each of the limb semantics in the three-dimensional coordinate system specifically comprises:
according to the formula vector R1R2 = (X2 - X1, Y2 - Y1, Z2 - Z1), determining the limb semantic direction vector of each limb semantic in the three-dimensional coordinate system; wherein vector R1R2 is the limb semantic direction vector.
4. The human body limb language identification method according to claim 3, wherein the determining the direction cosine according to the limb semantic direction vector and the direction angle specifically comprises:
according to the formulas cos alpha = (X2 - X1) / |R1R2|, cos beta = (Y2 - Y1) / |R1R2| and cos gamma = (Z2 - Z1) / |R1R2|, where |R1R2| = sqrt((X2 - X1)^2 + (Y2 - Y1)^2 + (Z2 - Z1)^2), determining the direction cosines; wherein cos alpha is the direction cosine of the limb semantic direction vector with the X axis, cos beta is its direction cosine with the Y axis, and cos gamma is its direction cosine with the Z axis.
5. A human body limb language identification system, comprising:
the system comprises a limb characteristic point acquisition module, a Kinect camera and a database, wherein the limb characteristic point acquisition module is used for constructing a virtual world environment and acquiring limb characteristic points of a human body based on the Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points;
the limb semantic set construction module is used for constructing a limb semantic set according to the limb characteristic points; the limb semantic set comprises a plurality of limb semantics based on the limb characteristic points; each limb semantic corresponds to at least two related limb characteristic points;
the three-dimensional coordinate system building module is used for building a three-dimensional coordinate system of the virtual world environment by taking the arm of the human body as an X axis, the body as a Y axis, the intersection point of the X axis and the Y axis as an O point and the axis vertical to the XOY plane as a Z axis when the human body is in a cross state;
a body semantic direction vector determining module, configured to determine a body semantic direction vector of each of the body semantics in the three-dimensional coordinate system;
the direction angle acquisition module is used for acquiring direction angles of each limb semantic direction vector with an X axis, a Y axis and a Z axis respectively;
the direction cosine determining module is used for determining direction cosine according to the body semantic direction vector and the direction angle;
the body semantic trigger mechanism determining module is used for establishing a body semantic trigger mechanism according to the direction cosine;
the human body limb language identification module is used for identifying the human body limb language according to the limb semantic triggering mechanism, and specifically comprises:
defining a triggering mechanism for the 'negative' limb semantic: within a time t of less than 1 second, if delta cos alpha > 0 → delta cos alpha < 0 → delta cos alpha > 0, or delta cos alpha < 0 → delta cos alpha > 0 → delta cos alpha < 0, the limb movement is a swinging dynamic limb movement; defining a triggering mechanism for the 'clockwise rotation' limb semantic: within a time t of less than 1 second, if delta cos alpha > 0 and delta cos beta < 0, the limb movement direction is clockwise rotation; defining a triggering mechanism for the 'anticlockwise rotation' limb semantic: within a time t of less than 1 second, if delta cos alpha < 0 and delta cos beta > 0, the limb movement direction is anticlockwise rotation;
the limb semantic trigger mechanism establishing module specifically comprises:
a comparison result determining unit, configured to compare each direction cosine on the X axis, the Y axis, and the Z axis with 0 and determine a comparison result; the comparison result is: when cos alpha is larger than 0 or smaller than 0, determining that the currently moving limb characteristic point moves along the positive or negative direction of the X axis, respectively; when cos beta is larger than 0 or smaller than 0, along the positive or negative direction of the Y axis; and when cos gamma is larger than 0 or smaller than 0, along the positive or negative direction of the Z axis.
6. The human body limb language identification system according to claim 5, wherein the limb semantic set construction module specifically comprises:
a limb semantics determining unit, configured to determine the limb semantics according to a formula rendered as an embedded image in the original publication (not reproduced in this text); wherein R1(X1, Y1, Z1) and R2(X2, Y2, Z2) respectively represent the coordinates of the limb characteristic points in the virtual world environment;
and the limb semantic set construction unit is used for constructing a limb semantic set according to the plurality of limb semantics.
7. The human body limb language identification system according to claim 6, wherein the limb semantic direction vector determination module specifically comprises:
a limb semantic direction vector determination unit, configured to determine the limb semantic direction vector of each limb semantic in the three-dimensional coordinate system according to the formula vector R1R2 = (X2 - X1, Y2 - Y1, Z2 - Z1); wherein vector R1R2 is the limb semantic direction vector.
8. The system according to claim 7, wherein the direction cosine determining module specifically comprises:
a direction cosine determination unit, configured to determine the direction cosines according to the formulas cos alpha = (X2 - X1) / |R1R2|, cos beta = (Y2 - Y1) / |R1R2| and cos gamma = (Z2 - Z1) / |R1R2|, where |R1R2| = sqrt((X2 - X1)^2 + (Y2 - Y1)^2 + (Z2 - Z1)^2); wherein cos alpha is the direction cosine of the limb semantic direction vector with the X axis, cos beta is its direction cosine with the Y axis, and cos gamma is its direction cosine with the Z axis.
CN201910242917.5A 2019-03-28 2019-03-28 Human body limb language identification method and system Active CN109948579B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910242917.5A CN109948579B (en) 2019-03-28 2019-03-28 Human body limb language identification method and system


Publications (2)

Publication Number Publication Date
CN109948579A CN109948579A (en) 2019-06-28
CN109948579B true CN109948579B (en) 2020-01-24

Family

ID=67011939


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8929600B2 (en) * 2012-12-19 2015-01-06 Microsoft Corporation Action recognition based on depth maps
CN105930795A (en) * 2016-04-20 2016-09-07 东北大学 Walking state identification method based on space vector between human body skeleton joints
CN106650687B (en) * 2016-12-30 2020-05-19 山东大学 Posture correction method based on depth information and skeleton information
CN107908288A (en) * 2017-11-30 2018-04-13 沈阳工业大学 A kind of quick human motion recognition method towards human-computer interaction
CN109344694B (en) * 2018-08-13 2022-03-22 西安理工大学 Human body basic action real-time identification method based on three-dimensional human body skeleton


Similar Documents

Publication Publication Date Title
Borst et al. Realistic virtual grasping
Li et al. A web-based sign language translator using 3d video processing
Sun et al. Augmented reality based educational design for children
CN110069133A (en) Demo system control method and control system based on gesture identification
Yeh et al. An integrated system: virtual reality, haptics and modern sensing technique (VHS) for post-stroke rehabilitation
Zaldívar-Colado et al. A mixed reality for virtual assembly
Boruah et al. Development of a learning-aid tool using hand gesture based human computer interaction system
Li et al. Gesture recognition based on Kinect v2 and leap motion data fusion
CN109948579B (en) Human body limb language identification method and system
Bers A body model server for human motion capture and representation
Vyas et al. Gesture recognition and control
CN110032958B (en) Human body limb language identification method and system
Du et al. A mobile gesture interaction method for augmented reality games using hybrid filters
Pilatásig et al. Interactive system for hands and wrist rehabilitation
Shi et al. Grasping 3d objects with virtual hand in vr environment
Spanogianopoulos et al. Human computer interaction using gestures for mobile devices and serious games: A review
Mumbare et al. Software Controller using Hand Gestures
Thakar et al. Hand gesture controlled gaming application
Jiang et al. A brief analysis of gesture recognition in VR
Chen et al. Dual quaternion based virtual hand interaction modeling
Kavakli Gesture recognition in virtual reality
Xue et al. Gesture interaction and augmented reality based hand rehabilitation supplementary system
Bernardes et al. Comprehensive model and image-based recognition of hand gestures for interaction in 3D environments
Zhao et al. Virtual assembly operations with grasp and verbal interaction
Lee et al. Multifinger interaction between remote users in avatar‐mediated telepresence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant