CN110032958B - Human body limb language identification method and system - Google Patents


Info

Publication number
CN110032958B
CN110032958B (application CN201910242558.3A)
Authority
CN
China
Prior art keywords: limb, semantic, axis, characteristic points, semantic feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910242558.3A
Other languages
Chinese (zh)
Other versions
CN110032958A (en
Inventor
伍穗颖
柯茂旭
王筠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Frontop Digital Originality Technology Co Ltd
Original Assignee
Guangzhou Frontop Digital Originality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Frontop Digital Originality Technology Co Ltd filed Critical Guangzhou Frontop Digital Originality Technology Co Ltd
Priority to CN201910242558.3A
Publication of CN110032958A
Application granted
Publication of CN110032958B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

The invention discloses a human body limb language identification method and a human body limb language identification system. The identification method comprises the following steps: constructing a virtual world environment, and acquiring limb characteristic points of a human body based on a Kinect camera; constructing a limb semantic set according to the limb characteristic points; establishing a relative three-dimensional coordinate system by taking the coordinate of any limb semantic feature point in any limb semantic as an origin; determining a direction vector of any limb semantic based on the limb semantic feature points according to a relative three-dimensional coordinate system; establishing a body semantic triggering mechanism according to the direction vector; and recognizing human body language according to the body semantic triggering mechanism. The identification method and the system provided by the invention have strong universality and can accurately identify the human body language.

Description

Human body limb language identification method and system
Technical Field
The invention relates to the field of man-machine interaction, in particular to a human body limb language identification method and system.
Background
The technical background of the invention is human-computer interaction in a three-dimensional virtual world. In such interaction, a computer identifies the limb actions of a human through a camera and judges the human's operation intention, i.e., the command the human transmits to the machine; the machine receives the command and makes feedback, completing the interaction between the human in the real world and the machine in the virtual world. In this process, the machine detecting the limb action of the human body and identifying the semantics the action represents is the key to the whole human-computer interaction.
Generally, most existing semantic design methods based on human body features do not follow a fixed logic: some are based on changes in the position information of human body feature points, some on the time-sequence logic of feature points, and some on a combination of the two. Because no fixed logic is followed, research in the field of semantic design has produced no systematized theory or widely influential application; each research group has its own method, the effects vary, the methods cannot be used in common, and their propagation is weak. Moreover, semantic design involving the whole human body in the virtual world is rare, and the specific operation instructions of a real human cannot be accurately identified from gesture semantics alone.
Disclosure of Invention
The invention aims to provide a human body limb language identification method and system, to solve the problems that traditional gesture semantic identification methods have poor universality and weak propagation and cannot accurately identify operation instructions.
In order to achieve the purpose, the invention provides the following scheme:
a human body limb language identification method comprises the following steps:
constructing a virtual world environment, and acquiring limb characteristic points of a human body based on a Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points;
constructing a limb semantic set according to the limb feature points; the limb semantic set comprises a plurality of limb semantics based on the limb semantic feature points; each limb semantic corresponds to at least two related limb characteristic points;
establishing a relative three-dimensional coordinate system by taking the coordinate of any limb semantic feature point in any limb semantic as an origin;
determining a direction vector of any one of the limb semantics based on the limb semantic feature points according to the relative three-dimensional coordinate system;
establishing a body semantic triggering mechanism according to the direction vector;
and recognizing human body languages according to the body semantic triggering mechanism.
Optionally, the establishing a relative three-dimensional coordinate system with the coordinate of any of the semantic feature points of the limb in any of the semantic meanings as an origin specifically includes:
establishing a relative three-dimensional coordinate system by taking the coordinate of any limb semantic feature point in any limb semantic as the origin, taking the horizontal direction of the human body when the human body is in a cross ("T") pose as the X axis, taking the vertical direction of the human body as the Y axis, taking the intersection point of the X axis and the Y axis as the point O, and taking the axis perpendicular to the XOY plane as the Z axis.
Optionally, after the establishing a relative three-dimensional coordinate system with the coordinate of any of the semantic feature points of the limb in any of the limb semantics as an origin, the method further includes:
and acquiring the relative coordinates of the limb semantic feature points related to any limb semantic feature point in any limb semantic.
Optionally, the determining, according to the relative three-dimensional coordinate system, a direction vector of any one of the limb semantics based on the limb semantic feature points specifically includes:
and determining a direction vector of any one of the limb semantics based on the limb semantic feature points according to the coordinates of any one of the limb semantic feature points and the relative coordinates.
Optionally, the establishing a body semantic triggering mechanism according to the direction vector specifically includes:
comparing the direction vector with 0 to determine a comparison result; the comparison result is: when Rx2 - Rx1 > 0 or Rx2 - Rx1 < 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the X axis; when Ry2 - Ry1 > 0 or Ry2 - Ry1 < 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the Y axis; when Rz2 - Rz1 > 0 or Rz2 - Rz1 < 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the Z axis; wherein Rx1, Ry1 and Rz1 are the x-axis, y-axis and z-axis coordinates of any limb semantic feature point, and Rx2, Ry2 and Rz2 are the x-axis, y-axis and z-axis coordinates of the limb semantic feature point related to that limb semantic feature point;
determining the movement direction of the limb according to the comparison result;
and establishing a limb semantic trigger mechanism according to the limb movement direction.
A human body limb language recognition system comprising:
the system comprises a limb characteristic point acquisition module, a Kinect camera and a database, wherein the limb characteristic point acquisition module is used for constructing a virtual world environment and acquiring limb characteristic points of a human body based on the Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points;
the limb semantic set construction module is used for constructing a limb semantic set according to the limb characteristic points; the limb semantic set comprises a plurality of limb semantics based on the limb semantic feature points; each limb semantic corresponds to at least two related limb characteristic points;
a relative three-dimensional coordinate system establishing module, which establishes a relative three-dimensional coordinate system by taking the coordinate of any limb semantic feature point in any limb semantic as an origin;
the direction vector determination module is used for determining a direction vector of any one of the limb semantics based on the limb semantic feature points according to the relative three-dimensional coordinate system;
the body semantic trigger mechanism establishing module is used for establishing a body semantic trigger mechanism according to the direction vector;
and the human body limb language identification module is used for identifying the human body limb language according to the limb semantic triggering mechanism.
Optionally, the relative three-dimensional coordinate system establishing module specifically includes:
a relative three-dimensional coordinate system establishing unit for establishing a relative three-dimensional coordinate system by taking the coordinate of any limb semantic feature point in any limb semantic as the origin, taking the horizontal direction of the human body when the human body is in a cross ("T") pose as the X axis, taking the vertical direction of the human body as the Y axis, taking the intersection point of the X axis and the Y axis as the point O, and taking the axis perpendicular to the XOY plane as the Z axis.
Optionally, the method further includes:
and the relative coordinate acquisition module is used for acquiring the relative coordinates of the limb semantic feature points related to any limb semantic feature point in any limb semantic.
Optionally, the direction vector determining module specifically includes:
a direction vector determining unit for determining the direction vector of any limb semantic based on the limb semantic feature points according to the coordinates of any limb semantic feature point and the relative coordinates.
Optionally, the body semantic trigger mechanism establishing module specifically includes:
a comparison result determining unit for comparing the direction vector with 0 to determine a comparison result; the comparison result is: when Rx2 - Rx1 > 0 or Rx2 - Rx1 < 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the X axis; when Ry2 - Ry1 > 0 or Ry2 - Ry1 < 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the Y axis; when Rz2 - Rz1 > 0 or Rz2 - Rz1 < 0, determining that the currently moving limb characteristic point moves along the positive direction or the negative direction of the Z axis; wherein Rx1, Ry1 and Rz1 are the x-axis, y-axis and z-axis coordinates of any limb semantic feature point, and Rx2, Ry2 and Rz2 are the x-axis, y-axis and z-axis coordinates of the limb semantic feature point related to that limb semantic feature point;
the limb movement direction determining unit is used for determining the limb movement direction according to the comparison result;
and the limb semantic trigger mechanism establishing unit is used for establishing a limb semantic trigger mechanism according to the limb movement direction.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: the invention provides a human body limb language identification method and system that take the direction vector (and its direction cosines) as the theoretical basis of semantic design and establish a limb semantic trigger mechanism based on the limb feature points of the whole human body, so that the operation instructions of the human body are identified more accurately.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flowchart of a human body language identification method provided by the present invention;
FIG. 2 is a schematic diagram of the spatial positions of 25 limb feature points in a human limb provided by the invention;
FIG. 3 is a schematic diagram of a human body model in a virtual world environment provided by the present invention;
FIG. 4 is a schematic diagram of a plurality of video sequence frames for user interaction with an object in a virtual world according to the present invention; FIG. 4(a) is a schematic view of a first video sequence frame of a user interacting with an object in a virtual world according to the present invention; FIG. 4(b) is a schematic diagram of a second video sequence frame for user interaction with an object in the virtual world according to the present invention; FIG. 4(c) is a schematic diagram of a third video sequence frame for user interaction with an object in the virtual world according to the present invention; FIG. 4(d) is a diagram of a fourth video sequence frame for user interaction with an object in the virtual world according to the present invention; FIG. 4(e) is a schematic diagram of a fifth video sequence frame for user interaction with an object in the virtual world according to the present invention; FIG. 4(f) is a diagram of a sixth video sequence frame for user interaction with an object in the virtual world, according to the present invention; FIG. 4(g) is a diagram of a seventh video sequence frame for user interaction with an object in the virtual world according to the present invention; FIG. 4(h) is a schematic view of an eighth video sequence frame for user interaction with an object in the virtual world, according to the present invention; FIG. 4(i) is a diagram of a ninth video sequence frame for user interaction with an object in the virtual world according to the present invention;
fig. 5 is a structural diagram of a human body limb language identification system provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a human body limb language identification method and a human body limb language identification system, which solve the problems of poor generality and poor propagation performance of traditional gesture semantic identification.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a flowchart of a human body language identification method provided by the present invention, and as shown in fig. 1, the human body language identification method includes:
step 101: constructing a virtual world environment, and acquiring limb characteristic points of a human body based on a Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points.
And constructing a virtual world environment, using the Kinect camera as visual input, capturing limbs of a user within the visual angle of the Kinect camera, and storing the limbs in a virtual space with depth information in a mode of 25 characteristic points.
The Kinect is a consumer-grade device developed by Microsoft, mainly comprising an infrared emitter, an infrared camera, and an ordinary (RGB) camera; the main function of the second-generation Kinect is to represent the human body as 25 feature points. Its advantage is that, after machine learning on millions of images, it is relatively stable, so it has become a common camera device in human-computer interaction, serving as the computer's video input.
The invention takes the 25 feature points of a human body captured by the Kinect device, combined with the human outline of Da Vinci's famous drawing "Uomo Vitruviano" (the Vitruvian Man), as the background of human limb semantic design. For a typical human contour, the basic outline with the limbs spread is a circle. The Kinect can capture 25 joint points of the human body, each carrying three-dimensional spatial position information, as shown in FIG. 2. For example, the three-dimensional spatial position of the right elbow is defined as E(X_Elbow_Right, Y_Elbow_Right, Z_Elbow_Right), and that of the right wrist as W(X_Wrist_Right, Y_Wrist_Right, Z_Wrist_Right).
Any two limb joint points define a space vector. For example, EW denotes the vector from spatial position E to spatial position W, i.e. the vector in the direction from the right elbow to the right wrist. In the three-dimensional coordinates of the virtual world, the direction vector EW is expressed as
EW = (X_Wrist_Right - X_Elbow_Right, Y_Wrist_Right - Y_Elbow_Right, Z_Wrist_Right - Z_Elbow_Right).
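The elbow-to-wrist vector described above can be sketched in a few lines. This is an illustrative sketch only: the function name and the sample joint coordinates are assumptions of this example, not part of the patent or of the Kinect SDK.

```python
# Minimal sketch of the direction vector EW = W - E between two joint
# positions given as (x, y, z) tuples in virtual-world coordinates.
# The sample coordinates below are made up for illustration.

def direction_vector(e, w):
    """Vector from spatial position e (e.g. the right elbow) to w (the right wrist)."""
    return tuple(wi - ei for ei, wi in zip(e, w))

elbow = (1.0, 2.0, 3.0)   # E(X_Elbow_Right, Y_Elbow_Right, Z_Elbow_Right)
wrist = (4.0, 6.0, 8.0)   # W(X_Wrist_Right, Y_Wrist_Right, Z_Wrist_Right)
ew = direction_vector(elbow, wrist)  # (3.0, 4.0, 5.0)
```

The same subtraction yields the vector between any pair of related joint points, which is what the later trigger mechanisms operate on.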
The 25 feature points of a human body captured by the Kinect carry depth information. A simple human model is constructed in the three-dimensional virtual world from the captured user feature points, as shown in FIG. 3; every limb action of the user is reflected one-to-one in the model person, and the interaction between the user and the virtual world is presented in real time. For example, when the user opens a refrigerator in the virtual world, placing a hand on the handle of the refrigerator door is enough to grasp and open it. The main contribution of the present invention is precisely to enable the user to interact with the objects of the virtual world. The invention may be used for operational training, skill assessment, entertainment games, physical exercise, and the like.
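The refrigerator-handle interaction above can be hedged as a simple proximity test between a hand joint and an object. The `reach` threshold, the function name, and the coordinates are assumptions made for this sketch; the patent does not specify how grasping is detected.

```python
# Hypothetical proximity test for grasping an object in the virtual world:
# the hand joint may "grab" the handle when within `reach` metres of it.

def can_grab(hand, handle, reach=0.15):
    """True when the Euclidean distance from hand to handle is within reach."""
    dx, dy, dz = (h - t for h, t in zip(hand, handle))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 <= reach

can_grab((0.0, 1.0, 0.5), (0.1, 1.0, 0.5))  # True: hand is 10 cm away
can_grab((0.0, 1.0, 0.5), (1.0, 1.0, 0.5))  # False: hand is 1 m away
```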
Step 102: constructing a limb semantic set according to the limb feature points; the limb semantic set comprises a plurality of limb semantics based on the limb semantic feature points; each of the limb semantics corresponds to at least two related limb feature points.
Define a certain limb semantic S_i and its related limb feature points. The related limb feature points are the set of all limb feature points that participate in the design of a given limb semantic; unless otherwise specified, one limb semantic is usually composed of two related joint points, represented in virtual space as R1(X1, Y1, Z1) and R2(X2, Y2, Z2).
Step 103: and establishing a relative three-dimensional coordinate system by taking the coordinate of any limb semantic feature point in any limb semantic as an origin.
The relative three-dimensional coordinate system takes one of the joint points as its origin and references the spatial positions of the other joint points to it. Thus, in the relative three-dimensional coordinate system, the limb semantic S_i carries the relative coordinate position information of its two related joint points: R1 has the coordinates (0, 0, 0) and R2 has the coordinates (X2 - X1, Y2 - Y1, Z2 - Z1).
Step 104: and determining a direction vector of any one of the limb semantics based on the limb semantic feature points according to the relative three-dimensional coordinate system.
Step 105: and establishing a body semantic triggering mechanism according to the direction vector.
The direction vector of the limb semantic S_i in the relative three-dimensional coordinate system is expressed as
s_i = (X2 - X1, Y2 - Y1, Z2 - Z1)
and has the following characteristics:
(1) In the directions of the three axes X, Y and Z, its components satisfy X2 - X1 > 0 or X2 - X1 < 0, Y2 - Y1 > 0 or Y2 - Y1 < 0, Z2 - Z1 > 0 or Z2 - Z1 < 0, or several of these simultaneously;
(2) In the directions of the three axes X, Y and Z, the components of the direction vector change over time:
X2 - X1 increasing (moving in the positive X-axis direction) or decreasing (moving in the negative X-axis direction);
Y2 - Y1 increasing (moving in the positive Y-axis direction) or decreasing (moving in the negative Y-axis direction);
Z2 - Z1 increasing (moving in the positive Z-axis direction) or decreasing (moving in the negative Z-axis direction);
or a combination of the above.
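Characteristics (1) and (2) amount to reading the sign of each direction-vector component. A minimal sketch of that classification follows; the function name, label format, and the `eps` dead-band are my own illustrative choices, not terms from the patent.

```python
def axis_motion(v, eps=1e-6):
    """Classify per-axis motion from the sign of each direction-vector
    component: '+X' means motion along the positive X axis, '-X' along the
    negative X axis, and so on. Components within eps of zero are treated
    as no motion on that axis."""
    labels = []
    for comp, axis in zip(v, "XYZ"):
        if comp > eps:
            labels.append("+" + axis)
        elif comp < -eps:
            labels.append("-" + axis)
    return labels

axis_motion((0.2, -0.1, 0.0))  # ['+X', '-Y']
```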
Step 106: and recognizing human body languages according to the body semantic triggering mechanism.
According to the behavior habits of users in human-computer interaction, one or more logical semantic trigger conditions are selected from the direction vector characteristics. If, within a time t < 1 second, the user completes all the limb actions of the feature combination in sequence, an effective trigger command is formed and the user command is sent to the machine.
For example, taking the design of the limb semantic representing "negation": under the condition t < 1 second, the X component of the direction vector EW successively satisfies
X_Wrist_Right - X_Elbow_Right > 0, then < 0, then > 0
(the sign alternating as the hand swings), whereupon a "no" instruction is sent to the system.
Storing all the limb semantic design and triggering mechanisms based on the feature points and the direction vectors thereof into a system database to form a semantic library of a human-computer interaction system so as to facilitate human-computer interaction.
A virtual world is constructed based on the Kinect, and a limb semantic representing "negation" is designed. Two related joint points are required to express this limb semantic: the right elbow R_E(X_Elbow_Right, Y_Elbow_Right, Z_Elbow_Right) and the right wrist R_W(X_Wrist_Right, Y_Wrist_Right, Z_Wrist_Right). It should be noted that, because the structure of the human body is symmetrical, this document takes only the right half of the body as an example; the same logic applies to the left half and to mirror-symmetric events. Here only the right elbow R_E and the right wrist R_W are taken as the analysis object. A relative three-dimensional coordinate system is constructed with the right elbow as the origin, R_E(0, 0, 0); the relative three-dimensional position of its related joint point, the right wrist, is then R_W(X_Wrist_Right - X_Elbow_Right, Y_Wrist_Right - Y_Elbow_Right, Z_Wrist_Right - Z_Elbow_Right), and the component direction vector is expressed as
EW = (X_Wrist_Right - X_Elbow_Right, Y_Wrist_Right - Y_Elbow_Right, Z_Wrist_Right - Z_Elbow_Right).
If only the X-axis direction is considered, the relevant component is X_Wrist_Right - X_Elbow_Right.
A trigger mechanism for the "negation" limb semantic is then defined. Considering the user behavior habits of human-computer interaction: within time t < 1 second, if X_Wrist_Right - X_Elbow_Right > 0 and X_Wrist_Right - X_Elbow_Right < 0 occur alternately, the person is sending a "no" command to the machine. Viewed as a picture, the motion is a dynamic limb motion with the hand swinging left and right; the displacement occurs in both the X-axis and Y-axis directions, but in this embodiment only the X-axis direction needs to be considered to reach the logic of the trigger command, which accords with the behavior habit of people expressing negation. By the same technical principle, some limb semantic designs are displaced in the Y-axis or Z-axis direction, or in all three directions simultaneously; as long as human limb semantic design is performed through changes of direction vectors, it falls within the technical scope of the invention.
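The t < 1 second "negation" trigger can be sketched as a sign-alternation test on the X component of EW over timestamped samples. The sampling format, the sliding-window scan, and the two-flip threshold are assumptions of this sketch; the patent states only that the sign alternates within the time window.

```python
def is_negation(samples, window=1.0, min_flips=2):
    """samples: chronological (t, x) pairs, where x is the X component of
    the elbow-to-wrist direction vector. Returns True when the sign of x
    flips at least `min_flips` times within some `window`-second span,
    i.e. the hand swings left-right quickly enough."""
    for i, (t0, _) in enumerate(samples):
        flips, prev = 0, None
        for t, x in samples[i:]:
            if t - t0 > window:
                break
            sign = (x > 0) - (x < 0)
            if sign == 0:          # ignore samples with no X displacement
                continue
            if prev is not None and sign != prev:
                flips += 1
            prev = sign
        if flips >= min_flips:
            return True
    return False

is_negation([(0.0, 0.2), (0.3, -0.2), (0.6, 0.2)])  # True: two flips in 0.6 s
is_negation([(0.0, 0.2), (0.8, -0.2), (1.6, 0.2)])  # False: flips too slow
```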
According to the above method for designing the body semantics, other embodiments are defined, and table 1 is a user body semantics identification table based on the body feature points and the direction vectors, as shown in table 1.
TABLE 1
[Table 1 is reproduced as images in the original publication; its per-semantic feature-point and direction-vector entries are not recoverable from the text.]
From the steps and technical route of the invention, the invention emphasizes a mechanism for semantic design, and all logic that uses this idea to carry out limb semantic design falls within the protection scope of the invention. Since practical examples of semantic design using this logic are nearly unlimited and difficult to enumerate one by one, the invention lists only the examples shown in Table 1. Meanwhile, experimental demonstration of the limb semantic design method obtained the desired effect, as the following experimental results show.
Fig. 4 is a schematic diagram of video sequence frames of a user interacting with an object in the virtual world. The simple model person in the figure is generated from the feature points captured by the Kinect. The picture shows ghosting because it was captured inside the virtual world; with virtual reality glasses worn there is no ghosting, as shown in FIG. 3. As can be seen from the sequence frames, the limb semantic design method based on user feature points and direction vectors can be effectively applied to the human-computer interaction process of the virtual reality world, and can be used in fields such as product testing, operation drills, and skill assessment.
Fig. 5 is a structural diagram of a human body limb language identification system provided by the present invention, and as shown in fig. 5, a human body limb language identification system includes:
a limb feature point obtaining module 501, configured to construct a virtual world environment, and obtain a limb feature point of a human body based on a Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points.
A limb semantic set constructing module 502, configured to construct a limb semantic set according to the limb feature points; the limb semantic set comprises a plurality of limb semantics based on the limb semantic feature points; each of the limb semantics corresponds to at least two related limb feature points.
The body semantic set constructing module 502 specifically includes: a limb semantic determining unit for determining a limb semantic according to the formula
s_i = (X2 - X1, Y2 - Y1, Z2 - Z1),
wherein R1(X1, Y1, Z1) and R2(X2, Y2, Z2) respectively represent the coordinates of the related limb feature points in the virtual world environment; and a limb semantic set constructing unit for constructing the limb semantic set according to the plurality of limb semantics.
A relative three-dimensional coordinate system establishing module 503, configured to establish a relative three-dimensional coordinate system with the coordinate of any limb semantic feature point in any limb semantic as the origin.
The relative three-dimensional coordinate system establishing module 503 specifically includes: a relative three-dimensional coordinate system establishing unit, configured to establish a relative three-dimensional coordinate system by taking the coordinate of any limb semantic feature point in any limb semantic as the origin, taking the horizontal direction of the human body in the cross (T-pose) state as the X axis, the vertical direction of the human body as the Y axis, the intersection of the X axis and the Y axis as the point O, and the axis perpendicular to the XOY plane as the Z axis.
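The relative frame construction can be sketched as a translation of coordinates into the frame whose origin is the chosen feature point. As a simplifying assumption for illustration, the relative axes (X horizontal, Y along the body, Z perpendicular to the XOY plane) are taken to be aligned with the world axes when the body is in the cross state, so only the origin shift is needed; a general implementation would also apply a rotation.

```python
def to_relative(origin, point):
    """Express `point` in the relative three-dimensional coordinate
    system whose origin is the chosen limb semantic feature point.
    Assumes the relative axes are aligned with the world axes."""
    return tuple(p - o for p, o in zip(point, origin))
```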
The system further includes: a relative coordinate obtaining module, configured to obtain the relative coordinates of the limb semantic feature points related to any limb semantic feature point in any limb semantic.
A direction vector determination module 504, configured to determine, according to the relative three-dimensional coordinate system, a direction vector of any one of the limb semantics based on the limb semantic feature points.
The direction vector determining module 504 specifically includes: a direction vector determining unit, configured to determine a direction vector of any limb semantic based on the limb semantic feature points according to the coordinates of the limb semantic feature point and the relative coordinates.
And a limb semantic trigger mechanism establishing module 505, configured to establish a limb semantic trigger mechanism according to the direction vector.
The limb semantic trigger mechanism establishing module 505 specifically includes: a comparison result determining unit, configured to compare the direction vector with 0 to determine a comparison result; the comparison result is: when Rx2 − Rx1 > 0 or Rx2 − Rx1 < 0, it is determined that the currently moving limb feature point moves along the positive or negative direction of the X axis; when Ry2 − Ry1 > 0 or Ry2 − Ry1 < 0, it is determined that the currently moving limb feature point moves along the positive or negative direction of the Y axis; when Rz2 − Rz1 > 0 or Rz2 − Rz1 < 0, it is determined that the currently moving limb feature point moves along the positive or negative direction of the Z axis; wherein Rx1, Ry1, and Rz1 are the x-axis, y-axis, and z-axis coordinates of the limb semantic feature point, and Rx2, Ry2, and Rz2 are the x-axis, y-axis, and z-axis coordinates of the limb semantic feature point related to it; a limb movement direction determining unit, configured to determine the limb movement direction according to the comparison result; and a limb semantic trigger mechanism establishing unit, configured to establish the limb semantic trigger mechanism according to the limb movement direction.
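The comparison-with-0 trigger mechanism above amounts to taking the sign of each component of the direction vector. A minimal sketch, with an illustrative dead-zone threshold `eps` (not part of the patent text) added to suppress sensor jitter:

```python
def movement_direction(r1, r2, eps=1e-6):
    """Classify the movement of the related feature point R2 with
    respect to the anchor point R1 by comparing each component of the
    direction vector with 0. Returns labels such as "+X" (positive
    direction of the X axis) or "-Y" (negative direction of the Y axis).
    `eps` is an illustrative dead zone, not part of the patent text."""
    labels = []
    for axis, (c1, c2) in zip("XYZ", zip(r1, r2)):
        d = c2 - c1  # component of the direction vector
        if d > eps:
            labels.append(f"+{axis}")
        elif d < -eps:
            labels.append(f"-{axis}")
    return labels
```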
And the human body limb language identification module 506 is used for identifying the human body limb language according to the limb semantic triggering mechanism.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the others, and the parts that are the same or similar among the embodiments may be referred to one another. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and core concept of the invention. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this description should not be construed as limiting the invention.

Claims (6)

1. A human body limb language identification method is characterized by comprising the following steps:
constructing a virtual world environment, and acquiring limb characteristic points of a human body based on a Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points;
constructing a limb semantic set according to the limb feature points; the limb semantic set comprises a plurality of limb semantics based on the limb semantic feature points; each limb semantic corresponds to at least two related limb characteristic points;
establishing a relative three-dimensional coordinate system by taking the coordinate of any limb semantic feature point in any limb semantic as an origin;
determining a direction vector of any one of the limb semantics based on the limb semantic feature points according to the relative three-dimensional coordinate system;
establishing a body semantic triggering mechanism according to the direction vector;
recognizing human body language according to the body semantic triggering mechanism;
the establishing of the limb semantic trigger mechanism according to the direction vector specifically includes: comparing the direction vector with 0 to determine a comparison result; the comparison result is: when Rx2 − Rx1 > 0 or Rx2 − Rx1 < 0, determining that the currently moving limb feature point moves along the positive or negative direction of the X axis; when Ry2 − Ry1 > 0 or Ry2 − Ry1 < 0, determining that the currently moving limb feature point moves along the positive or negative direction of the Y axis; when Rz2 − Rz1 > 0 or Rz2 − Rz1 < 0, determining that the currently moving limb feature point moves along the positive or negative direction of the Z axis; wherein Rx1, Ry1, and Rz1 are the x-axis, y-axis, and z-axis coordinates of the limb semantic feature point, and Rx2, Ry2, and Rz2 are the x-axis, y-axis, and z-axis coordinates of the limb semantic feature point related to it;
determining the movement direction of the limb according to the comparison result;
establishing a limb semantic triggering mechanism according to the limb movement direction;
the establishing a relative three-dimensional coordinate system with the coordinate of any limb semantic feature point in any limb semantic as the origin specifically includes: establishing a relative three-dimensional coordinate system by taking the coordinate of any limb semantic feature point in any limb semantic as the origin, taking the horizontal direction of the human body in the cross (T-pose) state as the X axis, the vertical direction of the human body as the Y axis, the intersection of the X axis and the Y axis as the point O, and the axis perpendicular to the XOY plane as the Z axis.
2. The human body limb language identification method according to claim 1, wherein after establishing a relative three-dimensional coordinate system with the coordinates of any limb semantic feature point in any limb semantic as an origin, the method further comprises: and acquiring the relative coordinates of the limb semantic feature points related to any limb semantic feature point in any limb semantic.
3. The method according to claim 2, wherein the determining, according to the relative three-dimensional coordinate system, a direction vector of any one of the limb semantics based on the limb semantic feature points specifically comprises: and determining a direction vector of any one of the limb semantics based on the limb semantic feature points according to the coordinates of any one of the limb semantic feature points and the relative coordinates.
4. A human body limb language identification system, comprising:
the system comprises a limb characteristic point acquisition module, a Kinect camera and a database, wherein the limb characteristic point acquisition module is used for constructing a virtual world environment and acquiring limb characteristic points of a human body based on the Kinect camera; the limb characteristic points comprise head characteristic points, neck characteristic points, spine characteristic points, fingertip characteristic points, finger characteristic points, wrist characteristic points, elbow characteristic points, shoulder characteristic points, hip characteristic points, knee characteristic points, ankle characteristic points and foot characteristic points;
the limb semantic set construction module is used for constructing a limb semantic set according to the limb characteristic points; the limb semantic set comprises a plurality of limb semantics based on the limb semantic feature points; each limb semantic corresponds to at least two related limb characteristic points;
a relative three-dimensional coordinate system establishing module, configured to establish a relative three-dimensional coordinate system with the coordinate of any limb semantic feature point in any limb semantic as the origin;
the direction vector determination module is used for determining a direction vector of any one of the limb semantics based on the limb semantic feature points according to the relative three-dimensional coordinate system;
the body semantic trigger mechanism establishing module is used for establishing a body semantic trigger mechanism according to the direction vector;
the human body limb language identification module is used for identifying human body limb languages according to the limb semantic triggering mechanism;
the limb semantic trigger mechanism establishing module specifically comprises:
a comparison result determining unit, configured to compare the direction vector with 0 to determine a comparison result; the comparison result is: when Rx2 − Rx1 > 0 or Rx2 − Rx1 < 0, determining that the currently moving limb feature point moves along the positive or negative direction of the X axis; when Ry2 − Ry1 > 0 or Ry2 − Ry1 < 0, determining that the currently moving limb feature point moves along the positive or negative direction of the Y axis; when Rz2 − Rz1 > 0 or Rz2 − Rz1 < 0, determining that the currently moving limb feature point moves along the positive or negative direction of the Z axis; wherein Rx1, Ry1, and Rz1 are the x-axis, y-axis, and z-axis coordinates of the limb semantic feature point, and Rx2, Ry2, and Rz2 are the x-axis, y-axis, and z-axis coordinates of the limb semantic feature point related to it;
a limb movement direction determining unit, configured to determine the limb movement direction according to the comparison result; and
a limb semantic trigger mechanism establishing unit, configured to establish the limb semantic trigger mechanism according to the limb movement direction;
the relative three-dimensional coordinate system establishing module specifically comprises:
a relative three-dimensional coordinate system establishing unit, configured to establish a relative three-dimensional coordinate system by taking the coordinate of any limb semantic feature point in any limb semantic as the origin, taking the horizontal direction of the human body in the cross (T-pose) state as the X axis, the vertical direction of the human body as the Y axis, the intersection of the X axis and the Y axis as the point O, and the axis perpendicular to the XOY plane as the Z axis.
5. The human body limb language recognition system of claim 4, further comprising:
and the relative coordinate acquisition module is used for acquiring the relative coordinates of the limb semantic feature points related to any limb semantic feature point in any limb semantic.
6. The system according to claim 5, wherein the direction vector determining module specifically comprises:
a direction vector determining unit, configured to determine a direction vector of any limb semantic based on the limb semantic feature points according to the coordinates of the limb semantic feature point and the relative coordinates.
CN201910242558.3A 2019-03-28 2019-03-28 Human body limb language identification method and system Active CN110032958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910242558.3A CN110032958B (en) 2019-03-28 2019-03-28 Human body limb language identification method and system


Publications (2)

Publication Number Publication Date
CN110032958A CN110032958A (en) 2019-07-19
CN110032958B true CN110032958B (en) 2020-01-24

Family

ID=67236832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910242558.3A Active CN110032958B (en) 2019-03-28 2019-03-28 Human body limb language identification method and system

Country Status (1)

Country Link
CN (1) CN110032958B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007087089A (en) * 2005-09-21 2007-04-05 Fujitsu Ltd Gesture recognition device, gesture recognition program and gesture recognition method
CN104460967A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Recognition method of upper limb bone gestures of human body
CN105930795A (en) * 2016-04-20 2016-09-07 东北大学 Walking state identification method based on space vector between human body skeleton joints
CN106650687A (en) * 2016-12-30 2017-05-10 山东大学 Posture correction method based on depth information and skeleton information
CN107908288A (en) * 2017-11-30 2018-04-13 沈阳工业大学 A kind of quick human motion recognition method towards human-computer interaction




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant