CN110134236B - Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision - Google Patents

Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision

Info

Publication number
CN110134236B
Authority
CN
China
Prior art keywords: judgment, state, posture, unity3d, confirmed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910351217.XA
Other languages
Chinese (zh)
Other versions
CN110134236A (en)
Inventor
刘振华 (Liu Zhenhua)
宋旭 (Song Xu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Liudao Culture Technology Co ltd
Original Assignee
Shaanxi Liudao Culture Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Liudao Culture Technology Co ltd filed Critical Shaanxi Liudao Culture Technology Co ltd
Priority to CN201910351217.XA priority Critical patent/CN110134236B/en
Publication of CN110134236A publication Critical patent/CN110134236A/en
Application granted granted Critical
Publication of CN110134236B publication Critical patent/CN110134236B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016 - Input arrangements with force or tactile feedback as computer generated output to the user

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a high interaction feedback method and a high interaction feedback system based on Unity3D and Kinect for use under low motion detection precision, which solve problems of existing human-computer interaction such as complex gesture recognition algorithms, heavy workload and low interaction feedback quality. When a joint node to be confirmed moves, the corresponding judgment group detects a collision event representing that joint node, and the current posture of the user is obtained. Posture data are stored as strings, so that by knowing only the rough shape of a posture and the state definition of each judgment group, the character string value representing that posture for the current judgment group can be entered correctly. This removes the posture-recording step of traditional posture detection and makes entering and comparing postures convenient.

Description

High interaction feedback method and system based on Unity3D and Kinect for use under low motion detection precision
Technical Field
The invention belongs to the field of human-computer interaction, and particularly relates to a high interaction feedback method based on Unity3D and Kinect for use under low motion detection precision.
Background
With the release of the Kinect, a camera no longer receives only planar image information: through its infrared equipment it can also acquire the depth information of the image, so that the data obtained by the camera changes from two-dimensional to three-dimensional. On this basis, the Kinect extracts skeleton data containing three-dimensional coordinate information, so the positions of human joint points can be captured and a user can complete human-computer interaction directly through limb actions.
In the prior art, the gesture recognition part of human-computer interaction mostly uses large-scale matching algorithms on skeleton data or joint nodes, such as gesture recognition algorithms based on Kinect angle measurement or human body gesture recognition algorithms implemented with Kinect. For example, in "Research on motion estimation method based on Kinect", a sample set is trained with feature vectors and a KNN (k-nearest neighbor) algorithm is used as the classifier to recognize a gesture. For action evaluation, after the time characteristics of the motion feature sequence are analyzed, a linear regression method is used to train a sample curve and a least squares fit of the optimal angle curve serves as the standard template; then, to handle time sequences of different lengths, joint angle curves of different lengths are matched with a DTW (Dynamic Time Warping) algorithm, a set of formulas is defined to evaluate the action, the DTW difference between curves is used as the experimental parameter, and the recognized posture is finally determined. The method mainly identifies the action characteristics of the human body, screens the original feature vectors of the human body action, and eliminates redundant parts through the algorithm, thereby recognizing specific gestures.
The advantage of this approach is the recognition accuracy brought by the composite algorithm and the large number of samples. However, body types differ from person to person, and there are various differences in flexibility and coordination, so the interaction feedback quality is not high; it can be improved by re-recording samples in the gesture library, but this increases the workload of posture adjustment and new posture entry. In addition, the method uses a large number of complex algorithms, which makes it difficult to use and implement in project development.
For projects that only need simple gesture interaction, it is clearly inappropriate to spend a great deal of time and effort recording a gesture library and tuning algorithms. At the same time, such scenarios of simple interaction generally do not require the user to make an overly standard gesture; what matters more is detecting the state in which the user is "trying to make the action", especially when the interactive interface does not display a projection of the human body model.
Disclosure of Invention
To solve problems of existing human-computer interaction gesture recognition algorithms such as complexity, heavy workload and low interaction feedback quality, the invention provides a high interaction feedback method based on Unity3D and Kinect for use under low motion detection precision. By lowering the motion detection precision in both gesture recognition and interaction detection, and by simplifying the interaction algorithm with the characteristics of the Unity3D engine, the feedback quality during somatosensory interaction is improved.
The technical scheme of the invention is to provide a high interaction feedback method based on Unity3D and Kinect under low motion detection precision, which comprises the following steps:
s1, constructing a parent-child object system;
s11, acquiring the space coordinates of each joint node from the Kinect, sending the acquired space coordinates of each joint node to unity3D, and selecting in unity3D a key joint node for posture detection whose position relation with the joint node to be confirmed is relatively fixed;
s12, according to the space coordinates of the joint node to be confirmed, creating a plurality of groups of judgment areas around the key joint node, so that the joint node to be confirmed falls into a judgment area when it moves; and adding a decision box to each group of judgment areas using the unity3D engine;
s13, binding the multiple groups of judgment areas as children of the key joint node; all the judgment areas under each key joint node are called the judgment group of that key joint node and are given a uniform prefix;
s2, inputting a posture state character string corresponding to the set posture state into a unity3D system as a preset posture character string;
s3, when the joint node to be confirmed moves, the judgment group detects a collision event representing the joint node to be confirmed, and the current posture of the user is obtained;
s31, determining, by means of the collision detection system of Unity3D, whether the joint node to be confirmed is inside the constructed judgment areas, and obtaining a posture state character string representing the current touched state of the judgment group;
and S32, comparing the posture state character string representing the current touched state of the judgment group with the preset posture character string, and if the two are the same, obtaining the current posture of the user.
Further, step S31 is specifically:
s311, reading a first judgment area in the judgment group;
s312, checking the first judgment area, confirming the touch state of the joint node to be confirmed in the judgment area, and acquiring a single-character state value representing the state;
s313, writing the acquired single-character state value into the end of a character string representing the current posture data of the judgment group;
s314, judging whether all judgment areas in the judgment group are checked, if so, entering a step S32; if not, the process returns to step S311 to read the next determination area until all determination areas are checked.
Further, step S312 specifically includes:
s3121, declaring Boolean values corresponding to the joint nodes to be confirmed;
s3122, acquiring positions of the joint nodes to be confirmed and the corresponding judgment area, respectively determining whether each joint node to be confirmed is in the judgment area by the aid of a collision detection system of Unity3D, if so, modifying a Boolean value corresponding to the joint node to be confirmed into true, and otherwise, keeping the Boolean value as false;
s3123, determining, according to the Boolean values corresponding to the joint nodes to be confirmed, a single-character state value representing the current touched state of the judgment area.
Further, the judgment group of each key joint node includes six judgment areas located above, below, to the left, to the right, in front of and behind the key joint node.
Further, through script control, each judgment group detects only collision events of its joint nodes to be confirmed.
Further, when dynamic gesture detection is required, a minimum velocity is defined in step S13;
the relative speed of the two colliding objects is detected when the collision event occurs, and if the relative speed is less than the minimum speed, the collision is judged not to have occurred.
Further, step S2 is specifically:
s21, for each judgment area, determining the touch state of the judgment area under the set posture by drawing a simple three-view drawing of the set posture and the judgment areas, and defining a character for each touch state;
and S22, under the set posture state, combining the touch state characters of the judgment areas in a fixed order to obtain the posture state character string for that state (a worked sketch of such a string follows below).
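To illustrate how such a preset posture character string can be composed, the following C# fragment is a sketch only: the fixed region order, the chosen posture and the class name are assumptions made for illustration, while the state characters follow the "1" to "4" convention used for the Upper judgment group in the detailed description.

    // Illustrative only: composing a preset posture string for an Upper judgment group.
    public static class PresetPostureExample
    {
        // Assumed fixed region order: Left, Right, Up, Front, Back, Down.
        // Assumed posture "both arms raised sideways": the left region is touched by the
        // left hand ("2"), the right region by the right hand ("3"), all others untouched ("1").
        public const string BothArmsRaisedSideways = "2" + "3" + "1" + "1" + "1" + "1"; // "231111"
    }

Because the GestureCheck class described later splices the states of all judgment areas of all groups in one fixed order, a whole-body preset posture is simply the concatenation of the per-area characters in that same order.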
The invention also provides a high interaction feedback system based on Unity3D and Kinect for use under low motion detection precision, which is characterized by comprising a processor and a memory, wherein a computer program is stored in the memory and, when executed by the processor, implements the above method.
The invention has the beneficial effects that:
1. The invention uses the collision detection system built into Unity3D to arrange a plurality of judgment areas around a key joint node for posture detection, and binds the judgment areas to a joint node whose relative position is fixed; for example, the relative reference object of the hand judgment areas is the head, and the relative reference object of the foot judgment areas is the end of the spine. Building the judgment groups from judgment areas made in Unity is fast. Posture data are stored as strings, and by knowing only the rough shape of a posture and the state definition of each judgment group, the character string value representing that posture for the current judgment group can be entered correctly, which removes the posture-recording step of traditional posture detection and makes entering and comparing postures convenient.
2. Each judgment group detects only the collisions of its corresponding joint points, which prevents false touches and also makes it easy to separate local posture judgment (for example of the upper body or the lower body) from whole-body judgment when only a local posture needs to be judged.
3. Key joint nodes and the corresponding judgment areas and judgment groups can be set in unity3D as required, so that posture detection can be refined and new judgment groups (for example for an elbow or a knee) can be added through custom judgment groups.
4. By anticipating the interaction psychology of the user and adjusting the sensitivity of each response area independently, the user obtains the expected feedback more reliably when making an interactive action, and misoperation caused by partial recognition errors can be limited.
Drawings
Fig. 1 shows the Upper judgment group, in which the parent object of the detection areas for detecting LeftHand and RightHand of the upper body is Head (the head joint node);
Fig. 2 shows the Lower judgment group, in which the parent object of the detection areas for detecting LeftFoot and RightFoot of the lower body is SpineBase (the joint node at the end of the spine);
FIG. 3 is the detection logic flow in a single judgment area of the Upper judgment group;
FIG. 4 is the detection logic flow in a single judgment area of the Lower judgment group;
FIG. 5 is a single-frame flowchart of the main logic of the posture checking class GestureCheck;
FIG. 6 shows the Lower judgment group after segmentation and refinement;
Fig. 7 is a UML diagram.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
An object in each scene in the Unity3D engine is called a GameObject. Whether or not the object is rendered by the engine and displayed in the scene, even an empty GameObject can take part in the unique parent-child relationship provided by the Unity3D engine. Several characteristics of Unity3D are described in detail below.
The MonoBehaviour class is a parent class commonly used by the Unity3D engine. A class that inherits MonoBehaviour can be attached as a component to a GameObject in a scene (as shown in FIGS. 1-2), and all of its public variables are displayed in the Inspector panel of that GameObject, so parameters can be entered directly. The MonoBehaviour class provides methods tied to the life cycle of the Unity3D engine: for example, the Update method is called once every frame while the program runs, the OnTriggerEnter method is called once when another object enters this object's judgment area, and the OnTriggerExit method is called once when it leaves. A class inheriting MonoBehaviour can implement algorithms tied to the engine life cycle by overriding these methods, for example obtaining the coordinates of a GameObject in the scene every frame.
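As a minimal illustration of these life-cycle methods (a sketch with assumed class and variable names, not code from the patent; only Update, OnTriggerEnter and OnTriggerExit are the standard Unity callbacks):

    using UnityEngine;

    // Minimal illustrative MonoBehaviour; everything except the Unity callbacks is assumed.
    public class LifecycleExample : MonoBehaviour
    {
        void Update()
        {
            // Called once per frame: for example, read this GameObject's scene coordinates.
            Vector3 worldPosition = transform.position;
            Debug.Log("Current position: " + worldPosition);
        }

        void OnTriggerEnter(Collider other)
        {
            // Called once when another collider enters this object's trigger volume.
            Debug.Log(other.gameObject.name + " entered the judgment area");
        }

        void OnTriggerExit(Collider other)
        {
            // Called once when that collider leaves the trigger volume.
            Debug.Log(other.gameObject.name + " left the judgment area");
        }
    }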
Unity3D has a unique parent-child object system: when one GameObject is dragged under another GameObject in the Hierarchy panel, the two become parent and child, the dragged object being the child and the other the parent. When the parent object moves, rotates or scales, the child object follows; when the child object moves, rotates or scales, the parent does not follow. The movement, rotation and scale values of the child object are relative to the parent object. For example, if the coordinates of the child object are (0, 0, 0), the child object is located at the origin of the parent object, and this value is not changed by changes of the parent's coordinates.
The Unity3D engine can detect certain collision states and obtain collision information by adding rigid bodies and decision boxes (trigger colliders) to GameObjects.
The invention utilizes the parent-child object system of Unity3D. The space coordinates of each joint node are acquired from the Kinect; even if there are body-type differences, another key joint node whose position is relatively fixed with respect to the joint node to be confirmed can be found (for example, the head node for the two hands). The key joint node is treated as a GameObject in Unity3D, GameObjects serving as judgment areas are arranged around it and decision boxes are added to them, and these judgment areas are then bound as child objects of the key joint node. All the judgment areas under one key joint node are called a judgment group and are given a uniform prefix; for example, all the child objects under the head node Head are called the Upper judgment group, and all the child objects under the joint node at the end of the spine are called the Lower judgment group.
Taking an arm-raising posture as an example: even for people with large body-type differences, when an arm is raised horizontally its height is close to that of the head. Because the Upper judgment group is a child object of the head joint node, it is only necessary to ensure that the decision boxes of the horizontal areas in the Upper judgment group lie within a relatively stable range with respect to the height of the head; when a hand is raised, the Upper judgment group detects the collision events of LeftHand and RightHand, which represent the hand joint nodes.
As shown in fig. 1, with cubes whose boundaries in the corresponding directions lie at a horizontal distance of 0.5 m from the head joint node for the front, left and right judgment areas, at a vertical distance of 0.5 m for the upper judgment area, and whose size is 0.6 m x 1 m, the swing of a hand joint node in each direction can be detected relatively accurately, and no collision event is triggered when both hands hang down naturally and move at waist height.
Similarly, as shown in fig. 2, the horizontal distance between the boundaries of the four judgment areas at the front, rear, left and right and the joint node at the end of the spine is set to 0.3 m, the vertical distance to -0.5 m, and the size of each judgment area to 0.6 m x 0.6 m, so that the stepping motion of a foot joint node can be detected relatively accurately without triggering a collision event when the user stands naturally.
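A minimal sketch of how part of such a judgment group could be assembled as child objects of a key joint node. The patent does not state whether the areas are built in the editor or in code; the helper class below, its names and the exact offsets are assumptions (the offsets only approximate the boundary distances quoted above), while the trigger boxes and the parent-child binding follow the description.

    using UnityEngine;

    // Illustrative construction of part of an "Upper" judgment group under the Head joint.
    public static class JudgmentGroupBuilder
    {
        public static void BuildUpperGroup(Transform headJoint)
        {
            CreateArea(headJoint, "Upper_Left",  new Vector3(-0.8f, 0f, 0f));
            CreateArea(headJoint, "Upper_Right", new Vector3( 0.8f, 0f, 0f));
            CreateArea(headJoint, "Upper_Front", new Vector3(0f, 0f, 0.8f));
            CreateArea(headJoint, "Upper_Up",    new Vector3(0f, 1.0f, 0f));
        }

        static void CreateArea(Transform parent, string name, Vector3 localOffset)
        {
            var area = new GameObject(name);
            area.transform.SetParent(parent, false);     // bind as a child: it follows the joint
            area.transform.localPosition = localOffset;  // position relative to the key joint node
            var box = area.AddComponent<BoxCollider>();
            box.isTrigger = true;                        // "decision box": used only as a trigger
            box.size = new Vector3(0.6f, 1f, 0.6f);
        }
    }

For the collision events to be raised, the joint-node GameObjects (LeftHand, RightHand and so on) would also need a collider and a rigid body, in line with the description of the Unity3D collision system above.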
Through script control, the Upper judgment group detects only the collision events of LeftHand and RightHand, and the Lower judgment group only those of LeftFoot and RightFoot, which avoids false touches caused by the decision boxes being enlarged as the detection precision is reduced.
The whole hand joint node posture determination process is described in detail with reference to fig. 3.
This flow chart shows the detection logic flow of the HandTriggerCheck class in a single judgment area of the Upper judgment group.
First, two Boolean values isLeftHandIn and isRightHandIn are declared in the HandTriggerCheck class to represent whether the left and right hands are within this judgment area;
secondly, the positions of the judgment area and the corresponding hand joint nodes are acquired;
then, relying on the collision detection system of Unity3D, it is determined whether the GameObjects representing the left hand and the right hand are in the judgment area, that is, whether the judgment area is being touched; if so, the corresponding Boolean value is set to true, otherwise it remains false.
Finally, according to the values of isLeftHandIn and isRightHandIn, the single-character state value state representing the current touched state of the judgment area is determined and recorded; "1", "2", "3" and "4" respectively mean that the judgment area is "not touched by a hand", "touched by the left hand", "touched by the right hand" and "touched by both hands at the same time".
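A minimal sketch of such a per-area check class, assuming the hand joint GameObjects are named LeftHand and RightHand and using the "1" to "4" convention above. In the UML description later this class derives from an abstract base and keeps its state in a private enumeration; both are simplified away here, and only the Unity callbacks are real engine API.

    using UnityEngine;

    // Illustrative sketch of the per-area hand check; names and string states are assumed.
    public class HandTriggerCheck : MonoBehaviour
    {
        bool isLeftHandIn;
        bool isRightHandIn;

        void OnTriggerEnter(Collider other)
        {
            if (other.gameObject.name == "LeftHand")  isLeftHandIn  = true;
            if (other.gameObject.name == "RightHand") isRightHandIn = true;
        }

        void OnTriggerExit(Collider other)
        {
            if (other.gameObject.name == "LeftHand")  isLeftHandIn  = false;
            if (other.gameObject.name == "RightHand") isRightHandIn = false;
        }

        // "1" not touched, "2" left hand only, "3" right hand only, "4" both hands.
        public string GetCollisionState()
        {
            if (isLeftHandIn && isRightHandIn) return "4";
            if (isLeftHandIn)                  return "2";
            if (isRightHandIn)                 return "3";
            return "1";
        }
    }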
The process of the entire foot joint node posture determination will be described in detail with reference to fig. 4.
This flow chart shows the detection logic flow of the FeetTriggerCheck class in a single judgment area of the Lower judgment group.
First, two Boolean values isLeftFootIn and isRightFootIn are declared in the FeetTriggerCheck class to represent whether the left and right feet are within this judgment area;
then, relying on the collision detection system of Unity3D, it is determined whether the GameObjects representing the left foot and the right foot are in the judgment area, that is, whether the judgment area is being touched; if so, the corresponding Boolean value is set to true, otherwise it remains false.
Finally, according to the values of isLeftFootIn and isRightFootIn, the single-character state value state representing the current touched state of the judgment area is determined; "1", "2", "3" and "4" respectively mean that the judgment area is "not touched by a foot", "touched by the left foot", "touched by the right foot" and "touched by both feet".
The definition of the state may be different for each judgment group, but all judgment areas of the same judgment group need to use a uniform state definition. In the UML diagram, this step is embodied by using an enumeration to represent the uniform state definition and finally converting the corresponding enumerated value into a single character assigned to state.
This process relies on the overridden Update method, which is executed once per frame; that is, the touched state of the judgment area is updated every frame.
With reference to fig. 5, the following describes how the state values of the judgment groups are obtained and compared with the preset posture to obtain the current posture of the user. The GestureCheck class obtains, through the public variable bodyTriggerList, the BodyTriggerCheckAbstract components attached to every judgment area of every judgment group in the scene. It traverses them to obtain the single-character state value of each judgment area representing its current touched state, and splices these values in order into a character string gestureNow representing the current posture state. After the traversal of bodyTriggerList is completed, gestureNow is compared with the preset posture character string, whose value is set directly from the Unity3D editor; if the two are the same, the user is considered to have made the preset posture. The preset posture character string is set as follows: first, for each judgment area, its touch state under the set posture is determined by drawing a simple three-view drawing of the judgment areas and the set posture, and a character is defined for each touch state; then, under the set posture, the touch state characters of the judgment areas are combined in a fixed order to obtain the posture state character string for that state.
As can be seen in fig. 5:
firstly, reading a first judgment area according to a detection logic flow in a single judgment area in a judgment group;
then, a judgment area is checked, the touch state of the joint node to be confirmed in that judgment area is confirmed, and a single-character return value representing the state is acquired;
then, the acquired single-character return value is written to the end of the character string representing the current posture data;
finally, it is judged whether all judgment areas have been checked; if so, the current posture data is compared with the preset posture data to confirm whether they are the same; if not, the next judgment area is read and checked until all judgment areas are finished.
The advantage of this method is that a posture can be conveniently defined through string data: knowing only the rough shape of the posture and the state definition of each judgment group, the character string value representing that posture for the current judgment group can be entered correctly, so the posture-recording process of traditional posture detection is omitted. If a key joint node needs to be judged more finely, the judgment areas in the judgment group corresponding to that joint node only need to be segmented and refined, new judgment areas added, and their positions rearranged to suit the normal range of human limb movement (as shown in fig. 6). If another key joint node needs to be detected to increase the richness of the postures, a judgment group for detecting that joint node is added.
This process also relies on the overridden Update method, which is executed once per frame; that is, the posture state of the user is updated every frame.
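A minimal sketch of this per-frame check, assuming each judgment-area component exposes the GetCollisionState method declared by BodyTriggerCheckAbstract (re-declared inline so the sketch stands alone) and comparing against a single preset posture string for brevity, whereas the description compares against each preset posture in turn:

    using System.Text;
    using UnityEngine;

    // Minimal stand-in for the abstract base described with the UML diagram below.
    public abstract class BodyTriggerCheckAbstract : MonoBehaviour
    {
        public abstract string GetCollisionState();
    }

    // Illustrative sketch of the per-frame posture check; the single presetGesture field
    // is a simplification of the list of preset postures the description mentions.
    public class GestureCheck : MonoBehaviour
    {
        public BodyTriggerCheckAbstract[] bodyTriggerList;  // all judgment areas, fixed order
        public string presetGesture = "231111";             // value set from the Unity3D editor

        void Update()
        {
            var builder = new StringBuilder();
            foreach (var area in bodyTriggerList)
            {
                // Append each area's single-character touched state in the fixed order.
                builder.Append(area.GetCollisionState());
            }
            string gestureNow = builder.ToString();

            // The user is considered to have made the preset posture when the strings match.
            if (gestureNow == presetGesture)
            {
                Debug.Log("Preset posture detected");
            }
        }
    }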
When dynamic gesture detection is needed, the method can also adjust each judgment area's sensitivity to collision speed independently. A minimum speed is defined in the GestureCheck class; when a collision event occurs, the relative speed of the two colliding objects is detected, and if it is less than the minimum speed the collision is judged not to have occurred. When detecting dynamic motion gestures such as punching, this avoids false touches caused by hand shaking within the large judgment areas, so that the user obtains the expected feedback more reliably when making an interactive action.
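A minimal sketch of such a speed gate. The minimum speed value and the use of the joint node's Rigidbody velocity as the "relative speed of the two colliding objects" are assumptions; the patent does not specify how that speed is measured.

    using UnityEngine;

    // Illustrative speed-gated judgment area for dynamic gestures such as punching.
    public class DynamicGestureArea : MonoBehaviour
    {
        public float minSpeed = 1.5f;   // assumed minimum speed (m/s) for a valid collision

        void OnTriggerEnter(Collider other)
        {
            var body = other.attachedRigidbody;
            float speed = (body != null) ? body.velocity.magnitude : 0f;

            // Below the minimum speed the event is treated as if no collision occurred.
            if (speed < minSpeed)
                return;

            Debug.Log(other.gameObject.name + " entered at " + speed + " m/s");
        }
    }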
FIG. 7 is a UML diagram. The BodyTriggerCheckAbstract class is an abstract class whose main purpose is to declare the GetCollisionState method; each subclass passes out the joint-node collision state it defines by overriding this method. The HandTriggerCheck class inherits BodyTriggerCheckAbstract and is attached to each judgment area of the Upper judgment group; it overrides the OnTriggerEnter method to obtain the GameObject information of the colliding object and determine whether the area is touched by LeftHand or RightHand, and it makes the single-character state defined by a private enumeration readable to other classes by implementing the GetCollisionState method. The FeetTriggerCheck class is similar to HandTriggerCheck: it is attached to each judgment area of the Lower judgment group, overrides the OnTriggerEnter method to obtain the GameObject information of the colliding object and determine whether the area is touched by LeftFoot or RightFoot, and makes the single-character state value defined by a private enumeration readable to other classes by implementing the GetCollisionState method. The GestureCheck class is the core functional class; it acquires all components in the scene that inherit from BodyTriggerCheckAbstract through a public variable and thereby obtains the data of the two judgment groups. Its overridden Update method obtains the state information of each frame from the Upper and Lower judgment groups, splices it into a character string value representing the current posture, and compares this value in sequence with the preset character string values representing each posture; if it equals one of them, the user is considered to be in that posture at the current frame.
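A condensed sketch of the enum-to-character conversion that the UML description attributes to the concrete check classes. The abstract base is repeated from the GestureCheck sketch above so this block stands alone; the enum member names and the way the flags are updated are assumptions.

    using UnityEngine;

    // Repeated from the sketch above, for completeness.
    public abstract class BodyTriggerCheckAbstract : MonoBehaviour
    {
        public abstract string GetCollisionState();
    }

    // Illustrative FeetTriggerCheck: a private enumeration holds the uniform state
    // definition of the Lower judgment group, and GetCollisionState exposes it as a
    // single character to GestureCheck.
    public class FeetTriggerCheck : BodyTriggerCheckAbstract
    {
        enum TouchState { None = 1, LeftFootOnly = 2, RightFootOnly = 3, BothFeet = 4 }

        bool isLeftFootIn, isRightFootIn;

        void OnTriggerEnter(Collider other)
        {
            if (other.gameObject.name == "LeftFoot")  isLeftFootIn  = true;
            if (other.gameObject.name == "RightFoot") isRightFootIn = true;
        }

        void OnTriggerExit(Collider other)
        {
            if (other.gameObject.name == "LeftFoot")  isLeftFootIn  = false;
            if (other.gameObject.name == "RightFoot") isRightFootIn = false;
        }

        public override string GetCollisionState()
        {
            TouchState state =
                isLeftFootIn && isRightFootIn ? TouchState.BothFeet :
                isLeftFootIn                  ? TouchState.LeftFootOnly :
                isRightFootIn                 ? TouchState.RightFootOnly :
                                                TouchState.None;
            // Convert the enumerated value to a single character, e.g. BothFeet -> "4".
            return ((int)state).ToString();
        }
    }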

Claims (7)

1. A high interaction feedback method based on Unity3D and Kinect and under low motion detection precision is characterized by comprising the following steps:
s1, constructing a parent-child object system;
s11, acquiring the space coordinates of each joint node from the Kinect, sending the acquired space coordinates of each joint node to unity3D, and selecting in unity3D a key joint node for posture detection whose position relation with the joint node to be confirmed is relatively fixed;
s12, according to the space coordinates of the joint node to be confirmed, creating a plurality of groups of judgment areas around the key joint node, so that the joint node to be confirmed falls into a judgment area when it moves; and adding a decision box to each group of judgment areas using the unity3D engine;
s13, binding the multiple groups of judgment areas as children of the key joint node; all the judgment areas under each key joint node are called the judgment group of that key joint node and are given a uniform prefix;
s2, inputting a posture state character string corresponding to the set posture state into a unity3D system as a preset posture character string;
s3, when the joint node to be confirmed moves, the judgment group detects a collision event representing the joint node to be confirmed, and the current posture of the user is obtained;
s31, determining, by means of the collision detection system of Unity3D, whether the joint node to be confirmed is inside the constructed judgment areas, and obtaining a posture state character string representing the current touched state of the judgment group;
s32, comparing the posture state character string representing the current touched state of the judgment group with the preset posture character string, and if the two are the same, obtaining the current posture of the user;
step S31 specifically includes:
s311, reading a first judgment area in the judgment group;
s312, checking the first judgment area, confirming the touch state of the joint node to be confirmed in the judgment area, and acquiring a single-character state value representing the state;
s313, writing the acquired single-character state value into the end of a character string representing the current posture data of the judgment group;
s314, judging whether all judgment areas in the judgment group are checked, and if so, entering a step S32; if not, the process returns to step S311 to read the next determination area until all determination areas are checked.
2. The high interaction feedback method under the low motion detection accuracy based on Unity3D and Kinect as claimed in claim 1, wherein step S312 specifically is:
s3121, declaring Boolean values corresponding to the joint nodes to be confirmed;
s3122, acquiring positions of the joint nodes to be confirmed and the corresponding judgment area, respectively determining whether each joint node to be confirmed is in the judgment area by the aid of a collision detection system of Unity3D, if so, modifying a Boolean value corresponding to the joint node to be confirmed into true, and otherwise, keeping the Boolean value as false;
s3123, determining, according to the Boolean values corresponding to the joint nodes to be confirmed, a single-character state value representing the current touched state of the judgment area.
3. The high interaction feedback method under the low motion detection accuracy based on Unity3D and Kinect of claim 1, wherein: the judgment group of each key joint node comprises six judgment areas located above, below, to the left, to the right, in front of and behind the key joint node.
4. The high interaction feedback method under the low motion detection accuracy based on Unity3D and Kinect of claim 1, wherein: through script control, each judgment group detects only the collision events of its joint nodes to be confirmed.
5. The Unity3D and Kinect-based high interaction feedback method with low motion detection accuracy according to claim 1, wherein:
when the dynamic gesture detection is required, a minimum velocity is defined in step S13;
the relative speed of the two colliding objects is detected when the collision event occurs, and if the relative speed is less than the minimum speed, the collision is judged not to have occurred.
6. The high interaction feedback method under the low motion detection accuracy based on Unity3D and Kinect as claimed in claim 1, wherein the step S2 specifically is:
s21, for each judgment area, determining the touch state of the judgment area under the set posture by drawing a simple three-view drawing of the set posture and the judgment areas, and defining a character for each touch state;
and S22, under the set posture state, combining the touch state characters of the judgment areas in a fixed order to obtain the posture state character string for that state.
7. A high interaction feedback system based on Unity3D and Kinect for use under low motion detection precision, characterized in that: it comprises a processor and a memory, the memory storing a computer program which, when run on the processor, implements the method of any one of claims 1-6.
CN201910351217.XA 2019-04-28 2019-04-28 Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision Active CN110134236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910351217.XA CN110134236B (en) 2019-04-28 2019-04-28 Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910351217.XA CN110134236B (en) 2019-04-28 2019-04-28 Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision

Publications (2)

Publication Number Publication Date
CN110134236A CN110134236A (en) 2019-08-16
CN110134236B true CN110134236B (en) 2022-07-05

Family

ID=67575428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910351217.XA Active CN110134236B (en) 2019-04-28 2019-04-28 Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision

Country Status (1)

Country Link
CN (1) CN110134236B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292403B (en) * 2020-03-10 2023-08-22 黄海波 Method for creating movable cloth doll

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014113507A1 (en) * 2013-01-15 2014-07-24 Leap Motion, Inc. Dynamic user interactions for display control and customized gesture interpretation
CN106682594A (en) * 2016-12-13 2017-05-17 中国科学院软件研究所 Posture and motion identification method based on dynamic grid coding
CN107993545A (en) * 2017-12-15 2018-05-04 天津大学 Children's acupuncture training simulation system and emulation mode based on virtual reality technology
CN108983978A (en) * 2018-07-20 2018-12-11 北京理工大学 virtual hand control method and device
CN109407838A (en) * 2018-10-17 2019-03-01 福建星网视易信息系统有限公司 Interface interaction method and computer readable storage medium


Also Published As

Publication number Publication date
CN110134236A (en) 2019-08-16

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 710000 room 10318, unit 1, building 5, No. 118, Taibai South Road, high tech Zone, Xi'an City, Shaanxi Province

Applicant after: Shaanxi liudao Culture Technology Co.,Ltd.

Address before: 710075 room 10805, unit 1, building 2, building D, city gate, Tangyan South Road, high tech Zone, Xi'an, Shaanxi

Applicant before: Shaanxi Liudao Network Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant