CN110134236A - High interaction feedback method and system under low motion detection precision based on Unity3D and Kinect - Google Patents

High interaction feedback method and system under low motion detection precision based on Unity3D and Kinect

Info

Publication number
CN110134236A
Authority
CN
China
Prior art keywords
posture
unity3d
joint node
area
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910351217.XA
Other languages
Chinese (zh)
Other versions
CN110134236B (en)
Inventor
刘振华
宋旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Liudao Network Technology Co Ltd
Original Assignee
Shaanxi Liudao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Liudao Network Technology Co Ltd
Priority to CN201910351217.XA
Publication of CN110134236A
Application granted
Publication of CN110134236B
Current legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/016: Input arrangements with force or tactile feedback as computer generated output to the user

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Manipulator (AREA)

Abstract

The present invention provides a high interaction feedback method and system under low motion detection precision based on Unity3D and Kinect, addressing problems of existing human-computer interaction such as complex posture recognition algorithms, heavy workload, and poor interaction feedback quality. Using the parent-child object system of Unity3D, the invention finds, for each joint node to be confirmed, another key joint node whose relative position stays relatively fixed even across differing body shapes, sets judgement areas around that key node with decision boxes attached, and binds the judgement areas as child objects of the key joint node. When a joint node to be confirmed moves, the decision group detects the collision event representing that node and obtains the user's current posture. Posture data is stored as strings: knowing only the rough shape of a posture and the state definitions of each decision group, the string value representing that posture can be correctly entered for the current decision group. This eliminates the posture-recording process of traditional posture detection and simplifies entering and comparing postures.

Description

High interaction feedback method and system under low motion detection precision based on Unity3D and Kinect
Technical field
The invention belongs to the field of human-computer interaction, and in particular relates to a high interaction feedback method under low motion detection precision based on Unity3D and Kinect.
Background technique
The release of Kinect allows the camera not only to receive planar image information but also to obtain image depth information through its infrared equipment, so that the captured data turn from two-dimensional into three-dimensional. On this basis Kinect extracts skeleton data containing three-dimensional coordinate information, making position capture of human joint points possible, so that users can perform human-computer interaction directly through limb actions.
In the posture recognition part of human-computer interaction, the prior art mostly applies large-scale matching algorithms to skeleton data or joint nodes, as in "Posture recognition algorithm based on Kinect angle measurement" and "Human posture recognition algorithm based on Kinect and its implementation". Concretely, "Research on action evaluation technique based on Kinect" trains a sample set with feature vectors and uses the KNN (k-Nearest Neighbor) algorithm as a classifier to recognize postures. For action evaluation, after analyzing the temporal characteristics of the motion feature sequences, it trains sample curves with linear regression and fits a best angle curve by least squares as the standard template; considering that the time series differ in length, it matches joint angle curves of different lengths with the DTW (Dynamic Time Warping) algorithm, evaluates the action through a defined scoring formula, uses the DTW difference between curves as the experimental parameter, and finally determines the recognized posture. What this method mainly recognizes are the motion characteristics of the human body: the original feature vectors of human behavior actions are screened, and redundant parts are rejected by the algorithm, achieving the effect of recognizing a given posture.
The advantage of this method is the recognition precision brought by the composite algorithm and the large number of samples. But everyone's figure is different, and physical mobility and coordination also vary, so interaction feedback quality suffers; re-entering samples into the gesture library improves feedback quality, but inevitably adds the work of adjusting postures and recording new ones. Moreover, the method uses a large number of complex algorithms, which brings real difficulty to the use and implementation in project development.
For projects that only need to realize simple posture interaction, spending a great deal of time and energy on recording a gesture library and adjusting algorithms is clearly inappropriate. At the same time, scenes that need only simple interaction generally do not require the user to make overly standard postures; what they need is to determine that the user is "attempting to make the movement". This demand matters even more when the interactive interface does not display a projection of the human model.
Summary of the invention
To solve the problems of existing human-computer interaction such as complex posture recognition algorithms, heavy workload, and poor interaction feedback quality, the present invention provides a high interaction feedback method under low motion detection precision based on Unity3D and Kinect. Using some features of the Unity3D engine itself, it reduces the precision of motion detection in the two aspects of posture recognition and interaction detection, simplifying the interaction algorithm and improving the feedback quality of somatosensory motion interaction.
The technical solution of the invention is a high interaction feedback method under low motion detection precision based on Unity3D and Kinect, comprising the following steps:
S1, build the parent-child object system;
S11, obtain the space coordinates of each joint node from Kinect and send them to Unity3D; in Unity3D, choose the key joint nodes used for posture detection, the positional relationship between the key joint nodes and the joint nodes to be confirmed being relatively fixed;
S12, according to the space coordinates of the joint nodes to be confirmed, make groups of judgement areas around the key joint nodes, so that a joint node to be confirmed falls into a judgement area when it moves; use the Unity3D engine to add a decision box to every judgement area;
S13, bind the groups of judgement areas as child objects of their key joint node; all judgement areas under one key joint node are called the decision group of that key joint node and are given a unified prefix;
S2, record in the Unity3D system the posture state string corresponding to each set posture state, as the preset posture string;
S3, when a joint node to be confirmed moves, the decision group detects the collision event representing that node and obtains the user's current posture;
S31, rely on the collision detection system of Unity3D to judge whether the joint node to be confirmed is inside a constructed decision group, and obtain the posture state string representing the currently touched state of that decision group;
S32, compare the posture state string representing the currently touched state of the decision group with the preset posture string; if they are the same, the user's current posture is obtained.
Further, step S31 is specifically:
S311, read the first judgement area in the decision group;
S312, check that judgement area, confirm its state of being touched by the joint nodes to be confirmed, and obtain the single-character state value representing that state;
S313, write the obtained single-character state value to the end of the string representing the decision group's current gesture data;
S314, judge whether all judgement areas in the decision group have been checked; if so, go to step S32; if not, return to step S311 and read the next judgement area, until all judgement areas have been checked.
Further, step S312 is specifically:
S3121, declare a Boolean corresponding to each joint node to be confirmed;
S3122, obtain the positions of the joint nodes to be confirmed and of the corresponding judgement area; rely on the collision detection system of Unity3D to determine, for each joint node to be confirmed, whether its representative object is inside this judgement area; if so, change that node's Boolean to true, otherwise leave it false;
S3123, then determine, from the Booleans of the joint nodes to be confirmed, the single-character state value state representing the currently touched state of the judgement area.
Further, the decision group of each key joint node includes six or more judgement areas located above, below, left of, right of, in front of, and behind that key joint node.
Further, through script control, each decision group detects only the collision events of its own joint nodes to be confirmed.
Further, when dynamic posture detection is needed, a minimum speed is defined in step S13;
when a collision event occurs, the relative velocity of the two colliding objects is detected; if it is below the minimum speed, the event is judged as no collision.
Further, step S2 is specifically:
S21, for each judgement area, determine its touched state under a set posture by drawing a simple three-view drawing of the posture and the judgement areas, and define one character for each touched state;
S22, under one set posture state, combine the current touched-state characters of all judgement areas in a fixed order to obtain the posture state string under the current state.
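As a purely illustrative example (the area ordering and character assignments here are hypothetical, not prescribed by the method): suppose the Upper group's areas are ordered front, left, right, above and the Lower group's front, back, left, right, with '1' for untouched, '2' for the left limb, '3' for the right limb, and '4' for both. Raising both arms flat to the sides touches the left area with the left hand and the right area with the right hand, so the concatenated posture state string is "1231" + "1111" = "12311111"; this is the string recorded in S2 and compared against in S32.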
The present invention also provides a high interaction feedback system under low motion detection precision based on Unity3D and Kinect, characterized by comprising a processor and a memory, wherein a computer program is stored in the memory and the above method is realized when the computer program runs in the processor.
The beneficial effects of the present invention are:
1. The present invention uses the collision detection system of Unity3D itself. Multiple judgement areas are set around the key joint nodes used for posture detection, and these judgement areas are bound to a node whose position is relatively fixed with respect to the joint; for example, the relative reference object of the hand-node judgement areas is the head, and that of the foot-node judgement areas is the spine base. Building decision groups from judgement areas made in Unity is fast, and posture data is stored as strings: knowing only the rough shape of a posture and the state definitions of each decision group, the string value representing that posture can be correctly entered for the current decision group, which eliminates the posture-recording process of traditional posture detection and simplifies entering and comparing postures.
2. Each decision group of the present invention detects only the collisions of its own corresponding joint nodes, which prevents accidental touches and also makes it convenient to split partial posture judgements out of the whole-body judgement (such as the upper body and the lower body).
3. Key joint nodes and corresponding judgement areas can be set in Unity3D on demand; customized decision groups make it convenient to refine posture detection and to extend it by adding new decision groups (such as for the elbow or the knee).
4. By anticipating the user's interaction psychology and adjusting the sensitivity of each response area individually, the user gets feedback closer to what was imagined when making an interactive behavior, while misoperations caused by a portion of recognition errors can be limited.
Brief description of the drawings
Fig. 1 shows the Upper decision group, the upper-body detection zone for detecting LeftHand and RightHand; its parent object is Head (the head joint node);
Fig. 2 shows the Lower decision group, the lower-body detection zone for detecting LeftFoot and RightFoot; its parent object is SpineBase (the spine-base joint node);
Fig. 3 shows the detection logic flow of a single judgement area in the Upper decision group;
Fig. 4 shows the detection logic flow of a single judgement area in the Lower decision group;
Fig. 5 is the single-frame flow chart of the main logic of the posture-checking part GestureCheck;
Fig. 6 shows the Lower decision group after segmentation and refinement;
Fig. 7 is the UML diagram.
Specific embodiments
The present invention is further described below in conjunction with the drawings and specific embodiments.
Every object in a Unity3D scene is called a GameObject, whether or not it can be rendered and displayed in the scene by the engine; even an empty GameObject can take part in the unique parent-child relationships provided by the Unity3D engine. Some relevant characteristics of Unity3D are described in detail below.
MonoBehaviour is the common parent class of the Unity3D engine. A class inheriting MonoBehaviour can be attached as a component to a GameObject in the scene (as shown in Figs. 1-2); all public variables of the class are then displayed in the Inspector panel of that GameObject, so parameters can be entered directly. MonoBehaviour provides methods tied to the life cycle of the Unity3D engine itself: while the program runs, the Update method is called once every frame, the OnTriggerEnter method is called once when the object enters the trigger area of another object, and the OnTriggerExit method is called once when it leaves that trigger area. By overriding these methods, a class inheriting MonoBehaviour can implement algorithms associated with the Unity3D engine life cycle, such as obtaining the coordinates of a GameObject in the scene every frame.
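A minimal sketch of these lifecycle hooks (the class name and log messages are illustrative, not taken from the patent):

```csharp
using UnityEngine;

// Minimal sketch: a component using the three lifecycle methods described
// above. Attach it to a GameObject that has a trigger collider.
public class JointAreaProbe : MonoBehaviour
{
    // Called once per frame while the program runs.
    void Update()
    {
        // Example lifecycle-bound algorithm: read this GameObject's
        // world coordinates every frame.
        Vector3 pos = transform.position;
    }

    // Called once when another collider enters this object's trigger area.
    void OnTriggerEnter(Collider other)
    {
        Debug.Log(other.gameObject.name + " entered the judgement area");
    }

    // Called once when that collider leaves the trigger area again.
    void OnTriggerExit(Collider other)
    {
        Debug.Log(other.gameObject.name + " left the judgement area");
    }
}
```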
Unity3D has a unique parent-child object system. Dragging one GameObject under another GameObject in the Hierarchy panel makes the two GameObjects parent and child: the former becomes the child object of the latter, and the latter becomes the parent object of the former. When the parent object moves, rotates, or scales, the child object moves, rotates, or scales with it; but when the child object moves, rotates, or scales, the parent object does not follow. The movement, rotation, and scaling values of the child object are relative to the parent object. For example, if the coordinates of a child object are (0,0,0), the child object sits exactly at the parent object's origin, and this value does not change when the parent object's coordinates change.
By adding a rigid body and a decision box (a trigger collider) to a GameObject, the Unity3D engine can detect collision states and obtain collision information.
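A sketch of building one judgement area this way, assuming the joint objects are driven externally by Kinect data; the factory name, offsets, and sizes are placeholders:

```csharp
using UnityEngine;

// Sketch: create one judgement area as a child object of a key joint node.
public static class JudgementAreaFactory
{
    public static GameObject Create(Transform parentJoint, Vector3 localOffset,
                                    Vector3 size, string name)
    {
        var area = new GameObject(name);

        // Parenting makes the area follow the joint; localPosition is
        // expressed relative to the parent, as described above.
        area.transform.SetParent(parentJoint, worldPositionStays: false);
        area.transform.localPosition = localOffset;

        // A trigger BoxCollider is the "decision box".
        var box = area.AddComponent<BoxCollider>();
        box.isTrigger = true;
        box.size = size;

        // A kinematic rigid body lets Unity raise OnTriggerEnter/Exit
        // events without the area being affected by physics.
        var rb = area.AddComponent<Rigidbody>();
        rb.isKinematic = true;
        rb.useGravity = false;

        return area;
    }
}
```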
The present invention uses the parent-child object system of Unity3D. The space coordinates of each joint node are obtained from Kinect; even where body shapes differ, another key joint node whose position is relatively fixed with respect to the joint node to be confirmed can be found (for example, the head node for the two hands). With the key joint node as a GameObject in Unity3D, GameObjects serving as judgement areas are set around the key joint node and given decision boxes, and the judgement areas are then bound as child objects of that key joint node. All judgement areas under one key joint node are called a decision group and are given a unified prefix: for example, all child objects under the head node Head are called the Upper decision group, and all child objects under the spine-base joint node are called the Lower decision group.
Take the posture of raising the arms flat. Even for people with large differences in size, when the arms are raised flat they are at a height close to the head. Since the Upper decision group consists of child objects of the head joint node, it is only necessary to ensure that the decision boxes of the horizontal zone of the Upper decision group stay within a relatively stable range around head height; when a hand is raised, the Upper decision group detects the collision event of LeftHand or RightHand, the objects representing the hand joint nodes.
As shown in Fig. 1, the horizontal distance between the head joint node and the corresponding boundary of each of the three judgement areas in front, on the left, and on the right is set to 0.5 m, the vertical distance between the head joint node and the corresponding boundary of the upper judgement area is set to 0.5 m, and the judgement areas are sized as 0.6 m × 1 m × 1 m boxes. This detects the waving of the hand joint nodes in those directions fairly accurately, while hands hanging naturally or moving at waist height do not trigger collision events.
The Lower decision group is similar. As shown in Fig. 2, the horizontal distance between the spine-base joint node and the corresponding boundary of each of the four judgement areas in front, behind, on the left, and on the right is set to 0.3 m, the vertical distance is set to -0.5 m, and the judgement areas are sized as 0.6 m × 0.6 m × 0.6 m cubes. This detects the stepping movements of the foot joint nodes fairly accurately, while standing naturally does not trigger collision events.
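A sketch assembling both groups with the dimensions stated above, reusing the hypothetical JudgementAreaFactory from the previous sketch; the axis conventions (x = right, y = up, z = forward) and area names are assumptions:

```csharp
using UnityEngine;

// Sketch: build the Upper and Lower decision groups at start-up.
public class DecisionGroupBuilder : MonoBehaviour
{
    public Transform head;       // GameObject driven by Kinect's head joint
    public Transform spineBase;  // GameObject driven by Kinect's spine-base joint

    void Start()
    {
        // Upper group: front/left/right at 0.5 m horizontally, plus one
        // area 0.5 m above the head; 0.6 m x 1 m x 1 m boxes.
        var upperSize = new Vector3(0.6f, 1f, 1f);
        JudgementAreaFactory.Create(head, new Vector3(0f, 0f, 0.5f), upperSize, "Upper_Front");
        JudgementAreaFactory.Create(head, new Vector3(-0.5f, 0f, 0f), upperSize, "Upper_Left");
        JudgementAreaFactory.Create(head, new Vector3(0.5f, 0f, 0f), upperSize, "Upper_Right");
        JudgementAreaFactory.Create(head, new Vector3(0f, 0.5f, 0f), upperSize, "Upper_Above");

        // Lower group: front/back/left/right at 0.3 m horizontally and
        // 0.5 m below the spine base; 0.6 m cubes.
        var lowerSize = new Vector3(0.6f, 0.6f, 0.6f);
        JudgementAreaFactory.Create(spineBase, new Vector3(0f, -0.5f, 0.3f), lowerSize, "Lower_Front");
        JudgementAreaFactory.Create(spineBase, new Vector3(0f, -0.5f, -0.3f), lowerSize, "Lower_Back");
        JudgementAreaFactory.Create(spineBase, new Vector3(-0.3f, -0.5f, 0f), lowerSize, "Lower_Left");
        JudgementAreaFactory.Create(spineBase, new Vector3(0.3f, -0.5f, 0f), lowerSize, "Lower_Right");
    }
}
```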
Through script control, the Upper decision group detects only the collision events of LeftHand and RightHand, and the Lower decision group detects only those of LeftFoot and RightFoot, avoiding false touches caused by the decision boxes having been widened to reduce detection precision.
In conjunction with Fig. 3, the process of judging the posture of the hand joint nodes is described in detail.
This flow chart shows the detection logic flow of the HandTriggerCheck class for a single judgement area in the Upper decision group.
First, two Booleans, isLeftHandIn and isRightHandIn, are declared in the HandTriggerCheck class to represent whether the left hand and the right hand are inside this judgement area;
second, the corresponding positions of the judgement area and the hand joint nodes are obtained;
then, the collision detection system of Unity3D is used to determine whether the GameObjects representing the left hand and the right hand are inside this judgement area, i.e. whether the judgement area is being touched; if so, the corresponding Boolean is changed to true, otherwise it is left as false.
Finally, from the values of isLeftHandIn and isRightHandIn, the single-character state value state representing the current touched state of the judgement area is determined and recorded: '1', '2', '3', and '4' respectively represent the judgement area being "not touched by a hand", "touched by the left hand", "touched by the right hand", and "touched by both hands simultaneously".
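A minimal sketch of this per-area check, assuming the joint GameObjects are named "LeftHand" and "RightHand"; in the patent's UML the class inherits BodyTriggerCheckAbstract (sketched further below), which is omitted here to keep the block self-contained:

```csharp
using UnityEngine;

// Sketch: per-judgement-area hand check. Attach to each Upper-group area.
public class HandTriggerCheck : MonoBehaviour
{
    private bool isLeftHandIn;
    private bool isRightHandIn;

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject.name == "LeftHand")  isLeftHandIn = true;
        if (other.gameObject.name == "RightHand") isRightHandIn = true;
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject.name == "LeftHand")  isLeftHandIn = false;
        if (other.gameObject.name == "RightHand") isRightHandIn = false;
    }

    // Map the two Booleans to the single-character state value:
    // '1' untouched, '2' left hand, '3' right hand, '4' both hands.
    public char GetCollisionState()
    {
        if (isLeftHandIn && isRightHandIn) return '4';
        if (isRightHandIn) return '3';
        if (isLeftHandIn)  return '2';
        return '1';
    }
}
```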
In conjunction with Fig. 4, the process of judging the posture of the foot joint nodes is described in detail.
This flow chart shows the detection logic flow of the FeetTriggerCheck class for a single judgement area in the Lower decision group.
First, two Booleans, isLeftFootIn and isRightFootIn, are declared in the FeetTriggerCheck class to represent whether the left foot and the right foot are inside this judgement area;
then, the collision detection system of Unity3D is used to determine whether the GameObjects representing the left foot and the right foot are inside this judgement area, i.e. whether the judgement area is being touched; if so, the corresponding Boolean is changed to true, otherwise it is left as false.
Finally, from the values of isLeftFootIn and isRightFootIn, the single-character state value state representing the current touched state of the judgement area is determined: '1', '2', '3', and '4' respectively represent the judgement area being "not touched by a foot", "touched by the left foot", "touched by the right foot", and "touched by both feet simultaneously".
The definition of state may differ from one decision group to another, but all judgement areas of one decision group need to use a unified state definition. In the UML diagram this step is embodied as a unified state definition expressed with an enum, whose corresponding enum value is finally converted to a single character and assigned to state.
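One way to express that unified definition, assuming char-valued enum constants (the enum and member names are an assumption):

```csharp
// Sketch: a unified state definition shared by all judgement areas of
// one decision group. Each constant's underlying value is a character.
public enum TouchState
{
    Untouched = '1',
    LeftOnly  = '2',
    RightOnly = '3',
    Both      = '4',
}

public static class TouchStateExtensions
{
    // Convert the enum value to the single character that is appended
    // to the posture state string.
    public static char ToStateChar(this TouchState s) => (char)s;
}
```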
This process relies on the overridden Update method and executes once per frame, i.e. the touched state of the judgement area is updated every frame.
In conjunction with Fig. 5, this explains how the state quantities of each decision group are obtained and compared with the preset postures to obtain the user's current posture. The GestureCheck class obtains, through the public variable bodyTriggerList, the BodyTriggerCheckAbstract components attached to each judgement area of every decision group in the scene; it traverses them, obtains each judgement area's single-character state value state representing its current touched state, and splices these in turn into the string gestureNow representing the current posture state. After the traversal of bodyTriggerList completes, gestureNow is compared with the preset posture strings, which can be set directly in the Unity3D editor; if the two are identical, the user is considered to have made the preset posture. The preset posture string is set up as follows: first, for each judgement area, its touched state under the set posture is determined by drawing a simple three-view drawing of the posture and the judgement areas, and one character is defined for each touched state; second, under one set posture state, the current touched-state characters of all judgement areas are combined in a fixed order to obtain the posture state string under the current state.
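A sketch of this per-frame check; the field names bodyTriggerList and gestureNow follow the description above, while presetGesture and the minimal abstract base declared here for self-containedness are assumptions (the concrete subclasses are sketched in the UML discussion below):

```csharp
using System.Text;
using UnityEngine;

// Minimal base so this block compiles on its own; see the UML sketch below.
public abstract class BodyTriggerCheckAbstract : MonoBehaviour
{
    public abstract char GetCollisionState();
}

// Sketch: core posture check, run once per frame.
public class GestureCheck : MonoBehaviour
{
    // Filled in the Unity3D editor with every judgement-area component,
    // in a fixed order.
    public BodyTriggerCheckAbstract[] bodyTriggerList;

    // The preset posture string, also set directly in the editor.
    public string presetGesture;

    void Update()
    {
        // Splice each area's single-character state into the string
        // representing the current posture state.
        var sb = new StringBuilder(bodyTriggerList.Length);
        foreach (var trigger in bodyTriggerList)
            sb.Append(trigger.GetCollisionState());
        string gestureNow = sb.ToString();

        // A match means the user has made the preset posture this frame.
        if (gestureNow == presetGesture)
            Debug.Log("Preset posture detected");
    }
}
```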
The following can be seen from Fig. 5:
First, according to the detection logic flow of a single judgement area in a decision group, the first judgement area is read;
then, the judgement area is checked, the touched state between the judgement area and the joint nodes to be confirmed is confirmed, and the single-character return value representing that state is obtained;
then, the obtained single-character return value is written to the end of the string of current gesture data;
finally, it is judged whether all judgement areas have been checked; if so, the current gesture data is compared with the preset posture data to confirm whether the two are identical; if not, the next judgement area is read and checked, until all judgement areas are completed.
The advantage of this approach is that postures can be conveniently defined by string data: knowing only the rough shape of a posture and the state definitions of each decision group, the string value representing that posture can be correctly entered for the current decision group, eliminating the posture-recording process of traditional posture detection. If a finer judgement of a key joint node is needed, it is only necessary to subdivide the judgement areas in that node's decision group, add new judgement areas, and reposition the judgement areas so that they suit the normal range of human limb movement (as shown in Fig. 6). If other key joint points need to be detected to increase the richness of postures, a decision group dedicated to detecting those joint nodes must be added.
This process relies on the overridden Update method and executes once per frame, i.e. the user's posture state is updated every frame.
When dynamic posture detection is needed, the method can also adjust each judgement area's sensitivity to impact speed individually. A minimum speed is defined in the GestureCheck class; when a collision event occurs, the relative velocity of the two colliding objects is detected, and if it is below the minimum speed the event is judged as no collision. When detecting dynamic action postures such as punches, this avoids false touches caused by hand sway inside the relatively large judgement areas, so the user gets feedback closer to what was imagined when making an interactive behavior.
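Unity's trigger callbacks do not report a relative velocity, so the sketch below estimates each joint's speed from its frame-to-frame displacement and lets a judgement area ignore entries slower than the minimum speed; this estimation strategy and all names are assumptions:

```csharp
using UnityEngine;

// Sketch: tracks a joint object's speed so judgement areas can filter
// out slow, accidental touches during dynamic posture detection.
public class JointSpeedTracker : MonoBehaviour
{
    public float minSpeed = 1.5f;   // metres per second; placeholder value

    public float CurrentSpeed { get; private set; }
    private Vector3 lastPosition;

    void Start()
    {
        lastPosition = transform.position;
    }

    void Update()
    {
        CurrentSpeed = (transform.position - lastPosition).magnitude / Time.deltaTime;
        lastPosition = transform.position;
    }
}

// Inside a judgement area's OnTriggerEnter, a hit slower than minSpeed
// would then be treated as "no collision":
//
// void OnTriggerEnter(Collider other)
// {
//     var tracker = other.GetComponent<JointSpeedTracker>();
//     if (tracker != null && tracker.CurrentSpeed < tracker.minSpeed)
//         return; // too slow: ignore the touch
//     // ... normal touch handling ...
// }
```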
Fig. 7 is the UML diagram. BodyTriggerCheckAbstract is an abstract class whose main purpose is to declare the GetCollisionState method; subclasses override this method to pass on the joint-node collision states they define internally. HandTriggerCheck inherits BodyTriggerCheckAbstract and is attached to each judgement area of the Upper decision group; by overriding the OnTriggerEnter method it obtains the GameObject information of the colliding party to determine whether it is touched by LeftHand or RightHand, and by implementing the GetCollisionState method it makes the single-character state value state of its private enum definition readable to other types. FeetTriggerCheck is similar to HandTriggerCheck and is attached to each judgement area of the Lower decision group; by overriding OnTriggerEnter it obtains the GameObject information of the colliding party to determine whether it is touched by LeftFoot or RightFoot, and by implementing GetCollisionState it makes the single-character state value state of its private enum definition readable to other types. GestureCheck is the core functional class: through the public variable bodyTriggerList it obtains all components in the scene that inherit from BodyTriggerCheckAbstract, thereby obtaining the data of the two decision groups. Its overridden Update method obtains the per-frame state information of the Upper and Lower decision groups and splices it into the string value representing the current posture; this value is compared in turn with the string values representing each posture preset in bodyGestureList, and if the two are equal, the user is considered to be in that posture in the current frame.
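A sketch of the hierarchy in Fig. 7, shown here with the foot subclass (HandTriggerCheck mirrors it with LeftHand/RightHand); the joint object names are assumptions, and the abstract base is repeated from the earlier sketch for self-containedness:

```csharp
using UnityEngine;

// Abstract base: declares GetCollisionState, which subclasses override
// to expose their privately tracked collision state as a character.
public abstract class BodyTriggerCheckAbstract : MonoBehaviour
{
    public abstract char GetCollisionState();
}

// Attached to each judgement area of the Lower decision group.
public class FeetTriggerCheck : BodyTriggerCheckAbstract
{
    private bool isLeftFootIn;
    private bool isRightFootIn;

    void OnTriggerEnter(Collider other)
    {
        // Determine from the colliding party's GameObject information
        // whether this area is touched by LeftFoot or RightFoot.
        if (other.gameObject.name == "LeftFoot")  isLeftFootIn = true;
        if (other.gameObject.name == "RightFoot") isRightFootIn = true;
    }

    void OnTriggerExit(Collider other)
    {
        if (other.gameObject.name == "LeftFoot")  isLeftFootIn = false;
        if (other.gameObject.name == "RightFoot") isRightFootIn = false;
    }

    // '1' untouched, '2' left foot, '3' right foot, '4' both feet.
    public override char GetCollisionState()
    {
        if (isLeftFootIn && isRightFootIn) return '4';
        if (isRightFootIn) return '3';
        if (isLeftFootIn)  return '2';
        return '1';
    }
}
```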

Claims (8)

1. A high interaction feedback method under low motion detection precision based on Unity3D and Kinect, characterized by comprising the following steps:
S1, build the parent-child object system;
S11, obtain the space coordinates of each joint node from Kinect and send them to Unity3D; in Unity3D, choose the key joint nodes used for posture detection, the positional relationship between the key joint nodes and the joint points to be confirmed being relatively fixed;
S12, according to the space coordinates of the joint nodes to be confirmed, make groups of judgement areas around the key joint nodes, so that a joint node to be confirmed falls into a judgement area when it moves; use the Unity3D engine to add a decision box to every judgement area;
S13, bind the groups of judgement areas as child objects of their key joint node; all judgement areas under one key joint node are called the decision group of that key joint node and are given a unified prefix;
S2, record in the Unity3D system the posture state string corresponding to each set posture state, as the preset posture string;
S3, when a joint node to be confirmed moves, the decision group detects the collision event representing that node and obtains the user's current posture;
S31, rely on the collision detection system of Unity3D to judge whether the joint node to be confirmed is inside a constructed decision group, and obtain the posture state string representing the currently touched state of that decision group;
S32, compare the posture state string representing the currently touched state of the decision group with the preset posture string; if they are the same, the user's current posture is obtained.
2. The high interaction feedback method under low motion detection precision based on Unity3D and Kinect according to claim 1, characterized in that step S31 is specifically:
S311, read the first judgement area in the decision group;
S312, check that judgement area, confirm its state of being touched by the joint nodes to be confirmed, and obtain the single-character state value representing that state;
S313, write the obtained single-character state value to the end of the string representing the decision group's current gesture data;
S314, judge whether all judgement areas in the decision group have been checked; if so, go to step S32; if not, return to step S311 and read the next judgement area, until all judgement areas have been checked.
3. The high interaction feedback method under low motion detection precision based on Unity3D and Kinect according to claim 2, characterized in that step S312 is specifically:
S3121, declare a Boolean corresponding to each joint node to be confirmed;
S3122, obtain the positions of the joint nodes to be confirmed and of the corresponding judgement area; rely on the collision detection system of Unity3D to determine, for each joint node to be confirmed, whether its representative object is inside this judgement area; if so, change that node's Boolean to true, otherwise leave it false;
S3123, then determine, from the Booleans of the joint nodes to be confirmed, the single-character state value state representing the currently touched state of the judgement area.
4. The high interaction feedback method under low motion detection precision based on Unity3D and Kinect according to claim 1, characterized in that: the decision group of each key joint node includes six or more judgement areas located above, below, left of, right of, in front of, and behind that key joint node.
5. The high interaction feedback method under low motion detection precision based on Unity3D and Kinect according to claim 1, characterized in that: through script control, each decision group detects only the collision events of its own joint nodes to be confirmed.
6. The high interaction feedback method under low motion detection precision based on Unity3D and Kinect according to claim 1, characterized in that:
when dynamic posture detection is needed, a minimum speed is defined in step S13;
when a collision event occurs, the relative velocity of the two colliding objects is detected; if it is below the minimum speed, the event is judged as no collision.
7. The high interaction feedback method under low motion detection precision based on Unity3D and Kinect according to claim 1, characterized in that step S2 is specifically:
S21, for each judgement area, determine its touched state under a set posture by drawing a simple three-view drawing of the posture and the judgement areas, and define one character for each touched state;
S22, under one set posture state, combine the current touched-state characters of all judgement areas in a fixed order to obtain the posture state string under the current state.
8. A high interaction feedback system under low motion detection precision based on Unity3D and Kinect, characterized by comprising a processor and a memory, wherein a computer program is stored in the memory, and the method of any one of claims 1-5 is realized when the computer program runs in the processor.
CN201910351217.XA 2019-04-28 2019-04-28 Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision Active CN110134236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910351217.XA CN110134236B (en) 2019-04-28 2019-04-28 Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910351217.XA CN110134236B (en) 2019-04-28 2019-04-28 Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision

Publications (2)

Publication Number Publication Date
CN110134236A true CN110134236A (en) 2019-08-16
CN110134236B CN110134236B (en) 2022-07-05

Family

ID=67575428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910351217.XA Active CN110134236B (en) 2019-04-28 2019-04-28 Unity3D and Kinect-based high interaction feedback method and system under low motion detection precision

Country Status (1)

Country Link
CN (1) CN110134236B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014113507A1 (en) * 2013-01-15 2014-07-24 Leap Motion, Inc. Dynamic user interactions for display control and customized gesture interpretation
CN106682594A (en) * 2016-12-13 2017-05-17 中国科学院软件研究所 Posture and motion identification method based on dynamic grid coding
CN107993545A (en) * 2017-12-15 2018-05-04 天津大学 Children's acupuncture training simulation system and emulation mode based on virtual reality technology
CN108983978A (en) * 2018-07-20 2018-12-11 北京理工大学 virtual hand control method and device
CN109407838A (en) * 2018-10-17 2019-03-01 福建星网视易信息系统有限公司 Interface interaction method and computer readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111292403A (en) * 2020-03-10 2020-06-16 黄海波 Method for creating movable cloth doll
CN111292403B (en) * 2020-03-10 2023-08-22 黄海波 Method for creating movable cloth doll

Also Published As

Publication number Publication date
CN110134236B (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CN112668687B (en) Cloud robot system, cloud server, robot control module and robot
US20220147137A1 (en) Interaction Engine for Creating a Realistic Experience in Virtual Reality/Augmented Reality Environments
CN106139564B (en) Image processing method and device
CN107430437B (en) System and method for creating a real grabbing experience in a virtual reality/augmented reality environment
US20190362562A1 (en) Throwable Interface for Augmented Reality and Virtual Reality Environments
CN103930944B (en) Adaptive tracking system for space input equipment
Kjeldsen et al. Toward the use of gesture in traditional user interfaces
KR101184170B1 (en) Volume recognition method and system
CN104616028B (en) Human body limb gesture actions recognition methods based on space segmentation study
CN103809733B (en) Man-machine interactive system and method
CN104978762A (en) Three-dimensional clothing model generating method and system
CN106406518B (en) Gesture control device and gesture identification method
CN107357428A (en) Man-machine interaction method and device based on gesture identification, system
CN110286763A (en) A kind of navigation-type experiment interactive device with cognitive function
KR102359289B1 (en) Virtual training data generating method to improve performance of pre-learned machine-learning model and device performing the same
Bhiri et al. Hand gesture recognition with focus on leap motion: An overview, real world challenges and future directions
Cardoso et al. Hand gesture recognition towards enhancing accessibility
CN112258161A (en) Intelligent software testing system and testing method based on robot
CN110134236A (en) High interaction feedback method and system under low motion detection precision based on Unity3D and Kinect
CN107507218A (en) Part motility Forecasting Methodology based on static frames
Sutopo et al. Dance gesture recognition using laban movement analysis with j48 classification
CN111796709B (en) Method for reproducing image texture features on touch screen
KR20140092536A (en) 3d character motion synthesis and control method and device for navigating virtual environment using depth sensor
Liu et al. Gesture recognition based on Kinect
Wang et al. Research on Computer Aided Interaction Design based on Virtual reality Technology

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information
    Address after: 710000 room 10318, unit 1, building 5, No. 118, Taibai South Road, high tech Zone, Xi'an City, Shaanxi Province
    Applicant after: Shaanxi liudao Culture Technology Co.,Ltd.
    Address before: 710075 room 10805, unit 1, building 2, building D, city gate, Tangyan South Road, high tech Zone, Xi'an, Shaanxi
    Applicant before: Shaanxi Liudao Network Technology Co.,Ltd.
GR01: Patent grant