CN112306234B - Hand operation identification method and system based on data glove - Google Patents
Hand operation identification method and system based on data glove
- Publication number
- CN112306234B (application CN202011020659.5A / CN202011020659A)
- Authority
- CN
- China
- Prior art keywords
- hand
- target object
- virtual
- data glove
- action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/014—Hand-worn input/output arrangements, e.g. data gloves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Abstract
The invention discloses a hand operation identification method and system based on a data glove, wherein the method comprises the following steps: determining a core region and a buffer region of a target object; establishing an interactive animation model of the virtual simulated hand and the data glove; judging the action type of the data glove when the data glove's locator enters the core region of the target object; when the data glove performs a first type of action, the virtual simulated hand and the target object remain connected: if the target object cannot move, the virtual simulated hand stays connected to the target object and does not move; if the target object can move, the target object follows the position of the virtual simulated hand; when the data glove performs a second type of action and the target object moves after contact with the virtual simulated hand, the target object follows the position of the virtual simulated hand.
Description
Technical Field
The invention relates to the technical field of virtual reality, in particular to a hand operation identification method and system based on data gloves.
Background
Collision handling in virtual reality environments mostly assigns a rigid structure to each object, or performs collision processing according to all the structures and part shapes in the scene. However, these prior-art approaches do not handle the hand operation collisions of a data glove well.
The data glove can interact with the virtual environment to establish a simulated gesture model, so that a person's operation behaviors are reproduced in the virtual environment.
Therefore, a technique is needed to implement a data glove based hand operation collision recognition method and system.
Disclosure of Invention
The technical scheme of the invention provides a hand operation collision recognition method and system based on a data glove, used to solve the problem of how to recognize hand operation collisions with a data glove.
In order to solve the above problems, the present invention provides a hand operation recognition method based on data glove, the method comprising:
setting a position for the virtual simulated hand and for the data glove according to a set distance;
determining a core region of a target object, wherein the core region is the spatial region whose radius is the distance between the data glove's locator and the target object at the moment the virtual simulated hand can just touch the target object;
determining a buffer region of the target object, wherein the buffer region is the spatial range extending outward from the core region, based on the distance between the locator and the target object;
establishing an interactive animation model of the virtual imitation hand and the data glove, and determining the position change of the data glove according to the animation of the data glove based on the interactive animation model;
judging the action type of the data glove when the data glove's locator enters the core region of the target object;
when the data glove performs a first type of action, the virtual simulated hand and the target object remain connected: if the target object cannot move, the virtual simulated hand stays connected to the target object and does not move; if the target object can move, the target object follows the position of the virtual simulated hand;
when the data glove performs a second type of action: if the target object moves after contact with the virtual simulated hand, the target object follows the position of the virtual simulated hand; or, if the target object does not move after contact with the virtual simulated hand, it is only judged that the virtual simulated hand is in contact with the target object;
when the data glove performs a first type of action in the core region and the data glove's locator then leaves the core region of the target object: at the boundary of the core region, the virtual simulated hand performs the action of releasing the target object and then resumes the real action of the data glove; the virtual simulated hand then follows the locator position of the data glove so that the two coincide at the boundary of the buffer region, the movement speed of the virtual simulated hand in the buffer region being:
V_hand = [(D_{1,2} + 40) / 40] * V_locator
where V_hand is the movement speed of the hand in the virtual reality system, V_locator is the movement speed of the data glove's locator in the buffer region, and D_{1,2} is the spatial distance between the position of the virtual simulated hand and the locator position of the data glove.
Preferably, the first type of action is an action of the virtual simulated hand that is strongly associated with the target object;
the second type of action is an action of the virtual simulated hand that is not associated with the target object.
Preferably, the positioning of the virtual simulated hand is within the core region when the data glove performs a first type of action.
Preferably, the method further comprises: determining an external area of a target object, wherein the external area is a space range outside a buffer area of the target object;
the virtual simulated hand is consistent with the motion of the data glove when the virtual simulated hand enters the buffer region from the outer region or when the virtual simulated hand enters the buffer region from the core region.
Preferably, the method further comprises: the virtual simulated hand is consistent with the motion of the data glove when the virtual simulated hand is located in the outer region and is not in contact with the target object.
Based on another aspect of the present invention, the present invention provides a hand operation recognition system based on data glove, the system comprising:
the initial unit is used for: setting a position for the virtual simulated hand and for the data glove according to a set distance; determining a core region of a target object, wherein the core region is the spatial region whose radius is the distance between the data glove's locator and the target object at the moment the virtual simulated hand can just touch the target object; determining a buffer region of the target object, wherein the buffer region is the spatial range extending outward from the core region, based on the distance between the locator and the target object; and establishing an interactive animation model of the virtual simulated hand and the data glove, and determining, based on the interactive animation model, the position change of the data glove according to the animation of the data glove;
the execution unit is used for judging the action type of the data glove when the data glove's locator enters the core region of the target object;
when the data glove performs a first type of action, the virtual simulated hand and the target object remain connected: if the target object cannot move, the virtual simulated hand stays connected to the target object and does not move; if the target object can move, the target object follows the position of the virtual simulated hand;
when the data glove performs a second type of action: if the target object moves after contact with the virtual simulated hand, the target object follows the position of the virtual simulated hand; or, if the target object does not move after contact with the virtual simulated hand, it is only judged that the virtual simulated hand is in contact with the target object;
when the data glove performs a first type of action in the core region and the data glove's locator then leaves the core region of the target object: at the boundary of the core region, the virtual simulated hand performs the action of releasing the target object and then resumes the real action of the data glove; the virtual simulated hand then follows the locator position of the data glove so that the two coincide at the boundary of the buffer region, the movement speed of the virtual simulated hand in the buffer region being:
V_hand = [(D_{1,2} + 40) / 40] * V_locator
where V_hand is the movement speed of the hand in the virtual reality system, V_locator is the movement speed of the data glove's locator in the buffer region, and D_{1,2} is the spatial distance between the position of the virtual simulated hand and the locator position of the data glove.
Preferably, the first type of action is an action of the virtual simulated hand that is strongly associated with the target object;
the second type of action is an action of the virtual simulated hand that is not associated with the target object.
Preferably, the positioning of the virtual simulated hand is within the core region when the data glove performs a first type of action.
Preferably, the system is further configured to determine an external region of the target object, wherein the external region is the spatial range outside the buffer region of the target object;
the virtual simulated hand is consistent with the motion of the data glove when the virtual simulated hand enters the buffer region from the outer region or when the virtual simulated hand enters the buffer region from the core region.
Preferably, the method further comprises: the virtual simulated hand is consistent with the motion of the data glove when the virtual simulated hand is located in the outer region and is not in contact with the target object.
In the technical scheme of the invention, the Hi5 VR Glove is taken as an example data glove; it consists of a locator fixed on the wrist and a number of knuckle sensors on the fingers. The locator determines the position of the hand, and the knuckle sensors determine the posture of the hand.
Drawings
Exemplary embodiments of the present invention may be more completely understood in consideration of the following drawings:
FIG. 1 is a flow chart of a method for identifying hand operations based on a data glove according to a preferred embodiment of the present invention;
fig. 2 is a schematic view illustrating the division of an external space region of a target object according to a preferred embodiment of the present invention;
FIG. 3 is a schematic view of target object core region scoping in accordance with a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of a first type of action in accordance with a preferred embodiment of the present invention;
FIG. 5 is a schematic diagram of a second type of action in accordance with a preferred embodiment of the present invention;
FIG. 6 is a schematic diagram of the interaction flow of a target object in a core region according to a preferred embodiment of the present invention;
FIG. 7 is a schematic diagram of interaction rules of a target object in a core region according to a preferred embodiment of the present invention;
fig. 8 is a core area movement diagram of a target object according to a preferred embodiment of the present invention;
FIG. 9 is a diagram of a core region grabbing action according to a preferred embodiment of the present invention;
FIG. 10 is a schematic view of a locator just out of the core area according to a preferred embodiment of the invention;
FIG. 11 is a schematic illustration of the change in velocity off core area in accordance with a preferred embodiment of the present invention;
FIG. 12 is a schematic diagram of the hand and the locator leaving the buffer region together according to a preferred embodiment of the present invention;
FIG. 13 is a schematic illustration of palm and ball sizes in accordance with a preferred embodiment of the present invention;
FIG. 14 is a schematic diagram of interactions under a first type of action in accordance with a preferred embodiment of the present invention;
FIG. 15 is a schematic diagram of a sphere motion with traction according to a preferred embodiment of the present invention;
FIG. 16 is a two-dimensional coordinate diagram of hand versus positioner motion velocity in accordance with a preferred embodiment of the present invention;
FIG. 17 is a two-dimensional coordinate diagram of a hand versus positioner motion velocity relationship in accordance with a preferred embodiment of the present invention;
FIG. 18 is a two-dimensional coordinate diagram of hand versus positioner motion velocity in accordance with a preferred embodiment of the present invention; and
figure 19 is a block diagram of a hand operation recognition system based on a data glove in accordance with a preferred embodiment of the present invention.
Detailed Description
The exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, however, the present invention may be embodied in many different forms and is not limited to the examples described herein, which are provided to fully and completely disclose the present invention and fully convey the scope of the invention to those skilled in the art. The terminology used in the exemplary embodiments illustrated in the accompanying drawings is not intended to be limiting of the invention. In the drawings, like elements/components are referred to by like reference numerals.
Unless otherwise indicated, terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. In addition, it will be understood that terms defined in commonly used dictionaries should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense.
Fig. 1 is a flow chart of a hand operation recognition method based on a data glove according to a preferred embodiment of the present invention. The invention provides a real-time modeling method for the hand and an impenetrable contact object, which solves the problem of operational realism between the hand and the contacted object. In virtual reality environments for real-time operation training, refined operations cannot be completed with a handle controller; a data glove is therefore required for operation in the virtual reality environment.
As shown in fig. 1, the present invention provides a hand operation recognition method based on a data glove, the method comprising:
preferably, in step 101: and setting positioning for the virtual imitation hand and the data glove according to the set distance.
Preferably, at step 102: determining the core region of the target object, wherein the core region is the spatial region whose radius is the distance between the data glove's locator and the target object at the moment the virtual simulated hand can just touch the target object.
Preferably, at step 103: determining the buffer region of the target object, wherein the buffer region is the spatial range extending outward from the core region, based on the distance between the locator and the target object.
The virtual reality collision detection method under data glove operation divides the space outside an impenetrable object into three regions: an outer region, a buffer region, and a core region.
This division is based on the process by which the data glove operates in the virtual space and collides with objects.
The core region is the region in which the hand may collide with the impenetrable object. It extends 20 cm outward from the impenetrable object (the locator on the wrist is typically no more than 20 cm from the fingertips). That is, if the locator on the wrist does not enter the core region, the fingertips cannot possibly come into contact with the impenetrable object.
The buffer region extends a further 40 cm outward from the core region. Its purpose is to express the region in which the hand separates from the impenetrable object after leaving the core region, provided for a visually realistic experience.
The outer region, farthest from the object, is the region occupied by the glove after it has completely left the buffer region. In the outer region, the position of the hand is fully synchronized with the actual motion.
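The three-region division above can be sketched in code. The following Python sketch is illustrative only: a spherical object and the 20 cm / 40 cm example margins from the description are assumed, and the function and variable names are not from the patent.

```python
import math

CORE_MARGIN_CM = 20.0    # core region: object surface extended outward 20 cm
BUFFER_WIDTH_CM = 40.0   # buffer region: core region extended a further 40 cm

def classify_region(locator_pos, object_pos, object_radius_cm):
    """Classify the wrist locator's position as 'core', 'buffer', or 'outer'.

    `locator_pos` and `object_pos` are (x, y, z) tuples in centimetres;
    a spherical impenetrable object is assumed purely for illustration.
    """
    # Distance from the locator to the object's surface.
    dist = math.dist(locator_pos, object_pos) - object_radius_cm
    if dist <= CORE_MARGIN_CM:
        return "core"
    if dist <= CORE_MARGIN_CM + BUFFER_WIDTH_CM:
        return "buffer"
    return "outer"
```

For example, with a sphere of radius 10 cm at the origin, a locator 25 cm from the centre is 15 cm from the surface and falls in the core region, while one 100 cm away falls in the outer region.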
Preferably, at step 104: establishing an interactive animation model of the virtual imitation hand and the data glove, and determining the position change of the data glove according to the animation of the data glove based on the interactive animation model.
Preferably, at step 105: judging the action type of the data glove when the data glove's locator enters the core region of the target object. Preferably, the first type of action is an action of the virtual simulated hand that is strongly associated with the target object; the second type of action is an action of the virtual simulated hand that is not associated with the target object.
Preferably, at step 106: when the data glove performs a first type of action, the virtual simulated hand and the target object remain connected: if the target object cannot move, the virtual simulated hand stays connected to the target object and does not move; if the target object can move, the target object follows the position of the virtual simulated hand. Preferably, the positioning of the virtual simulated hand is within the core region when the data glove performs the first type of action.
Preferably, at step 107: when the data glove performs a second type of action: if the target object moves after contact with the virtual simulated hand, the target object follows the position of the virtual simulated hand; or, if the target object does not move after contact, it is only judged that the virtual simulated hand is in contact with the target object.
In the present invention, after the data glove's locator enters the core region, penetration of the hand into the impenetrable object may occur. Actions expressed by the finger sensors are classified into two types: 1. the hand is strongly associated with the impenetrable object; 2. the hand is not associated with the impenetrable object. For example, actions such as grabbing, pinching, and holding belong to the first type; pushing, beating, throwing, and placing belong to the second type.
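The two-way classification of finger-sensor actions could be expressed as a simple lookup. The gesture names below are assumptions drawn only from the examples in the paragraph above, not an exhaustive list from the patent:

```python
# First type: actions strongly associated with the impenetrable object.
FIRST_TYPE = {"grab", "pinch", "hold"}
# Second type: actions not associated with the impenetrable object.
SECOND_TYPE = {"push", "beat", "throw", "place"}

def action_type(gesture):
    """Return 1 or 2 for the action type of a recognized gesture name."""
    if gesture in FIRST_TYPE:
        return 1
    if gesture in SECOND_TYPE:
        return 2
    raise ValueError(f"unknown gesture: {gesture}")
```

A real system would of course classify the raw knuckle-sensor readings; the mapping here only illustrates the two-type rule applied once a gesture is recognized.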
In the core region, if a first type of action is performed, the hand is connected to the impenetrable object. As long as the locator does not leave the core region, the hand remains connected to the impenetrable object. If the impenetrable object cannot move, its state in the virtual reality system is maintained regardless of the locator's motion. When the hand performs a releasing action and the locator's position in virtual space leaves the core region, the hand synchronously performs the releasing action and leaves the impenetrable object in the virtual reality system. The specific exit steps are described in the buffer region interaction rules.
If the impenetrable object can be moved, for example rotated or translated, and the locator moves in a direction in which the object can move, the hand moves together with the impenetrable object, and the core region changes to follow the object's movement.
In the core region, if a second type of action is performed, the locator's position does not enter the core region in the virtual reality system, and contact between the hand and the impenetrable object is considered to occur as a logical relationship. If the impenetrable object can be moved by such contact, the core region follows the object's movement and the hand's position does not change. If the object cannot be moved by such contact, it is only judged that the hand is in contact with the impenetrable object.
Preferably, at step 108: when the data glove performs a first type of action in the core region and the data glove's locator then leaves the core region of the target object: at the boundary of the core region, the virtual simulated hand performs the action of releasing the target object and then resumes the real action of the data glove; the virtual simulated hand then follows the locator position of the data glove so that the two coincide at the boundary of the buffer region, the movement speed of the virtual simulated hand in the buffer region being:
V_hand = [(D_{1,2} + 40) / 40] * V_locator
where V_hand is the movement speed of the hand in the virtual reality system, V_locator is the movement speed of the data glove's locator in the buffer region, and D_{1,2} is the spatial distance between the position of the virtual simulated hand and the locator position of the data glove.
The buffer region is provided so that the hand can separate from the impenetrable object after a first type of operation in the core region: in the virtual reality system the hand is still beside the impenetrable object while the locator has already left the core region, and snapping the hand directly to the locator would greatly reduce the operator's sense of realism. The buffer region and its corresponding handling make the operator's experience more realistic.
When the locator leaves the core region, the hand in the virtual reality system performs a releasing action and separates from the impenetrable object. At this time, the locator is at the boundary between the buffer region and the core region, at position D1 = {x1, y1, z1}; in the virtual reality system, the hand is near the impenetrable object, at position D2 = {x2, y2, z2}. The width of the buffer region is 40 cm. As the locator moves from one end of the buffer region to the other, the hand in the virtual reality system moves from near the impenetrable object to the end of the buffer region bordering the outer region.
At this time, in the virtual reality system, the movement speed of the hand is:
V_hand = [(D_{1,2} + 40) / 40] * V_locator
where V_hand is the movement speed of the hand in the virtual reality system, V_locator is the movement speed of the locator in the buffer region, and D_{1,2} is the spatial distance between the position of the hand in the virtual reality system and the position of the actual locator.
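A minimal sketch of this speed rule, assuming all distances are in centimetres and using the 40 cm buffer width from the description (names are illustrative, not from the patent):

```python
import math

BUFFER_WIDTH_CM = 40.0  # width of the buffer region

def hand_speed(v_locator, hand_pos, locator_pos):
    """V_hand = ((D_12 + 40) / 40) * V_locator.

    `hand_pos` is the virtual hand's position, `locator_pos` the actual
    locator's position, both (x, y, z) tuples in centimetres.
    """
    d_12 = math.dist(hand_pos, locator_pos)  # lag between hand and locator
    return (d_12 + BUFFER_WIDTH_CM) / BUFFER_WIDTH_CM * v_locator
```

Note the design of the formula: when the hand and locator coincide (D_{1,2} = 0) the factor is 1 and the hand moves at the locator's speed, while the farther the hand lags behind the locator, the faster it moves, so the two converge by the time the locator reaches the outer boundary of the buffer region.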
Preferably, the method further comprises: determining an external area of the target object, wherein the external area is a space range outside a buffer area of the target object;
the virtual simulated hand remains consistent with the motion of the data glove when the virtual simulated hand enters the buffer area from the outer area or when the virtual simulated hand enters the buffer area from the core area.
Preferably, the method further comprises: when the virtual simulated hand is located in the outer region and is not in contact with the target object, the virtual simulated hand remains consistent with the movements of the data glove.
The foregoing describes the case in which a hand, after being strongly associated with an impenetrable object in the core region, leaves the core region and enters the buffer region. In other cases, such as when a hand enters the buffer region from the outer region, or leaves the core region into the buffer region during a second type of action, the buffer region behaves identically to the outer region: the hand coincides with the locator position as normal, and the action represented by the finger sensors is displayed in virtual reality.
In the outer region, the hand is not in contact with the impenetrable object, so in this region the hand coincides with the locator position as normal, and the action represented by the finger sensors is displayed in virtual reality. That is, the hand represented by the data glove fully coincides with the hand represented in the virtual reality system.
When modeling, each touchable object in the virtual environment needs a hand-model interactive animation established with the glove. In the event of a collision, the displayed picture needs to include animations such as grasping, holding, lifting, throwing, and rotating. A position-coordinate update relation is established between the animation and the virtual object, and the virtual object's position information is updated in real time according to the operation in the animation, ensuring that an operation completed through the animation process is correctly reflected in the virtual object's position change after the virtual operation ends. After the hand enters the core region, the interaction mode changes to a collision mode as used with a handle controller, so that the picture remains true.
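The position-update relation between the animation and the virtual object described above might be sketched as follows; the class and function names are illustrative assumptions, not the patent's own code:

```python
class VirtualObject:
    """A virtual object whose pose is updated by the interactive animation."""
    def __init__(self, pos):
        self.pos = pos  # (x, y, z) position in the virtual scene

def apply_grab_step(obj, hand_delta):
    """One animation step of a grab: while grabbed, the object follows
    the hand's displacement, so the final object position reflects the
    completed operation."""
    obj.pos = tuple(p + d for p, d in zip(obj.pos, hand_delta))
    return obj.pos
```

In a real system each animation frame would write the object's new pose back into the scene graph; the point illustrated is only that the object's stored position, not just its drawn image, is updated as the animation plays.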
The present invention provides a collision handling method between hand motion captured by a data glove and an impenetrable contact object. Operation in prior-art virtual reality environments is highly prone to the hand penetrating through objects; through spatial region division, the invention obtains better visual feedback.
The invention is illustrated by taking a hand-held sphere as an example:
the virtual space is provided with a sphere smaller than the palm of the hand. As shown in fig. 13. The core region is constructed according to the method, the buffer region is constructed, and other regions in the virtual space are external regions, such as partial spheres on fig. 14.
In fig. 14, the upper half shows the virtual system and the lower half shows the data glove actually worn. When the ball is to be grasped, the hand enters the buffer region from the outer region and then approaches the core region; if no first type of action is made, the locator at the wrist cannot enter the core region in the virtual reality system. In the virtual reality system, a finger can touch the edge of the sphere; however, if the locator continues to advance, only the gesture changes with the returned finger-sensor readings, and the wrist does not enter the core region. This avoids the problem of the hand passing through the ball; that is, the condition that a finger touches the sphere is determined by the position of the locator.
In a virtual reality environment, after a hand touches a ball, the hand in the virtual reality system does not advance any more even if the hand is still advancing in the real environment.
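The touch-clamping behaviour described above, where the virtual hand stops at the sphere even as the real hand keeps advancing, can be sketched as a projection back to the touch boundary. This is an illustrative assumption of one possible implementation for a spherical object, not the patent's own code:

```python
import math

def clamp_to_surface(locator_pos, sphere_center, touch_radius_cm):
    """If the tracked position has penetrated inside the touch boundary,
    project it back onto the boundary; otherwise return it unchanged.

    Positions are (x, y, z) tuples in centimetres; a spherical boundary
    around the object is assumed for simplicity.
    """
    d = math.dist(locator_pos, sphere_center)
    if d >= touch_radius_cm or d == 0.0:
        return locator_pos  # outside the boundary (or degenerate): no clamp
    scale = touch_radius_cm / d
    return tuple(c + (p - c) * scale
                 for p, c in zip(locator_pos, sphere_center))
```

Applying this each frame keeps the virtual hand on the sphere's surface while the real locator penetrates, matching the no-penetration behaviour the paragraph describes.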
If a first type of operation is made, the hand grabs the sphere and the locator enters the core region; the sphere then moves with the hand, and as long as the grip is maintained, the sphere continues to follow it. The core region, buffer region, and outer region all change as the sphere moves: the three regions are integrated with the sphere, with the set parameters aligned to the periphery of the sphere model.
As shown in fig. 15, if the sphere cannot move with the hand for some reason (rope traction, etc.) while the hand remains in a gripping state, the locator leaves the core region as the hand moves; after the hand performs a releasing action in the virtual reality system, movement proceeds according to the aforementioned strategy. The hand in the virtual system moves slightly faster than the locator; if it moves toward the outer region, the two merge at the boundary between the outer region and the buffer region; if, within the buffer region, it moves back toward the core region and performs a gripping action, the hand grips the sphere again at the boundary between the buffer region and the core region.
The invention relates to a method for dividing the spatial region when a data glove is used in a virtual reality system and the hand interacts with an impenetrable object. In this embodiment, the core area is the impenetrable object extended outwards by 20 cm, and the buffer area is the core area extended outwards by a further 40 cm. Embodiments of the present invention include, but are not limited to, these methods of setting the protection zones, their values, and the associated design considerations.
In the core area, the locator can enter only when a first type of action, such as grabbing or holding, is performed, completing the grasp between the hand and the impenetrable object. Thereafter, as long as the locator does not leave the core area, the hand keeps grasping the impenetrable object and moves with it in directions the object can move; the hand does not move in directions the object cannot move. This holds until the locator leaves the core area.
In the buffer area, the releasing action can be performed once the locator has left the core area. This action makes the displayed hand motion more realistic; without it, the hand would appear to penetrate the object in the virtual reality system, which looks unrealistic.
In the buffer area, after the locator leaves the core area and moves within the buffer area, a relationship is defined between the movement speed of the hand in the virtual reality system and the movement speed of the locator. For example, in two-dimensional coordinates the relationship may be the straight line shown in fig. 16; other possible curves are shown in figs. 17 and 18.
Figure 19 is a block diagram of a hand operation recognition system based on a data glove in accordance with a preferred embodiment of the present invention. As shown in fig. 19, the present invention provides a hand operation recognition system based on a data glove, the system comprising:
an initial unit 1901 for setting a position for the virtual simulated hand and the data glove, respectively, at a set distance; determining a core area of a target object, wherein the core area is a space area with the distance between the positioning of the virtual simulated hand and the target object as a radius when the virtual simulated hand can touch the target object; determining a buffer area of the target object, wherein the buffer area is a space range which is formed by extending outwards the core area and is based on the positioning of the virtual imitation hand and the distance between the target object; establishing an interactive animation model of the virtual imitation hand and the data glove, and determining the position change of the data glove according to the animation of the data glove based on the interactive animation model.
The virtual reality collision detection method under the operation of the data glove divides the outer space of the impenetrable object into three areas, namely an outer area, a buffer area and a core area.
The invention divides the region of the virtual reality space into three blocks, and the dividing method is based on the process of the data glove during the operation in the virtual space and the collision with the object.
The core area is the area where the hand may collide with the impenetrable object. It extends 20 cm outwards from the impenetrable object (the locator on the wrist is typically no more than 20 cm from the fingertip). That is, if the locator on the wrist has not entered the core area, not even the fingertip can reach the impenetrable object.
The buffer area is the core area extended outwards by a further 40 cm. Its purpose is to express the region in which the hand separates from the impenetrable object after leaving the core area; it is provided for a visually realistic experience.
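The three-region division above can be sketched as follows for a spherical object. This is a minimal illustration, assuming distances in centimetres and the 20 cm / 40 cm margins stated in the text; the function and variable names are illustrative, not part of the invention.

```python
import math

# Margins described in the text: the core area extends 20 cm beyond the
# impenetrable object's surface; the buffer area extends a further 40 cm.
CORE_MARGIN = 20.0    # cm
BUFFER_MARGIN = 40.0  # cm

def classify_region(locator_pos, object_center, object_radius):
    """Classify the wrist locator's position as 'core', 'buffer' or 'outer'
    relative to a spherical impenetrable object."""
    # distance from the locator to the sphere's surface
    d = math.dist(locator_pos, object_center) - object_radius
    if d <= CORE_MARGIN:
        return "core"
    if d <= CORE_MARGIN + BUFFER_MARGIN:
        return "buffer"
    return "outer"
```

Because the regions are measured from the object's surface, they automatically follow the object when it moves, matching the description that the regions are bound to the sphere.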
An execution unit 1902, configured to determine the type of action of the data glove when the positioning of the data glove enters the core area of the target object; preferably, the first type of action is an action in which the virtual simulated hand is strongly associated with the target object, and the second type of action is an action in which the virtual simulated hand is not associated with the target object.
When the target object performs a first type of action, the virtual imitation hand and the target object are kept in a connection state; when the target object cannot move, the virtual imitation hand is kept in a connected state with the target object and does not move; when the target object moves, the target object moves along with the positioning of the virtual imitation hand; preferably, the positioning of the virtual simulated hand is within the core region when the target object performs the first type of action.
When the target object performs a second type of action and when the target object moves after contacting the virtual simulated hand, the target object moves along with the positioning of the virtual simulated hand; or when the target object does not move after contacting with the virtual imitation hand, judging that the virtual imitation hand contacts with the target object;
when the data glove performs a first type of action in the core area and the positioning of the data glove leaves the core area of the target object, and when the positioning of the data glove is at the boundary of the core area, the virtual simulated hand, after performing the action of releasing the target object, restores the real action of the data glove and follows the positioning of the data glove, so that the virtual simulated hand coincides with the positioning of the data glove at the boundary of the buffer area, and the movement speed of the virtual simulated hand in the buffer area is:

V_hand = [(D_1,2 + 40) / 40] × V_loc

wherein V_hand is the movement speed of the hand in the virtual reality system, V_loc is the movement speed of the data glove's positioning in the buffer area, and D_1,2 is the spatial distance between the positioning of the virtual simulated hand and the positioning of the data glove.
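The buffer-area speed rule can be evaluated directly. The sketch below assumes distances in centimetres and 3-D position tuples; the function name is illustrative.

```python
import math

def hand_speed(v_locator, hand_pos, glove_pos):
    """Buffer-area speed rule: V_hand = [(D_1,2 + 40) / 40] * V_loc,
    where D_1,2 is the spatial distance (cm) between the virtual hand's
    position and the data glove's position."""
    d = math.dist(hand_pos, glove_pos)
    return (d + 40.0) / 40.0 * v_locator
```

Note that when the two positions coincide (D_1,2 = 0) the factor is 1 and the virtual hand moves at the locator's speed; any positive gap makes the virtual hand faster, so it catches up with the locator by the buffer boundary.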
In the present invention, after the positioning of the data glove enters the core region, penetration of the hand into the impenetrable object may occur. The actions expressed by the finger sensors are classified into two types: 1. actions in which the hand is strongly associated with the impenetrable object; 2. actions in which the hand is not associated with the impenetrable object. For example, grabbing, pinching and holding belong to the first type; pushing, beating, throwing and placing belong to the second type.
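The two-way classification can be sketched as a simple lookup. The action names below follow the examples in the text; any real system would map finger-sensor readings to such labels first.

```python
# Illustrative action names taken from the text's examples.
FIRST_TYPE = {"grab", "pinch", "hold"}            # hand strongly associated with object
SECOND_TYPE = {"push", "beat", "throw", "place"}  # hand not associated with object

def action_type(action):
    """Return 1 for strongly-associated actions, 2 for non-associated ones."""
    if action in FIRST_TYPE:
        return 1
    if action in SECOND_TYPE:
        return 2
    raise ValueError(f"unrecognised action: {action}")
```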
In the core area, if a first type of action is performed, the hand is connected to the impenetrable object. As long as the locator does not leave the core area, the hand remains connected to the object. If the impenetrable object cannot move, this state is maintained in the virtual reality system regardless of the locator's motion. When the hand performs a releasing action and the locator's position leaves the core area, the hand synchronously performs the releasing action and leaves the impenetrable object in the virtual reality system. The specific exit steps are described in the buffer-area interaction rules.
If the impenetrable object can move, e.g. be rotated or translated, and the locator moves in a direction in which the object can move, the hand moves together with the object, and the core region changes accordingly with the object's movement.
In the core area, if a second type of action is performed, the locator's position does not enter the core area in the virtual reality system, and contact between the hand and the impenetrable object is considered to occur in the logical relationship. If the object can be moved by such contact, the core region changes with the object's movement while the position of the hand does not change. If the object cannot be moved by such contact, it is only judged that the hand is in contact with the object.
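The core-area rules above amount to a small dispatch on action type, object movability, and locator position. The following is an illustrative sketch; the action names and return labels are assumptions, not part of the invention.

```python
FIRST_TYPE = {"grab", "pinch", "hold"}  # illustrative first-type action names

def core_area_response(action, object_movable, locator_in_core):
    """Dispatch the core-area rules:
    - a first-type action with the locator in the core area connects the
      hand to the object; the pair then moves only if the object can move;
    - otherwise only a logical contact is recorded, and the locator is
      treated as remaining outside the core area."""
    if action in FIRST_TYPE and locator_in_core:
        return "move_together" if object_movable else "hold_position"
    return "contact_only"
```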
When the locator leaves the core area, the hand separates from the impenetrable object after the hand performs a releasing action in the virtual reality system. At this moment the locator is already at the junction of the buffer area and the core area, denoted d1 = {x1, y1, z1}, while in the virtual reality system the hand's position is near the impenetrable object, denoted d2 = {x2, y2, z2}. The buffer area is 40 cm wide. The locator moves from one end of the buffer area to the other, while in the virtual reality system the hand's position moves from near the impenetrable object to the end of the buffer area bordering the outer area.
At this time, in the virtual reality system, the movement speed of the hand is:
V_hand = [(D_1,2 + 40) / 40] × V_loc

wherein V_hand is the movement speed of the hand in the virtual reality system, V_loc is the movement speed of the locator in the buffer area, and D_1,2 is the spatial distance between the position of the hand in the virtual reality system and the position of the actual locator.
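The catch-up behaviour described above can be checked with a one-dimensional simulation: the locator crosses the 40 cm buffer at constant speed while the virtual hand follows at the buffer-area speed, so the gap between them shrinks as both approach the outer boundary. This is an illustrative sketch with assumed initial gap, speed, and time step.

```python
def simulate_buffer_exit(gap0=15.0, v_loc=1.0, dt=0.1):
    """1-D sketch: locator starts at the buffer/core junction (x = 0 cm)
    and moves outward at v_loc; the virtual hand starts gap0 cm behind
    (near the object) and follows at V_hand = [(D + 40)/40] * V_loc.
    Returns the gap D at each step until the locator exits the buffer."""
    loc, hand = 0.0, -gap0
    gaps = []
    while loc < 40.0:                 # buffer area is 40 cm wide
        gap = loc - hand
        gaps.append(gap)
        loc += v_loc * dt
        hand += (gap + 40.0) / 40.0 * v_loc * dt  # hand moves faster
    return gaps
```

Each step multiplies the gap by (1 − v_loc·dt/40), so the gap decreases monotonically and the hand converges toward the locator by the time it reaches the boundary with the outer area.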
Preferably, the method further comprises: determining an external area of the target object, wherein the external area is a space range outside a buffer area of the target object;
the virtual simulated hand remains consistent with the motion of the data glove when the virtual simulated hand enters the buffer area from the outer area or when the virtual simulated hand enters the buffer area from the core area.
Preferably, the method further comprises: when the virtual simulated hand is located in the outer region and is not in contact with the target object, the virtual simulated hand remains consistent with the movements of the data glove.
The invention describes the situation in which a hand, after being strongly associated with an impenetrable object in the core area, leaves the core area and enters the buffer area. In other cases, such as when the hand enters the buffer area from the outer area, or leaves the core area toward the buffer area during a second-type action, the operation in the buffer area is identical to that in the outer area: the hand coincides with the locator's position as normal, and the actions reported by the finger sensors are displayed in virtual reality.
The outer region, which is farther from the object, is the region in which the glove lies after it has completely left the buffer region. In the outer region, the position of the hand is fully synchronized with the actual motion.
In the invention, the hand does not contact the impenetrable object in the outer area, so in this region the hand coincides with the locator's position as normal, and the actions reported by the finger sensors are displayed in virtual reality. That is, the hand represented by the data glove fully coincides with the hand represented in the virtual reality system.
When modeling, each touchable object in the virtual environment requires a hand-model interactive animation established with the glove. On collision, the displayed picture needs to include animations such as grasping, holding, lifting, throwing and rotating. A position-coordinate update relation is established between the animation and the virtual object, and the position information of the virtual object is updated in real time according to the operation in the animation, so that an operation completed through the animation process is correctly reflected by the position change of the virtual object after the virtual operation ends. After the hand enters the core area, the interaction mode changes to a collision mode using the handle, so that the image remains realistic.
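The position-update relation between an interaction animation and a virtual object can be sketched minimally: each animation frame reports a hand displacement, and the object's stored coordinates follow it, so the completed operation is reflected in the object's final position. The class and method names below are illustrative.

```python
class VirtualObject:
    """Minimal sketch of an animated virtual object whose coordinates
    are updated in real time from the interaction animation."""
    def __init__(self, pos):
        self.pos = list(pos)

    def apply_animation_step(self, delta):
        # While grasped or held, the object tracks the animated hand motion.
        self.pos = [p + d for p, d in zip(self.pos, delta)]
```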
The invention has been described with reference to a few embodiments. However, as is well known to those skilled in the art, other embodiments than the above disclosed invention are equally possible within the scope of the invention, as defined by the appended patent claims.
Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise therein. All references to "a/an/the [element, device, component, means, etc.]" are to be interpreted openly as referring to at least one instance of the element, device, component, means, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
Claims (10)
1. A method of hand operation recognition based on data glove, the method comprising:
setting and positioning for the virtual imitation hand and the data glove according to the set distance;
determining a core area of a target object, wherein the core area is a space area with the distance between the positioning of a virtual imitation hand and the target object as a radius when the virtual imitation hand can touch the target object;
determining a buffer area of a target object, wherein the buffer area is a spatial range of the core area which extends outwards and is based on the positioning of the virtual imitation hand and the distance between the core area and the target object;
establishing an interactive animation model of the virtual imitation hand and the data glove, and determining the position change of the data glove according to the animation of the data glove based on the interactive animation model;
judging the action type of the data glove when the positioning of the data glove enters the core area of the target object;
when the target object performs a first type of action, the virtual imitation hand and the target object are kept in a connection state; when the target object cannot move, the virtual mimicking hand remains connected to the target object and does not move; when the target object moves, the target object moves along with the positioning of the virtual imitation hand;
when the target object performs a second type of action and when the target object moves after being contacted with the virtual simulated hand, the target object moves along with the positioning of the virtual simulated hand; or when the target object does not move after contacting with the virtual simulated hand, judging that the virtual simulated hand contacts with the target object;
when the data glove performs a first type of action in the core area and the positioning of the data glove leaves the core area of the target object, and when the positioning of the data glove is at the boundary of the core area, the virtual simulated hand, after performing the action of releasing the target object, restores the real action of the data glove and follows the positioning of the data glove, so that the virtual simulated hand coincides with the positioning of the data glove at the boundary of the buffer area, and the movement speed of the virtual simulated hand in the buffer area is:

V_hand = [(D_1,2 + 40) / 40] × V_loc

wherein V_hand is the movement speed of the hand in the virtual reality system, V_loc is the movement speed of the data glove's positioning in the buffer area, D_1,2 is the spatial distance between the positioning d_1 of the virtual simulated hand and the positioning d_2 of the data glove.
2. The method of claim 1, the first type of action being a virtual mimicking hand's action having a strong association with the target object;
the second type of action is a virtual simulated hand that is not associated with the action of the target object.
3. The method of claim 1, the positioning of the virtual simulated hand within the core region when the target object performs a first type of action.
4. The method of claim 1, further comprising: determining an external area of a target object, wherein the external area is a space range outside a buffer area of the target object;
the virtual simulated hand is consistent with the motion of the data glove when the virtual simulated hand enters the buffer region from the outer region or when the virtual simulated hand enters the buffer region from the core region.
5. The method of claim 4, further comprising: the virtual simulated hand is consistent with the motion of the data glove when the virtual simulated hand is located in the outer region and is not in contact with the target object.
6. A data glove-based hand operation recognition system, the system comprising:
the initial unit is used for setting and positioning the virtual imitation hand and the data glove according to the set distance; determining a core area of a target object, wherein the core area is a space area with the distance between the positioning of a virtual imitation hand and the target object as a radius when the virtual imitation hand can touch the target object; determining a buffer area of a target object, wherein the buffer area is a spatial range of the core area which extends outwards and is based on the positioning of the virtual imitation hand and the distance between the core area and the target object; establishing an interactive animation model of the virtual imitation hand and the data glove, and determining the position change of the data glove according to the animation of the data glove based on the interactive animation model;
the execution unit is used for judging the action type of the data glove when the positioning of the data glove enters the core area of the target object;
when the target object performs a first type of action, the virtual imitation hand and the target object are kept in a connection state; when the target object cannot move, the virtual mimicking hand remains connected to the target object and does not move; when the target object moves, the target object moves along with the positioning of the virtual imitation hand;
when the target object performs a second type of action and when the target object moves after being contacted with the virtual simulated hand, the target object moves along with the positioning of the virtual simulated hand; or when the target object does not move after contacting with the virtual simulated hand, judging that the virtual simulated hand contacts with the target object;
when the data glove performs a first type of action in the core area and the positioning of the data glove leaves the core area of the target object, and when the positioning of the data glove is at the boundary of the core area, the virtual simulated hand, after performing the action of releasing the target object, restores the real action of the data glove and follows the positioning of the data glove, so that the virtual simulated hand coincides with the positioning of the data glove at the boundary of the buffer area, and the movement speed of the virtual simulated hand in the buffer area is:

V_hand = [(D_1,2 + 40) / 40] × V_loc

wherein V_hand is the movement speed of the hand in the virtual reality system, V_loc is the movement speed of the data glove's positioning in the buffer area, D_1,2 is the spatial distance between the positioning d_1 of the virtual simulated hand and the positioning d_2 of the data glove.
7. The system of claim 6, the first type of action being a virtual mimicking hand's action having a strong correlation with the target object;
the second type of action is a virtual simulated hand that is not associated with the action of the target object.
8. The system of claim 6, the virtual mimicking hand positioning within the core area when the target object performs a first type of motion.
9. The system of claim 6, further comprising: determining an external area of a target object, wherein the external area is a space range outside a buffer area of the target object;
the virtual simulated hand is consistent with the motion of the data glove when the virtual simulated hand enters the buffer region from the outer region or when the virtual simulated hand enters the buffer region from the core region.
10. The system of claim 9, further comprising: the virtual simulated hand is consistent with the motion of the data glove when the virtual simulated hand is located in the outer region and is not in contact with the target object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011020659.5A CN112306234B (en) | 2020-09-25 | 2020-09-25 | Hand operation identification method and system based on data glove |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011020659.5A CN112306234B (en) | 2020-09-25 | 2020-09-25 | Hand operation identification method and system based on data glove |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112306234A CN112306234A (en) | 2021-02-02 |
CN112306234B true CN112306234B (en) | 2023-10-31 |
Family
ID=74488632
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011020659.5A Active CN112306234B (en) | 2020-09-25 | 2020-09-25 | Hand operation identification method and system based on data glove |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112306234B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108983978A (en) * | 2018-07-20 | 2018-12-11 | 北京理工大学 | virtual hand control method and device |
CN109460150A (en) * | 2018-11-12 | 2019-03-12 | 北京特种机械研究所 | A kind of virtual reality human-computer interaction system and method |
KR20200051938A (en) * | 2018-11-06 | 2020-05-14 | 한길씨앤씨 주식회사 | Method for controlling interaction in virtual reality by tracking fingertips and VR system using it |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9552673B2 (en) * | 2012-10-17 | 2017-01-24 | Microsoft Technology Licensing, Llc | Grasping virtual objects in augmented reality |
- 2020-09-25: CN application CN202011020659.5A filed; granted as patent CN112306234B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN112306234A (en) | 2021-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5087101B2 (en) | Program, information storage medium, and image generation system | |
US6597347B1 (en) | Methods and apparatus for providing touch-sensitive input in multiple degrees of freedom | |
Ha et al. | WeARHand: Head-worn, RGB-D camera-based, bare-hand user interface with visually enhanced depth perception | |
EP3275514A1 (en) | Virtuality-and-reality-combined interactive method and system for merging real environment | |
Oprea et al. | A visually realistic grasping system for object manipulation and interaction in virtual reality environments | |
CN112105486B (en) | Augmented reality for industrial robots | |
CN113710432A (en) | Method for determining a trajectory of a robot | |
Boonbrahm et al. | Assembly of the virtual model with real hands using augmented reality technology | |
KR20200051938A (en) | Method for controlling interaction in virtual reality by tracking fingertips and VR system using it | |
Nainggolan et al. | User experience in excavator simulator using leap motion controller in virtual reality environment | |
CN109669538B (en) | Object grabbing interaction method under complex motion constraint in virtual reality | |
CN112306234B (en) | Hand operation identification method and system based on data glove | |
US11500453B2 (en) | Information processing apparatus | |
CN104239119A (en) | Method and system for realizing electric power training simulation upon kinect | |
Aleotti et al. | Trajectory reconstruction with nurbs curves for robot programming by demonstration | |
WO2021195916A1 (en) | Dynamic hand simulation method, apparatus and system | |
Barber et al. | Sketch-based robot programming | |
Spanogianopoulos et al. | Human computer interaction using gestures for mobile devices and serious games: A review | |
Shi et al. | Grasping 3d objects with virtual hand in vr environment | |
Qingchao et al. | The application of leap motion in astronaut virtual training | |
US20210232289A1 (en) | Virtual user detection | |
Kim et al. | Direct hand manipulation of constrained virtual objects | |
Figueiredo et al. | Bare hand natural interaction with augmented objects | |
TW202144983A (en) | Method of interacting with virtual creature in virtual reality environment and virtual object operating system | |
Nikolakis et al. | A mixed reality learning environment for geometry education |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||