CN109144273B - Virtual fire experience method based on VR technology

Virtual fire experience method based on VR technology

Info

Publication number
CN109144273B
Authority
CN
China
Prior art keywords: hand, node, model, data, virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811055005.9A
Other languages
Chinese (zh)
Other versions
CN109144273A (en)
Inventor
史龙宇
潘志庚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Normal University
Original Assignee
Hangzhou Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Normal University filed Critical Hangzhou Normal University
Priority to CN201811055005.9A priority Critical patent/CN109144273B/en
Publication of CN109144273A publication Critical patent/CN109144273A/en
Application granted granted Critical
Publication of CN109144273B publication Critical patent/CN109144273B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 Electrically-operated educational appliances
    • G09B 5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 Indexing scheme relating to G06F3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a virtual fire-fighting experience method based on VR technology. The method employs virtual fire-fighting experience equipment comprising: a head-mounted display device; a gesture recognition component; and a computer. The specific method is as follows: the computer first transmits fire-fighting pictures to the head-mounted display device to provide a virtual fire-fighting scene; the experiencer interacts with virtual articles in the virtual fire-fighting scene through a handle or the gesture recognition component; the gesture recognition component acquires image information of the hand and converts it into position information of each hand node in the virtual scene; the position information of each hand node is corrected by an algorithm that reduces hand model jitter; and the computer acquires feedback information from the head-mounted display, the handle, and the gesture recognition component in real time. The invention reduces the cost of fire safety education, improves its safety, and makes it more engaging.

Description

Virtual fire experience method based on VR technology
Technical Field
The invention relates to the field of virtual fire experience, and in particular to a virtual fire experience method based on VR technology.
Background
In recent years, fire safety education has drawn attention from all sectors. Conventional teaching based on textbooks, educational videos, and safety drills is monotonous and rarely achieves the desired teaching results. Physical fire safety experience halls make learning more engaging, but building one requires a large amount of equipment and floor space, so the overall cost is high, and escape drills carry the risk of safety accidents such as trampling, which makes them somewhat dangerous.
With the development of computer hardware, virtual reality technology has matured, and applying it to education has become an active research direction. By simulating a teaching scene with virtual reality technology, explaining key knowledge points in detail, summarizing theories and concepts, and guiding the learner through the senses, the learner acquires the relevant knowledge and skills through active, exploratory interaction; this stimulates the learner's interest and creativity, gives full play to the imagination, and achieves the teaching purpose.
At the present stage, when a gesture recognition component based on an infrared camera recognizes gestures and transmits gesture data, poor working light, interference from other infrared devices, or errors in the gesture data (which amplify the natural tremor of the human hand) cause the hand model to jitter after receiving the data, which degrades the user's operating experience.
Disclosure of Invention
The invention discloses a virtual fire-fighting experience method based on VR (virtual reality) technology, which aims to use advanced virtual reality technology to reduce the cost of fire safety education, improve its safety, and make it more engaging.
To achieve this purpose, the following technical scheme is adopted:
a virtual fire experience device comprising:
1) A head-mounted display device, comprising a head-mounted display and a handle that interacts with it wirelessly; positioning modules are built into both the head-mounted display and the handle. The two positioning modules can simultaneously track the positions of the head-mounted display and the handle in space. This hardware is sufficient for displaying the virtual scene and for handle interaction. An HTC Vive head-mounted display device can be used.
2) A gesture recognition component with a built-in positioning module. A Leap Motion somatosensory controller or a Fingo gesture recognition component can be used. The Fingo component tracks the user's hand with an infrared camera and recognizes gestures in real time with a gesture recognition algorithm; by defining different gestures, the user can control the movement of the virtual character in the virtual scene and the interaction with virtual objects.
3) A computer connected to the head-mounted display device and the gesture recognition component.
The computer first transmits fire-fighting pictures to the head-mounted display device to provide a virtual fire-fighting scene; the experiencer interacts with virtual articles in the virtual fire-fighting scene through the handle or the gesture recognition component; and the computer acquires position information, image information, button feedback information, and the like from the head-mounted display, the handle, and the gesture recognition component in real time.
The virtual fire experience device uses virtual reality technology; it offers relatively complete teaching of fire safety knowledge points, multiple scenes to experience, and multiple interaction modes, including handle interaction and bare-hand gesture interaction; and it evaluates and analyzes the user's practice results so that the user can better master fire safety knowledge.
The virtual fire-fighting scene comprises a fire safety knowledge learning scene, a fire-fighting equipment practice scene, a fire escape scene, a potential safety hazard investigation scene, and an evaluation scene.
The fire safety knowledge learning scene is used for learning fire safety knowledge, including everyday fire safety common sense, fire icon recognition, and the use of fire-fighting equipment.
The fire-fighting equipment practice scene is used for learning and practicing the use of fire-fighting equipment, including fire extinguishers, safety ropes, fire blankets, and the like.
The fire escape scene simulates fires breaking out in various everyday settings; the user applies the learned fire safety knowledge to find and use tools such as fire extinguishers and to follow a safe escape route. The scenes include common settings such as classrooms, dormitories, laboratories, and bedrooms, and contain the necessary high-precision models such as furniture and desks, as well as flame and other effects.
The potential safety hazard investigation scene tests how well the user has learned everyday fire prevention knowledge. Ten potential safety hazards are set; the user passes by finding all ten within a set time, otherwise failure is indicated and the positions of the undiscovered hazards are given.
In the evaluation scene, the escape operation is evaluated after the escape succeeds or fails; the evaluation points out the correct operations and the wrong actions, and offers an opportunity to practice the wrong actions again.
3ds Max software is used to model the virtual scene. It can build high-precision models that reproduce real objects such as furniture and fire extinguishers for display in the virtual scene, giving the user an immersive feeling.
The virtual scene is built with the Unity 3D game engine, which can vividly simulate flame effects, collision effects, and the like. The SteamVR development kit is used for the system development of the VR project, and C# is used to write the logic scripts, enabling movement, touching objects, triggering events, and other operations in the virtual scene and making it more realistic.
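As an illustration of such a logic script, the following minimal Unity C# sketch shows a trigger-event handler of the kind described; the class name VirtualItemInteraction and the "Hand" tag are illustrative assumptions, not the actual project code.

using UnityEngine;

// Minimal illustrative sketch: attached to a virtual article (e.g., a fire
// extinguisher), it reacts when the virtual hand or handle touches the
// article's trigger collider.
public class VirtualItemInteraction : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        // "Hand" is an assumed tag on the virtual hand / handle collider.
        if (other.CompareTag("Hand"))
        {
            Debug.Log("Virtual article touched: " + gameObject.name);
            // Here the corresponding event would be triggered,
            // e.g., picking up the extinguisher.
        }
    }
}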
A virtual fire experience method based on VR technology adopts the virtual fire experience equipment; the specific method is as follows:
the computer first transmits fire-fighting pictures to the head-mounted display device to provide a virtual fire-fighting scene; the experiencer interacts with virtual articles in the virtual fire-fighting scene through the handle or the gesture recognition component; the gesture recognition component acquires image information of the hand and converts it into position information of each hand node in the virtual scene; the position information of each hand node is corrected by the algorithm for reducing hand model jitter; and the computer acquires feedback information from the head-mounted display, the handle, and the gesture recognition component in real time.
The feedback information includes position information, image information, button feedback information, and the like.
For the case of bare-hand control through the gesture recognition component, the gesture recognition algorithm is optimized and an algorithm that reduces hand model jitter is designed, as follows:
when an infrared-camera-based gesture recognition component recognizes gestures and transmits gesture data, poor working light, interference from other infrared devices, or errors in the gesture data (which amplify the natural tremor of the human hand) cause the hand model to jitter after receiving the data. Three main cases arise. First, the whole hand model translates within a certain range, with an amplitude that clearly does not match the motion of the real human hand. Second, most of the hand, such as the palm, translates within a certain range while individual fingers show rapid displacement (similar to a finger clicking a mouse). Third, data is lost and the virtual hand model disappears, which lowers the gesture recognition rate and harms the user experience. To address these problems, a reasonable data smoothing algorithm, the algorithm for reducing hand model jitter, is designed.
The hand in the virtual scene refers to the model of the hand.
The algorithm for reducing hand model jitter comprises the following steps:
the position information of the hand comprises 21 nodes: each of the 5 fingers has 4 nodes, with 1 node at the fingertip and three nodes at the 3 joints, and the wrist is 1 node; the gesture recognition component can establish a three-dimensional coordinate system for the positions of the 21 nodes in the virtual fire scene;
Case one: let the thickness of the hand model be 1A (in model units) and the thickness of the real hand be 1B (units); when the whole hand model translates within a range of 1A-2A units while the real hand moves only within 0-0.5B, the hand model is judged to clearly not match the motion of the real human hand (e.g., the real hand is still while the hand model jitters in the virtual scene).
1) Judging that the hand model clearly does not match the motion of the real human hand, specifically:
the hand model has 21 nodes; take one node, record its current position as P1 and store it in an array; the position data of that point in the next frame is P2, and so on, until the Mth frame's data is stored; if the relative distance t between any two points in the array is less than λ, and all 21 nodes satisfy this condition, the current hand model is judged to clearly not match the motion of the real human hand;
wherein M is a value set according to the gesture recognition component, t is the relative distance between any two points in the array, and λ is the set distance, used in the judgment, between the farthest jitter point and the center point;
2) the processing method is as follows:
take one node of the hand model and record its position over frames 1 to M: the node position in frame 1 is P1(P1x, P1y, P1z), in frame 2 is P2(P2x, P2y, P2z), ..., and in frame M is PM(PMx, PMy, PMz); the stable point P0(P0x, P0y, P0z) is represented by the average of the M frames: P0x = (P1x + P2x + ... + PMx)/M, P0y = (P1y + P2y + ... + PMy)/M, P0z = (P1z + P2z + ... + PMz)/M, which determines the coordinates of the stable point P0;
when each frame of data is obtained, P0 + θ(P - P0) is used as the position of the current frame with the jitter amplitude reduced, and this result is displayed in the system; here P is the current-frame position coordinate that the node receives from the gesture recognition component, θ is the compression ratio of the jitter displacement, and θ(P - P0) is the node displacement after compression;
all 21 nodes are processed in this way (i.e., according to step 2); the judgment of step 1) is illustrated by the sketch below.
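A minimal C# sketch of the step 1) judgment, assuming each node's last M positions are buffered per frame; the names and the use of System.Numerics are illustrative assumptions rather than the actual implementation.

using System.Numerics;

static class JitterDetection
{
    // Case one: the whole hand model jitters although the real hand is still.
    // frames[node] holds the last M buffered positions of one of the 21 nodes.
    // Returns true when, for every node, the distance t between any two of its
    // M positions stays below lambda.
    public static bool WholeHandJitter(Vector3[][] frames, float lambda)
    {
        foreach (Vector3[] positions in frames)          // 21 nodes
            for (int i = 0; i < positions.Length; i++)
                for (int j = i + 1; j < positions.Length; j++)
                    if (Vector3.Distance(positions[i], positions[j]) >= lambda)
                        return false;                    // this node moved too far
        return true;  // all 21 nodes satisfy t < lambda over the M frames
    }
}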
Case two: the fingers of the real hand are occluded while the real hand is relatively still; the palm is still in the model, but the fingers of the model jitter back and forth;
A) judging that the real hand's fingers are occluded and the real hand is relatively still, specifically:
the hand model has 21 nodes; take one node, record its position in the first frame as Q1 and store it in an array; the position data of the next frame is Q2, and so on, until the Nth frame's data QN is stored, giving the position information Q1-QN for the previous N frames; for each node, check whether the distance s between its positions in any two of the N frames is less than γ; if 5 to 20 of the 21 nodes satisfy s < γ, the real hand is judged to be in a relatively still state;
wherein N is a value set by the gesture recognition component, s is the distance between the node's positions in any two of the N frames, and γ is a set threshold;
B) when the real hand's fingers are occluded and the real hand is relatively still, each node on the fingers of the model oscillates between two reciprocating points within 1 to 3 frames, and the fingertip nodes are used to judge whether the fingers of the model jitter back and forth;
take the fingertip nodes of the fingers of the model's hand; for one node, record its position data over L frames and store it in an array, then compute the distances between the position data in the array; if the distance u between the position in any one of the L frames and the position in another frame satisfies u < ε or u > λm, the finger of the model is jittering back and forth, and the two positions with the largest distance are taken as the two reciprocating points A and B;
wherein L is a value set according to the gesture recognition component, u is the distance between any two of the node's L frames, ε is a set threshold, and λm is the set jitter-distance threshold of the node;
all 21 nodes are processed in this way (i.e., according to step B);
C) a smoothing algorithm is adopted to correct the back-and-forth jitter of the model's fingers, specifically:
a. take the two end points of the finger's back-and-forth jitter in the model as the two reciprocating points A and B, and let U be the vector from the finger-root node of the model's hand to the fingertip of the same finger;
b. the vector BA is obtained from reciprocating point A minus reciprocating point B, and the angle is computed from the formula cos θ = (vector U · vector BA)/(|vector U| × |vector BA|);
c. if the angle θ between vector BA and vector U is smaller than 90 degrees, A is the correct stable point;
if the angle θ between vector BA and vector U is larger than 90 degrees, B is the correct stable point;
d. the data at the correct stable point is set as the finger data of the model's hand;
Case three: while the gesture recognition component is recognizing real gestures, the hand data for the whole model may be missing when the real hand moves too much, the recognition environment deteriorates, or the hand leaves the recognition range; this makes the model disappear, and if the user is holding an object with the virtual hand in the virtual scene at that moment, the disappearance of the hand model can cause the object to drop, which harms the user experience.
The judging method is as follows:
the system receives no hand data for the model.
The processing method is as follows:
record the M frames of data before the loss, restore the last frame as the current data, and give a prompt in the virtual scene (e.g., "hand data lost"); if hand data for the model is detected again, the gesture is simulated from the new signal.
Compared with the prior art, the invention has the following advantages:
the gesture recognition algorithm in the scene of bare-hand control by the gesture recognition component is optimized by adopting the algorithm for reducing hand model jitter, and the nodes of the hands in the virtual scene are corrected, so that the computer can more accurately acquire the feedback information of the gesture recognition component, the recognition efficiency of the gesture recognition component is improved, a person can have better experience interactivity and better experience feeling in virtual fire-fighting experience, the cost of fire safety education is reduced, the safety of the fire safety education is improved, and the interestingness of the fire safety education is increased.
Drawings
FIG. 1 is a schematic flow chart of a virtual fire experience method based on VR technology according to the present invention;
FIG. 2 is a schematic structural diagram of a human hand model with 21 nodes;
FIG. 3 is a schematic diagram of a hand model that significantly does not conform to the hand motion of a human hand in reality;
FIG. 4 is a schematic diagram of a palm resting in a model and fingers reciprocating in the model;
FIG. 5 is a schematic diagram of a vector U of a finger root node of a hand in a model pointing to a fingertip of the same finger;
FIG. 6 is a diagram illustrating the case where the angle θ between vector BA and vector U is smaller than 90 degrees;
fig. 7 is a diagram illustrating that the angle θ between the vector BA and the vector U is greater than 90 degrees.
Detailed Description
As shown in fig. 1, the virtual fire experience method based on VR technology of the present invention employs virtual fire experience equipment, which includes:
1) A head-mounted display device, comprising a head-mounted display and a handle that interacts with it wirelessly; positioning modules are built into both the head-mounted display and the handle. The two positioning modules can simultaneously track the positions of the head-mounted display and the handle in space. This hardware is sufficient for displaying the virtual scene and for handle interaction. An HTC Vive head-mounted display device can be used.
2) A gesture recognition component with a built-in positioning module. A Fingo gesture recognition component is adopted; it tracks the user's hand with an infrared camera and recognizes gestures in real time with a gesture recognition algorithm, and by defining different gestures, the user can control the movement of the virtual character in the virtual scene and the interaction with virtual objects.
3) A computer connected to the head-mounted display device and the gesture recognition component.
The computer first transmits fire-fighting pictures to the head-mounted display device to provide a virtual fire-fighting scene; the experiencer interacts with virtual articles in the virtual fire-fighting scene through the handle or the gesture recognition component; and the computer acquires position information, image information, button feedback information, and the like from the head-mounted display, the handle, and the gesture recognition component in real time.
The virtual fire experience device uses virtual reality technology; it offers relatively complete teaching of fire safety knowledge points, multiple scenes to experience, and multiple interaction modes, including handle interaction and bare-hand gesture interaction; and it evaluates and analyzes the user's practice results so that the user can better master fire safety knowledge.
The virtual fire-fighting scene comprises a fire safety knowledge learning scene, a fire-fighting equipment practice scene, a fire escape scene, a potential safety hazard investigation scene, and an evaluation scene.
The fire safety knowledge learning scene is used for learning fire safety knowledge, including everyday fire safety common sense, fire icon recognition, and the use of fire-fighting equipment.
The fire-fighting equipment practice scene is used for learning and practicing the use of fire-fighting equipment, including fire extinguishers, safety ropes, fire blankets, and the like.
The fire escape scene simulates fires breaking out in various everyday settings; the user applies the learned fire safety knowledge to find and use tools such as fire extinguishers and to follow a safe escape route. The scenes include common settings such as classrooms, dormitories, laboratories, and bedrooms, and contain the necessary high-precision models such as furniture and desks, as well as flame and other effects.
The potential safety hazard investigation scene tests how well the user has learned everyday fire prevention knowledge. Ten potential safety hazards are set; the user passes by finding all ten within a set time, otherwise failure is indicated and the positions of the undiscovered hazards are given.
In the evaluation scene, the escape operation is evaluated after the escape succeeds or fails; the evaluation points out the correct operations and the wrong actions, and offers an opportunity to practice the wrong actions again.
3ds Max software is used to model the virtual scene. It can build high-precision models that reproduce real objects such as furniture and fire extinguishers for display in the virtual scene, giving the user an immersive feeling.
The virtual scene is built with the Unity 3D game engine, which can vividly simulate flame effects, collision effects, and the like. The SteamVR development kit is used for the system development of the VR project, and C# is used to write the logic scripts, enabling movement, touching objects, triggering events, and other operations in the virtual scene and making it more realistic.
A virtual fire experience method based on VR technology adopts the virtual fire experience equipment; the specific method is as follows:
the computer transmits fire-fighting pictures to the head-mounted display device to provide a virtual fire-fighting scene; the experiencer interacts with virtual articles in the virtual fire-fighting scene through the handle or the gesture recognition component; the gesture recognition component acquires image information of the hand and converts it into position information of each hand node in the virtual scene; the position information of each hand node is corrected by the algorithm for reducing hand model jitter; and the computer acquires feedback information from the head-mounted display, the handle, and the gesture recognition component in real time. The feedback information includes position information, image information, button feedback information, and the like. The hand in the virtual scene refers to the model of the hand.
The algorithm for reducing hand model jitter comprises the following steps:
As shown in fig. 2, the position information of the hand comprises 21 nodes: each of the 5 fingers has 4 nodes, with 1 node at the fingertip and three nodes at the 3 joints, and the wrist is 1 node. Specifically, the thumb tip is node 1 and the thumb's 3 joints are nodes 2, 3, and 4; the index finger tip is node 5 and its 3 joints are nodes 6, 7, and 8; the middle finger tip is node 9 and its 3 joints are nodes 10, 11, and 12; the ring finger tip is node 13 and its 3 joints are nodes 14, 15, and 16; the little finger tip is node 17 and its 3 joints are nodes 18, 19, and 20; and the wrist is node 21. The gesture recognition component establishes a three-dimensional coordinate system for the positions of these 21 nodes in the virtual fire scene.
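For the illustrative sketches in this description, one frame of hand data can be held in a simple container such as the following; the type name and index constants are assumptions made for illustration only.

using System.Numerics;

// One frame of hand data: 21 node positions indexed as in FIG. 2
// (node 1 = thumb tip, ..., node 21 = wrist; 0-based indices here).
public class HandFrame
{
    public const int NodeCount = 21;
    public const int WristIndex = 20;                                     // node 21
    public static readonly int[] FingertipIndices = { 0, 4, 8, 12, 16 }; // nodes 1, 5, 9, 13, 17
    public Vector3[] Nodes = new Vector3[NodeCount];
}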
Case one: let the thickness of the hand model be 1A (in model units) and the thickness of the real hand be 1B (units); when the whole hand model translates within a range of 1A-2A units while the real hand moves only within 0-0.5B, the hand model is judged to clearly not match the motion of the real human hand (e.g., the real hand is still while the hand model jitters in the virtual scene). As shown in FIG. 3, the whole hand model translates within a certain range, moving back and forth along the arrow, with an amplitude that clearly does not match the real hand's motion.
Judging that the hand model clearly does not match the motion of the real human hand, specifically:
the hand model has 21 nodes; take one node, record its current position as P1 and store it in an array; the position data of the next frame is P2, and so on, until the Mth frame's data is stored (M can be set to different values according to the actual gesture recognition component); if the relative distance t between any two points in the array is less than λ, and all 21 nodes satisfy this condition, the current hand model is judged to clearly not match the motion of the real human hand.
M is a value set according to the gesture recognition component, t is the relative distance between any two points in the array, and λ is the set distance, used in the judgment, between the farthest jitter point and the center point.
The processing method is as follows:
in this case the displacement of the hand is small; take one node and record its position over M frames: the node position in frame 1 is P1(P1x, P1y, P1z), in frame 2 is P2(P2x, P2y, P2z), ..., and in frame M is PM(PMx, PMy, PMz); a stable point P0(P0x, P0y, P0z) can be represented by the average of the M frames: P0x = (P1x + P2x + ... + PMx)/M, P0y = (P1y + P2y + ... + PMy)/M, P0z = (P1z + P2z + ... + PMz)/M, which determines the coordinates of the stable point P0.
When each frame of data is acquired, P0 + θ(P - P0) is used as the position of the current frame with the jitter amplitude reduced, and this result is displayed in the system; here P is the current-frame position coordinate that the node receives from the gesture recognition component, P0 is the position of the stable point, θ is the compression ratio of the jitter displacement, and θ(P - P0) is the node displacement after compression. This reduces the jitter amplitude, so the node moves only slightly around the stable point.
All 21 nodes are processed in this way;
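A minimal C# sketch of this stable-point smoothing, under the same illustrative assumptions as above:

using System.Numerics;

static class JitterSmoothing
{
    // Stable point P0: the per-component average of the node's last M positions.
    public static Vector3 StablePoint(Vector3[] lastMFrames)
    {
        Vector3 sum = Vector3.Zero;
        foreach (Vector3 p in lastMFrames) sum += p;
        return sum / lastMFrames.Length;                 // (P1 + ... + PM) / M
    }

    // Each new frame: display P0 + theta * (P - P0), i.e., the raw displacement
    // P - P0 compressed by the ratio theta (0 < theta < 1).
    public static Vector3 Smooth(Vector3 p, Vector3 p0, float theta)
    {
        return p0 + theta * (p - p0);
    }
}

With theta near 0 the node is pinned to the stable point, and with theta near 1 the raw data passes through unchanged, so theta trades responsiveness against jitter suppression.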
Case two: the fingers of the real hand are occluded while the real hand is relatively still; the palm is still in the model, but the fingers of the model jitter back and forth. As shown in fig. 4, when part of the hand is occluded, the gesture recognition module may recognize a wrong hand position; in this critical state, the virtual hand model oscillates, with the fingers deflecting quickly and sharply toward the palm and then recovering the correct gesture;
judging that the real hand's fingers are occluded and the real hand is relatively still, specifically:
the hand model has 21 nodes; take one node, record its current position as Q1 and store it in an array; the position data of the next frame is Q2, and so on, until the Nth frame's data QN is stored, giving the position information Q1-QN for the previous N frames; for each node, check whether the distance s between its positions in any two of the N frames is less than γ; if 5 to 20 of the 21 nodes satisfy s < γ, the real hand is judged to be in a relatively still state;
N is a value set by the gesture recognition component, s is the distance between the node's positions in any two of the N frames, and γ is the set threshold for back-and-forth jitter of the model's fingers;
When the real hand's fingers are occluded and the real hand is relatively still, each node on the fingers of the model oscillates between two reciprocating points within 1-3 frames, and the fingertip nodes are used to judge whether the fingers of the model jitter back and forth. For example, let P0 be the correct position of the jitter and P0' the wrong one: the node stays near P0 for several frames, jumps to P0' and stays there for several frames, then jumps back to P0, and keeps oscillating between P0 and P0'. By recording the position of each point in every frame, one can judge whether the point oscillates, that is, whether this reciprocating motion occurs within a few frames.
In this scenario, the correct node position is generally the stable point at which the fingertip is close to its position in the open-palm gesture.
Take the fingertip nodes of the fingers of the model's hand; for one node, record its position data over L frames and store it in an array A[], then compute the distances between the position data in the array; if the distance u between the position in any one of the L frames and the position in another frame satisfies u < ε or u > λm, the finger of the model is jittering back and forth, and the two positions with the largest distance are taken as the two reciprocating points A and B;
wherein L is a value set according to the gesture recognition component; u is the distance between any two of the node's L frames; ε is a set threshold; λm is the set jitter-distance threshold of the node; and the array A[] records the position data of a node of the hand in the virtual scene over a period of time.
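The two case-two judgments can be sketched as follows. The sketch reads the criterion as also requiring at least one distance above λm (otherwise all positions lie within ε of each other and the node is simply still); that reading, along with all names, is an illustrative assumption.

using System.Numerics;

static class ReciprocationDetection
{
    // A) Relative stillness: the distance s between any two of a node's N
    // buffered positions stays below gamma, for 5 to 20 of the 21 nodes.
    public static bool HandIsStill(Vector3[][] frames, float gamma)
    {
        int stillNodes = 0;
        foreach (Vector3[] positions in frames)          // 21 nodes
        {
            bool still = true;
            for (int i = 0; i < positions.Length && still; i++)
                for (int j = i + 1; j < positions.Length; j++)
                    if (Vector3.Distance(positions[i], positions[j]) >= gamma)
                    {
                        still = false;
                        break;
                    }
            if (still) stillNodes++;
        }
        return stillNodes >= 5 && stillNodes <= 20;
    }

    // B) Back-and-forth jitter of a fingertip node: every pairwise distance u
    // over the L buffered positions (the array A[]) satisfies u < epsilon or
    // u > lambdaM; the two positions farthest apart become points A and B.
    public static bool FingertipReciprocates(Vector3[] a, float epsilon, float lambdaM,
                                             out Vector3 pointA, out Vector3 pointB)
    {
        pointA = pointB = Vector3.Zero;
        float maxDist = 0f;
        for (int i = 0; i < a.Length; i++)
            for (int j = i + 1; j < a.Length; j++)
            {
                float u = Vector3.Distance(a[i], a[j]);
                if (u >= epsilon && u <= lambdaM)        // violates "u < epsilon or u > lambdaM"
                    return false;
                if (u > maxDist) { maxDist = u; pointA = a[i]; pointB = a[j]; }
            }
        return maxDist > lambdaM;                        // a real jump between two clusters
    }
}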
A smoothing algorithm is adopted to correct the back-and-forth jitter of the model's fingers, specifically:
1. take the two end points of the finger's back-and-forth jitter in the model as the two reciprocating points A and B, and let U be the vector from the finger-root node of the model's hand to the fingertip of the same finger, i.e., vector U, as shown in FIG. 5;
2. the vector BA is obtained from reciprocating point A minus reciprocating point B, and the angle is computed from the formula cos θ = (vector U · vector BA)/(|vector U| × |vector BA|).
3. if the angle θ between vector BA and vector U is smaller than 90 degrees, as shown in fig. 6, A is the correct stable point;
if the angle θ between vector BA and vector U is greater than 90 degrees, as shown in FIG. 7, B is the correct stable point.
4. the data at the correct stable point is set as the finger data of the model's hand.
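A minimal sketch of this stable-point selection; since cos θ = (U · BA)/(|U||BA|), the sign of the dot product alone decides whether θ is below 90 degrees, so no magnitudes need to be computed. The names are illustrative.

using System.Numerics;

static class ReciprocationCorrection
{
    // Choose the correct stable point among the reciprocating points A and B.
    // U points from the finger-root node to the fingertip of the same finger;
    // BA = A - B. Dot(U, BA) > 0 means the angle between BA and U is below
    // 90 degrees, so A is the correct stable point; otherwise B is.
    public static Vector3 CorrectStablePoint(Vector3 a, Vector3 b,
                                             Vector3 fingerRoot, Vector3 fingertip)
    {
        Vector3 u = fingertip - fingerRoot;              // vector U
        Vector3 ba = a - b;                              // vector BA
        return Vector3.Dot(u, ba) > 0 ? a : b;
    }
}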
Case three: when gesture recognition component is in the gesture of discernment reality, the hand data condition that does not exist in the whole model can appear when the action range of hand in the reality is great, the discernment environment variation, surpass the discernment scope, this can lead to disappearing of model, if the user uses virtual hand to hold the object in virtual scene this moment, the hand model disappears and can cause the condition such as article fall, is unfavorable for user experience.
The judging method comprises the following steps:
the system does not receive hand data in the model.
The treatment method comprises the following steps:
recording M frame data before losing, recovering the last frame as current data, giving a prompt (such as hand data loss) in a virtual scene, and simulating a gesture according to a new signal if hand data in the model is detected again.
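A minimal sketch of this loss handling, with an assumed queue-based buffer of the last M frames; the class and method names are illustrative.

using System.Collections.Generic;
using System.Numerics;

// Case three: buffer the last M frames; when hand data stops arriving, reuse
// the most recent frame as the current data and raise a prompt until tracking
// resumes, at which point the new signal drives the gesture again.
class HandDataBuffer
{
    private readonly Queue<Vector3[]> frames = new Queue<Vector3[]>();
    private readonly int capacity;                       // M

    public HandDataBuffer(int m) { capacity = m; }

    // nodesOrNull: a frame of 21 node positions, or null when data is lost.
    public Vector3[] Update(Vector3[] nodesOrNull, out bool showLostPrompt)
    {
        if (nodesOrNull != null)
        {
            frames.Enqueue(nodesOrNull);
            if (frames.Count > capacity) frames.Dequeue();
            showLostPrompt = false;
            return nodesOrNull;                          // new signal, normal path
        }
        showLostPrompt = true;                           // e.g., display "hand data lost"
        Vector3[] last = null;
        foreach (Vector3[] f in frames) last = f;        // newest buffered frame
        return last;                                     // restore the last frame
    }
}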

Claims (2)

1. A virtual fire experience method based on VR technology, characterized in that virtual fire experience equipment is adopted, comprising:
a head-mounted display device, comprising a head-mounted display and a handle that interacts with it wirelessly, with positioning modules arranged in both the head-mounted display and the handle;
a gesture recognition component with a built-in positioning module;
a computer connected to the head-mounted display device and the gesture recognition component;
the specific method comprises the following steps:
the computer transmits fire-fighting pictures to the head-mounted display device to provide a virtual fire-fighting scene; the experiencer interacts with virtual articles in the virtual fire-fighting scene through the handle or the gesture recognition component; the gesture recognition component acquires image information of the hand and converts it into position information of each hand node in the virtual scene; the position information of each hand node is corrected by an algorithm for reducing hand model jitter; and the computer acquires feedback information from the head-mounted display, the handle, and the gesture recognition component in real time;
the algorithm for reducing hand model jitter comprises the following steps:
the position information of the hand comprises 21 nodes: each of the 5 fingers has 4 nodes, with 1 node at the fingertip and three nodes at the 3 joints, and the wrist is 1 node;
Case one: the hand model clearly does not match the motion of the real human hand;
1) judging that the hand model clearly does not match the motion of the real human hand, specifically:
the hand model has 21 nodes; take one node, record its current position as P1 and store it in an array; the position data of that point in the next frame is P2, and so on, until the Mth frame's data is stored; if the relative distance t between any two points in the array is less than λ, and all 21 nodes satisfy this condition, the current hand model is judged to clearly not match the motion of the real human hand;
wherein M is a value set according to the gesture recognition component, t is the relative distance between any two points in the array, and λ is the set distance, used in the judgment, between the farthest jitter point and the center point;
2) the processing method is as follows:
take one node of the hand model and record its position over frames 1 to M: the node position in frame 1 is P1(P1x, P1y, P1z), in frame 2 is P2(P2x, P2y, P2z), ..., and in frame M is PM(PMx, PMy, PMz); the stable point P0(P0x, P0y, P0z) is represented by the average of the M frames: P0x = (P1x + P2x + ... + PMx)/M, P0y = (P1y + P2y + ... + PMy)/M, P0z = (P1z + P2z + ... + PMz)/M, which determines the coordinates of the stable point P0;
when each frame of data is obtained, P0 + θ(P - P0) is used as the position of the current frame with the jitter amplitude reduced, and this result is displayed in the system; here P is the current-frame position coordinate that the node receives from the gesture recognition component, θ is the compression ratio of the jitter displacement, and θ(P - P0) is the node displacement after compression;
all 21 nodes are processed in this way;
Case two: the fingers of the real hand are occluded while the real hand is relatively still; the palm is still in the model, but the fingers of the model jitter back and forth;
A) judging that the real hand's fingers are occluded and the real hand is relatively still, specifically:
the hand model has 21 nodes; take one node, record its position in the first frame as Q1 and store it in an array; the position data of the next frame is Q2, and so on, until the Nth frame's data QN is stored, giving the position information Q1-QN for the previous N frames; for each node, check whether the distance s between its positions in any two of the N frames is less than γ; if 5 to 20 of the 21 nodes satisfy s < γ, the real hand is judged to be in a relatively still state;
wherein N is a value set by the gesture recognition component, s is the distance between the node's positions in any two of the N frames, and γ is a set threshold;
B) when the real hand's fingers are occluded and the real hand is relatively still, each node on the fingers of the model oscillates between two reciprocating points within 1 to 3 frames, and the fingertip nodes are used to judge whether the fingers of the model jitter back and forth;
take the fingertip nodes of the fingers of the model's hand; for one node, record its position data over L frames and store it in an array, then compute the distances between the position data in the array; if the distance u between the position in any one of the L frames and the position in another frame satisfies u < ε or u > λm, the finger of the model is jittering back and forth, and the two positions with the largest distance are taken as the two reciprocating points A and B;
wherein L is a value set according to the gesture recognition component; u is the distance between any two of the node's L frames; ε is a set threshold; λm is the set jitter-distance threshold of the node;
all 21 nodes are processed in this way;
C) a smoothing algorithm is adopted to correct the back-and-forth jitter of the model's fingers, specifically:
a. take the two end points of the finger's back-and-forth jitter in the model as the two reciprocating points A and B, and let U be the vector from the finger-root node of the model's hand to the fingertip of the same finger;
b. the vector BA is obtained from reciprocating point A minus reciprocating point B, and the angle is computed from the formula cos θ = (vector U · vector BA)/(|vector U| × |vector BA|);
c. if the angle θ between vector BA and vector U is smaller than 90 degrees, A is the correct stable point;
if the angle θ between vector BA and vector U is larger than 90 degrees, B is the correct stable point;
d. the data at the correct stable point is set as the finger data of the model's hand;
Case three: the system receives no hand data for the model;
record the T frames of data before the loss, restore the last frame as the current data, and give a prompt in the virtual scene; if hand data for the model is detected again, the gesture is simulated from the new signal.
2. The VR-technology-based virtual fire experience method of claim 1, wherein the feedback information includes position information, image information, and button feedback information.
CN201811055005.9A 2018-09-11 2018-09-11 Virtual fire experience method based on VR technology Active CN109144273B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811055005.9A CN109144273B (en) 2018-09-11 2018-09-11 Virtual fire experience method based on VR technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811055005.9A CN109144273B (en) 2018-09-11 2018-09-11 Virtual fire experience method based on VR technology

Publications (2)

Publication Number Publication Date
CN109144273A CN109144273A (en) 2019-01-04
CN109144273B true CN109144273B (en) 2021-07-27

Family

ID=64824599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811055005.9A Active CN109144273B (en) 2018-09-11 2018-09-11 Virtual fire experience method based on VR technology

Country Status (1)

Country Link
CN (1) CN109144273B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109992107B (en) * 2019-02-28 2023-02-24 济南大学 Virtual control device and control method thereof
CN109767668B (en) * 2019-03-05 2021-04-20 郑州万特电气股份有限公司 Unity 3D-based virtual fire-fighting training device
CN110827414A (en) * 2019-11-05 2020-02-21 江西服装学院 Virtual digital library experience device based on VR technique
CN111369854A (en) * 2020-03-20 2020-07-03 广西生态工程职业技术学院 Vr virtual reality laboratory operating system and method
CN112102667A (en) * 2020-09-27 2020-12-18 国家电网有限公司技术学院分公司 Video teaching system and method based on VR interaction
CN112835449A (en) * 2021-02-03 2021-05-25 青岛航特教研科技有限公司 Virtual reality and somatosensory device interaction-based safety somatosensory education system
CN113223364A (en) * 2021-06-29 2021-08-06 中国人民解放军海军工程大学 Submarine cable diving buoy simulation training system
CN115454240B (en) * 2022-09-05 2024-02-13 无锡雪浪数制科技有限公司 Meta universe virtual reality interaction experience system and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930753B (en) * 2012-10-17 2014-11-12 中国石油化工股份有限公司 Gas station virtual training system and application
US9696795B2 (en) * 2015-02-13 2017-07-04 Leap Motion, Inc. Systems and methods of creating a realistic grab experience in virtual reality/augmented reality environments
CN104750397B (en) * 2015-04-09 2018-06-15 重庆邮电大学 A kind of Virtual mine natural interactive method based on body-sensing
CN108196686B (en) * 2018-03-13 2024-01-26 北京无远弗届科技有限公司 Hand motion gesture capturing device, method and virtual reality interaction system

Also Published As

Publication number Publication date
CN109144273A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN109144273B (en) Virtual fire experience method based on VR technology
TWI377055B (en) Interactive rehabilitation method and system for upper and lower extremities
Khundam First person movement control with palm normal and hand gesture interaction in virtual reality
KR101643020B1 (en) Chaining animations
US8418085B2 (en) Gesture coach
US20090258703A1 (en) Motion Assessment Using a Game Controller
WO2018196552A1 (en) Method and apparatus for hand-type display for use in virtual reality scene
CN108805766B (en) AR somatosensory immersive teaching system and method
US10788889B1 (en) Virtual reality locomotion without motion controllers
CN108961910A (en) A kind of VR fire drill device
Babu et al. Can immersive virtual humans teach social conversational protocols?
Kirakosian et al. Near-contact person-to-3d character dance training: Comparing ar and vr for interactive entertainment
Rouanet et al. A comparison of three interfaces using handheld devices to intuitively drive and show objects to a social robot: the impact of underlying metaphors
Chang et al. A platform for mechanical assembly education using the Microsoft Kinect
Rupprecht et al. Virtual reality meets smartwatch: Intuitive, natural, and multi-modal interaction
US20230214007A1 (en) Virtual reality de-escalation tool for delivering electronic impulses to targets
Zhu et al. Keyboard before head tracking depresses user success in remote camera control
KR101519589B1 (en) Electronic learning apparatus and method for controlling contents by hand avatar
Kirakosian et al. Immersive simulation and training of person-to-3d character dance in real-time
Wei et al. Integrating Kinect and haptics for interactive STEM education in local and distributed environments
Kang et al. Integrated augmented and virtual reality technologies for realistic fire drill training
Martinez et al. Usability evaluation of virtual reality interaction techniques for positioning and manoeuvring in reduced, manipulation-oriented environments
Mamode et al. Cooperative tabletop working for humans and humanoid robots: Group interaction with an avatar
Cardoso Gesture-based locomotion in immersive VR worlds with the Leap motion controller
Kuramoto et al. Augmented practice mirror: A self-learning support system of physical motion with real-time comparison to teacher’s model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant