CN112835449A - Virtual reality and somatosensory device interaction-based safety somatosensory education system - Google Patents
Virtual reality and somatosensory device interaction-based safety somatosensory education system
- Publication number
- CN112835449A (application CN202110152998.7A)
- Authority
- CN
- China
- Prior art keywords
- somatosensory
- module
- experience
- positioning
- motion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Abstract
The invention discloses a safety somatosensory education system based on interaction between virtual reality and somatosensory equipment, which comprises: an injury somatosensory device module, used for controlling the sensors on the somatosensory devices and generating the corresponding injury special-effect animations for different experience scenes; a virtual reality simulation module, used for modeling the experience scene in a VR virtual environment; a platform positioning module, used for locating the position and view angle of a person in the VR virtual environment by acquiring the positions of the sensors on the corresponding VR helmet; and a bare-hand recognition and positioning module, used for recognizing and locating the states and positions of the person's two hands in the VR virtual environment. Through the VR equipment, the invention lets the virtual scene and the external equipment communicate with each other, and judges the responsive action from the bending and motion states of the hands, so as to control the experience scene model and the device models in the virtual environment and complete the related experience actions.
Description
Technical Field
The invention relates to the technical field of safety education, and in particular to a safety somatosensory education system based on interaction between virtual reality and somatosensory equipment.
Background
The "safety somatosensory" training mode originated in Japan and has gradually been adopted by large, heavy-industry enterprises there over the past five years. It is a training mode in which danger is perceived through the body senses of "vision, hearing, smell and touch". Under artificially controllable conditions, this "somatosensory" education mode presents, through techniques such as 3D and 4D scene simulation and accident-scenario rehearsal, the injuries caused by common accidents such as mechanical injury, struck-by-object accidents, burns and scalds, and electric shocks, together with the corresponding prevention knowledge, giving trainees a vivid, immersive sense of being on the scene. This not only increases the learning enthusiasm of trainees but also improves the effect of occupational safety training.
Existing safety education is usually lecture-based: safety education content such as PPT slides is delivered by playing videos, which is dull and yields a poor educational effect. The experience offered by existing safety education modes is also poor; VR virtual simulation education is basically weak in interactivity or has none at all, so the training does not sink in and fails to serve as a lasting warning to trainees after it ends. Traditional virtual simulation technology, and even products based on VR equipment, usually offers only a single body sense such as vision, hearing or touch; virtual simulation somatosensory projects applied to occupational safety education can only provide simulation of virtual scenes and cannot provide the tactile experience of the injuries caused by a real production environment. Traditional virtual simulation VR experience projects usually require interaction by means of a VR handheld controller, which for the occupational safety education industry is unfriendly, cumbersome and unrealistic. Traditional safety somatosensory education equipment offers only simple body sensing, without substantive cases and content, and cannot be combined with injury cases from the actual production process into a story-driven, interactive experience.
In view of the above, there is a need for a safety somatosensory education system based on interaction between virtual reality and somatosensory equipment, which can make a virtual scene and external equipment communicate with each other through VR equipment, judge responsive actions from the bending and movement of the hands, and complete the related experience actions.
Disclosure of Invention
The invention aims to provide a safety somatosensory education system based on interaction between virtual reality and somatosensory equipment, so as to solve the problems raised in the background art above.
In order to solve the above technical problems, the invention provides the following technical scheme. A safety somatosensory education system based on virtual reality and somatosensory device interaction comprises:
an injury somatosensory device module, used for controlling the sensors on the somatosensory devices and generating the corresponding injury special-effect animations for different experience scenes;
a virtual reality simulation module, used for modeling the experience scene in a VR virtual environment;
a platform positioning module, used for locating the position and view angle of a person in the VR virtual environment by acquiring the positions of the sensors on the corresponding VR helmet;
a bare-hand recognition and positioning module, used for recognizing and locating the states and positions of the person's two hands in the VR virtual environment;
a central control module, used for selecting the safety somatosensory education mode, controlling the opening of the virtual scene, starting the somatosensory devices and selecting the experience scene;
a display module, used for displaying the injury special-effect animations of the various scenes, or displaying the person's view in the VR virtual environment in real time; and
a database, used for recording the data information generated by each module.
The virtual reality simulation module establishes the experience scene model according to the spatial positions of the injury somatosensory devices. The platform positioning module locates the person's position within the experience scene model, and the bare-hand recognition and positioning module locates the positions of the person's two hands within it; at the same time, the positions and motion states of the two hands' skeleton points are compared against a gesture comparison database to recognize the person's gestures and actions. Through designated gestures and actions, the central control module controls the opening of the virtual scene, the starting of the somatosensory devices and the selection of the experience scene, and finally the picture in the virtual scene is presented through the display module.
Further, the injury somatosensory device module comprises a plurality of somatosensory devices, with contact sensors arranged at designated positions on each somatosensory device.
When a somatosensory device is started and a contact sensor is touched by a person's hand, the contact sensor feeds a signal back to the injury somatosensory device module, triggering the injury special-effect animation of the corresponding experience scene and uploading it to the database.
Different contact sensors on the somatosensory devices correspond to the injury special effects of different experience scenes.
The somatosensory devices are several in number, and each carries several contact sensors distributed at its designated positions; different contact sensors correspond to the injury special-effect animations of different experience scenes. When a person triggers a contact sensor, the injury somatosensory device module matches the experience-scene injury special-effect animation corresponding to that sensor according to the information the sensor transmits, uploads the matched animation to the database, and then presents it on the display screen through the display module.
Further, the virtual reality simulation module collects the sizes of, and distances between, the different somatosensory devices of the injury somatosensory device module.
Device models are built in the VR virtual environment at a 1:1 scale to the sizes of the somatosensory devices.
With the sizes of the device models unchanged, the distances between the different somatosensory devices are enlarged in equal proportion at a k:1 scale.
On the basis of the device models, the experience scene model in the VR virtual environment is established according to the proportionally enlarged spatial position relations between the different somatosensory devices; a spatial coordinate system is established with a prefabricated initial point as the origin, and the data of the experience scene model is uploaded to the database.
The invention adopts different scales when establishing the experience scene model. The 1:1 scale for the device models makes them look more realistic and makes it more convenient and quicker for the person to find the contact sensors on the somatosensory devices. The k:1 scale for the virtual space in which the different somatosensory devices sit enlarges the space around the person, so that the virtual space looks more spacious and the distances between the devices look larger.
Further, the platform positioning module comprises the corresponding VR helmet. Three positioning sensors S1, S2 and S3 are arranged on the VR helmet: S1 at the front, and S2 and S3 on the left and right sides respectively. The three sensors are distributed as an equilateral triangle and mounted at the same horizontal height.
In the spatial coordinate system of the experience scene model, the straight line containing the median through the point S1 of the equilateral triangle S1S2S3 is calculated from the coordinate positions of the positioning sensors S1, S2 and S3. With A denoting the midpoint of segment S2S3, the direction from point A to point S1 is the person's view direction, and the center of the equilateral triangle S1S2S3 is the person's position. The platform positioning module uploads the person's view-direction data and position data to the database.
Using the properties of the equilateral triangle, the invention obtains the straight line containing the median through point S1 in the spatial coordinate system. The inclination of this straight line is the same as the person's view direction, so it is taken as the view direction of the person in the experience scene model; the position of the VR helmet in the virtual scene model, and hence the position of the person, can be accurately judged from the center point of the equilateral triangle.
Further, the bare-hand recognition and positioning module comprises camera 1, camera 2 and a gesture comparison database.
The gesture comparison database stores, in advance, the data of different gestures and actions indexed by the different positions of the two hands' skeleton points.
Camera 1 shoots from top to bottom to obtain infrared images of the skeleton points of the person's two hands.
Camera 2 is arranged directly in front of the somatosensory device and shoots from front to back to obtain infrared images of the skeleton points of the person's two hands.
The coordinate positions of the hand skeleton points in the spatial coordinate system of the experience scene model are acquired from the positions of the skeleton points in the infrared images.
The spatial positions and motion states of the two hands' skeleton points are compared with the gesture comparison database to match the person's corresponding gestures and actions, and the person's gestures and two-hand actions are uploaded to the database in real time.
A three-dimensional space can be determined from two perpendicular planes. By calculating the proportional positions of the person's hand skeleton points on each infrared image, the plane coordinates of the skeleton points on that plane can be determined, and the spatial coordinates of a skeleton point can then be calculated from the position coordinates of the same point on the two perpendicular planes. From the spatial coordinates of the two hands' skeleton points, the relative positions of the skeleton points on each hand are calculated and matched against the gesture comparison database to recognize the person's gestures; the person's actions are recognized in the same way from the motion states of the skeleton points in space.
Further, the safety somatosensory education modes in the central control module comprise a stand-alone mode and a VR mode.
In the stand-alone mode, the central control module controls all the somatosensory devices to start.
In the VR mode, the central control module recognizes the designated gestures and two-hand actions uploaded to the database by the bare-hand recognition and positioning module, controls the opening of the virtual scene, and at the same time switches and selects experience scenes through the designated gestures and two-hand actions.
When the person selects an experience scene, the somatosensory device corresponding to that experience scene and the matching device model in the VR virtual environment are started synchronously.
The central control module offers two safety somatosensory education modes, a stand-alone mode and a VR mode. The stand-alone mode is simple and needs fewer modules of the system to run: only the virtual reality simulation module, the central control module, the display module and the database. The VR mode runs more modules of the system, and the person's experience is better and more realistic, with a stronger sense of immersion.
Further, in the stand-alone mode, the display module only displays the injury special-effect animation of the experience scene corresponding to the triggered contact sensor;
in the VR mode, the display module displays the person's view in the VR virtual environment in real time, and when the person's violating operation triggers a contact sensor of a somatosensory device, it also displays the injury special-effect animation of the experience scene corresponding to that sensor.
The display module thus displays different content according to the selected safety somatosensory education mode.
Further, the platform positioning module and the bare-hand recognition and positioning module use different positioning methods to locate the person's position and the positions of the two hands respectively. When the distance between a located finger and the located person exceeds a set threshold, the system automatically judges that the acquired finger and person positions are wrong, and the platform positioning module and the bare-hand recognition and positioning module locate the person's position and the two hands' positions again.
Because two positioning methods are used, the distance between the located finger and the located person is compared with the set threshold. If it exceeds the threshold, the positioning coordinate error of the finger and the person is large and at least one of the two positioning results is inaccurate, so the platform positioning module and the bare-hand recognition and positioning module must re-locate the person's position and the two hands' positions.
Compared with the prior art, the invention has the following beneficial effects: through the VR equipment, the virtual scene and the external equipment can communicate with each other, and the responsive action can be judged from the bending and motion states of the hands, so that the experience scene model and the device models in the virtual environment are controlled and the related experience actions are completed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of the safety somatosensory education system based on virtual reality and somatosensory device interaction according to the present invention;
FIG. 2 is a schematic diagram of the injury somatosensory device module of the safety somatosensory education system based on virtual reality and somatosensory device interaction according to the present invention;
FIG. 3 is a schematic flow diagram of the platform positioning module of the safety somatosensory education system based on virtual reality and somatosensory device interaction according to the present invention;
FIG. 4 is a schematic flow diagram of the bare-hand recognition and positioning module of the safety somatosensory education system based on virtual reality and somatosensory device interaction according to the present invention;
FIG. 5 is a schematic flow diagram of the central control module of the safety somatosensory education system based on virtual reality and somatosensory device interaction according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIGS. 1-5, the present invention provides the following technical scheme: a safety somatosensory education system based on virtual reality and somatosensory device interaction, comprising:
an injury somatosensory device module, used for controlling the sensors on the somatosensory devices and generating the corresponding injury special-effect animations for different experience scenes;
a virtual reality simulation module, used for modeling the experience scene in a VR virtual environment;
a platform positioning module, used for locating the position and view angle of a person in the VR virtual environment by acquiring the positions of the sensors on the corresponding VR helmet;
a bare-hand recognition and positioning module, used for recognizing and locating the states and positions of the person's two hands in the VR virtual environment;
a central control module, used for selecting the safety somatosensory education mode, controlling the opening of the virtual scene, starting the somatosensory devices and selecting the experience scene;
a display module, used for displaying the injury special-effect animations of the various scenes, or displaying the person's view in the VR virtual environment in real time; and
a database, used for recording the data information generated by each module.
The virtual reality simulation module establishes the experience scene model according to the spatial positions of the injury somatosensory devices. The platform positioning module locates the person's position within the experience scene model, and the bare-hand recognition and positioning module locates the positions of the person's two hands within it; at the same time, the positions and motion states of the two hands' skeleton points are compared against a gesture comparison database to recognize the person's gestures and actions. Through designated gestures and actions, the central control module controls the opening of the virtual scene, the starting of the somatosensory devices and the selection of the experience scene, and finally the picture in the virtual scene is presented through the display module.
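To make this data flow concrete, the following minimal sketch (in Python) wires the modules together for a single update cycle. All function names, arguments and the example gesture label are hypothetical placeholders assumed for illustration; they are not identifiers taken from the disclosure.

```python
# Minimal sketch of one update cycle through the modules described above.
# All names are hypothetical stand-ins, not identifiers from the disclosure.

def frame(platform_module, hand_module, control_module, database):
    """Position the person, recognize the hands, record both in the
    database, then hand the recognized gesture to central control."""
    position, view = platform_module()   # platform positioning module
    gesture = hand_module()              # bare-hand recognition and positioning
    database.append({"position": position, "view": view, "gesture": gesture})
    return control_module(gesture)       # central control module reacts

db = []
command = frame(
    lambda: ((1.0, 1.9, 1.7), (0.0, 1.0, 0.0)),           # made-up pose
    lambda: "both_hands_open_horizontal",                  # made-up gesture label
    lambda g: "open_virtual_scene" if g == "both_hands_open_horizontal" else None,
    db,
)
print(command)  # open_virtual_scene
```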
The injury somatosensory device module comprises a plurality of somatosensory devices, with contact sensors arranged at designated positions on each somatosensory device.
When a somatosensory device is started and a contact sensor is touched by a person's hand, the contact sensor feeds a signal back to the injury somatosensory device module, triggering the injury special-effect animation of the corresponding experience scene and uploading it to the database.
Different contact sensors on the somatosensory devices correspond to the injury special effects of different experience scenes.
The somatosensory devices are several in number, and each carries several contact sensors distributed at its designated positions; different contact sensors correspond to the injury special-effect animations of different experience scenes. When a person triggers a contact sensor, the injury somatosensory device module matches the experience-scene injury special-effect animation corresponding to that sensor according to the information the sensor transmits, uploads the matched animation to the database, and then presents it on the display screen through the display module.
The somatosensory devices in this embodiment include a drill-press entanglement somatosensory device, a lathe entanglement somatosensory device and an electrical injury somatosensory device; the contact sensors are touched at different positions during a person's violating operation, thereby triggering the injury special-effect animations of different experience scenes.
For example: the drilling machine is drawn into the motion sensing equipment and the lathe is drawn into the motion sensing equipment, and when the real working process of experiencing the equipment of personage, the personage violates the operating rules, wears the gloves operating device violating rules, and the injury process that the staff is drawn into by the main shaft of equipment.
For example, in the electrical injury somatosensory device, the circuits at different parts of the device leak electricity, and when a person touches them the human body receives an electric shock.
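The dispatch from a triggered contact sensor to its experience-scene injury animation might be organized as in the sketch below. The device names, sensor identifiers and animation file names are invented placeholders; only the lookup-record-display pattern follows the description above.

```python
# Sketch of the injury somatosensory device module's dispatch logic.
# Device names, sensor IDs and animation file names are hypothetical.

INJURY_ANIMATIONS = {
    ("drill_press", "spindle_guard"): "drill_entanglement.anim",
    ("lathe", "chuck_cover"): "lathe_entanglement.anim",
    ("electrical", "exposed_wire"): "electric_shock.anim",
}

def on_sensor_triggered(device_id, sensor_id, database):
    """Match the triggered contact sensor to its injury animation,
    record the event in the database, and return it for display."""
    animation = INJURY_ANIMATIONS.get((device_id, sensor_id))
    if animation is not None:
        database.append({"device": device_id, "sensor": sensor_id,
                         "animation": animation})
    return animation

db = []
print(on_sensor_triggered("lathe", "chuck_cover", db))  # lathe_entanglement.anim
```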
The virtual reality simulation module collects the sizes of, and distances between, the different somatosensory devices of the injury somatosensory device module.
Device models are built in the VR virtual environment at a 1:1 scale to the sizes of the somatosensory devices.
With the sizes of the device models unchanged, the distances between the different somatosensory devices are enlarged in equal proportion at a k:1 scale.
On the basis of the device models, the experience scene model in the VR virtual environment is established according to the proportionally enlarged spatial position relations between the different somatosensory devices; a spatial coordinate system is established with a prefabricated initial point as the origin, and the data of the experience scene model is uploaded to the database.
The invention adopts different scales when establishing the experience scene model. The 1:1 scale for the device models makes them look more realistic and makes it more convenient and quicker for the person to find the contact sensors on the somatosensory devices. The k:1 scale for the virtual space in which the different somatosensory devices sit enlarges the space around the person, so that the virtual space looks more spacious and the distances between the devices look larger.
In this embodiment, when the value of k in the scale is 5, the virtual reality simulation module first builds the device models in the VR virtual environment at a 1:1 scale to the size of each somatosensory device, and then establishes the experience scene model in the VR virtual environment with the distances between the somatosensory devices enlarged at a 5:1 scale.
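The two scales can be illustrated with a short sketch. The device names, positions and sizes are made-up example values; only the 1:1 size rule and the k:1 (here 5:1) spacing rule come from the embodiment.

```python
# Sketch of the 1:1 device-size / k:1 device-spacing rule. Device names,
# positions and sizes are invented; k = 5 follows the embodiment.

K = 5

real_devices = {
    # name: (real position in metres, real size in metres)
    "drill_press": ((0.0, 0.0, 0.0), (0.8, 0.6, 1.6)),
    "lathe": ((2.0, 0.0, 0.0), (1.5, 0.8, 1.2)),
    "electrical": ((0.0, 3.0, 0.0), (0.6, 0.4, 1.8)),
}

def build_scene(devices, k=K):
    """Keep every device model at 1:1 size, but enlarge the distances
    from the prefabricated origin by k:1."""
    scene = {}
    for name, (position, size) in devices.items():
        scene[name] = {
            "position": tuple(k * c for c in position),  # k:1 spacing
            "size": size,                                 # 1:1 size
        }
    return scene

print(build_scene(real_devices)["lathe"]["position"])  # (10.0, 0.0, 0.0)
```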
The platform positioning module comprises the corresponding VR helmet. Three positioning sensors S1, S2 and S3 are arranged on the VR helmet: S1 at the front, and S2 and S3 on the left and right sides respectively. The three sensors are distributed as an equilateral triangle and mounted at the same horizontal height.
In the spatial coordinate system of the experience scene model, the straight line containing the median through the point S1 of the equilateral triangle S1S2S3 is calculated from the coordinate positions of the positioning sensors S1, S2 and S3. With A denoting the midpoint of segment S2S3, the direction from point A to point S1 is the person's view direction, and the center of the equilateral triangle S1S2S3 is the person's position. The platform positioning module uploads the person's view-direction data and position data to the database.
Using the properties of the equilateral triangle, the invention obtains the straight line containing the median through point S1 in the spatial coordinate system. The inclination of this straight line is the same as the person's view direction, so it is taken as the view direction of the person in the experience scene model; the position of the VR helmet in the virtual scene model, and hence the position of the person, can be accurately judged from the center point of the equilateral triangle.
In this embodiment, S1, S2 and S3 are fixed in position on the VR helmet and form an equilateral triangle; the person's view direction and spatial position can be determined from the straight line containing the median through point S1 and from the center of the equilateral triangle.
For example, if the coordinates of S1, S2 and S3 in the spatial coordinate system are S1(x1, y1, z1), S2(x2, y2, z2) and S3(x3, y3, z3), then the coordinates of point A are A((x2 + x3)/2, (y2 + y3)/2, (z2 + z3)/2); the straight line containing the median of the equilateral triangle through point S1 is the line through A and S1, pointing from A towards S1; and the coordinate position of the person is the centroid ((x1 + x2 + x3)/3, (y1 + y2 + y3)/3, (z1 + z2 + z3)/3).
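The same computation, written as a runnable sketch (the coordinate values are arbitrary examples):

```python
# Runnable sketch of the positioning rule: A is the midpoint of S2S3,
# the view direction points from A to S1, and the person stands at the
# centroid of the equilateral triangle S1S2S3.

import math

def locate(s1, s2, s3):
    a = tuple((p + q) / 2 for p, q in zip(s2, s3))                  # midpoint A
    direction = tuple(p - q for p, q in zip(s1, a))                 # A -> S1
    norm = math.sqrt(sum(c * c for c in direction))
    view = tuple(c / norm for c in direction)                       # unit view vector
    centre = tuple((p + q + r) / 3 for p, q, r in zip(s1, s2, s3))  # centroid
    return view, centre

view, centre = locate((1.0, 2.0, 1.7), (0.7, 1.8, 1.7), (1.3, 1.8, 1.7))
print(view)    # (0.0, 1.0, 0.0)
print(centre)  # (1.0, 1.866..., 1.7)
```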
The bare-hand recognition and positioning module comprises camera 1, camera 2 and a gesture comparison database.
The gesture comparison database stores, in advance, the data of different gestures and actions indexed by the different positions of the two hands' skeleton points.
Camera 1 shoots from top to bottom to obtain infrared images of the skeleton points of the person's two hands.
Camera 2 is arranged directly in front of the somatosensory device and shoots from front to back to obtain infrared images of the skeleton points of the person's two hands.
The coordinate positions of the hand skeleton points in the spatial coordinate system of the experience scene model are acquired from the positions of the skeleton points in the infrared images.
The spatial positions and motion states of the two hands' skeleton points are compared with the gesture comparison database to match the person's corresponding gestures and actions, and the person's gestures and two-hand actions are uploaded to the database in real time.
A three-dimensional space can be determined from two perpendicular planes. By calculating the proportional positions of the person's hand skeleton points on each infrared image, the plane coordinates of the skeleton points on that plane can be determined, and the spatial coordinates of a skeleton point can then be calculated from the position coordinates of the same point on the two perpendicular planes. From the spatial coordinates of the two hands' skeleton points, the relative positions of the skeleton points on each hand are calculated and matched against the gesture comparison database to recognize the person's gestures; the person's actions are recognized in the same way from the motion states of the skeleton points in space.
In this embodiment, after the spatial coordinate positions of the two hands' skeleton points are calculated, the shape of each hand can be determined from the relative positions of its skeleton points, and the person's gesture is recognized by matching the hand shapes against the gesture comparison database.
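One plausible reading of the two-view reconstruction, written as a sketch: it assumes each camera's infrared image has already been mapped into the scene coordinate system, so the top-down view fixes (x, y) and the front view fixes (x, z), with the shared x reading averaged. The calibration step itself is not specified in the disclosure.

```python
# Sketch of recovering a skeleton point's spatial coordinates from two
# perpendicular views, assuming both infrared images are already mapped
# into the scene coordinate system: the top-down camera 1 observes
# (x, y) and the front camera 2 observes (x, z).

def triangulate(top_xy, front_xz):
    """Combine the two observations of the same skeleton point; the
    shared x axis is averaged to smooth small disagreements."""
    x = (top_xy[0] + front_xz[0]) / 2
    y = top_xy[1]
    z = front_xz[1]
    return (x, y, z)

# The same fingertip seen by both cameras (example values):
print(triangulate((0.42, 1.10), (0.40, 1.35)))  # (0.41, 1.1, 1.35)
```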
The safety somatosensory education modes in the central control module comprise a stand-alone mode and a VR mode.
In the stand-alone mode, the central control module controls all the somatosensory devices to start.
In the VR mode, the central control module recognizes the designated gestures and two-hand actions uploaded to the database by the bare-hand recognition and positioning module, controls the opening of the virtual scene, and at the same time switches and selects experience scenes through the designated gestures and two-hand actions.
When the person selects an experience scene, the somatosensory device corresponding to that experience scene and the matching device model in the VR virtual environment are started synchronously.
The central control module offers two safety somatosensory education modes, a stand-alone mode and a VR mode. The stand-alone mode is simple and needs fewer modules of the system to run: only the virtual reality simulation module, the central control module, the display module and the database. The VR mode runs more modules of the system, and the person's experience is better and more realistic, with a stronger sense of immersion.
In the stand-alone mode of this embodiment, the central control module is only responsible for controlling the starting of all the somatosensory devices.
In the VR mode of this embodiment, the central control module is responsible not only for starting the somatosensory devices but also for opening the virtual scene and for switching and selecting the experience scenes.
The designated action for opening the virtual scene is opening both hands horizontally.
There are two designated actions for experience scene switching: waving the left or right hand to the left switches to the previous scene, and waving it to the right switches to the next.
The designated action for experience scene selection is making a fist with the left or right hand except for the index finger, extending the index finger, and then clicking forward.
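These designated actions amount to a small gesture-to-command table in the central control module. The sketch below assumes the gesture comparison database has already classified the raw skeleton data into the labels shown; the labels and command names are illustrative only.

```python
# Sketch of the central control module's gesture dispatch in VR mode.
# Gesture labels and command effects are illustrative placeholders.

GESTURE_COMMANDS = {
    "both_hands_open_horizontal": "open_virtual_scene",
    "wave_left": "previous_scene",
    "wave_right": "next_scene",
    "index_finger_click": "select_scene",
}

def dispatch(gesture, state):
    command = GESTURE_COMMANDS.get(gesture)
    if command == "open_virtual_scene":
        state["scene_open"] = True
    elif command == "previous_scene" and state["scene_open"]:
        state["scene"] = max(0, state["scene"] - 1)
    elif command == "next_scene" and state["scene_open"]:
        state["scene"] += 1
    elif command == "select_scene" and state["scene_open"]:
        state["selected"] = state["scene"]  # start matching device + model
    return state

state = {"scene_open": False, "scene": 0, "selected": None}
for g in ["both_hands_open_horizontal", "wave_right", "index_finger_click"]:
    state = dispatch(g, state)
print(state)  # {'scene_open': True, 'scene': 1, 'selected': 1}
```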
In the stand-alone mode, the display module only displays the injury special-effect animation of the experience scene corresponding to the triggered contact sensor.
In the VR mode, the display module displays the person's view in the VR virtual environment in real time, and when the person's violating operation triggers a contact sensor of a somatosensory device, it also displays the injury special-effect animation of the experience scene corresponding to that sensor.
The display module thus displays different content according to the selected safety somatosensory education mode.
The platform positioning module and the bare-hand recognition and positioning module use different positioning methods to locate the person's position and the positions of the two hands respectively. When the distance between a located finger and the located person exceeds a set threshold, the system automatically judges that the acquired finger and person positions are wrong, and the platform positioning module and the bare-hand recognition and positioning module locate the person's position and the two hands' positions again.
Because two positioning methods are used, the distance between the located finger and the located person is compared with the set threshold. If it exceeds the threshold, the positioning coordinate error of the finger and the person is large and at least one of the two positioning results is inaccurate, so the platform positioning module and the bare-hand recognition and positioning module must re-locate the person's position and the two hands' positions.
In this embodiment, the set threshold for the distance between finger and person is 1.2 meters, because with an adult's arm hanging down the distance from fingertip to the top of the head is about 1.2 meters. When the distance between the located finger and the located person exceeds this threshold, the positioning deviation is large enough to affect the person's actual operation, so re-positioning is needed.
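The cross-check might look like the following sketch, where a head position from the platform positioning module is compared with a fingertip position from the bare-hand module; the coordinate values are invented examples, and only the 1.2-meter threshold comes from the embodiment.

```python
# Sketch of the consistency cross-check between the two positioning
# paths. Coordinates are invented; the 1.2 m threshold is from the
# embodiment (about fingertip-to-crown with the arm hanging down).

import math

THRESHOLD_M = 1.2

def positions_consistent(head, fingertip, threshold=THRESHOLD_M):
    return math.dist(head, fingertip) <= threshold

head = (1.0, 1.9, 1.7)       # platform positioning module output
fingertip = (1.1, 0.8, 1.6)  # bare-hand recognition module output

if positions_consistent(head, fingertip):
    print("positions accepted")
else:
    print("re-run both positioning modules")
```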
The working principle of the invention is as follows. When the invention works, the person first selects the safety somatosensory education mode in the central control module.
If the person selects the stand-alone mode,
the central control module controls all the somatosensory devices to start, and the person can carry out simulated operation on them. When the person touches a contact sensor on a somatosensory device, the injury somatosensory device module matches the experience-scene injury special-effect animation corresponding to that sensor according to the information it transmits, uploads the matched animation to the database, and then presents it on the display screen through the display module.
If the person selects the VR mode,
the bare-hand recognition and positioning module locates each skeleton point of the two hands and, by matching against the gesture comparison database, recognizes the gestures and actions of the person's hands, then uploads them to the database. When the person opens both hands horizontally, the central control module recognizes the two-hand action uploaded to the database by the bare-hand recognition and positioning module and controls the opening of the virtual scene; at the same time it recognizes the uploaded waving actions of the left or right hand to switch experience scenes. After the person selects and confirms an experience scene, the central control module controls the somatosensory device corresponding to that scene and the matching device model in the VR virtual environment of the virtual reality simulation module to start synchronously.
The platform positioning module locates the positioning sensors S1, S2 and S3 on the VR helmet, performs the data transformation in the spatial coordinate system, determines the person's view direction and spatial position by calculating the straight line containing the median through point S1 of the equilateral triangle and the triangle's center, stores the data in the database, and presents it on the display screen through the display module.
The bare-hand recognition and positioning module calculates the spatial coordinates of all the skeleton points from the infrared images of the two hands' skeleton points captured by camera 1 and camera 2, then matches the relative positions of the skeleton points in those spatial coordinates against the gesture comparison database, recognizes the person's gestures and actions, uploads the data to the database, and presents it on the display screen through the display module.
When a person touches a contact sensor on a somatosensory device, the contact sensor transmits a signal to the injury somatosensory device module, which matches the corresponding experience-scene injury special-effect animation according to which sensor sent the signal, uploads the matched animation to the database, and then presents it on the display screen through the display module.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A safety somatosensory education system based on virtual reality and somatosensory device interaction, characterized by comprising:
an injury somatosensory device module, used for controlling the sensors on the somatosensory devices and generating the corresponding injury special-effect animations for different experience scenes;
a virtual reality simulation module, used for modeling the experience scene in a VR virtual environment;
a platform positioning module, used for locating the position and view angle of a person in the VR virtual environment by acquiring the positions of the sensors on the corresponding VR helmet;
a bare-hand recognition and positioning module, used for recognizing and locating the states and positions of the person's two hands in the VR virtual environment;
a central control module, used for selecting the safety somatosensory education mode, controlling the opening of the virtual scene, starting the somatosensory devices and selecting the experience scene;
a display module, used for displaying the injury special-effect animations of the various scenes, or displaying the person's view in the VR virtual environment in real time; and
a database, used for recording the data information generated by each module.
2. The safety somatosensory education system based on virtual reality and somatosensory device interaction according to claim 1, characterized in that: the injury somatosensory device module comprises a plurality of somatosensory devices, with contact sensors arranged at designated positions on each somatosensory device,
when a somatosensory device is started and a contact sensor is touched by a person's hand, the contact sensor feeds a signal back to the injury somatosensory device module, triggering the injury special-effect animation of the corresponding experience scene and uploading it to the database, and
different contact sensors on the somatosensory devices correspond to the injury special effects of different experience scenes.
3. The safety somatosensory education system based on virtual reality and somatosensory device interaction according to claim 2, characterized in that: the virtual reality simulation module collects the sizes of, and distances between, the different somatosensory devices of the injury somatosensory device module,
device models are built in the VR virtual environment at a 1:1 scale to the sizes of the somatosensory devices,
with the sizes of the device models unchanged, the distances between the different somatosensory devices are enlarged in equal proportion at a k:1 scale, and
on the basis of the device models, the experience scene model in the VR virtual environment is established according to the proportionally enlarged spatial position relations between the different somatosensory devices, a spatial coordinate system is established with a prefabricated initial point as the origin, and the data of the experience scene model is uploaded to the database.
4. The safety somatosensory education system based on virtual reality and somatosensory device interaction according to claim 3, characterized in that: the platform positioning module comprises the corresponding VR helmet, on which three positioning sensors S1, S2 and S3 are arranged, S1 at the front and S2 and S3 on the left and right sides respectively, the three sensors being distributed as an equilateral triangle and mounted at the same horizontal height, and
in the spatial coordinate system of the experience scene model, the straight line containing the median through the point S1 of the equilateral triangle S1S2S3 is calculated from the coordinate positions of the positioning sensors S1, S2 and S3; with A denoting the midpoint of segment S2S3, the direction from point A to point S1 is the person's view direction, the center of the equilateral triangle S1S2S3 is the person's position, and the platform positioning module uploads the person's view-direction data and position data to the database.
5. The safety somatosensory education system based on virtual reality and somatosensory device interaction according to claim 3, characterized in that: the bare-hand recognition and positioning module comprises camera 1, camera 2 and a gesture comparison database,
the gesture comparison database stores, in advance, the data of different gestures and actions indexed by the different positions of the two hands' skeleton points,
camera 1 shoots from top to bottom to obtain infrared images of the skeleton points of the person's two hands,
camera 2 is arranged directly in front of the somatosensory device and shoots from front to back to obtain infrared images of the skeleton points of the person's two hands,
the coordinate positions of the hand skeleton points in the spatial coordinate system of the experience scene model are acquired from the positions of the skeleton points in the infrared images, and
the spatial positions and motion states of the two hands' skeleton points are compared with the gesture comparison database to match the person's corresponding gestures and actions, and the person's gestures and two-hand actions are uploaded to the database in real time.
6. The safety somatosensory education system based on virtual reality and somatosensory device interaction according to claim 5, characterized in that: the safety somatosensory education modes in the central control module comprise a stand-alone mode and a VR mode,
in the stand-alone mode, the central control module controls all the somatosensory devices to start,
in the VR mode, the central control module recognizes the designated gestures and two-hand actions uploaded to the database by the bare-hand recognition and positioning module, controls the opening of the virtual scene, and at the same time switches and selects experience scenes through the designated gestures and two-hand actions, and
when the person selects an experience scene, the somatosensory device corresponding to that experience scene and the matching device model in the VR virtual environment are started synchronously.
7. The safety somatosensory education system based on virtual reality and somatosensory device interaction according to claim 6, characterized in that: in the stand-alone mode, the display module only displays the injury special-effect animation of the experience scene corresponding to the triggered contact sensor; and
in the VR mode, the display module displays the person's view in the VR virtual environment in real time, and when the person's violating operation triggers a contact sensor of a somatosensory device, it also displays the injury special-effect animation of the experience scene corresponding to that contact sensor.
8. The safety somatosensory education system based on virtual reality and somatosensory device interaction according to claim 1, characterized in that: the platform positioning module and the bare-hand recognition and positioning module use different positioning methods to locate the person's position and the positions of the two hands respectively, and when the distance between a located finger and the located person exceeds a set threshold, the system automatically judges that the acquired finger and person positions are wrong, and the platform positioning module and the bare-hand recognition and positioning module locate the person's position and the two hands' positions again.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110152998.7A CN112835449A (en) | 2021-02-03 | 2021-02-03 | Virtual reality and somatosensory device interaction-based safety somatosensory education system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110152998.7A CN112835449A (en) | 2021-02-03 | 2021-02-03 | Virtual reality and somatosensory device interaction-based safety somatosensory education system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112835449A true CN112835449A (en) | 2021-05-25 |
Family
ID=75932019
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110152998.7A Withdrawn CN112835449A (en) | 2021-02-03 | 2021-02-03 | Virtual reality and somatosensory device interaction-based safety somatosensory education system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112835449A (en) |
2021
- 2021-02-03 CN CN202110152998.7A patent/CN112835449A/en not_active Withdrawn
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103839040A (en) * | 2012-11-27 | 2014-06-04 | 株式会社理光 | Gesture identification method and device based on depth images |
CN104881128A (en) * | 2015-06-18 | 2015-09-02 | 北京国承万通信息科技有限公司 | Method and system for displaying target image in virtual reality scene based on real object |
CN106020440A (en) * | 2016-05-05 | 2016-10-12 | 西安电子科技大学 | Emotion interaction based Peking Opera teaching system |
CN106445176A (en) * | 2016-12-06 | 2017-02-22 | 腾讯科技(深圳)有限公司 | Man-machine interaction system and interaction method based on virtual reality technique |
CN107358833A (en) * | 2017-09-12 | 2017-11-17 | 国网上海市电力公司 | Transformer station's operation maintenance personnel pseudo operation training system |
CN109144273A (en) * | 2018-09-11 | 2019-01-04 | 杭州师范大学 | A kind of virtual fire-fighting experiential method based on VR technology |
CN111694427A (en) * | 2020-05-13 | 2020-09-22 | 北京农业信息技术研究中心 | AR virtual honey shake interactive experience system, method, electronic equipment and storage medium |
CN112136158A (en) * | 2020-07-13 | 2020-12-25 | 深圳盈天下视觉科技有限公司 | Infrared positioning method, infrared positioning device and infrared positioning system |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114035677A (en) * | 2021-10-25 | 2022-02-11 | 中冶智诚(武汉)工程技术有限公司 | Universal interface implementation method for interaction between both hands and virtual glove peripherals |
CN114792364A (en) * | 2022-04-01 | 2022-07-26 | 广亚铝业有限公司 | Aluminum profile door and window projection system and method based on VR technology |
WO2024160171A1 (en) * | 2023-02-02 | 2024-08-08 | 华为技术有限公司 | Video processing method and related electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112835449A (en) | Virtual reality and somatosensory device interaction-based safety somatosensory education system | |
CN109690633B (en) | Simulation system, processing method, and information storage medium | |
CN106095089A (en) | A kind of method obtaining interesting target information | |
CN103793060A (en) | User interaction system and method | |
CN104615242A (en) | Image recognition device, operation determination method, and program | |
US20160151705A1 (en) | System for providing augmented reality content by using toy attachment type add-on apparatus | |
JP6834614B2 (en) | Information processing equipment, information processing methods, and programs | |
CN104722056A (en) | Rehabilitation training system and method using virtual reality technology | |
WO2006108279A1 (en) | Method and apparatus for virtual presence | |
CN113841110A (en) | Artificial reality system with personal assistant elements for gating user interface elements | |
CN113892075A (en) | Corner recognition gesture-driven user interface element gating for artificial reality systems | |
CN108463839A (en) | Information processing unit and users' guidebook rendering method | |
CN107067456A (en) | A kind of virtual reality rendering method optimized based on depth map | |
WO2018198272A1 (en) | Control device, information processing system, control method, and program | |
CN105184622A (en) | Network shopping for consumer by utilization of virtual technology | |
CN107632702B (en) | Holographic projection system adopting light-sensing data gloves and working method thereof | |
US20230162458A1 (en) | Information processing apparatus, information processing method, and program | |
KR20160005841A (en) | Motion recognition with Augmented Reality based Realtime Interactive Human Body Learning System | |
CN114187651A (en) | Taijiquan training method and system based on mixed reality, equipment and storage medium | |
Lugrin et al. | Usability benchmarks for motion tracking systems | |
JP6625467B2 (en) | Simulation control device and simulation control program | |
EP3598270A1 (en) | Method and control unit for controlling a virtual reality display, virtual reality display and virtual reality system | |
KR20150073754A (en) | Motion training apparatus and method for thereof | |
JP2018190196A (en) | Information processing method, information processing device, program causing computer to execute information processing method | |
CN206147523U (en) | A hand -held controller for human -computer interaction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20210525 |