CN109101879B - Posture interaction system for VR virtual classroom teaching and implementation method - Google Patents


Info

Publication number
CN109101879B
CN109101879B · Application CN201810716409.1A
Authority
CN
China
Prior art keywords
virtual
gesture
action
application server
user
Prior art date
Legal status
Active
Application number
CN201810716409.1A
Other languages
Chinese (zh)
Other versions
CN109101879A (en)
Inventor
李小志 (Li Xiaozhi)
刘鹏菲 (Liu Pengfei)
陈宥辛 (Chen Youxin)
Current Assignee
Wenzhou University
Original Assignee
Wenzhou University
Priority date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority application: CN201810716409.1A
Publication of application: CN109101879A
Application granted; publication of grant: CN109101879B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 — Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q 50/00 — Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/20 — Education
    • G06Q 50/205 — Education administration or guidance

Abstract

The invention provides a gesture interaction system for VR virtual classroom teaching, comprising a virtual application server, a virtual recognition device, and a somatosensory (motion-sensing) camera. After the virtual recognition device is worn by a user, it starts up and acquires and presents the real-time image data pushed by the virtual application server. The somatosensory camera acquires the three-dimensional coordinates of each action of the user and maps them into a corresponding gesture using preset gesture recognition software. The virtual application server identifies the virtual recognition device, receives the gesture of each user action output by the somatosensory camera, compares it with a preset standard gesture library, forms a corresponding feedback action in preset virtual classroom teaching software according to the comparison result, and converts each feedback action into image data pushed to the virtual recognition device for real-time display. By implementing the invention, the problems of gesture interaction through tools such as handles and touch screens in existing virtual reality technology can be overcome, and operation complexity and learning cost are reduced.

Description

Posture interaction system for VR virtual classroom teaching and implementation method
Technical Field
The invention relates to the technical field of virtual reality, and in particular to a posture interaction system for VR virtual classroom teaching and an implementation method thereof.
Background
With the increasing maturity of virtual reality technology and people's growing familiarity with it, the technology has been widely applied at home and abroad. At present, applications in education, entertainment, and art dominate, followed by the military, aviation, and medical fields, with robotics and business also accounting for a certain proportion. The immersion provided by a virtual environment lets trainees obtain direct feelings and experience during training and receive timely guidance and help, thereby improving their skills and training efficiency.
However, virtual reality technology still has many shortcomings. For example, trainees either have no gesture interaction in the system or interact only through keys on a handle, a touch screen, or the like, and cannot interact more naturally with one or more virtual characters in the system. In addition, key operations on handles, touch screens, and the like in existing virtual reality technology are overly complex, the learning cost is high, and the user's immersion is greatly reduced.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a posture interaction system for VR virtual classroom teaching and an implementation method thereof, which overcome the problems of posture interaction through tools such as handles and touch screens in existing virtual reality technology and reduce operation complexity and learning cost.
In order to solve this technical problem, an embodiment of the present invention provides a posture interaction system for VR virtual classroom teaching, including a virtual application server, and a virtual recognition device and a somatosensory camera both connected to the virtual application server; wherein:
the virtual recognition device is used for starting up after being worn by a user and for acquiring and presenting the relevant real-time image data pushed by the virtual application server;
the somatosensory camera is used for acquiring the three-dimensional coordinates of each action of the user, mapping the three-dimensional coordinates of each action into a corresponding gesture using preset gesture recognition software for gesture capture, and outputting the captured gesture of each user action to the virtual application server;
the virtual application server is used for identifying the virtual recognition device, receiving the gesture of each user action output by the somatosensory camera, comparing it with a preset standard gesture library, forming a corresponding feedback action in preset virtual classroom teaching software according to the comparison result, and converting each feedback action into image data that is pushed to the virtual recognition device for real-time display.
The virtual recognition device is a VR helmet and/or VR glasses.
The system further comprises a monitor connected to the virtual application server; wherein:
the monitor is used for monitoring the user's operation of the virtual recognition device and the somatosensory camera's acquisition of the user's actions.
The embodiment of the invention also provides an implementation method of the posture interaction system for VR virtual classroom teaching, which comprises the following steps:
step S1, starting the virtual recognition device;
step S2, acquiring the three-dimensional coordinates of each action of the user through the somatosensory camera, mapping the three-dimensional coordinates of each action into a corresponding gesture for gesture capture using the gesture recognition software preset on the somatosensory camera, and outputting the gesture of each user action captured by the somatosensory camera to the virtual application server;
step S3, identifying the virtual recognition device through the virtual application server, comparing the gesture of each user action received from the somatosensory camera with the standard gesture library preset on the virtual application server, forming a corresponding feedback action in the virtual classroom teaching software preset in the virtual application server according to the comparison result, and converting each feedback action formed by the virtual application server into image data that is pushed to the virtual recognition device for real-time display.
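Steps S2 and S3 can be sketched as follows. This is an illustrative sketch only: the patent does not disclose the actual recognition algorithm, and the gesture names, joint layout, and distance threshold below are all hypothetical.

```python
import math

# Hypothetical standard gesture library: each gesture name maps to a reference
# pose given as a list of (x, y, z) skeletal joint coordinates.
STANDARD_GESTURES = {
    "class_start_standing": [(0.0, 1.7, 0.0), (0.0, 1.4, 0.0), (0.3, 1.4, 0.0)],
    "question":             [(0.0, 1.7, 0.0), (0.0, 1.4, 0.0), (0.3, 1.9, 0.0)],
}

def joint_distance(a, b):
    # Mean Euclidean error over corresponding joints.
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def match_gesture(joints, library, threshold=0.25):
    # Step S3 comparison: find the closest library entry to the captured pose,
    # and return it only when it is within the threshold; otherwise None.
    best = min(library, key=lambda name: joint_distance(joints, library[name]))
    return best if joint_distance(joints, library[best]) <= threshold else None

# A raised-arm pose close to the "question" reference matches it.
captured = [(0.0, 1.68, 0.0), (0.0, 1.41, 0.0), (0.31, 1.88, 0.02)]
print(match_gesture(captured, STANDARD_GESTURES))  # -> question
```

A real deployment would match full skeletons streamed by the camera and likely use a learned classifier rather than a fixed threshold; the nearest-reference comparison above only illustrates the "compare with a preset standard gesture library" step.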
Step S3 specifically includes:
when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is determined to be a class-start standing gesture, calling the standing action of all students in the virtual classroom teaching software preset in the virtual application server, together with the teacher's gesture indicating that the students may sit down, as the feedback action output to the virtual recognition device for display; or
when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is determined to be an end-of-class standing gesture, calling the class-dismissal action in the virtual classroom teaching software preset in the virtual application server as the feedback action output to the virtual recognition device for display, and further calling the class-dismissal speech in the preset virtual classroom teaching software for output to the virtual recognition device.
Step S3 further includes:
when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is determined to be an in-class action gesture, determining the meaning of that gesture and the virtual student it points to in the virtual classroom teaching software preset in the virtual application server, and further calling the feedback action generated by that virtual student in the preset virtual classroom teaching software as the feedback action output to the virtual recognition device for display.
Step S3 further includes:
when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is determined to be a misbehavior gesture, calling the action for correcting the misbehavior in the virtual classroom teaching software preset in the virtual application server as the feedback action output to the virtual recognition device for display.
The method further comprises:
predefining, by the virtual application server, the virtual classroom training software and the standard posture library; wherein:
the virtual classroom training software comprises the virtual scenes and characters required for teaching, developed with Unity3D; the various elements needed for teaching are arranged in the scenes, and a running environment suitable for the virtual recognition device is built with SteamVR;
the standard posture library comprises the teacher's actions, the students' corresponding actions, the various action gestures, and the results of those actions.
The embodiment of the invention has the following beneficial effects:
In the invention, the three-dimensional coordinates of each user action are mapped into corresponding gestures by the gesture recognition software on the somatosensory camera for gesture capture; each captured gesture is compared with the standard gesture library preset in the virtual application server, and the virtual classroom teaching software then forms a corresponding feedback action, simulating gesture interaction in a VR virtual classroom. A real class scene is thus reproduced in the virtual classroom seamlessly and without barriers, so the problems of gesture interaction through tools such as handles and touch screens in existing virtual reality technology can be overcome, and operation complexity and learning cost are reduced.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings based on these drawings without inventive effort, and such drawings remain within the scope of the present invention.
Fig. 1 is a schematic system structure diagram of a posture interaction system for VR virtual classroom teaching according to an embodiment of the present invention;
fig. 2 is a flowchart of an implementation method of the gesture interaction system for VR virtual classroom teaching according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present invention provides a gesture interaction system for VR virtual classroom teaching, including a virtual application server 1, and a virtual recognition device 2 and a somatosensory camera 3 both connected to the virtual application server 1; wherein:
the virtual recognition device 2 is used for starting up after being worn by a user and for acquiring and presenting the relevant real-time image data pushed by the virtual application server 1; the virtual recognition device 2 is a VR helmet and/or VR glasses;
the somatosensory camera 3 is used for acquiring the three-dimensional coordinates of each action of the user, mapping the three-dimensional coordinates of each action into a corresponding gesture using preset gesture recognition software for gesture capture, and outputting the captured gesture of each user action to the virtual application server 1;
the virtual application server 1 is used for identifying the virtual recognition device 2, receiving the gesture of each user action output by the somatosensory camera 3, comparing it with a preset standard gesture library, forming a corresponding feedback action in preset virtual classroom teaching software according to the comparison result, and converting each feedback action into image data that is pushed to the virtual recognition device 2 for real-time display.
It should be noted that the virtual application server 1 predefines the virtual classroom training software and the standard posture library. The virtual classroom training software comprises the virtual scenes and characters required for teaching, developed with Unity3D; the various elements necessary for teaching are arranged in the scenes, and a running environment suitable for the virtual recognition device 2 is built with SteamVR. The standard posture library comprises the teacher's actions, the students' corresponding actions, the various action gestures, and the results of those actions.
It should also be noted that the virtual classroom training software preset in the virtual application server 1 mainly supports voice interaction: voice information is received through an external device and passed to a speech recognition stage; if recognition succeeds, the result is matched against a voice library in the virtual classroom training software; after a successful match, the search result is obtained and checked against the trigger-condition attributes in the result; if the current environment allows the event to be triggered, the corresponding instruction is transmitted to the students through the virtual recognition device 2, and the students finally execute the corresponding action to complete the interaction.
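The voice-interaction flow just described — recognize, match against a voice library, check the trigger condition, then dispatch an instruction — can be sketched as follows. The library entries, state names, and instructions are hypothetical; the patent does not specify them.

```python
# Hypothetical voice library: recognized text maps to an instruction and the
# classroom state in which triggering that instruction is allowed.
VOICE_LIBRARY = {
    "please stand up": {"instruction": "stand_up", "allowed_when": "in_class"},
    "class dismissed": {"instruction": "dismiss",  "allowed_when": "in_class"},
}

def handle_voice(recognized_text, classroom_state):
    entry = VOICE_LIBRARY.get(recognized_text)    # matching search in the library
    if entry is None:
        return None                               # no match: no interaction occurs
    if classroom_state != entry["allowed_when"]:  # trigger-condition check
        return None                               # event not allowed in this state
    return entry["instruction"]                   # forwarded to the virtual students

print(handle_voice("please stand up", "in_class"))  # -> stand_up
print(handle_voice("class dismissed", "break"))     # -> None
```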
The standard gesture library set in the virtual application server 1 covers a variety of application scenarios, which specifically include, but are not limited to, the following:
(1) Class-start standing gesture: when the gesture of the user action output by the somatosensory camera 3 is compared with the standard gesture library preset in the virtual application server 1 and is determined to be a class-start standing gesture, the standing action of all students in the virtual classroom teaching software preset in the virtual application server 1, together with the teacher's gesture indicating that the students may sit down, is called up as the feedback action and output to the virtual recognition device 2 for display.
(2) End-of-class standing gesture: when the gesture of the user action output by the somatosensory camera 3 is compared with the standard gesture library preset in the virtual application server 1 and is determined to be an end-of-class standing gesture, the class-dismissal action in the virtual classroom teaching software preset in the virtual application server 1 is called up as the feedback action and output to the virtual recognition device 2 for display, and the class-dismissal speech in the preset virtual classroom teaching software is further called up and output to the virtual recognition device 2.
(3) In-class action gesture: when the gesture of the user action output by the somatosensory camera 3 is compared with the standard gesture library preset in the virtual application server 1 and is determined to be an in-class action gesture, the meaning of that gesture and the virtual student it points to are determined in the virtual classroom teaching software preset in the virtual application server 1, and the feedback action generated by that virtual student in the preset virtual classroom teaching software is further called up as the feedback action and output to the virtual recognition device 2 for display.
(4) Misbehavior gesture: when the gesture of the user action output by the somatosensory camera 3 is compared with the standard gesture library preset in the virtual application server 1 and is determined to be a misbehavior gesture, the action for correcting the misbehavior in the virtual classroom teaching software preset in the virtual application server 1 is called up as the feedback action and output to the virtual recognition device 2 for display.
(5) Questioning gesture: when the gesture of the user action output by the somatosensory camera 3 is compared with the standard gesture library preset in the virtual application server 1 and is determined to be a questioning gesture, the meaning of the question and the virtual student it points to are determined in the virtual classroom teaching software preset in the virtual application server 1, and the feedback action generated by that virtual student in the preset virtual classroom teaching software is further called up as the feedback action and output to the virtual recognition device 2 for display.
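The five scenarios above amount to a dispatch from recognized gesture to feedback action. A minimal sketch, with hypothetical gesture and action names (the patent does not name them):

```python
# Hypothetical dispatch table condensing scenarios (1)-(5): each recognized
# gesture maps to the feedback called up in the virtual classroom software.
FEEDBACK_TABLE = {
    "class_start_standing": ["all_students_stand", "teacher_signals_sit_down"],
    "class_end_standing":   ["dismissal_action", "dismissal_speech"],
    "in_class_action":      ["resolve_pointed_student", "pointed_student_feedback"],
    "misbehavior":          ["correct_misbehavior"],
    "question":             ["resolve_pointed_student", "pointed_student_answers"],
}

def feedback_for(gesture):
    # An unrecognized gesture produces no feedback, mirroring a failed
    # comparison against the standard gesture library.
    return FEEDBACK_TABLE.get(gesture, [])

print(feedback_for("misbehavior"))  # -> ['correct_misbehavior']
```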
The embodiment of the present invention further includes a monitor 4 connected to the virtual application server 1; the monitor 4 is configured to monitor the user's operation of the virtual recognition device 2 and the somatosensory camera 3's acquisition of the user's actions.
The application scene of the gesture interaction system for the VR virtual classroom teaching in the embodiment of the invention is specifically as follows:
When a teacher trains in the virtual classroom, the subject to be trained and the number of students are first set on the virtual application server 1, and then the habitual classroom actions are set, such as standing up, sitting down, stopping a discussion, and holding a standard posture. To restore the classroom situation realistically, some poor-performance behaviors, such as whispering to each other, can also be configured on the server for some students. After the above settings are completed, the "random assignment" button on the virtual application server 1 is clicked so that all settings become random.
The main purpose of the randomness setting completed on the virtual application server 1 is to ensure that the questioning students, the answering students, and the misbehaving students are not always the same persons; with randomness set, the actions occur at random, so the teaching process in the virtual classroom can simulate a real classroom teaching process. Finally, the key knowledge points of the courseware also need to be marked as trigger points for starting questions, after which the setup on the virtual application server 1 is basically complete.
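The "random assignment" step can be sketched as follows. This is an assumption-laden illustration: the patent does not describe how the server picks students, so the role names and counts here are hypothetical.

```python
import random

def assign_roles(num_students, num_questioners=2, num_misbehaving=2):
    # Randomly pick disjoint sets of virtual students to ask questions and to
    # misbehave, so the same students are not always selected and each
    # simulated lesson varies.
    students = list(range(num_students))
    questioners = random.sample(students, num_questioners)
    remaining = [s for s in students if s not in questioners]
    misbehaving = random.sample(remaining, num_misbehaving)
    return {"questioners": questioners, "misbehaving": misbehaving}

roles = assign_roles(num_students=30)
print(roles)  # e.g. {'questioners': [...], 'misbehaving': [...]}
```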
After all the related setup is completed, the teaching and training process proceeds as follows. The teacher puts on the VR helmet 2, selects the students to be trained and the scene requirements, and imports the required courseware into the virtual classroom teaching software; the scene seen in the VR helmet 2 is the same as a real classroom. The lesson begins when the class bell rings: the teacher calls on the virtual students to greet, and the somatosensory camera 3 captures the teacher's posture, recognizes it, and matches it against the standard posture library. After a successful match, the standing-up action and the matching voice in the database are called up, the virtual application server 1 plays the whole standing-up action and voice, and the students sit down again according to the teacher's gesture and greeting.
After the interactive greeting, the formal lesson begins and the teacher teaches the students according to the courseware. At the start of the lesson the students cannot ask questions; as more content is explained and marked points appear in the courseware, the teacher can ask questions about key content. When the teacher asks a question, the somatosensory camera 3 first feeds the posture signal into the virtual classroom teaching software, where it is matched against the standard posture library; after a successful match, the voice question is submitted to the virtual student the posture points to, and that virtual student answers immediately. The teacher can also ask other students to answer one by one; the answers and actions of the virtual students are converted into action signals and then output. When the teacher decides the answering can end, the teacher can ask the virtual students to sit down and summarize and evaluate the question, or evaluate each answering student separately. Similarly, the teacher can have several students stand up to ask questions and speak; when the teacher decides the questioning can end, the teacher must end it with the corresponding posture and then begin to address all the students' questions one by one.
Because a lesson in the virtual environment simulates a real class scene, after the lesson has run for a certain time the poor-performance actions set on the server are triggered at random, and several virtual students show poor learning behavior. When the teacher sees this, he or she should immediately stop it with a gesture; likewise, after the somatosensory camera 3 captures the teacher's posture signal, the posture recognition system converts it into a signal recognizable in the virtual classroom, and when the system identifies the named student, that student stops the poor learning behavior.
Finally, at the end of the lesson, the teacher's posture is recognized again and matched against the standard posture library on the virtual application server 1; the end-of-class standing action and voice are called up, and the teacher exchanges the closing reminders and farewells with the virtual students. At this point, the training process ends.
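The training session described above can be summarized as a small state machine driven by recognized gestures. The states, gesture names, and transitions below are an illustrative condensation, not part of the patent's disclosure.

```python
# Hypothetical lesson flow: greeting -> lecture -> questioning -> dismissal,
# advanced by gestures recognized from the somatosensory camera.
TRANSITIONS = {
    ("greeting", "class_start_standing"): "lecture",
    ("lecture", "question"):              "questioning",
    ("questioning", "end_questioning"):   "lecture",
    ("lecture", "class_end_standing"):    "dismissal",
}

def step(state, gesture):
    # A gesture with no transition defined leaves the lesson state unchanged.
    return TRANSITIONS.get((state, gesture), state)

state = "greeting"
for g in ["class_start_standing", "question", "end_questioning", "class_end_standing"]:
    state = step(state, g)
print(state)  # -> dismissal
```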
As shown in fig. 2, in an embodiment of the present invention, an implementation method of a gesture interaction system for VR virtual classroom teaching is provided, including the following steps:
step S1, starting the virtual recognition device;
step S2, acquiring the three-dimensional coordinates of each action of the user through the somatosensory camera, mapping the three-dimensional coordinates of each action into a corresponding gesture for gesture capture using the gesture recognition software preset on the somatosensory camera, and outputting the gesture of each user action captured by the somatosensory camera to the virtual application server;
step S3, identifying the virtual recognition device through the virtual application server, comparing the gesture of each user action received from the somatosensory camera with the standard gesture library preset on the virtual application server, forming a corresponding feedback action in the virtual classroom teaching software preset in the virtual application server according to the comparison result, and converting each feedback action formed by the virtual application server into image data that is pushed to the virtual recognition device for real-time display.
In step S3, the virtual application server predefines the virtual classroom training software and the standard posture library. The virtual classroom training software comprises the virtual scenes and characters required for teaching, developed with Unity3D; the various elements necessary for teaching are arranged in the scenes, and a running environment suitable for the virtual recognition device is built with SteamVR. The standard posture library comprises the teacher's actions, the students' corresponding actions, the various action gestures, and the results of those actions.
In step S3, the standard gesture library set in the virtual application server 1 covers a variety of application scenarios, which specifically include, but are not limited to, the following:
(1) when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is determined to be a class-start standing gesture, the standing action of all students in the virtual classroom teaching software preset in the virtual application server, together with the teacher's gesture indicating that the students may sit down, is called up as the feedback action and output to the virtual recognition device for display;
(2) when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is determined to be an end-of-class standing gesture, the class-dismissal action in the virtual classroom teaching software preset in the virtual application server is called up as the feedback action and output to the virtual recognition device for display, and the class-dismissal speech in the preset virtual classroom teaching software is further called up and output to the virtual recognition device;
(3) when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is determined to be an in-class action gesture, the meaning of that gesture and the virtual student it points to are determined in the virtual classroom teaching software preset in the virtual application server, and the feedback action generated by that virtual student in the preset virtual classroom teaching software is further called up as the feedback action and output to the virtual recognition device for display;
(4) when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is determined to be a misbehavior gesture, the action for correcting the misbehavior in the virtual classroom teaching software preset in the virtual application server is called up as the feedback action and output to the virtual recognition device for display.
The embodiment of the invention has the following beneficial effects:
In the invention, the three-dimensional coordinates of each user action are mapped into corresponding gestures by the gesture recognition software on the somatosensory camera for gesture capture; each captured gesture is compared with the standard gesture library preset in the virtual application server, and the virtual classroom teaching software then forms a corresponding feedback action, simulating gesture interaction in a VR virtual classroom. The real class scene can thus be reproduced in the virtual classroom seamlessly and without obstruction, so the problems of gesture interaction through tools such as handles and touch screens in existing virtual reality technology can be overcome, and operation complexity and learning cost are reduced.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (7)

1. A posture interaction system for VR virtual classroom teaching, characterized by comprising a virtual application server, a virtual recognition device and a somatosensory camera, the virtual recognition device and the somatosensory camera being connected with the virtual application server; wherein:
the virtual recognition device is used for starting after being worn by a user, and for acquiring and presenting the relevant real-time image data pushed by the virtual application server;
the somatosensory camera is used for acquiring the three-dimensional coordinates of each action of the user, mapping those coordinates into corresponding gestures for gesture capture using preset gesture recognition software, and outputting the captured gesture of each user action to the virtual application server;
the virtual application server is used for identifying the virtual recognition device, receiving the gesture of each user action output by the somatosensory camera, comparing it with a preset standard gesture library, forming a corresponding feedback action in preset virtual classroom teaching software according to the comparison result, and converting each feedback action so formed into image data pushed to the virtual recognition device for display in real time;
wherein, when the gesture of the user action output by the somatosensory camera is compared with the standard gesture library preset in the virtual application server and is identified as an in-class action gesture, the meaning of that gesture and the virtual student it points to are determined in the virtual classroom teaching software preset in the virtual application server, and the feedback action generated by the pointed virtual student in that software is then called as the feedback action and output to the virtual recognition device for display.
2. The gesture interaction system for VR virtual classroom teaching of claim 1, wherein the virtual recognition device is a VR headset and/or VR glasses.
3. The gesture interaction system for VR virtual classroom teaching of claim 1, further comprising a monitor connected to the virtual application server; wherein:
the monitor is used for monitoring the user's operation of the virtual recognition device and the somatosensory camera's acquisition of each action of the user.
4. A method for implementing a posture interaction system for VR virtual classroom teaching, characterized by comprising the following steps:
step S1, starting the virtual recognition device;
step S2, acquiring the three-dimensional coordinates of each action of the user through the somatosensory camera, mapping them into corresponding gestures for gesture capture using gesture recognition software preset on the somatosensory camera, and outputting each captured gesture to the virtual application server;
step S3, identifying the virtual recognition device through the virtual application server, comparing each gesture received from the somatosensory camera with a standard gesture library preset on the virtual application server, forming a corresponding feedback action in virtual classroom teaching software preset in the virtual application server according to the comparison result, and converting each feedback action so formed into image data pushed to the virtual recognition device for display in real time;
wherein, when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is identified as an in-class action gesture, the meaning of that gesture and the virtual student it points to are determined in the virtual classroom teaching software preset in the virtual application server, and the feedback action generated by the pointed virtual student in that software is then called as the feedback action and output to the virtual recognition device for display.
5. The method of claim 4, wherein the step S3 specifically includes:
when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is identified as a class-begins standing action gesture, the standing actions of all students in the virtual classroom teaching software preset in the virtual application server, together with the teacher's gesture motioning the students to sit down, are called as feedback actions and output to the virtual recognition device for display; or
when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is identified as a class-dismissal action gesture, the class-dismissal announcement action in the virtual classroom teaching software preset in the virtual application server is called as a feedback action and output to the virtual recognition device for display, and the class-dismissal announcement speech in that software is further called and output to the virtual recognition device.
6. The method of implementing the gesture interaction system for VR virtual classroom teaching of claim 4, wherein the step S3 further includes:
when the gesture of the user action output by the somatosensory camera is compared with the preset standard gesture library and is identified as a bad action gesture, the action for correcting that bad action gesture in the virtual classroom teaching software preset in the virtual application server is called as the feedback action and output to the virtual recognition device for display.
7. The method for implementing the gesture interaction system for VR virtual classroom teaching of claim 4, further comprising:
presetting, by the virtual application server, the virtual classroom teaching software and the standard gesture library; wherein:
the virtual classroom teaching software comprises the virtual scenes and characters required for teaching, developed with the Unity3D software, with the various elements required for teaching arranged in the scenes, and an operating environment suitable for the virtual recognition device, developed with the SteamVR software;
the standard gesture library comprises the actions of the teacher, the corresponding actions of the students, the various action gestures, and the action results.
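The standard gesture library of claim 7, pairing teacher actions with the corresponding student actions, gesture categories, and action results, can be pictured as a small lookup table. The field names and entries below are illustrative assumptions, not contents of the actual library:

```python
# Hypothetical entries of the standard gesture library described in claim 7.
STANDARD_GESTURE_LIBRARY = [
    {
        "teacher_action": "class_begins",
        "student_action": "stand_up",
        "category": "in_class_action",
        "result": "teacher_motions_students_to_sit",
    },
    {
        "teacher_action": "class_dismissed",
        "student_action": "stand_up",
        "category": "in_class_action",
        "result": "play_dismissal_announcement",
    },
]

def lookup(teacher_action):
    """Find the library entry for a teacher action, or None if absent."""
    return next((e for e in STANDARD_GESTURE_LIBRARY
                 if e["teacher_action"] == teacher_action), None)
```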
CN201810716409.1A 2018-06-29 2018-06-29 Posture interaction system for VR virtual classroom teaching and implementation method Active CN109101879B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810716409.1A CN109101879B (en) 2018-06-29 2018-06-29 Posture interaction system for VR virtual classroom teaching and implementation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810716409.1A CN109101879B (en) 2018-06-29 2018-06-29 Posture interaction system for VR virtual classroom teaching and implementation method

Publications (2)

Publication Number Publication Date
CN109101879A CN109101879A (en) 2018-12-28
CN109101879B true CN109101879B (en) 2022-07-01

Family

ID=64845462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810716409.1A Active CN109101879B (en) 2018-06-29 2018-06-29 Posture interaction system for VR virtual classroom teaching and implementation method

Country Status (1)

Country Link
CN (1) CN109101879B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012501B (en) * 2021-03-18 2023-05-16 深圳市天天学农网络科技有限公司 Remote teaching method
CN113204306A (en) * 2021-05-12 2021-08-03 同济大学 Object interaction information prompting method and system based on augmented reality environment
CN113434035B (en) * 2021-05-19 2022-03-29 华中师范大学 Teaching reuse method for VR panoramic image material
CN113347187A (en) * 2021-06-02 2021-09-03 温州大学 Badminton training method and system based on virtual reality technology
CN113362672B (en) * 2021-08-11 2021-11-09 深圳市创能亿科科技开发有限公司 Teaching method and device based on virtual reality and computer readable storage medium
CN114743419B (en) * 2022-03-04 2024-03-29 国育产教融合教育科技(海南)有限公司 VR-based multi-person virtual experiment teaching system
TWI823478B (en) * 2022-07-18 2023-11-21 新加坡商鴻運科股份有限公司 Method, electronic equipment and storage medium for action management for artificial intelligence
CN117472188B (en) * 2023-12-07 2024-04-19 联通沃音乐文化有限公司 VR gesture information control device and method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101916333A (en) * 2010-08-12 2010-12-15 四川大学华西医院 Transesophageal echocardiography visual simulation system and method
CN105373224A (en) * 2015-10-22 2016-03-02 山东大学 Hybrid implementation game system based on pervasive computing, and method thereof
CN107657955A (en) * 2017-11-09 2018-02-02 温州大学 A kind of interactive voice based on VR virtual classrooms puts question to system and method

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN101295442A (en) * 2008-06-17 2008-10-29 上海沪江虚拟制造技术有限公司 Non-contact stereo display virtual teaching system
US8755569B2 (en) * 2009-05-29 2014-06-17 University Of Central Florida Research Foundation, Inc. Methods for recognizing pose and action of articulated objects with collection of planes in motion
CA2889778A1 (en) * 2014-04-28 2015-10-28 Modest Tree Media Inc. Virtual interactive learning environment
CN105809144B (en) * 2016-03-24 2019-03-08 重庆邮电大学 A kind of gesture recognition system and method using movement cutting
CN106020440A (en) * 2016-05-05 2016-10-12 西安电子科技大学 Emotion interaction based Peking Opera teaching system
CN107168538A (en) * 2017-06-12 2017-09-15 华侨大学 A kind of 3D campuses guide method and system that emotion computing is carried out based on limb action
CN107992189A (en) * 2017-09-22 2018-05-04 深圳市魔眼科技有限公司 A kind of virtual reality six degree of freedom exchange method, device, terminal and storage medium
CN107765859A (en) * 2017-11-09 2018-03-06 温州大学 A kind of training system and method based on VR virtual classrooms

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN101916333A (en) * 2010-08-12 2010-12-15 四川大学华西医院 Transesophageal echocardiography visual simulation system and method
CN105373224A (en) * 2015-10-22 2016-03-02 山东大学 Hybrid implementation game system based on pervasive computing, and method thereof
CN107657955A (en) * 2017-11-09 2018-02-02 温州大学 A kind of interactive voice based on VR virtual classrooms puts question to system and method

Non-Patent Citations (1)

Title
Design and Implementation of Kinect-Based Somatosensory Interaction Development Middleware; Zhang Yihang et al.; Computer Applications and Software; 2018-04-15 (Issue 04); 16-20+38 *

Also Published As

Publication number Publication date
CN109101879A (en) 2018-12-28

Similar Documents

Publication Publication Date Title
CN109101879B (en) Posture interaction system for VR virtual classroom teaching and implementation method
CN107657955B (en) Voice interaction question-asking system and method based on VR virtual classroom
CN107765859A (en) A kind of training system and method based on VR virtual classrooms
CN211878778U (en) Real-time interactive chemical experiment teaching system based on virtual reality technology
KR20030067497A (en) learning system
CN108648535A (en) A kind of tutoring system and its operation method based on the mobile terminals VR technology
CN111240490A (en) Equipment insulation test training system based on VR virtual immersion and circular screen interaction
CN110444061B (en) Thing networking teaching all-in-one
CN108847075A (en) A kind of long-distance educational system and educational method
US20220309947A1 (en) System and method for monitoring and teaching children with autistic spectrum disorders
CN109754653B (en) Method and system for personalized teaching
CN109377802A (en) A kind of automatic and interactive intellectual education system and method
CN112331001A (en) Teaching system based on virtual reality technology
CN112367526B (en) Video generation method and device, electronic equipment and storage medium
CN110956863A (en) Method for playing interactive teaching by using 3D holographic projection
CN111050111A (en) Online interactive learning communication platform and learning device thereof
Barmaki Multimodal assessment of teaching behavior in immersive rehearsal environment-teachlive
Farlianti et al. The Analysis Of Gesture Used By The Students Of English Study Program In The Classroom Interaction At The University Of Sembilanbelas November, Kolaka
CN111695496A (en) Intelligent interactive learning method, learning programming method and robot
Henricsson Student Teachers’ Storytelling: Countering Neoliberalism in Education
McCloskey Irish sign language in a virtual reality environment
Masuta et al. Development of presentation robot system for classroom based on state of participants
Chou et al. A mandarin phonetic-symbol communication aid developed on tablet computers for children with high-functioning autism
CN110413130B (en) Virtual reality sign language learning, testing and evaluating method based on motion capture
Kawabe et al. Measurement of hand raising actions to support students’ active participation in class

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant