CN111862711A - Entertainment and leisure learning device based on 5G Internet of Things virtual reality


Info

Publication number
CN111862711A
CN111862711A
Authority
CN
China
Prior art keywords: module, client, learning, health, virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010562780.4A
Other languages
Chinese (zh)
Inventor
刘应江
唐建武
侯勇滨
俸成玲
钟文轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Guangjian Communication Technology Co ltd
Original Assignee
Guangzhou Guangjian Communication Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Guangjian Communication Technology Co ltd
Priority to CN202010562780.4A
Publication of CN111862711A
Legal status: Pending (current)



Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 - Simulators for teaching or training purposes
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a virtual reality entertainment and leisure learning device based on the 5G Internet of Things. Its hardware structure comprises an entertainment and leisure learning space unit into which the following are integrated: a panoramic immersive stereoscopic display system, an environment-aware health special effect system, a dynamic platform seat device, a main server, an interaction module, a communication network module, a client, an integrated control and playback system, and a stereo surround sound system. Because numerous software and hardware devices are organically integrated, one set of equipment supports multiple use functions: a panoramic virtual reality dynamic cinema device, a panoramic virtual reality immersive memory-enhanced learning device, a panoramic virtual reality immersive leisure and health-preserving device, a panoramic virtual reality medical rehabilitation and health-preserving device, and the like. Multiple uses of one set of equipment are thus realized, improving the overall utilization efficiency of the software and hardware and reducing cost.

Description

Entertainment and leisure learning device based on 5G internet of things virtual reality
Technical Field
The invention belongs to the technical field of learning, leisure and entertainment equipment, and in particular to the field of virtual reality intelligent learning, leisure and entertainment devices based on a 5G Internet of Things technical framework.
Background
In existing dynamic cinemas, a film is typically shown for a month or even longer, which dampens consumer interest and wastes resources. When a new film is scheduled, the corresponding configuration files for the dynamic cinema control system are complex to produce and take a long time, and the seat control files, such as the dynamic seat action configuration file and the environment special effect configuration file, are cumbersome to update. The dynamic cinema control system disclosed by the invention therefore aims to improve the extensibility of the dynamic cinema and to enable rapid updating of the cinema's playback content and interactive content.
A traditional study room, classroom or learning space only provides a physical space for learners; it does not interact with the learner's environment during learning, growth, leisure or entertainment, let alone provide active artificial intelligence interaction with that environment, so an intelligent learning environment cannot be formed. Modern education theory regards the acquisition of knowledge and ability as interaction with the surrounding environment; users of such spaces therefore do not enjoy a sufficient sense of embeddedness and immersion for knowledge, entertainment and leisure.
A conventional leisure massage chair possesses only simple massage functions; it does not interact with its surroundings to create a sense of embeddedness and immersion during massage and leisure, nor does it interact with the surrounding air environment, even though a massage and leisure product should optimize and purify air quality.
Conventional office spaces are merely physical spaces and do not provide efficient office-oriented interaction with the environment.
Conventional virtual reality entertainment products, such as the various conventional virtual reality experience halls, provide standard experience products, for example dynamic extreme-theme entertainment products (combat tank, roller coaster and aviation flight products), military shooting theme products, aerospace flight theme products, racing products and the like. Each scene product requires its own software and hardware design, so the universality and extensibility of the amusement scenes are poor; changing a scene or theme requires different hardware or software, the development cost of the software and hardware is very high, and popularization of such products is hindered.
In contrast, an intelligent education system provides students with a comprehensive intelligent perception environment and an intelligent learning service platform, effectively collects information related to their learning, and delivers personalized, intelligent learning and management services.
At present, in the field of modern education technology, how to make teaching content three-dimensional, visual, vivid, gamified, interesting, personalized and immersive remains a research hotspot and an unsolved problem. The drawbacks of traditional factory-style education grow with the development of human society; for example, traditional factory-style education cannot provide personalized learning tailored to each student's individual aptitudes, because practical education resources cannot accommodate personalized teaching for every student.
Although conventional teaching multimedia equipment (ordinary television teaching, projection and sound systems, and the like) has been added to traditional teaching, such multimedia video equipment is merely added on top of the traditional classroom space and serves only as one more teaching aid. Because of its technical structure, existing teaching multimedia equipment cannot render teaching content as three-dimensional, visual, animated, gamified, interesting, personalized and immersive scenes, so the content is neither vivid nor engaging, memories are not stable or lasting, and teaching efficiency is very low.
Existing handheld devices such as tablet computers, learning machines, smart phones and computers likewise fail to make teaching content vivid and engaging or memories stable and lasting, because they do not provide three-dimensional, visual, animated, gamified, interesting, personalized and immersive presentation of the content, and their teaching efficiency is also very low.
The so-called intelligent learning machine is essentially a tablet loaded with various application software; it still belongs to the field of tablet computers and smart phones. Because the video area for human-computer interaction is narrow, it provides essentially no immersive virtual feeling, has no auxiliary teaching system, and cannot generate teaching content scenes, animation scenes or game-based teaching scenes from the contextual semantics of the teaching content.
In addition, existing products on the market such as hyperbaric oxygen chambers, hyperbaric oxygen rooms, negative ion generators, oxygen bars, gymnasiums and electromagnetic physiotherapy instruments have single functions, a low degree of automation and poor intelligence; users cannot receive physiotherapy while working, studying or exercising, so time cannot be used in an overlapping way. Existing hyperbaric oxygen rooms, chambers and oxygen bars offer only a fixed high-pressure, high-oxygen-concentration passive oxygen inhalation mode; they cannot provide an optimal active oxygen inhalation mode adapted to each person's physical condition during study, work, rest and exercise, their oxygen concentration and pressure are fixed and cannot be intelligently adjusted, their comfort is poor, their function is single, and rehabilitation physiotherapy cannot be combined with study or work during leisure time. Negative ion generators on the market are likewise small-range devices whose negative ion concentration cannot be intelligently adjusted. Some physical examination devices only perform examinations and cannot start corresponding adjustment and physiotherapy devices according to the examination results to intelligently condition and treat the body. Some physiotherapy equipment offers only fixed, universal conditioning modes and cannot deliver personalized physiotherapy. Some functional rooms can only regulate temperature, humidity, oxygen concentration, negative ion concentration and the like; they cannot intelligently and individually regulate multiple indexes (temperature, humidity, oxygen, negative ion concentration, sound, smell, vision and so on) across multiple parameters, cannot interact with a virtual environment scene, provide no immersive experience, and make rehabilitation physiotherapy feel like a burden.
In addition, current rehabilitation and health-preserving equipment has a single function and no corresponding health-preserving content scenes; health-preserving items are isolated and performed alone, so the loneliness of rehabilitation cannot be relieved and no sense of involvement is created, the leisure time spent on rehabilitation cannot be used for anything else, and a large amount of the user's time and money is wasted. These technical and economic problems urgently need to be solved, and the following technical scheme of the invention is constructed for that purpose.
Disclosure of Invention
To address these pain points, the following technical scheme is designed:
1. The invention designs a virtual reality entertainment and learning device that constitutes an artificial intelligence virtual reality space. It interacts with the client through multiple interaction modes including visual recognition, speech recognition, and conventional keyboard and mouse input. After entering the space, the user can enter a specific theme space through voice interaction, manual interaction or other interaction modes, for example: (1) a panoramic dynamic cinema theme space, (2) a memory-enhanced intelligent learning system theme space, (3) a leisure, office and health theme space, and (4) an instruction-free, automatically perceiving human-machine intelligent interaction theme space. The specific use modes are shown in figure 1.
This virtual reality entertainment and leisure learning device comprises an entertainment and leisure learning space unit into which the following software and hardware are integrated: a panoramic immersive stereoscopic display system, an environment-aware health special effect system, a dynamic platform seat device, a main server, an interaction module, a communication network module, a client, an integrated control and playback system, and a stereo surround sound system. The main server is connected with the integrated control and playback system, and the client is connected with the main server; the panoramic immersive stereoscopic display system, the stereo surround sound system, the environment-aware health special effect system and the dynamic platform seat device are connected with the integrated control and playback system.
The entertainment, leisure and learning space unit may take any of the following structures: an egg-shaped three-dimensional space structure, a space-capsule three-dimensional space structure, an elliptical three-dimensional space structure, a conventional cubic or cuboid space structure, or a study room structure.
The panoramic immersive stereoscopic display system module comprises a main display system and an auxiliary scene display system. The stereoscopic display system is any one, or any combination, of an LED display device, an OLED display device, a dome-screen cinema display device, a circular-screen cinema display device, a multi-picture cinema display device and a 360-degree full-sphere cinema display device; these devices or systems are connected with the integrated control and playback system module.
The integrated control and playback system comprises: A. an intelligent media control system for analyzing and judging each media signal source and issuing intelligent prompts or operations, in particular performing voice recognition, dynamic image tracking, key action recognition and the like; B. an intelligent mixed reality system for three-dimensional scene output and real-time image fusion, whose specific functions include real-time three-dimensional scene rendering, real-time scene switching, real-time color-key mixing, matting control and the like. A software schematic of the integrated control and playback system is shown in figure 4.
The client is one, or a combination, of a PC client, a tablet computer client, a mobile phone client, a posture recognition client, a dynamic seat platform client and an environment-aware health system client. Application software is installed on the mobile phone, PC and tablet clients and comprises: a panoramic dynamic cinema system, a memory-enhanced intelligent learning system, an office leisure and health-preserving system, an intelligent login and payment system, and an automatic generation system for semantics-based virtual reality scenes;
the interaction module comprises: an environment-aware health special effect system interaction module, a dynamic platform seat device interaction module, a client interaction module, an action recognition and simulation interaction module (such as a simulated sword/gun module), and a voice interaction module. The action recognition and simulation interaction module comprises a Kinect action recognition module, data-enabled simulated weapons, data clothes, data gloves, data rings, VR glasses and the like.
The client is connected with the main server through a communication network module.
Because the system is based on a C/S architecture, each client is connected with the main server through a corresponding network module. The network module is a virtual module established between the system server and each client; it mainly uses socket network communication technology to transmit data and comprises a server sub-module and a client sub-module. The server sub-module comprises five sub-servers: an environment-aware platform sub-server, a dynamic platform sub-server, a mobile phone/tablet/PC sub-server, a Kinect sub-server and a simulated sword/gun sub-server. The client sub-module is embedded in each interaction module and is responsible for collecting and pre-processing the interaction data generated by that module and transmitting it over the network to the corresponding sub-server to complete the data exchange.
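As an illustration of the socket-based data exchange described above, the following minimal Java sketch shows a client sub-module forwarding one pre-processed interaction sample to its sub-server and reading the feedback. The server address, port number and pipe-delimited message format are assumptions for illustration only; the patent does not specify the actual protocol.

// Minimal sketch (assumptions: the sub-server listens on TCP port 9001 and messages
// are newline-terminated UTF-8 strings of the form "DEVICE_ID|EVENT|VALUE").
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class InteractionClientStub {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("192.168.1.10", 9001);                 // hypothetical sub-server address
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {

            // Pre-process and forward one interaction sample to the sub-server.
            out.println("SEAT_01|PITCH|12.5");

            // Receive the server's feedback, e.g. the next seat posture command.
            String feedback = in.readLine();
            System.out.println("Server feedback: " + feedback);
        }
    }
}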
The entertainment, leisure and learning space unit can be designed as an egg-shaped three-dimensional space, a space-capsule three-dimensional space, an elliptical three-dimensional space, a conventional cubic or cuboid study space, a bullet-train cab three-dimensional space, a classroom-shaped space, a study-shaped space, an arc-shaped space, or a massage rest cabin. Taking the elliptical arc-shaped cabin as an example, the technical structure is as follows: the whole learning and entertainment cabin has the appearance of a giant eggshell-shaped three-dimensional space or a half-eggshell-shaped three-dimensional space; the negative oxygen ion generator and the air outlet of the air conditioning device share the air supply module of a single outlet, and both may be installed at any suitable position, provided the eggshell-shaped outer surface does not obstruct passage; the negative oxygen ion outlet and the air conditioning outlet blow air from this shared outlet.
2. In order to solve the problems of the prior art, the invention provides an environment-aware special effect system device, comprising:
the environment special effect device group comprises: an air conditioning device, an olfactory simulation system device, a snowfall simulator, a bubble machine, a rainfall simulator, a lightning simulator, a smoke machine and a blower, the air conditioning device being shared among the subsystems. The olfactory simulation system device comprises an odor synthesis device and an odor outlet. The air outlet of the oxygen/negative ion generating device, the odor outlet of the olfactory simulation system and the blower are connected together, or combined into one integrated air outlet device, which is arranged at the head of the dynamic table-and-chair platform device, or directly above or behind its armchair so as to project onto the corresponding part of the inner wall of the learning, leisure and entertainment space unit. The outdoor units of these devices are all installed at suitable positions around the entertainment and leisure learning space unit;
these devices are connected with the environment-aware health special effect system client through the Internet of Things sensing system and the network module; the environment-aware health special effect system client is connected with the main server, the main server is connected with the integrated control and playback system module, and the integrated control and playback system module is connected with the environment-aware health special effect system devices and executes feedback actions based on the scene data output;
the health examination and health-preserving device comprises: an oxygen/negative ion generating device, an automatic health examination system and an automatic rehabilitation system. The automatic health examination system comprises any one, or any combination, of an infrared tomography scanner, an ultrasonic detector and an electrocardiograph. The automatic rehabilitation system further comprises any one, or any combination, of fumigation devices, atomization physiotherapy devices, electrotherapy equipment, ultrasonic equipment, magnetotherapy equipment, music therapy equipment, phototherapy equipment, hot/cold therapy equipment, far-infrared generating devices and laser beauty equipment. These devices are connected with the main server through the Internet of Things sensing system and the network module, and the integrated control and playback system issues and executes the corresponding synchronous real-time instructions according to the scene data of the health-preserving theme content;
the air purification and disinfection system device comprises: an ozone generator, an ultraviolet disinfection instrument and a lighting lamp. The working head of the ultraviolet disinfection instrument is located on the inner side of the top of the cabin; its connecting cable passes through a hole in the outer wall of the cabin to connect the working head and the machine body, with a seal between the cable and the outer wall. The lighting lamp is located in the air-sweeping area of the legs of the dynamic seat, the vent of the ventilator is located above the entrance cabin door, and these devices are interlocked with the open/closed state of the cabin door.
Through the centralized control and playback system, these devices can simulate the special effects of a real environment according to the content of the film, giving the audience an immersive viewing experience. The olfactory simulation system device comprises an odor synthesis device and an odor outlet; for example, the air outlet of the oxygen/negative ion generating device, the odor outlet of the olfactory simulation system, the blower and the air outlet of the air conditioning device may be connected together or combined into one integrated air outlet device arranged at the head of the armchair of the dynamic table-and-chair platform device, or directly above or behind the armchair so as to project onto the corresponding part of the inner wall of the learning, leisure and entertainment space unit.
3. In order to solve the problems of the prior art, the invention designs a panoramic dynamic cinema device, comprising: a main server, a cinema content playback resource client, a posture recognition client, a dynamic seat platform client, an environment-aware special effect system client, a digital projection system, digital simulated weapons, data clothes/gloves/rings, an action recognition module, a central control and playback system, a stereo surround sound system, a dynamic seat platform device and an environment-aware special effect system;
The digital projection system comprises a multi-channel playback system, an immersive projection arc screen and polarized stereo glasses. The video playback system adopts multi-channel synchronized playback, supports high-definition and 3D playback, and ensures good video compatibility. The screen of the panoramic dynamic cinema is a metal screen, which improves picture brightness and preserves polarization well when showing stereoscopic films, making it suitable for stereoscopic films in a fixed venue. To increase the visual impact, the digital sound system adopts a stereo cinema surround system.
The digital simulated weapons, data clothes/gloves/rings and digital simulated sports equipment are connected with the main server;
the cinema content playback resource client is any one of a mobile phone client, a tablet client and a PC client, and is connected with the main server;
the posture recognition client, the dynamic seat platform client and the environment-aware special effect system client are connected with the main server;
the immersive projection arc screen adopts multi-channel projection technology and comprises at least three projectors (a, b and c);
The dynamic seat platform device, the environment-aware oxygen-rehabilitation special effect system device and the immersive projection arc screen are connected with the integrated control and playback system module, which in turn is connected with the server host.
The central control and playback system module controls the video playback software, the action control of the dynamic seat platform, the environment-aware platform, the lighting and other peripheral equipment; the control scheme is primarily a PC host plus PLC control.
The client is one of a PC client, a tablet computer and a mobile phone, on which application software is installed comprising: a full virtual reality cinema resource module, a third-party cinema resource system, an identity login module, a preview module, a reservation module, a two-dimensional code generation module and a payment module.
The dynamic platform seat device mainly has three drive modes: pneumatic, hydraulic and electric, and its control system varies with the drive mode. The electrically driven dynamic seat mainly comprises a PLC control board, a servo motor driver, an actuating mechanism and a photoelectric encoder; the pneumatically driven dynamic seat control system mainly comprises a PLC control board, a pneumatic station, electric valves, an actuating mechanism and a photoelectric encoding positioner; the hydraulically driven dynamic seat control system mainly comprises a PLC control board, a hydraulic station, electric valves, an actuating mechanism and a photoelectric encoding positioner.
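Whichever drive mode is used, the control system turns the same kind of posture command into actuator motion. The Java sketch below only illustrates that shared idea with a single command format dispatched per drive mode; all class and method names are hypothetical, since the patent names the hardware (PLC boards, servo drivers, valves) but no software interface.

// Illustrative abstraction only, not the actual seat control software.
public class SeatControlSketch {
    enum DriveMode { ELECTRIC, PNEUMATIC, HYDRAULIC }

    /** One posture command for a motion seat, expressed as roll/pitch/heave targets. */
    record PostureCommand(double rollDeg, double pitchDeg, double heaveMm) {}

    static void dispatch(DriveMode mode, PostureCommand cmd) {
        // In the real device the command would be written to a PLC register map;
        // here we only show how the three drive modes share one command format.
        switch (mode) {
            case ELECTRIC  -> System.out.println("servo driver     <- " + cmd);
            case PNEUMATIC -> System.out.println("air valves       <- " + cmd);
            case HYDRAULIC -> System.out.println("hydraulic valves <- " + cmd);
        }
    }

    public static void main(String[] args) {
        dispatch(DriveMode.ELECTRIC, new PostureCommand(2.0, -5.0, 30.0));
    }
}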
The central control system runs the virtual reality content (such as a film showing), the dynamic platform seat device, the environment-aware platform and other effects synchronously. The master control system simultaneously monitors the state of every subsystem module, issues instructions according to control configuration files compiled in advance from the video decoding information of the virtual reality content, and through this process controls the automatic operation of the panoramic dynamic cinema.
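A pre-compiled control configuration file of the kind mentioned above can be pictured as a list of time-coded cues replayed in step with the film. The sketch below assumes a simple (timestamp, subsystem, action) row format; this is an illustrative assumption, not the actual file layout used by the cinema control system.

// Minimal sketch of replaying time-coded cues in sync with the film timecode.
import java.util.List;

public class ShowControlSketch {
    record Cue(long timeMs, String subsystem, String action) {}

    public static void main(String[] args) throws InterruptedException {
        List<Cue> cues = List.of(
                new Cue(0,    "projector", "start frame-sequential playback"),
                new Cue(1200, "seat",      "pitch +8 deg (climb)"),
                new Cue(1500, "effects",   "fan on, level 2"),
                new Cue(4000, "seat",      "return to neutral"));

        long start = System.currentTimeMillis();
        for (Cue cue : cues) {
            long wait = cue.timeMs() - (System.currentTimeMillis() - start);
            if (wait > 0) Thread.sleep(wait);   // keep cues aligned with the film timecode
            System.out.println(cue.timeMs() + " ms -> " + cue.subsystem() + ": " + cue.action());
        }
    }
}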
The architecture of the system of this embodiment is shown in fig. 2, the software and control flow charts are shown in figs. 3 and 4, and the control mode of a single hardware device is shown in fig. 7.
4. In order to solve the problems of the prior art, the invention provides a memory-enhanced intelligent learning system device, comprising: an entertainment and leisure learning space unit, an environment-aware special effect system device, a dynamic seat device, a client, a main server, an integrated control and playback system module, a panoramic immersive stereoscopic display system module, a stereo surround sound system and an intelligent virtual reality learning system. The intelligent virtual reality learning system comprises: a learner module, a login and payment module, a local database learning resource module, a third-party learning resource interface module, an online learning resource intelligent recommendation service system, an intelligent education system, an intelligent family education auxiliary system, and an automatic generation system module based on natural semantic scenes. In the memory-enhanced intelligent learning mode, the olfactory simulation system device disperses rosemary odor substances into the space.
The online learning resource intelligent recommendation service system is used for publishing and recording teachers' teaching processes, returning real-time information from students and recording the students' learning processes; its main functions are standardized courseware generation, real-time streaming media publishing, real-time audio/video transmission, online text communication, learner login, group management and learner file recording;
the entertainment and leisure learning space unit further comprises: a foldable retractable table and chair, an adjustable massage chair and the dynamic seat platform, which are linked with the virtual reality graphics workstation through the Internet of Things platform system and perform corresponding actions under the control of the animation scene content of the workstation.
The memory-enhanced intelligent learning system adopts a hierarchical design integrated with the other systems, comprising a hardware perception layer, a network transmission layer and a software application layer. Users in various scenarios can enter the system through the voice interaction login system and operate by voice. The intelligent teaching system achieves seamless integration with the resource platform and other business systems on an SOA architecture; it also provides an independent interface for compatibility with third-party platforms, integrating third-party services and enabling dynamic expansion of the system. The network layer of the intelligent teaching system provides the basic conditions for interconnection, intercommunication, interaction and interoperation among its software and hardware devices. The teacher terminal is connected with the resource platform, the other subsystems and the external network through a wired network, and connected with the electronic schoolbags through WiFi; the electronic schoolbags can also connect with each other through WiFi.
The Internet of things system realizes the sensing, capturing, transmission and analysis of environmental data such as temperature, humidity, illumination and the like in a classroom, and realizes the intelligent control of classroom environment; the hardware layer comprises equipment such as a teacher terminal, an electronic schoolbag, an electronic whiteboard and a physical exhibition stand which have the functions of presenting and transmitting learning contents, creating teaching situations and interacting teaching; providing an intelligent recording and broadcasting system with intelligent tracking, automatic recording, one-key storage and synchronous live broadcasting; and the intelligent platform is integrated and centrally controlled by various devices. The software layer is oriented to the end user and provides various applications and services for education and teaching, and comprises three parts, namely a lesson preparation system oriented to the class, a lecture system oriented to the class, an electronic operation system and a tutoring and question answering and comprehensive evaluation system oriented to the class;
as the learning and teaching content scenes change, the automatic generation module generates corresponding content scene data based on the semantic virtual reality scene, and the centralized central control system then synchronously controls the environment-aware special effect system device, so that experiences such as ascending, descending, lying supine and diving can be felt in real time according to the teaching content scene; the simulation system realizes special effect scenes such as storms, thunderstorms, impacts and vibrations, and, together with the rendering of light and sound, completes the simulation and control of the teaching content scene.
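One way to picture the synchronization described above is as a lookup from a semantic scene tag to a set of effect-device commands, which the central control system would then dispatch. The tag names and command strings in this sketch are invented for illustration; the patent only states that scene data drives the effect devices.

// Hedged sketch: semantic scene tag -> environment special-effect commands.
import java.util.List;
import java.util.Map;

public class SceneEffectMappingSketch {
    static final Map<String, List<String>> EFFECTS = Map.of(
            "thunderstorm", List.of("lightning: flash x3", "rain: heavy", "audio: thunder"),
            "dive",         List.of("seat: pitch -15 deg", "fan: burst"),
            "summer-noon",  List.of("hvac: high temperature", "light: full-spectrum bright"));

    public static void main(String[] args) {
        String sceneTag = "thunderstorm";                 // produced upstream by semantic analysis
        EFFECTS.getOrDefault(sceneTag, List.of("no-op"))
               .forEach(cmd -> System.out.println("send -> " + cmd));
    }
}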
The intelligent teaching system is a normalized learning mode that integrates more intelligent devices on top of traditional education and teaching, combines technologies such as big data, the Internet of Things and cloud computing, and realizes remote interaction, normalized communication and intelligent recording and broadcasting in an integrated software-and-hardware form. By creating an efficient intelligent teaching and learning ecosystem it provides big data support for education, and by collecting and recording the information in teaching activities it forms analyses and summaries of those activities, further optimizing their design and making teaching more efficient. It is characterized by personalized learning, simple integration, efficient interaction and data support.
5. In order to solve the problems of the prior art, the invention provides an office leisure and health-preserving system device, comprising: an entertainment, leisure and learning space unit, an intelligent office, leisure and study room system, a natural human-computer interaction virtual roaming system, an environment-aware special effect system device, a dynamic platform seat device, a client, a main server, a central control and playback system module and a panoramic immersive stereoscopic display system. The flow chart of the natural human-computer interaction virtual roaming system is shown in figure 5.
The panoramic immersive interactive roaming system comprises: a user module, a voice and action recognition sensor, a computer client, and a panoramic immersive stereoscopic display system and/or a head-mounted display; these functional modules are linked in sequence to form the panoramic immersive roaming leisure system;
The user module can identify users of different genders and ages. The user needs to stand within the effective range (0.8-4.0 m) of the gesture and motion recognition sensor (such as a Kinect sensor) and face the Kinect; the gesture and motion recognition sensor is an infrared motion sensor connected to the computer.
The environment-aware special effect system device and the dynamic platform seat device are connected with their corresponding clients; the environment-aware special effect system device, the dynamic platform seat device, the panoramic immersive display system module and the central control and playback system module are mounted on the base;
the subsystems of the intelligent office, leisure and study system include an automatic generation system based on natural semantic scenes.
6. In order to solve the existing technical problems, an automatic generation system based on natural semantic scenes is designed (its software flow chart is shown in figure 6), comprising: a character animation search engine module and a character animation automatic generation system module. The character animation automatic generation system module comprises: an ontology mapping module, a scene information processing module and an animation execution script generation module. When an animation is found successfully through semantic search, the system passes it directly to the centralized control and playback module; when the search fails, the system enters the character animation automatic generation module, which is connected with the centralized control and playback module.
The main function of the ontology mapping module is to process the input restricted animation text to obtain its frame structure, and then, based on the animation description ontology constructed in this work, map the extracted frame information onto ontology instances for semantic description using the Jena toolkit, with the event sequence of the animation constructed according to the order of description in the text. The ontology mapping module thus stores the information-processed restricted animation description text as ontology instances, which both describes the frame information semantically and preserves the logical order of the behavior events. The scene information processing module retrieves the model information of each object and computes its three-dimensional spatial coordinates from the object semantics and spatial information in the text, establishing the object data basis for script generation. The animation execution script generation module analyzes the logical semantic sequence of the object behaviors, automatically generates mutually linked JS script files using the Velocity template engine in combination with the object data, and finally realizes the animation display on the Unity 3D game platform.
The main function of the scene information processing module is to retrieve the corresponding object model from the model ontology library according to the semantic and spatial relationship information of the object and to calculate the corresponding three-dimensional spatial coordinates, thereby completing the quantitative processing of the scene information.
In the animation execution script generation module, the automatic loading of object models and the execution of behaviors on the Unity3D platform are controlled by script files, so generating the execution scripts is the last step of the character animation generation system. Combining the semantic sequence of behaviors with the scene information object data, the system first generates a scene initialization script, then selects the correct VM template according to the event type of each behavior to generate the related script files, and stores them under the Unity3D project directory. After the executable scripts corresponding to the animation text have been generated, the system automatically places them in the corresponding Unity3D project folder; the user then opens the Unity game engine platform, binds the scene initialization script to the camera and clicks the play button, whereupon model loading and behavior control are completed automatically and the animation corresponding to the animation text is displayed.
The system is implemented in the Java programming language on the Windows operating system, and its animation display platform is the Unity 3D game engine. The system processes a passage of restricted animation description text through the three modules in turn, generates the scripts under the project directory configured for the Unity 3D platform, and the animation corresponding to the text can then be displayed by opening and running the Unity 3D platform.
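The last step of that pipeline, merging scene-object data into a script template, can be sketched with the Apache Velocity API that the description names. The template file name, context keys and output path below are assumptions; only the Velocity calls themselves are standard API, and a matching .vm template would still need to exist in the working directory.

// Hedged sketch of generating one animation script from a VM template.
import java.io.FileWriter;
import org.apache.velocity.Template;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;

public class AnimationScriptGenerator {
    public static void main(String[] args) throws Exception {
        VelocityEngine engine = new VelocityEngine();
        engine.init();

        VelocityContext ctx = new VelocityContext();
        ctx.put("objectName", "Farmer");                  // object retrieved from the model ontology library
        ctx.put("position", new double[]{1.5, 0.0, 3.2}); // spatial coordinate from scene information processing
        ctx.put("behavior", "HoeField");                  // behavior event parsed from the animation text

        Template template = engine.getTemplate("MoveAndAct.vm");   // hypothetical template for this event type
        try (FileWriter out = new FileWriter("Assets/Scripts/SceneInit.js")) {
            template.merge(ctx, out);                     // writes the generated script into the Unity project
        }
    }
}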
Advantageous effects
After adopting the above technical structure, the invention has the following beneficial effects:
1. Using the technologies and theory of the 5G Internet of Things and virtual reality, the invention designs product schemes such as a panoramic immersive dynamic cinema, a panoramic immersive memory-enhanced learning space and a panoramic immersive office leisure and health-preserving space, and uses the same basic hardware and software scheme to realize one-key switching among multiple virtual reality functions, greatly improving the utilization efficiency of the software and hardware and reducing cost. The technical scheme of the invention is a comprehensive equipment device integrating entertainment, learning, office, study and health preservation. In education, for example, the integrated education system optimizes the learning environment, enriches learning resources, improves learning methods and learning efficiency, plays a large role in improving the educational environment, and at the same time promotes the application and development of Internet of Things and virtual reality technology in education.
2. The server and the clients only need to maintain a network connection; in principle an unlimited number of simultaneous showings, and even remote showings in other locations, are possible (a remote sound effect system can be played at the user's choice).
3. The scenes and settings in the virtual reality environment can be controlled and changed by the server, so multiple experiences can be realized with one set of systems.
4. The selection and control of video and audio content allows the system to be applied to many scenarios, not only audio/video playback but also audio/video teaching and other purposes, with the server end flexibly controlling the played content.
5. The characteristics of the immersive virtual reality system are applied to construct a controllable playback environment and playback content that provide a simulated viewing experience. When voice information is input, the real-time virtual roaming scene generation system produces a real-time interactive virtual scene, further enhancing the visibility of and interactivity with knowledge, breaking through the traditional multimedia learning scene and improving learning efficiency.
6. The device of the invention integrates multiple functions: one set of hardware and basic software realizes the panoramic cinema function, a memory-enhanced study room device, a health-preserving roaming device and a health-care device, greatly reducing software and hardware development costs while providing more practical applications. Integrating multiple functions on the same platform improves the user's immersive comfort and time utilization, improves the utilization efficiency of the software and hardware, and reduces the cost of industrializing virtual reality.
7. The Internet of Things is diverse, personalized and industrialized, so the technologies it involves are numerous. From the perspective of the design, operation, application and management of Internet of Things application systems, the main technologies include automatic sensing, embedded systems, mobile communication, computer networking, intelligent data processing, intelligent control, location services and information security. Introducing Internet of Things technology enables objects in the real world to communicate with one another, interconnects physical space with the digital information space, and effectively integrates real space with the virtual learning environment. Every object in the teaching environment becomes digitalized, networked and visualized, and students can perceive natural, realistic scenes in the classroom, which effectively promotes human-computer and human-environment interaction, strengthens communication between teachers and students and among students, and provides an unprecedented technical solution for education informatization, multimedia, virtual reality, self-learning systems and the like.
Drawings
FIG. 1 shows the usage modes of the virtual reality entertainment and learning device; FIG. 2 is the panoramic dynamic cinema system architecture diagram; FIG. 3 is a flowchart of the operation of the main cinema program system; FIG. 4 is a software schematic of the integrated control and playback system; FIG. 5 is a flow chart of the natural human-computer interaction virtual roaming system; FIG. 6 is a flow chart of the software of the automatic generation system based on semantic scenes; FIG. 7 is a diagram of the control mode of a single hardware device.
Specific examples
Specific example 1: panoramic dynamic cinema device
Based on the virtual reality entertainment and learning device, the invention designs a panoramic dynamic cinema device comprising: an entertainment and leisure learning space unit (in this embodiment designed as a dynamic cinema space), a panoramic immersive stereoscopic display system module, a digital projection system, a communication network module, a client, a server, an integrated central control system module, a stereo surround sound system and an environment-aware special effect system device.
The panoramic immersive stereoscopic display system module mainly projects the stereoscopic picture rendered by the system in real time onto a large arc screen. The module adopts shutter-type stereoscopic display technology: a stereo camera in the system scene renders the left- and right-eye frames corresponding to the character's viewpoint in parallel, the frames are stitched and blended and then projected onto an arc-shaped metal screen through three digital projectors, and the user wears shutter-type stereo glasses to watch the picture.
The communication network module is a virtual module established between the system server and each client, mainly responsible for network communication and data transmission; it comprises a server sub-module and a client sub-module. The server sub-module constructs the system scene, runs the system logic, connects the clients, chooses to send or receive data to or from the corresponding client according to its type, and gives correct feedback based on the received data. The client sub-module comprises the dynamic platform client, the environment-aware platform client, the PC/mobile phone/tablet client and the posture recognition client; it connects to the server sub-module, receives or sends interaction data according to the type of interactive equipment connected to it, and, when data are received, updates the state of the currently connected interactive equipment accordingly.
The interaction module comprises the following main parts: an environment-aware platform module, a dynamic platform module, a PC/mobile phone/tablet interaction module, a Kinect action recognition module and a simulated weapon interaction module.
The environment-aware platform module acquires the plot/content environment scene data in real time, processes and converts them, and transmits them to the environment-aware platform device, which produces the corresponding plot/content-based simulated environment scene in real time. The dynamic platform module acquires the attitude data for driving the three-degree-of-freedom seat and the six-degree-of-freedom platform in real time, processes and converts them, and sends them to the dynamic platform, driving the seat and platform to perform the corresponding simulated motions in real time.
The PC/mobile phone/tablet interaction module acquires user input through the phone's built-in sensors and touch screen and transmits the data to the server through the network, realizing interaction with the game system/content scene.
The Kinect action recognition module captures the user's limb movements with the Kinect, encodes the motion data and sends it to the server, realizing aiming and shooting in the game system.
The simulated weapon interaction module is connected with the server through the receiver of the interactive simulated weapon; the system acquires the weapon's attitude and operating actions through the receiver to realize crosshair control and combat actions in the system's main scene. The architecture of the panoramic dynamic cinema device system is shown in figure 2.
The server host runs the system main program, which processes all interaction data and system logic; it is connected with each client device over the network and feeds back the results of the user interactions through the video and audio playback devices. The dynamic platform client transmits user input data to the server and transmits the attitude data generated by the server-side main program to each dynamic platform in real time. The posture recognition client recognizes and encodes the user posture information captured by the Kinect and transmits it to the server-side main program for interaction. The mobile phone client acquires the phone tilt and screen-swipe input from the user, encodes it and transmits it to the server-side main program for interaction. The interactive simulated weapon transmits its attitude and trigger information to the server-side main program through the receiver for interaction. The playback controller processes the audio and the three-dimensional pictures rendered by the system in real time; the three-dimensional pictures are stitched, corrected and blended and then projected onto the arc screen through three digital projectors, and the user watches them wearing shutter-type stereo glasses.
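A minimal sketch of the dispatch step performed by the server main program is given below: one message is read per client connection and routed by client type. The port, the pipe-delimited message format and the handling stubs are assumptions; threading, error handling and the real protocol are omitted.

// Hedged sketch of routing incoming client messages by client type.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class MainServerDispatchSketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9001)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()))) {
                    String msg = in.readLine();             // e.g. "KINECT|AIM|x=0.31,y=0.72"
                    if (msg == null) continue;
                    String type = msg.split("\\|", 2)[0];
                    switch (type) {
                        case "KINECT" -> System.out.println("update aim point: " + msg);
                        case "PHONE"  -> System.out.println("apply tilt/swipe input: " + msg);
                        case "SEAT"   -> System.out.println("record seat state: " + msg);
                        case "WEAPON" -> System.out.println("process trigger event: " + msg);
                        default       -> System.out.println("unknown client type: " + msg);
                    }
                }
            }
        }
    }
}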
The operation flow of the main program system is designed as follows:
Because the system connects many different interactive devices, and apart from the interactive simulated weapon (which is plug-and-play) the other interactive devices must connect to the server host through their clients and transmit data over the network, real-time interaction and coordinated linkage require the clients to be started in a defined sequence and connected to the main server so that data can be transmitted synchronously in real time. The specific system operation flow is shown in fig. 3.
First, the stereo projector group is initialized and the projection mode of the three projectors is set to frame-sequential. The main program is then started, the server is established automatically, the six-degree-of-freedom dynamic platform is initialized, the program enters the interactive main scene and waits for each interactive equipment client to connect.
The dynamic seat platform client is started and the server IP address is entered, after which the connection with the main program server is established and the main program interface shows that the dynamic seat is connected. The posture recognition client and the mobile phone client are optional interaction modes and connect in the same way as the dynamic platform client.
The power switch of the interactive gun is turned on; the gun automatically establishes a connection with the main program server through its receiver, and the crosshair of the corresponding interactive gun is displayed on the main program interface.
When the dynamic platform and the interactive gun are connected normally, the player operating the six-degree-of-freedom platform can step on the throttle, start the dynamic platform and begin the interactive session; timing starts, and the players cooperate to take part in the interaction of the plot.
When the interaction time is over, the total score obtained by the players is counted and displayed, the main program exits automatically after a period of time, and each client closes automatically after receiving the exit message.
The dynamic platform module obtains the six-degree-of-freedom flight attitude data of the helicopter piloted by the audience from the server and drives the six-degree-of-freedom platform and three-degree-of-freedom seats to simulate the corresponding motions; this is the client interface of the dynamic platform. After the server main program has been started and the server established, the server host IP is entered in the IP input window, and clicking the "connect to server" button on the client establishes the communication between client and server.
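The start-up order above can be summarized as a fixed checklist; the sketch below only encodes the sequence, with stub methods standing in for the real initialization calls that the patent does not specify.

// Illustrative checklist of the start-up sequence; method bodies are stubs.
public class StartupSequenceSketch {
    static void initProjectors()        { System.out.println("1. projectors set to frame-sequential mode"); }
    static void startMainProgram()      { System.out.println("2. main program up, server created, 6-DoF platform initialized"); }
    static void connectSeatClient()     { System.out.println("3. dynamic seat client connected (server IP entered)"); }
    static void connectOptionalClients(){ System.out.println("4. optional: posture recognition and phone clients connected"); }
    static void powerOnInteractiveGun() { System.out.println("5. interactive gun on, crosshair visible"); }
    static void startInteraction()      { System.out.println("6. throttle pressed, platform started, timing begins"); }

    public static void main(String[] args) {
        initProjectors();
        startMainProgram();
        connectSeatClient();
        connectOptionalClients();
        powerOnInteractiveGun();
        startInteraction();
    }
}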
Specific example 2: memory-enhanced intelligent learning system device
Based on the 5G Internet of Things virtual reality entertainment and leisure learning device, a memory-enhanced intelligent learning system device is constructed, comprising: an entertainment and leisure learning space unit, an intelligent learning resource system software module, an environment-aware special effect system device, a dynamic platform seat device, a client, a server, an integrated control and playback system module and a panoramic immersive stereoscopic display system module. The intelligent learning resource system software module comprises: a learner module, a login and payment module, a third-party learning resource interface module, an intelligent online learning resource recommendation system, an intelligent education system, an intelligent family education auxiliary system, a local database learning resource module and a semantic-content-based virtual reality scene automatic generation module; these modules are connected with the semantic-content-based virtual reality scene automatic generation module;
when the memory-enhancement function is activated, the olfactory simulation system device releases rosemary odor substances into the space.
The intelligent teaching system software module comprises: a. an expert module; b. a student module; c. an instructor module.
The student module records students' personal information, the courses they have studied and the test questions they have completed. Through this module the teacher can grasp a student's basic information, learning ability and knowledge mastery, analyze the student's current information and correctly judge the student's attainment of the knowledge, and thereby adopt a corresponding teaching mode; the module can also call and connect with the other modules.
The teacher module studies suitable teaching strategies by learning about the students, selects the teaching content for them and presents it in a form they can accept, reflecting skilled guidance and a high teaching level. Through the system the teacher can grasp the students' basic information, learning ability, mastery of knowledge and test results and then make corresponding teaching arrangements; the teacher can also update the knowledge base according to the students' information and formulate test questions better suited to them. This module can call the other modules.
The expert knowledge module is the knowledge base that stores all teaching knowledge, so that students can study from it and query it for the knowledge they need. The knowledge base is easy to operate and easy to use; it stores, organizes and manages all teaching knowledge in computer memory using a knowledge representation suited to problem solving in the expert's domain. It can make call connections with the other modules.
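A minimal sketch of how the three modules' records might be held and cross-referenced is shown below; the field names, the dictionary-based knowledge base and the simple mastery threshold are illustrative assumptions rather than the system's actual data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StudentRecord:
    """Student module: personal info, courses learned, test results."""
    name: str
    courses_learned: List[str] = field(default_factory=list)
    test_scores: Dict[str, float] = field(default_factory=dict)  # topic -> score (0..1)

@dataclass
class KnowledgeBase:
    """Expert module: stores all teaching knowledge for lookup by other modules."""
    entries: Dict[str, str] = field(default_factory=dict)  # topic -> content

    def query(self, topic: str) -> str:
        return self.entries.get(topic, "topic not found")

def choose_teaching_content(student: StudentRecord, kb: KnowledgeBase,
                            mastery_threshold: float = 0.8) -> List[str]:
    """Teacher module: pick the topics the student has not yet mastered."""
    return [topic for topic in kb.entries
            if student.test_scores.get(topic, 0.0) < mastery_threshold]
```

The teacher module would then turn the returned topic list into concrete teaching arrangements and test questions for that student.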
For example, the memory-enhanced learning space device has the following use flow:
For example, when the Tang poem "Min Nong" ("Pity the Farmers") is being learned, the whole system enters a ready state after start-up. On the voice instruction "learn Min Nong today", the main video of the client system quickly presents the conventional teaching courseware for the poem, showing the verses and their explanations one by one. Meanwhile, according to the teaching content, the auxiliary scene video system uses automatic machine learning and an optimization algorithm to form the optimal teaching background imagery and character scene for the poem (farmers hoeing and weeding under the blazing sun, with the auxiliary scene system rendering the rural life of an ancient village: vegetable plots, chickens, dogs, cattle, sheep and the like). A scorching midday sky is displayed and the high-temperature mode of the air-conditioning system is switched on at the same time to simulate the heat of a burning summer day. At this point the displayed courseware is fully hidden, the main video and the auxiliary scene video are fused, and the panoramic scene of the poem is presented in full, so that the students can truly feel the environment described in the poem.
The system displays a three-dimensional character model and a three-dimensional learning scene. According to the teaching content, the three-dimensional character model and the scene model are generated automatically by the intelligent teaching system, through the natural-semantics-based scene automatic generation system module and the panoramic immersion interactive roaming system module, as a teaching scene and a first-person roaming scene, together with playing instructions. A playing instruction specifies the speech, actions and pushed content of the three-dimensional character model; the pushed content is learning audio and video for the students to study, or test questions for the students to be tested on. The three-dimensional learning scene is a three-dimensional animated scene obtained by modelling preset learning content with three-dimensional modelling technology.
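The paragraphs above describe a scene being generated automatically from the semantics of the teaching content. One very reduced way to picture that idea is a keyword-to-asset lookup, sketched below; the keyword table and asset names are hypothetical, and a real system would use proper natural-language parsing and 3D asset generation rather than substring matching.

```python
# Hypothetical mapping from keywords found in the lesson text to scene assets.
SCENE_KEYWORDS = {
    "farmer": ["farmer_model", "hoe_prop"],
    "sun":    ["midday_sky", "heat_haze_effect"],
    "field":  ["rice_field_terrain", "village_backdrop"],
}

def assets_for_lesson(lesson_text: str) -> list:
    """Collect the scene assets whose keywords appear in the lesson text."""
    text = lesson_text.lower()
    assets = []
    for keyword, asset_list in SCENE_KEYWORDS.items():
        if keyword in text:
            assets.extend(asset_list)
    return assets

print(assets_for_lesson("The farmer hoes the field under the midday sun."))
```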
Specific example 3: study office health maintenance system device
1. Memory-enhanced learning environment mode: the user enters the cabin and the power-on system linked to the cabin door switches on automatically, putting the whole system into standby. On a voice interaction instruction the system enters the memory-enhanced learning environment mode: the central processing system starts the automatic human-body detection system, which examines the body with physical-examination equipment such as an infrared scanning probe and an ultrasonic probe and feeds the results back to the central processing system. The user enters personal information at the human-machine interface of the central processing system; after the memory-enhanced learning environment mode is confirmed by voice interaction or via the client, the integrated control and playing system selects and starts the optimal memory and health-preserving learning environment mode according to the big-data information from the user's health examination and health preservation device system and the like, and starts the environment adjustment system, air-conditioning system, humidification system, memory-enhancing scent superposition system and so on, so that the environmental indexes inside the cabin are controlled within the following ranges. Temperature: 18-26 °C; humidity: 45%-55%; oxygen concentration: 20%-25%; negative-ion concentration: 4000-; air pressure: 1 atmosphere; acoustic environment: 30-40 dB (light, cheerful music); olfactory environment: fragrances of rosemary, lily, sweet osmanthus, mint and other flowers and plants; visual environment: full-spectrum natural light; environmental scene: favourite scenes such as classrooms, books, forests and seasides can be selected, and the scenes are not fixed but are automatically generated from the semantics of the learning content and played panoramically.
If the user also wants to use the study time for physiotherapy, the physiotherapy system can be started; it launches the corresponding physiotherapy mode according to the plan pushed by the central processing system, or the user can choose a physiotherapy mode to suit their own needs. Throughout the process the central processing system continuously receives feedback from the physical-examination and detection systems, processes it, and instructs the environment adjustment system and the physiotherapy system to adjust the environmental indexes, physiotherapy intensity and frequency and other parameters, keeping the user in a stable, continuously regulated environment that sustains the optimal learning state. After use, the system returns to its original state, the auxiliary system starts automatically to disinfect and ventilate, and at the same time the central processing system stores and analyses the recorded values, presents the user with the optimal learning environment indexes, the optimal physiotherapy mode, a final assessment report of the learning state and a later intervention plan, and creates a personal file for later retrieval; the operation of the care cabin is then complete.
2. Optimal working environment mode: the user enters the cabin and closes the inner and outer doors, and the whole functional cabin enters the working state. The central processing system starts the automatic human-body detection system, which examines the body with physical-examination equipment such as an infrared scanning probe and an ultrasonic probe and feeds the results back to the central processing system. The user enters personal information at the human-machine interface; after the "working environment mode" is selected and confirmed, the central processor combines the health-examination results to start the optimal working environment mode and activates the environment adjustment system, air-conditioning system, humidification system and so on, so that the environmental indexes inside the cabin are controlled within the following ranges. Temperature: 18-28 °C; humidity: 45%-60%; oxygen concentration: 21%-23%; negative-ion concentration: 2000-; air pressure: 1 atmosphere; acoustic environment: 40-60 dB (cheerful music); olfactory environment: fragrances of peony, rose, lemon and other flowers and plants; visual environment: ample natural light; environmental scene: the scenes are not fixed but are automatically generated from the semantics of the learning content and played panoramically.
If the user wants to use working time for physiotherapy as well, the physiotherapy system can be started; it launches the corresponding physiotherapy mode according to the plan pushed by the central processing system, or the user can choose a physiotherapy mode to suit their own needs. Throughout the process the central processing system continuously receives feedback from the physical-examination and detection systems, processes it, and instructs the environment adjustment system and the physiotherapy system to adjust the environmental indexes, physiotherapy intensity and frequency and other parameters, keeping the user in a stable, continuously regulated environment that sustains the optimal working state. After use, the system returns to its original state, the auxiliary system starts automatically to disinfect and ventilate, and the central processing system stores and analyses the recorded values, presents the user with the optimal working environment indexes, the optimal physiotherapy mode, a final assessment report of the working state and a later intervention plan, and creates a personal file for later retrieval; the operation of the care cabin is then complete.
3. Optimal rest environment mode: the user enters the cabin, the whole functional cabin enters the working state, and the central processing system starts the automatic human-body detection system, which examines the body with physical-examination equipment such as an infrared scanning probe and an ultrasonic probe and feeds the results back to the central processing system. The user enters personal information at the human-machine interface; after the "rest environment mode" is selected and confirmed, the central processor combines the physical-examination results to start the optimal rest environment mode and activates the environment adjustment system, air-conditioning system, humidification system and so on, so that the environmental indexes inside the cabin are controlled within the following ranges. Temperature: 19-22 °C; humidity: 60%-70%; oxygen concentration: 21%; negative-ion concentration: 1000 per square meter; air pressure: 1 atmosphere; acoustic environment: 20-40 dB (hypnotic sound curve); olfactory environment: fragrance of geranium and other flowers and plants; visual environment: dim lighting; environmental scene: favourite scenes such as a guesthouse or bedroom can be selected, and the scenes are not fixed but are automatically generated and played panoramically based on the optimal rest environment mode.
If the user also wants to use rest time for physiotherapy, the physiotherapy system can be started; it launches the corresponding physiotherapy mode according to the plan pushed by the central processing system, or the user can choose a physiotherapy mode to suit their own needs. Throughout the process the central processing system continuously receives feedback from the physical-examination and detection systems, processes it, and instructs the environment adjustment system and the physiotherapy system to adjust the environmental indexes, physiotherapy intensity and frequency and other parameters, keeping the user in a stable, continuously regulated environment that sustains the optimal resting state. After use, the system returns to its original state and the auxiliary system starts automatically to disinfect, ventilate and so on.
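The three modes above each specify a target band for temperature, humidity, oxygen concentration and sound level. A minimal sketch of how the integrated control system might hold those bands and flag sensor readings that drift outside them is given below; the dictionary layout and the `out_of_range` helper are illustrative assumptions, while the numeric bands are taken from the description above.

```python
# Target bands per mode, taken from the description above, as (lower, upper).
ENVIRONMENT_MODES = {
    "memory_learning": {"temp_c": (18, 26), "humidity_pct": (45, 55),
                        "oxygen_pct": (20, 25), "sound_db": (30, 40)},
    "working":         {"temp_c": (18, 28), "humidity_pct": (45, 60),
                        "oxygen_pct": (21, 23), "sound_db": (40, 60)},
    "resting":         {"temp_c": (19, 22), "humidity_pct": (60, 70),
                        "oxygen_pct": (21, 21), "sound_db": (20, 40)},
}

def out_of_range(mode: str, readings: dict) -> dict:
    """Return the readings that fall outside the target band for this mode."""
    bands = ENVIRONMENT_MODES[mode]
    return {key: value for key, value in readings.items()
            if key in bands and not bands[key][0] <= value <= bands[key][1]}

# Example: a reading fed back by the detection system in memory-learning mode.
print(out_of_range("memory_learning",
                   {"temp_c": 27.5, "humidity_pct": 50, "sound_db": 35}))
```

The flagged readings would then be passed to the environment adjustment system as adjustment targets.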
Specific example 4: panoramic immersion roaming leisure system
The embodiment is described by taking a mountain roaming interaction system as an example.
The mountain roaming interaction system captures the user's motions and voice through a Kinect; after recognition these are mapped to corresponding mouse and keyboard operations, which control the camera and the menu commands of the Unity3D three-dimensional scene. The scene database is built either by modelling and texturing image data collected at the scenic spot or by generating models and textures with the semantic-content virtual reality scene automatic generation system. The real-time images and sound generated by Unity3D are transmitted to the head-mounted display or the panoramic immersion display system worn by the user; the architecture of the system is shown in Figure 5.
(1) The user: the system can recognize users of different ages and genders. The user needs to stand within the effective range of the Kinect sensor (0.8-4.0 m) and face the Kinect; the optimal recognition range is 0.8-2.5 m.
(2) Kinect sensor: the Kinect generates a depth image by projecting near-infrared light and receiving its reflection; its camera can tilt up and down freely, with a vertical tilt angle of ±28°. The color camera captures color images, and the microphone array can locate sound sources, suppress ambient noise and capture raw audio data.
(3) Depth data stream: the depth data stream consists of depth image frames; each pixel in a depth image frame contains a specific distance value, namely the vertical distance from the point at that (x, y) coordinate to the plane of the Kinect infrared camera.
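A reduced sketch of reading distances out of such a depth frame is shown below. It assumes the frame has already been captured into a NumPy array of per-pixel distances in millimetres; the actual Kinect APIs pack player-index bits alongside the depth value, which is not modelled here.

```python
import numpy as np

def distance_at(depth_frame_mm: np.ndarray, x: int, y: int) -> float:
    """Return the distance in metres from pixel (x, y) to the infrared camera plane."""
    return float(depth_frame_mm[y, x]) / 1000.0

# Example with a synthetic 480x640 frame where every pixel reads 2000 mm.
frame = np.full((480, 640), 2000, dtype=np.uint16)
print(distance_at(frame, x=320, y=240))  # -> 2.0 metres
```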
(4) Color data stream: the Kinect NUI API can deliver color image data in two encoding formats. RGB provides 32-bit, linear X8R8G8B8-formatted color bitmaps in the sRGB color space; YUV provides 16-bit, gamma-corrected linear UYVY-formatted color bitmaps. Normally, Bayer color image data at 1280 × 1024 resolution is lossily compressed and converted to RGB at a transmission rate of 30 FPS.
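For illustration, a 32-bit X8R8G8B8 pixel can be split into its channels as sketched below; the sample value is arbitrary.

```python
def unpack_x8r8g8b8(pixel: int) -> tuple:
    """Split a 32-bit X8R8G8B8 value into (red, green, blue); the top byte is unused."""
    red   = (pixel >> 16) & 0xFF
    green = (pixel >> 8) & 0xFF
    blue  = pixel & 0xFF
    return red, green, blue

print(unpack_x8r8g8b8(0x00FF8040))  # -> (255, 128, 64)
```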
(5) Audio data stream: the microphone array of the Kinect acquires raw audio data and ensures audio quality through a series of Kinect Audio API algorithms, including acoustic echo cancellation (AEC), automatic gain control (AGC) and noise suppression; the Kinect Audio DMO also includes built-in beamforming that controls the beam and provides the direction of the sound source.
(6) Skeleton tracking: by analysing the depth image, the human body is separated from the background environment, 32 body parts are identified, and a skeleton of 20 joint points is then constructed, allowing the user's limb movements, gestures and static postures to be recognized.
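A toy sketch of recognizing a static posture from tracked joints is given below. The joint names and the "hand raised" rule are illustrative assumptions, not the Kinect SDK's own gesture API.

```python
from typing import Dict, Tuple

Joint = Tuple[float, float, float]  # (x, y, z) in metres, with y pointing up

def hand_raised(joints: Dict[str, Joint]) -> bool:
    """Report whether the right hand is tracked above the head."""
    head = joints["head"]
    hand = joints["hand_right"]
    return hand[1] > head[1]

skeleton = {"head": (0.0, 1.60, 2.0), "hand_right": (0.3, 1.75, 2.0)}
print(hand_raised(skeleton))  # -> True
```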
(7) Face tracking: face tracking is a Kinect-based face recognition technique; the position of the head is determined through skeleton tracking, and the facial feature points are then extracted by analysing the color image data to obtain the pitch, yaw and roll angles of the face.
(8) Voice recognition: the Microsoft.Speech class library provides the speech recognition function. It analyses the audio data stream obtained from the Kinect and matches it against the most suitable voice command; if the utterance is determined to contain a specific command to be recognized, it is processed further through an event, otherwise the audio data is discarded.
(9) Mouse and keyboard simulator: this is the motion-sensing middleware between the Kinect SDK and Unity3D. It maps the recognized walking-in-place action to the UpArrow key, maps the pitch and yaw angles obtained from head tracking to screen mouse coordinates, and maps the voice commands "walk", "fly", "help" and "exit" to the W, F, H and ESC keys.
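A minimal sketch of this middleware mapping follows. The event names, the angle range of ±30 degrees and the fact that key presses are merely printed (rather than injected into the operating system) are simplifications for illustration.

```python
# Recognized events mapped to the keyboard keys named in the description.
EVENT_TO_KEY = {
    "walk_in_place": "UpArrow",
    "voice_walk":    "W",
    "voice_fly":     "F",
    "voice_help":    "H",
    "voice_exit":    "ESC",
}

def head_angles_to_mouse(pitch_deg: float, yaw_deg: float,
                         width: int = 1920, height: int = 1080) -> tuple:
    """Map head pitch/yaw (assumed within +/-30 degrees) to screen coordinates."""
    x = int((yaw_deg + 30.0) / 60.0 * width)
    y = int((pitch_deg + 30.0) / 60.0 * height)
    return max(0, min(width - 1, x)), max(0, min(height - 1, y))

def dispatch(event: str) -> None:
    key = EVENT_TO_KEY.get(event)
    if key:
        print(f"simulate key press: {key}")  # a real simulator would inject the event

dispatch("voice_fly")                    # -> simulate key press: F
print(head_angles_to_mouse(0.0, 15.0))   # -> (1440, 540)
```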
(10) Camera control: in Unity3D a Camera component corresponds to the human eye, and the whole three-dimensional scene is seen through the camera's viewpoint and viewing angle. The viewpoint is set through Position, the viewing direction through Rotation, and the depth and breadth of the field of view through the fieldOfView, nearClipPlane and farClipPlane variables; automatic roaming can also be achieved by setting a motion path for the camera.
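The camera parameters named above can be sketched outside the engine as a plain data structure plus a waypoint stepper. The Python code below is a language-neutral illustration under those assumptions, not Unity's own API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class RoamingCamera:
    position: Vec3 = (0.0, 1.7, 0.0)   # viewpoint
    rotation: Vec3 = (0.0, 0.0, 0.0)   # viewing direction (Euler angles, degrees)
    field_of_view: float = 60.0        # breadth of the view
    near_clip: float = 0.3             # nearest visible distance
    far_clip: float = 1000.0           # farthest visible distance
    path: List[Vec3] = field(default_factory=list)

    def step_along_path(self, speed: float = 0.1) -> None:
        """Move the viewpoint toward the next waypoint for automatic roaming."""
        if not self.path:
            return
        tx, ty, tz = self.path[0]
        x, y, z = self.position
        dx, dy, dz = tx - x, ty - y, tz - z
        dist = (dx * dx + dy * dy + dz * dz) ** 0.5
        if dist <= speed:
            self.position = self.path.pop(0)
        else:
            self.position = (x + dx / dist * speed,
                             y + dy / dist * speed,
                             z + dz / dist * speed)

cam = RoamingCamera(path=[(0.0, 1.7, 5.0)])
cam.step_along_path()
print(cam.position)
```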
(11) Collision test: the camera is bound to a capsule body; when the capsule collides with other colliders, such as the terrain collider, the three-dimensional space coordinates can be calculated from the collision points. By controlling the capsule so that it walks on the terrain in this way, roaming of the three-dimensional scene is achieved.
(12) Menu commands: the voice command "help" calls up the help menu, which is displayed directly in front of the user, similar to the effect of Google Glass; after the user issues the voice command, the menu appears in front of them.
(13) Unity3D real-time rendering: the surface shaders used by Unity3D automatically handle lighting, shadows, lightmapping, forward rendering and deferred rendering; with deferred lighting, very high-quality lighting effects can be provided and hundreds of light sources can be placed in the same scene. The Umbra occlusion-culling technique is used for view clipping, and vegetation and terrain are processed with LOD (level of detail) technology.
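Level-of-detail switching can be illustrated with a simple distance-threshold rule, sketched below; the thresholds and mesh names are made up for the example and do not come from the original system.

```python
# Hypothetical LOD thresholds: (maximum distance in metres, mesh variant to draw).
LOD_LEVELS = [
    (25.0,  "tree_high_poly"),
    (80.0,  "tree_medium_poly"),
    (200.0, "tree_low_poly"),
]

def select_lod(distance_to_camera: float) -> str:
    """Pick the mesh variant for a tree at the given distance; cull beyond the last level."""
    for max_distance, mesh in LOD_LEVELS:
        if distance_to_camera <= max_distance:
            return mesh
    return "culled"

print(select_lod(60.0))   # -> tree_medium_poly
print(select_lod(500.0))  # -> culled
```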
(14) Three-dimensional modelling: the original images and texture materials for the models are obtained by photographing the scenic spot, and the models of pavilions, stone bridges, stone stools and stones are then built with polygon modelling in 3ds MAX. The terrain model is a DEM (Digital Elevation Model) of the mountain area obtained from Google Earth; the DEM is converted into an OBJ file with SketchUp, and the OBJ is converted into a terrain height map programmatically. The trees are made with the Unity3D tree generator.
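The DEM-to-heightmap step can be illustrated by normalizing a grid of elevation samples into the 0-255 range expected by a greyscale height map. The code below assumes the elevations are already available as a NumPy array; the conversion from the OBJ file into that grid is omitted.

```python
import numpy as np

def elevations_to_heightmap(elevation_m: np.ndarray) -> np.ndarray:
    """Normalize a grid of elevations (metres) into an 8-bit greyscale height map."""
    lowest = elevation_m.min()
    highest = elevation_m.max()
    span = max(highest - lowest, 1e-6)          # avoid division by zero on flat terrain
    normalized = (elevation_m - lowest) / span  # 0.0 .. 1.0
    return (normalized * 255).astype(np.uint8)

# Example: a tiny 2x3 elevation grid.
dem = np.array([[120.0, 150.0, 180.0],
                [130.0, 160.0, 190.0]])
print(elevations_to_heightmap(dem))
```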
(15) Model texturing: the building models are first UV-unwrapped with Unfold3D, and the materials are then processed and the texture maps produced in Photoshop; the texture maps for the terrain and the trees are processed in Photoshop and then added to the Unity3D terrain system and tree system.
(16) FBX files: after modelling and texturing in 3ds MAX are complete, the pavilions, bridges, benches, stones and so on need to be exported as FBX files and then imported into Unity3D.
(17) Scene database: the scene database of the Yuenu Mountain virtual roaming system mainly comprises the atmosphere, terrain, roads, buildings, trees, water bodies, vegetation and so on. The buildings include the love pavilion, the crane pavilion and the stone bridge; the trees include willow, red maple, camphor and green bamboo; the water bodies include a pond and a creek; the vegetation consists of weeds and wild flowers; the remaining items include stone stools and stones.
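The scene database described above can be pictured as a simple category-to-asset index; a toy sketch is below. The dictionary structure is an assumption, and a real database would also hold transforms, materials and LOD data for each asset.

```python
# Contents of the scene database as listed above, grouped by category.
SCENE_DATABASE = {
    "buildings":  ["love pavilion", "crane pavilion", "stone bridge"],
    "trees":      ["willow", "red maple", "camphor tree", "green bamboo"],
    "water":      ["pond", "creek"],
    "vegetation": ["weeds", "wild flowers"],
    "others":     ["stone stools", "stones"],
}

def list_assets(category: str) -> list:
    """Return the assets registered under a category (empty if unknown)."""
    return SCENE_DATABASE.get(category, [])

print(list_assets("trees"))
```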
(18) Head-mounted display: the head-mounted display transmits the three-dimensional scene images and sound generated by the computer to the user in real time, giving the user a strong sense of immersion; the content can also be displayed directly on, or projected onto, the panoramic immersion stereoscopic display system.
The technical solutions and specific examples of the present invention described above only explain and illustrate the technical solutions of the present invention and do not limit their scope; any changes made by those skilled in the art within the scope of the technical solutions of the present invention are included in the scope of protection of the present invention.

Claims (6)

1. A virtual reality entertainment and leisure learning device, comprising an entertainment and leisure learning space unit into which are comprehensively integrated: a panoramic immersion stereoscopic display system, an environment perception health special effect system, a dynamic platform seat device, a main server, an interaction module, a communication network module, a client, an integrated control and playing system and a stereo surround sound system; the main server is connected with the integrated control and playing system, and the client is connected with the main server; the panoramic immersion stereoscopic display system, the stereo surround sound system, the environment perception health special effect system and the dynamic platform seat device are connected with the integrated control and playing system;
the structure of the entertainment and leisure learning space unit comprises: an egg-shaped three-dimensional space structure, a space-capsule three-dimensional space structure, an oval three-dimensional space structure, a conventional cube or cuboid space structure, a study-room space structure and a classroom space structure;
the panoramic immersion stereoscopic display system module comprises a stereoscopic display system that is any one or any combination of: an LED display device, an OLED display device, a dome-screen cinema display device, a circular-screen cinema display device, a multi-picture cinema display device, a 360-degree full-sphere cinema display device and a 720-degree panoramic immersion cinema display device; the display device or system is connected with the integrated control and playing system module;
the client comprises one or a combination of: a PC client, a tablet computer client, a mobile phone client, a gesture and action recognition client, a dynamic seat platform client and an environment perception health special effect system client; application software is installed on the mobile phone, PC and tablet computer clients and comprises: a panoramic dynamic cinema system, a memory-enhanced intelligent learning system and an office leisure health system; the subsystems of the panoramic dynamic cinema system, the memory-enhanced intelligent learning system and the office leisure health system include an intelligent login and payment subsystem and a semantic-based virtual reality scene automatic generation system;
the interaction module comprises: the system comprises an environment perception health special effect system interaction module, a dynamic platform seat device interaction module, a client interaction module and a posture recognition interaction module;
the client is connected with the main server through a communication network module.
2. The virtual reality entertainment and leisure learning device of claim 1, wherein the environment perception health special effect system comprises: an air conditioning device, an olfactory simulation system device, a snowfall simulator, a bubble machine, a rainfall simulator, a lightning simulator, a smoke machine and a blower fan; the air conditioning device is a shared device; the olfactory simulation system device comprises a scent synthesis system device and a scent outlet device; the air outlet of the oxygen anion generating device, the scent outlet of the olfactory simulation system and the blower fan are connected together or combined into an integrated air outlet device, which is arranged at the head of the dynamic table-and-seat platform device, or directly above or behind the seat of the dynamic table-and-seat platform device so as to project onto the projection area of the inner wall of the learning, leisure and entertainment space unit; the outdoor units of these devices are all arranged at suitable positions around the periphery of the entertainment and leisure learning space unit;
the above devices are connected to the environment perception health special effect system client through the Internet-of-Things sensing system and the network module; the environment perception health special effect system client is connected to the main server, the main server is connected to the integrated control and playing system module, and the integrated control and playing system module is connected to the environment perception health special effect system devices and executes feedback actions based on the scene data output;
the health examination and health preservation device comprises: an oxygen anion generating device, an automatic health detection system and an automatic rehabilitation system; the automatic health detection system comprises any one or any combination of: an infrared tomography scanner, an ultrasonic detector and an electrocardiograph; the automatic rehabilitation system further comprises any one or any combination of: fumigation devices, atomization physiotherapy devices, electrotherapy equipment, ultrasonic equipment, magnetotherapy equipment, music therapy equipment, phototherapy equipment, cold and heat therapy equipment, far-infrared generating devices and laser beauty equipment; these devices are connected to the main server through the Internet-of-Things sensing system and the network module, and the integrated control and playing system issues and executes the corresponding synchronous real-time instructions according to the scene data of the health-preservation theme content;
the air purification and disinfection system device comprises: an ozone generator, an ultraviolet disinfection instrument and a lighting lamp; the working head of the ultraviolet disinfection instrument is located on the inner side of the top of the cabin body, its connecting cable passes through a hole formed in the outer wall of the cabin body to connect the working head to the machine body, and a seal is arranged between the connecting cable and the outer wall of the cabin body; the lighting lamp is located in the air-sweeping area at the legs of the dynamic seat, the vent of the ventilator is located on the upper side of the entry/exit cabin door, and the device is linked to the open and closed states of the cabin door.
3. The virtual reality entertainment and leisure learning device of claim 1, wherein a panoramic dynamic cinema device is constructed, comprising: a cinema content playing resource client, a gesture recognition client, a digital projection system, digital simulated weapons, data suits, gloves and rings, and an action recognition module;
the digital projection system comprises: the system comprises a multi-channel playing system, an immersion projection arc screen and polarization stereo glasses;
the digital simulated weapons, the data suits, gloves and rings, and the digital simulated sports equipment are connected with the main server;
the cinema content playing resource client is any one of a mobile phone client, a tablet and a PC client, and is connected with the main server;
The gesture recognition client, the dynamic seat platform client and the environment perception special effect system client are connected with the main server;
the immersion projection arc screen adopts a multi-channel projection technology and at least comprises a projector a, a projector b and a projector c.
4. The virtual reality entertainment and leisure learning device of claim 1, wherein a memory-enhanced intelligent learning system device is constructed, comprising an intelligent virtual reality learning system, the intelligent virtual reality learning system comprising: a learner module, a login and payment module, a local database learning resource module, a third-party learning resource interface module, an intelligent online learning resource recommendation system, an intelligent education system, an intelligent family education auxiliary system and a natural-semantic-scene-based automatic generation system module; in the memory-enhanced intelligent learning mode, the olfactory simulation system device disperses rosemary scent substances into the space.
5. The virtual reality entertainment and leisure learning device of claim 1, wherein an office leisure health system device is constructed, comprising an intelligent office and study system and a panoramic immersion interactive roaming system;
the panoramic immersion interactive roaming system is characterized by comprising: the system comprises a user module, a voice action recognition sensor, a PC client, a panoramic immersion stereo display system and/or a head-mounted display; the functional modules are sequentially linked with each other to form a panoramic immersion interactive roaming system;
the intelligent office and study system comprises: an oxygen bar/health bar control module, an office scene display module, a panoramic immersion theme office environment control module and a stereo surround sound system.
6. The virtual reality entertainment and leisure learning device of claim 4, wherein the natural-semantic-scene-based automatic generation system comprises: a character animation search engine module and a character animation automatic generation system module; the character animation automatic generation system module comprises: a body mapping module, a scene information processing module and an animation execution script generation module; when an animation is found successfully by the semantic search, the system proceeds directly to the centralized control and playing module; when the search fails, the system enters the character animation automatic generation system module, which is connected with the centralized control and playing module.
CN202010562780.4A 2020-06-19 2020-06-19 Entertainment and leisure learning device based on 5G internet of things virtual reality Pending CN111862711A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010562780.4A CN111862711A (en) 2020-06-19 2020-06-19 Entertainment and leisure learning device based on 5G internet of things virtual reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010562780.4A CN111862711A (en) 2020-06-19 2020-06-19 Entertainment and leisure learning device based on 5G internet of things virtual reality

Publications (1)

Publication Number Publication Date
CN111862711A true CN111862711A (en) 2020-10-30

Family

ID=72987620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010562780.4A Pending CN111862711A (en) 2020-06-19 2020-06-19 Entertainment and leisure learning device based on 5G internet of things virtual reality

Country Status (1)

Country Link
CN (1) CN111862711A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112562433A (en) * 2020-12-30 2021-03-26 华中师范大学 5G strong interaction remote delivery teaching system based on holographic terminal and working method thereof
CN112675527A (en) * 2020-12-29 2021-04-20 重庆医科大学 Family education game system and method based on VR technology
CN113256777A (en) * 2021-06-28 2021-08-13 山东捷瑞数字科技股份有限公司 Method for playing and adjusting dome screen based on computer graphics
CN113360682A (en) * 2021-05-21 2021-09-07 青岛海尔空调器有限总公司 Information processing method and device
CN113961074A (en) * 2021-10-21 2022-01-21 南京康博智慧健康研究院有限公司 Virtual home forest health maintenance method and system
CN115050227A (en) * 2022-06-13 2022-09-13 成都信息工程大学 Wall projection-based Qiang culture and language auxiliary teaching device and method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112675527A (en) * 2020-12-29 2021-04-20 重庆医科大学 Family education game system and method based on VR technology
CN112562433A (en) * 2020-12-30 2021-03-26 华中师范大学 5G strong interaction remote delivery teaching system based on holographic terminal and working method thereof
CN113360682A (en) * 2021-05-21 2021-09-07 青岛海尔空调器有限总公司 Information processing method and device
CN113360682B (en) * 2021-05-21 2023-03-21 青岛海尔空调器有限总公司 Information processing method and device
CN113256777A (en) * 2021-06-28 2021-08-13 山东捷瑞数字科技股份有限公司 Method for playing and adjusting dome screen based on computer graphics
CN113961074A (en) * 2021-10-21 2022-01-21 南京康博智慧健康研究院有限公司 Virtual home forest health maintenance method and system
CN115050227A (en) * 2022-06-13 2022-09-13 成都信息工程大学 Wall projection-based Qiang culture and language auxiliary teaching device and method

Similar Documents

Publication Publication Date Title
CN111862711A (en) Entertainment and leisure learning device based on 5G internet of things virtual reality
Machidon et al. Virtual humans in cultural heritage ICT applications: A review
CN104011788B (en) For strengthening and the system and method for virtual reality
CN108389249B (en) Multi-compatibility VR/AR space classroom and construction method thereof
CN106534830B (en) A kind of movie theatre play system based on virtual reality
Tennent et al. Thresholds: Embedding virtual reality in the museum
CN109333544B (en) Doll interaction method for marionette performance participated by audience
JP2016045815A (en) Virtual reality presentation system, virtual reality presentation device, and virtual reality presentation method
KR20200097637A (en) Simulation sandbox system
Nguyen et al. Real-time 3D human capture system for mixed-reality art and entertainment
CN112734946A (en) Vocal music performance teaching method and system
IJsselsteijn History of telepresence
Regenbrecht et al. Ātea presence—Enabling virtual storytelling, presence, and tele-co-presence in an indigenous setting
See et al. Tomb of a Sultan: a VR digital heritage approach
Fleischmann et al. Images of the Body in the House of Illusion
Helle et al. Miracle Handbook: Guidelines for Mixed Reality Applications for culture and learning experiences
CN107122040A (en) A kind of interactive system between artificial intelligence robot and intelligent display terminal
Wang et al. Construction of a somatosensory interactive system based on computer vision and augmented reality techniques using the kinect device for food and agricultural education
CN112144918A (en) 5G Internet of things visual intelligent learning and entertainment space
CN114067622A (en) Immersive holographic AR future classroom system and teaching method thereof
CN112863643B (en) Immersive virtual reality interpersonal relationship sculpture psychological consultation auxiliary system
CN213597569U (en) 5G thing allies oneself with virtual reality intelligence pavilion
McDonald et al. Nature bot: Experiencing nature in the built environment
Park et al. Gyeongju VR theater: a journey into the breath of Sorabol
Jo et al. Development and utilization of projector-robot service for children's dramatic play activities based on augmented reality

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201030

WD01 Invention patent application deemed withdrawn after publication