WO2018098720A1 - Virtual reality-based data processing method and system - Google Patents


Info

Publication number
WO2018098720A1
Authority
WO
WIPO (PCT)
Prior art keywords
teaching
virtual
virtual reality
reality device
operation instruction
Prior art date
Application number
PCT/CN2016/108118
Other languages
French (fr)
Chinese (zh)
Inventor
熊益冲
Original Assignee
深圳益强信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳益强信息科技有限公司 filed Critical 深圳益强信息科技有限公司
Priority to PCT/CN2016/108118 priority Critical patent/WO2018098720A1/en
Publication of WO2018098720A1 publication Critical patent/WO2018098720A1/en

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/08Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication

Definitions

  • the present invention relates to the field of computer applications, and in particular, to a virtual reality-based data processing method and system.
  • in existing solutions, the user can only view two-dimensional particle motion on a plane and cannot explore the simulated environment from different viewing angles; the simulation resources available to the user are therefore extremely limited, and the user cannot fully enjoy an immersive experience.
  • the technical problem to be solved by the embodiments of the present invention is to provide a virtual reality-based data processing method and data processing system, which can enrich user experience resources and provide users with a realistic teaching scene, so that users can fully enjoy an immersive experience.
  • the first aspect of the embodiments of the present invention provides a data processing method based on virtual reality, where the data processing method includes:
  • the first virtual reality device captures the first environment information through the camera, and records the first capture time;
  • the first virtual reality device receives a setting instruction for the virtual teaching application, and sends the setting instruction to the background server;
  • the background server acquires the teaching environment parameter according to the setting instruction, and sends the teaching environment parameter back to the first virtual reality device; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the first virtual reality device caches the received teaching environment parameter;
  • the first virtual reality device fuses the cached and processed teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
  • the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
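The method steps of the first aspect can be sketched as the following minimal flow. All class and function names here are illustrative assumptions for exposition, not the patent's actual implementation:

```python
from dataclasses import dataclass, field
import time

# Illustrative stand-in for the teaching environment parameters
TEACHING_ENV = ["virtual classroom", "virtual desk", "virtual projector", "virtual seat"]

class BackgroundServer:
    def get_teaching_env(self, instruction):
        # Returns the teaching environment parameters for a setting instruction
        assert instruction == "setup_virtual_teaching"
        return list(TEACHING_ENV)

@dataclass
class VirtualRealityDevice:
    cache: dict = field(default_factory=dict)

    def capture_environment(self):
        # Step 1: capture first environment info and record the capture time
        return {"frame": "camera_frame_0"}, time.time()

    def request_teaching_env(self, server, instruction):
        # Steps 2-3: forward the setting instruction; the server replies
        return server.get_teaching_env(instruction)

    def cache_params(self, params):
        # Step 4: cache the received teaching environment parameters
        self.cache["teaching_env"] = params

    def build_scene(self, env, capture_time):
        # Step 5: fuse the cached parameters with the captured environment
        return {"env": env, "params": self.cache["teaching_env"], "time": capture_time}

device = VirtualRealityDevice()
env, t0 = device.capture_environment()
params = device.request_teaching_env(BackgroundServer(), "setup_virtual_teaching")
device.cache_params(params)
scene = device.build_scene(env, t0)
print(scene["params"])  # → ['virtual classroom', 'virtual desk', 'virtual projector', 'virtual seat']
```

Step 6 (selecting a character model and displaying teaching content) would act on the resulting `scene` object.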
  • a second aspect of the embodiments of the present invention provides a virtual reality-based data processing system, including: a first virtual reality device and a background server;
  • the first virtual reality device is configured to capture first environment information by using a camera, and record a first capture time
  • the first virtual reality device is further configured to receive a setting instruction for the virtual teaching application, and send the setting instruction to the background server;
  • the background server is configured to acquire a teaching environment parameter according to the setting instruction, and send the teaching environment parameter to the first virtual reality device; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the first virtual reality device is further configured to cache the received teaching environment parameter;
  • the first virtual reality device is further configured to fuse the cached and processed teaching environment parameter with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;
  • the first virtual reality device is further configured to receive an operation instruction for the target subject in the first virtual teaching scene, select a corresponding character teaching model according to the operation instruction, and display the teaching content of the target subject based on the character model.
  • the implementation of the embodiments of the present invention has the following beneficial effects: the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, the first virtual reality device receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameter returned by the background server according to the setting instruction; then, the first virtual reality device fuses the cached and processed teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the invention can not only provide users with a realistic simulated teaching scene, but also create a richer and more diverse visual experience for the user, fully engaging the user's senses and thinking and greatly improving the user's learning efficiency.
  • FIG. 1 is a schematic flowchart of a data processing method based on virtual reality according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of another virtual reality-based data processing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of still another virtual reality-based data processing method according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a data processing system based on virtual reality according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of another data processing system based on virtual reality according to an embodiment of the present invention.
  • the execution of the virtual reality-based data processing method mentioned in the embodiments of the present invention depends on a computer program, which can run on a computer system based on the von Neumann architecture.
  • the computer program can be integrated into the application or run as a standalone tool class application.
  • the computer system can be a terminal device such as a personal computer, a tablet computer, a notebook computer, or a smart phone.
  • FIG. 1 is a schematic flowchart of a data processing method based on virtual reality according to an embodiment of the present invention. As shown in FIG. 1 , the data processing method includes at least:
  • Step S101 the first virtual reality device captures the first environment information by using the camera, and records the first capture time
  • the first virtual reality device may capture the environment data of the user's current surroundings by using the front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information. In addition, the camera also provides functions such as video calls and projection.
  • specifically, the first virtual reality device detects the user's head rotation angle in real time, captures the spatial data in the corresponding imaging area according to the rotation angle, and integrates the spatial data corresponding to each angle to generate the environment data, which is then used as the first environment information.
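The per-angle capture and integration described above can be sketched as follows; `capture_at_angle` is a hypothetical stand-in for reading the imaging area visible at one head-rotation angle:

```python
def capture_at_angle(angle_deg):
    # Stand-in for sampling the spatial data in the imaging area
    # that corresponds to this head-rotation angle.
    return {"angle": angle_deg, "points": [f"p{angle_deg}_{i}" for i in range(2)]}

def integrate(angles):
    # Integrate the spatial data captured at each detected angle
    # into a single block of environment data (the first environment info).
    env = {"points": []}
    for a in angles:
        env["points"].extend(capture_at_angle(a)["points"])
    return env

# Three detected head angles, two spatial samples each
first_environment_info = integrate([0, 30, 60])
print(len(first_environment_info["points"]))  # → 6
```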
  • Step S102 the first virtual reality device receives a setting instruction for the virtual teaching application, and sends the setting instruction to the background server;
  • the first virtual reality device may be a head-mounted device, such as virtual reality glasses or a virtual reality helmet; the first virtual reality device is configured to receive the user's setting instruction in the screen area corresponding to the virtual teaching application, and after receiving the setting instruction, sends the setting instruction to the background server;
  • the setting instruction refers to the user performing a click operation on the virtual screen area of the first virtual reality device.
  • the click operation includes, but is not limited to, any type of touch-screen operation, such as a pressing operation, a double-click operation, or a sliding operation.
  • the structure of the touch screen includes at least three layers: a screen glass layer, a touch panel layer, and a display panel layer.
  • the screen glass layer is a protective layer
  • the touch panel layer is used to sense a user's touch operation
  • the display panel layer is used to display an image.
  • related technologies enable the integration of the touch panel layer and the display panel layer.
  • Step S103 the background server acquires the teaching environment parameter according to the setting instruction, and sends the teaching environment parameter back to the first virtual reality device;
  • when receiving the setting request information sent by the first virtual reality device, the background server returns setting response information according to the setting request information, acquires the teaching environment parameter, and sends the teaching environment parameter back to the first virtual reality device; wherein the teaching environment parameters include: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
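The server side of this exchange can be sketched as a single handler; the message shapes and field names are illustrative assumptions, not the patent's wire format:

```python
def handle_setting_request(request):
    # Background server: validate the setting request information and
    # reply with setting response information plus the teaching
    # environment parameters named in the patent.
    if request.get("type") != "setting_request":
        return {"status": "error", "reason": "unrecognized request"}
    return {
        "status": "ok",  # the setting response information
        "teaching_env": ["virtual projector", "virtual desk",
                         "virtual seat", "virtual classroom"],
    }

response = handle_setting_request(
    {"type": "setting_request", "app": "virtual teaching"}
)
print(response["status"])  # → ok
```

The first virtual reality device would cache `response["teaching_env"]` on receipt, as in step S104 below.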
  • Step S104 the first virtual reality device caches the received teaching environment parameter;
  • the teaching scenario of the virtual teaching application may be set according to the teaching environment parameter; wherein the teaching environment parameters include: virtual projectors, virtual desks, virtual seats, and virtual classrooms;
  • Step S105 the first virtual reality device fuses the cached and processed teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
  • the first virtual reality device may capture the environment data of the real environment in which the user is currently located by using the camera, estimate and build a model of the corresponding teaching environment parameter according to the environment data, extract the corresponding teaching environment parameter from the server, and fuse it with the captured first environment information to generate the first virtual teaching scene according to the first capture time and display it;
  • for example, the first virtual reality device captures the user's current position in room A through the camera and detects that the user's head is slightly turned to the left; the device takes the user's position, head offset angle, and so on as the environment data, extracts the teaching environment parameters corresponding to that environment data, and fuses the two, so that the virtual teaching environment parameters are applied to the real teaching environment: the real environment data and the virtual teaching scene are superimposed in the same picture or space in real time, and the two coexist.
  • as a result, the user is placed in the middle of the classroom in the virtual teaching environment, and the projection display interface is positioned slightly to the left, with the user as the dividing line.
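The fusion step in the example above can be sketched as overlaying the virtual objects on the real-environment data in one scene structure, with the projection interface placed according to the head offset. The field names and threshold are illustrative assumptions:

```python
def fuse(environment_data, teaching_params):
    # Real layer: the captured environment data (user position, head offset).
    scene = dict(environment_data)
    # Virtual layer: the teaching environment parameters, superimposed
    # into the same picture/space so both exist at the same time.
    scene["virtual_objects"] = list(teaching_params)
    # Position the projection display interface relative to the head offset,
    # as in the example: a slight leftward offset puts it left of center.
    if environment_data.get("head_offset_deg", 0) < 0:
        scene["projection_position"] = "left of user"
    else:
        scene["projection_position"] = "centered"
    return scene

scene = fuse(
    {"room": "A", "head_offset_deg": -5},
    ["virtual projector", "virtual desk", "virtual seat", "virtual classroom"],
)
print(scene["projection_position"])  # → left of user
```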
  • Step S106 the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
  • the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene by voice recognition; wherein the operation instruction includes: a model selection operation instruction, a teaching content call operation instruction, and a seat selection operation instruction;
  • the first virtual reality device may select a corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content call operation instruction, and arrange a virtual target seat according to the seat selection operation instruction.
  • for example, the first virtual reality device may receive, by voice recognition, a model selection operation instruction for a target subject (e.g., one of a plurality of subjects) in the first virtual teaching scene, so as to select the user's favorite teaching model; the teaching model may be a cartoon animal image model or a realistic celebrity simulation model.
  • the first capture time is then read, and a virtual target seat among the teaching environment parameters is arranged according to the seat selection operation instruction and the first capture time (e.g., the position facing the projection screen, to the left of center).
  • the first capture time may be used to record the learning duration of the target subject, and may also be used in selecting the virtual target seat, so that other users who subsequently enter the same subject (e.g., English) cannot occupy the already selected target seat during that learning duration.
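One way to read the seat rule above is a reservation keyed by the capture time and learning duration: a seat stays locked until its holder's session ends. This is a hypothetical sketch; the patent does not specify the data structure:

```python
reservations = {}  # seat_id -> (user, start_time, duration)

def reserve_seat(seat_id, user, start_time, duration):
    # The start_time plays the role of the first capture time: it both
    # records when the learning session began and locks the seat until
    # start_time + duration, so later users cannot occupy it meanwhile.
    held = reservations.get(seat_id)
    if held is not None and start_time < held[1] + held[2]:
        return False  # seat still occupied for the holder's learning duration
    reservations[seat_id] = (user, start_time, duration)
    return True

ok1 = reserve_seat("seat_3", "alice", start_time=100.0, duration=60.0)  # granted
ok2 = reserve_seat("seat_3", "bob", start_time=120.0, duration=60.0)    # refused: 120 < 160
ok3 = reserve_seat("seat_3", "bob", start_time=200.0, duration=60.0)    # granted: session over
print(ok1, ok2, ok3)  # → True False True
```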
  • in summary, the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, the first virtual reality device receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameter returned by the background server according to the setting instruction; then, the first virtual reality device fuses the cached and processed teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the invention can not only provide users with a realistic simulated teaching scene, but also create a richer and more diverse visual experience for the user, fully engaging the user's senses and thinking and greatly improving the user's learning efficiency.
  • FIG. 2 is a schematic flowchart of another virtual reality-based data processing method according to an embodiment of the present invention. As shown in FIG. 2, the data processing method includes at least:
  • Step S201 the first virtual reality device captures the first environment information by using the camera, and records the first capture time
  • the first virtual reality device may capture the environment data of the user's current surroundings by using the front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information. In addition, the camera also provides functions such as video calls and projection.
  • specifically, the first virtual reality device detects the user's head rotation angle in real time, captures the spatial data in the corresponding imaging area according to the rotation angle, and integrates the spatial data corresponding to each angle to generate the environment data, which is then used as the first environment information.
  • Step S202 The first virtual reality device receives a setting instruction for the virtual teaching application, and receives and caches a teaching environment parameter returned by the background server according to the setting instruction.
  • the first virtual reality device receives a setting instruction for the virtual teaching application, and sends the setting instruction to the background server, so that the background server acquires the teaching environment parameter according to the setting instruction and sends the teaching environment parameter back to the first virtual reality device; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom; after receiving the teaching environment parameter, the first virtual reality device caches it.
  • the first virtual reality device may be a head-mounted device, such as virtual reality glasses or a virtual reality helmet; the first virtual reality device may be configured to receive the user's setting instruction in the screen area corresponding to the virtual teaching application, and after receiving the setting instruction, send the setting instruction to the background server;
  • the setting instruction may be used to send setting request information to the background server and cause the background server to return setting response information according to the setting request information, so as to extract the teaching environment parameter stored on the background server and set the teaching scenario of the virtual teaching application according to that parameter; wherein the teaching environment parameters include: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the setting instruction refers to a user performing a click operation on a virtual screen area of the first virtual reality device.
  • the click operation includes, but is not limited to, any type of touch-screen operation, such as a pressing operation, a double-click operation, or a sliding operation.
  • the structure of the touch screen includes at least three layers: a screen glass layer, a touch panel layer, and a display panel layer.
  • the screen glass layer is a protective layer
  • the touch panel layer is used to sense a user's touch operation
  • the display panel layer is used to display an image.
  • related technologies enable the integration of the touch panel layer and the display panel layer.
  • Step S203 the first virtual reality device converts the teaching environment parameter cached in the graphics card cache into three-dimensional classroom interface data based on the active split-screen technology, and fuses the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene; wherein the first capture time is used to record the learning duration of the target subject;
  • the first virtual reality device performs split-screen processing on the teaching environment parameters buffered in the underlying graphics card cache according to the active split-screen technology, so that the teaching environment parameters displayed by the system are equally divided; the data can then be converted into three-dimensional teaching interface data, after which the first virtual reality device fuses the captured first environment information with the three-dimensional teaching interface data to generate the first virtual teaching scene and display it;
  • the active split-screen technology performs split-screen processing through the system's underlying driver: the split is carried out from the display buffer at the bottom layer of the system, using a dedicated algorithm to divide the screen equally at the FrameBuffer layer. The split screen, viewed through virtual reality glasses, achieves the effect of a 3D display. In addition, the first virtual reality device can capture, through the camera, the real environment in which the user is currently located.
  • Step S204 the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
  • the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene by voice recognition; wherein the operation instruction includes: a model selection operation instruction, a teaching content call operation instruction, and a seat selection operation instruction. The first virtual reality device may then select a corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content call operation instruction, and arrange a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
  • the operation instruction includes: a model selection operation instruction, a teaching content call operation instruction, and a seat selection operation instruction; the first virtual reality device may select a corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content call operation instruction, and arrange a virtual target seat according to the seat selection operation instruction.
  • for example, the first virtual reality device may receive, by voice recognition, a model selection operation instruction for a target subject (e.g., a physics subject) in the first virtual teaching scene, so as to select the user's favorite teaching model; the teaching model may be a cartoon animal image model or a realistic celebrity simulation model.
  • for the teaching content call operation instruction:
  • the first virtual reality device can receive, by voice recognition, the teaching content call operation instruction for the target subject in the first virtual teaching scene, so that the first virtual reality device can overlay the virtual image extracted from the background server onto the real-world picture; that is, within a certain projection distance, the computer-generated virtual image is fused and marked into the real-world picture at the user's position through the active split-screen technology.
  • the superimposed three-dimensional image is then output. For example, a long-extinct dinosaur model can be placed around the user's location, or the simulated launch scene of the Shenzhou 11 spacecraft can be reproduced before the user's eyes; such vivid 3D images allow more users to experience the technology around them.
  • the data processing method further includes the following steps:
  • the first virtual reality device may further send the first virtual teaching scenario to a user terminal having a wireless connection relationship with the first virtual reality device based on a wireless video transmission technology, so that the user terminal Displaying the first virtual teaching scene;
  • the user terminal includes: a smart TV, a laptop, a PDA, a gaming peripheral, or a tablet.
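The mirroring step can be sketched as broadcasting the generated scene to every wirelessly connected terminal; the classes below are illustrative placeholders, not an actual wireless video transmission stack:

```python
class UserTerminal:
    # Stand-in for a smart TV, laptop, PDA, gaming peripheral, or tablet
    # that holds a wireless connection with the first virtual reality device.
    def __init__(self, name):
        self.name = name
        self.screen = None

    def display(self, scene):
        self.screen = scene

def broadcast_scene(scene, terminals):
    # Stand-in for the wireless video transmission: each connected
    # terminal receives and displays the first virtual teaching scene.
    for terminal in terminals:
        terminal.display(scene)

terminals = [UserTerminal("smart TV"), UserTerminal("tablet")]
broadcast_scene({"frame": 1}, terminals)
print([t.screen for t in terminals])  # → [{'frame': 1}, {'frame': 1}]
```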
  • in summary, the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, the first virtual reality device receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameter returned by the background server according to the setting instruction; then, the first virtual reality device converts the teaching environment parameter cached in the graphics card cache into three-dimensional classroom interface data based on the active split-screen technology, and fuses the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene, wherein the first capture time is used to record the learning duration of the target subject; finally, the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. It can be seen that, with the present invention, the teaching content displayed by the system can be equally divided at the underlying driver level, so that the application interface of the system achieves an active split-screen effect.
  • FIG. 3 is a schematic flowchart of still another method for processing data based on virtual reality according to an embodiment of the present invention.
  • the data processing method includes at least:
  • Step S301 the first virtual reality device captures the first environment information by using the camera, and records the first capture time
  • the first virtual reality device may capture the environment data of the user's current surroundings by using the front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information. In addition, the camera also provides functions such as video calls and projection.
  • specifically, the first virtual reality device detects the user's head rotation angle in real time, captures the spatial data in the corresponding imaging area according to the rotation angle, and integrates the spatial data corresponding to each angle to generate the environment data, which is then used as the first environment information.
  • Step S302 the first virtual reality device receives a setting instruction for the virtual teaching application, and sends the setting instruction to the background server;
  • Step S303 the background server acquires the teaching environment parameter according to the setting instruction, and sends the teaching environment parameter back to the first virtual reality device; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • Step S304 the first virtual reality device caches the received teaching environment parameter;
  • the first virtual reality device may be a head-mounted device, such as virtual reality glasses or a virtual reality helmet; the first virtual reality device may be configured to receive the user's setting instruction in the screen area corresponding to the virtual teaching application, and after receiving the setting instruction, send the setting instruction to the background server;
  • the setting instruction may be configured to send the setting request information to the background server, and enable the background server to return setting response information according to the setting request information, to extract the teaching environment parameter stored on the background server, and Setting a teaching scenario of the virtual teaching application according to the teaching environment parameter; wherein the teaching environment parameters include: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the setting instruction refers to a user performing a click operation on a virtual screen area of the first virtual reality device.
  • the click operation includes, but is not limited to, any type of touch-screen operation, such as a pressing operation, a double-click operation, or a sliding operation.
  • the structure of the touch screen includes at least three layers: a screen glass layer, a touch panel layer, and a display panel layer.
  • the screen glass layer is a protective layer
  • the touch panel layer is used to sense a user's touch operation
  • the display panel layer is used to display an image.
  • related technologies enable the integration of the touch panel layer and the display panel layer.
  • Step S305 the first virtual reality device fuses the cached and processed teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
  • the first virtual reality device may capture the environment data of the real environment in which the user is currently located by using the camera, estimate and build a model of the corresponding teaching environment parameter according to the environment data, extract the corresponding teaching environment parameters from the server, and fuse the teaching environment parameter with the captured first environment information to generate the first virtual teaching scene according to the first capture time and display it;
  • Step S306 the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
  • the first virtual reality device receives the operation instruction for the target subject in the first virtual teaching scene by voice recognition; the operation instruction includes: a model selection operation instruction, a teaching content call operation instruction, and a seat selection operation instruction; the first virtual reality device may then select the corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content call operation instruction, and arrange the virtual target seat in the teaching environment parameters according to the seat selection operation instruction and the first capture time.
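The three voice-recognized instruction kinds can be handled by a small dispatcher. The sketch below is hypothetical: the instruction encoding, field names, and handler behavior are assumptions, since the patent only names the three instruction types.

```python
def handle_operation_instruction(instruction, scene):
    """Dispatch one voice-recognized operation instruction onto the scene."""
    kind = instruction["kind"]
    if kind == "model_selection":
        # Select the character teaching model.
        scene["character_model"] = instruction["model"]
    elif kind == "content_call":
        # Display the teaching content of the target subject.
        scene["displayed_content"] = instruction["subject"]
    elif kind == "seat_selection":
        # Arrange the virtual target seat, tagged with the first capture time.
        scene["seat"] = (instruction["seat_id"], scene["first_capture_time"])
    else:
        raise ValueError(f"unknown instruction kind: {kind}")
    return scene


scene = {"first_capture_time": 1700000000.0}
handle_operation_instruction({"kind": "model_selection", "model": "physics_teacher"}, scene)
handle_operation_instruction({"kind": "content_call", "subject": "circular_motion"}, scene)
handle_operation_instruction({"kind": "seat_selection", "seat_id": 12}, scene)
```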
  • Step S307 the second virtual reality device sends a join request to the background server.
  • Step S308 the background server forwards the received join request to the first virtual reality device
  • Step S309 the first virtual reality device generates an acknowledgement response message corresponding to the join request, and sends the acknowledgement response message to the second virtual reality device.
  • Step S310 the second virtual reality device uploads second environment information to the background server according to the confirmation response message, and records a second capture time;
  • the second virtual reality device may capture the current environment data by using the front or rear camera, and use the environment data as the second environment information, and record the second capture time corresponding to the second environment information.
  • in addition, the camera also supports functions such as video calling and projection. After the second virtual reality device receives the confirmation response message sent by the first virtual reality device, it uploads the captured environment data to the background server as the second environment information, in order to obtain the second teaching scene data of the target subject corresponding to the second capture time.
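The join handshake in steps S307 through S310 follows a fixed ordering: join request from the second device, forwarding by the server, confirmation from the first device, then the upload. A minimal in-memory sketch of that ordering is shown below; the message shapes and handler names are assumptions, as the patent describes only the sequence of messages.

```python
log = []  # records each hop of the handshake in order


def send(src, dst, msg):
    log.append((src, dst, msg["type"]))
    return HANDLERS[dst](msg)


def server_handler(msg):
    # S308: the background server forwards the join request to the first device.
    if msg["type"] == "join_request":
        return send("server", "device1", msg)


def device1_handler(msg):
    # S309: the first device answers with a confirmation response message.
    if msg["type"] == "join_request":
        return {"type": "confirm_response"}


HANDLERS = {"server": server_handler, "device1": device1_handler}

# S307: the second device sends its join request via the background server.
response = send("device2", "server", {"type": "join_request"})

# S310: on confirmation, the second device uploads its environment
# information together with the second capture time.
upload = None
if response["type"] == "confirm_response":
    upload = {"second_env_info": "camera frames", "second_capture_time": 1700000042.0}
```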
  • Step S311 the background server fuses the first virtual teaching scene with the second environment information, generating first teaching scene data corresponding to the first virtual reality device and second teaching scene data corresponding to the second virtual reality device;
  • after receiving the second environment information uploaded by the second virtual reality device, the background server fuses the second environment information with the first virtual teaching scene, generates the first teaching scene data corresponding to the first virtual reality device and sends it to the first virtual reality device, and at the same time sends the generated second teaching scene data to the second virtual reality device, so that the two virtual reality devices display the simulated teaching scene from different orientations or perspectives in their corresponding virtual teaching scenes.
  • Step S312 the first virtual reality device receives the first teaching scene data sent by the background server, and updates and displays the first virtual teaching scene according to the second capturing time;
  • Step S313 the second virtual reality device receives the second teaching scene data sent by the background server, and generates and displays a second virtual teaching scene according to the second capturing time.
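The server-side fusion of step S311 produces two payloads from one fused scene. The sketch below illustrates one plausible derivation: shared fused content keyed to the second capture time, rendered from each device's own viewpoint. The fusion rule itself is a stand-in, since the patent does not specify the fusion algorithm.

```python
def fuse_scenes(first_virtual_scene, second_env_info, second_capture_time):
    # Both devices receive the same fused content, keyed to the second
    # capture time, but each rendered from its own viewpoint.
    shared = {
        "objects": first_virtual_scene["objects"] + second_env_info["objects"],
        "timestamp": second_capture_time,
    }
    first_scene_data = dict(shared, viewpoint=first_virtual_scene["viewpoint"])
    second_scene_data = dict(shared, viewpoint=second_env_info["viewpoint"])
    return first_scene_data, second_scene_data


first_scene = {"objects": ["virtual_classroom", "virtual_desk"], "viewpoint": "podium"}
second_env = {"objects": ["second_user_avatar"], "viewpoint": "back_row"}
d1, d2 = fuse_scenes(first_scene, second_env, 1700000042.0)
```

The shared object list is what keeps the two displays consistent, while the per-device viewpoint field is what lets each user see the scene from a different orientation.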
  • the data processing method further includes the following step: the first virtual reality device may further send the first virtual teaching scene, by wireless video transmission, to a user terminal that has a wireless connection with the first virtual reality device, so that the user terminal displays the first virtual teaching scene;
  • the user terminal includes: a smart TV, a laptop, a PDA, a gaming peripheral, or a tablet.
  • in summary, the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, the first virtual reality device receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameter returned by the background server according to the setting instruction; the first virtual reality device then fuses the cached teaching environment parameter with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays it; next, the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model; after receiving the join request of the second virtual reality device, the background server forwards the join request to the first virtual reality device, so that the second virtual reality device uploads the second environment information once it receives the confirmation response message; finally, the background server fuses the first virtual teaching scene with the second environment information, and generates and distributes the first and second teaching scene data to the corresponding devices.
  • therefore, the present invention not only provides the user with a realistic simulated teaching scene, but also provides a rich virtual interactive platform for multiple users, thereby creating a richer and more diverse visual experience, presenting the user with a teaching scene that closely matches reality, and greatly helping users understand the teaching content and improve their learning efficiency.
  • FIG. 4 is a schematic structural diagram of a data processing system based on virtual reality according to an embodiment of the present invention.
  • the data processing system 1 includes: a first virtual reality device 10 and a background server 20;
  • the first virtual reality device 10 is configured to capture first environment information by using a camera, and record a first capture time;
  • the first virtual reality device 10 may capture the environment data of the user's current surroundings by using a front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also supports functions such as video calling and projection.
  • the first virtual reality device 10 is further configured to receive a setting instruction for the virtual teaching application, and send the setting instruction to the background server;
  • the background server 20 is configured to acquire a teaching environment parameter according to the setting instruction, and send the teaching environment parameter to the first virtual reality device; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the first virtual reality device 10 is further configured to cache the received teaching environment parameter;
  • specifically, the first virtual reality device 10 receives a setting instruction for the virtual teaching application and sends the setting instruction to the background server 20, so that the background server 20 obtains the teaching environment parameter according to the setting instruction and sends it back to the first virtual reality device 10; subsequently, the first virtual reality device 10 caches the received teaching environment parameter.
  • the teaching environment parameters include: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the first virtual reality device 10 may be a head-mounted device, including: virtual reality glasses or a virtual reality helmet; the first virtual reality device may be configured to receive a user's setting instruction on the screen area corresponding to the virtual teaching application, and after the first virtual reality device receives the setting instruction, send the setting instruction to the background server;
  • the first virtual reality device 10 sends the setting request information to the background server 20, causing the background server 20 to return setting response information according to the setting request information, so as to extract the teaching environment parameter stored on the background server 20 and set the teaching scene of the virtual teaching application according to the teaching environment parameter; wherein the teaching environment parameter comprises: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
  • the setting instruction refers to that the user performs a click operation on the virtual screen area of the first virtual reality device.
  • the click operation includes, but is not limited to, an operation of each type of touch touch screen, such as a pressing operation, a double-click operation, or a sliding screen operation.
  • the structure of the touch screen includes at least three layers: a screen glass layer, a touch panel layer, and a display panel layer.
  • the screen glass layer is a protective layer
  • the touch panel layer is used to sense a user's touch operation
  • the display panel layer is used to display an image.
  • related technologies enable the integration of the touch panel layer and the display panel layer.
  • the first virtual reality device 10 is further configured to fuse the cached teaching environment parameter with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;
  • the first virtual reality device 10 may be configured to capture, by using the camera, the environment data of the real environment in which the user is currently located, estimate and form a model of the corresponding teaching environment parameters according to the environment data, extract the corresponding teaching environment parameters from the background server 20, and fuse the teaching environment parameters with the captured first environment information, so as to generate the first virtual teaching scene according to the first capture time and display it;
  • the first virtual reality device 10 is further configured to receive an operation instruction for the target subject in the first virtual teaching scene, select a corresponding character teaching model according to the operation instruction, and display the teaching content of the target subject based on the character model;
  • the first virtual reality device 10 is specifically configured to receive the operation instruction for the target subject in the first virtual teaching scene by voice recognition; the operation instruction includes: a model selection operation instruction, a teaching content call operation instruction, and a seat selection operation instruction; the first virtual reality device 10 is further configured to select the corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content call operation instruction, and arrange the virtual target seat in the teaching environment parameters according to the seat selection operation instruction and the first capture time.
  • in summary, the first virtual reality device 10 first captures the first environment information through the camera and records the first capture time; secondly, the first virtual reality device 10 receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameter returned by the background server 20 according to the setting instruction; then, the first virtual reality device 10 fuses the cached teaching environment parameter with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays it; finally, the first virtual reality device 10 receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the invention can not only provide users with a realistic simulated teaching scene, but also create a richer and more diverse visual experience, fully engage the user's senses and thinking, and greatly improve the user's learning efficiency.
  • FIG. 5 is a schematic structural diagram of another virtual reality-based data processing system according to an embodiment of the present invention.
  • the data processing system 1 includes the first virtual reality device 10 and the background server 20 of the specific embodiment corresponding to FIG. 4; further, the data processing system 1 also includes a second virtual reality device 30;
  • the second virtual reality device 30 is configured to send a join request to the background server 20;
  • the background server 20 is further configured to forward the received join request to the first virtual reality device 10;
  • the first virtual reality device 10 is further configured to generate an acknowledgment response message corresponding to the join request, and send the acknowledgment response message to the second virtual reality device 30;
  • the second virtual reality device 30 is further configured to upload the second environment information to the background server 20 according to the confirmation response message, and record the second capture time;
  • the background server 20 is further configured to merge the first virtual teaching scenario with the second environment information, generate first teaching scene data corresponding to the first virtual reality device 10, and generate and The second teaching scene data corresponding to the second virtual reality device 30;
  • the first virtual reality device 10 is further configured to receive the first teaching scene data sent by the background server 20, and update and display the first virtual teaching scene according to the second capturing time;
  • the second virtual reality device 30 is further configured to receive the second teaching scene data sent by the background server 20, and generate and display a second virtual teaching scene according to the second capturing time.
  • the first virtual reality device 10 is further configured to send the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device 10, so that the user terminal displays the first virtual teaching scene.
  • in summary, the first virtual reality device 10 first receives a setting instruction for the virtual teaching application and sends the setting instruction to the background server 20; secondly, the background server 20 acquires the teaching environment parameter according to the setting instruction and sends it back to the first virtual reality device 10; then, the first virtual reality device 10 caches the received teaching environment parameter, fuses the cached teaching environment parameter with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays it; next, the first virtual reality device 10 receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model; finally, after receiving the join request of the second virtual reality device 30, the background server 20 forwards the join request to the first virtual reality device 10, so that the second virtual reality device 30 uploads the second environment information after receiving the confirmation response message; subsequently, the background server 20 fuses the first virtual teaching scene with the second environment information and distributes the resulting teaching scene data to the corresponding devices.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).


Abstract

A virtual reality-based data processing method and system. The method comprises: a first virtual reality device captures first environment information by means of a camera and records a first capture time; the first virtual reality device receives a setting instruction for a virtual teaching application, and receives and caches a teaching environment parameter returned by a background server according to the setting instruction; the first virtual reality device fuses the cached teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; the first virtual reality device receives an operation instruction for a target course in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and presents the teaching content of the target course based on the character model. The present invention can provide a realistic and rich simulated teaching experience so as to improve the learning efficiency of the user.

Description

Virtual reality-based data processing method and system

Technical field

The present invention relates to the field of computer applications, and in particular to a virtual reality-based data processing method and system.

Background technique

With the advancement of human civilization, electronic devices with data processing and data communication functions have become increasingly integrated into people's daily lives. Electronic devices such as smart phones and tablet computers have become an indispensable part of daily life and, to some extent, satisfy users' demand for games, movies, music, and other entertainment. However, when a user controls a teaching application on a smart phone's screen through its touch screen, the display of teaching content is constrained by the phone screen: the display effect is mediocre, even monotonous, and far removed from what the user would experience in a real learning environment, so the user cannot genuinely enjoy an immersive teaching experience. Similarly, taking a smart TV as an example, when the user selects a target subject on the smart TV with buttons, system limitations prevent the teaching content from being presented as an intuitive 3D environment. For example, for circular motion in physics, the user can only watch particle motion on a two-dimensional plane and cannot explore the virtual display environment from different angles. The user's simulation resources are therefore extremely limited, and the user cannot fully enjoy an immersive experience.
Summary of the invention

The technical problem to be solved by the embodiments of the present invention is to provide a virtual reality-based data processing method and data processing system that can enrich the user's experience resources and provide a teaching scene that closely matches reality, so that the user can fully enjoy an immersive experience.

In order to solve the above technical problem, a first aspect of the embodiments of the present invention provides a virtual reality-based data processing method, the data processing method comprising:

a first virtual reality device captures first environment information through a camera and records a first capture time;

the first virtual reality device receives a setting instruction for a virtual teaching application and sends the setting instruction to a background server;

the background server acquires a teaching environment parameter according to the setting instruction and sends the teaching environment parameter back to the first virtual reality device; wherein the teaching environment parameter includes: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;

the first virtual reality device caches the received teaching environment parameter;

the first virtual reality device fuses the cached teaching environment parameter with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;

the first virtual reality device receives an operation instruction for a target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model.
A second aspect of the embodiments of the present invention provides a virtual reality-based data processing system, comprising: a first virtual reality device and a background server;

the first virtual reality device is configured to capture first environment information through a camera and record a first capture time;

the first virtual reality device is further configured to receive a setting instruction for a virtual teaching application and send the setting instruction to the background server;

the background server is configured to acquire a teaching environment parameter according to the setting instruction and send the teaching environment parameter back to the first virtual reality device; wherein the teaching environment parameter includes: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;

the first virtual reality device is further configured to cache the received teaching environment parameter;

the first virtual reality device is further configured to fuse the cached teaching environment parameter with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;

the first virtual reality device is further configured to receive an operation instruction for a target subject in the first virtual teaching scene, select a corresponding character teaching model according to the operation instruction, and display the teaching content of the target subject based on the character model.
As can be seen from the above, implementing the embodiments of the present invention has the following beneficial effects: the first virtual reality device first captures the first environment information through the camera and records the first capture time; secondly, the first virtual reality device receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameter returned by the background server according to the setting instruction; then, the first virtual reality device fuses the cached teaching environment parameter with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the present invention can not only provide users with a realistic simulated teaching scene, but also create a richer and more diverse visual experience, fully engage the user's senses and thinking, and greatly improve the user's learning efficiency.
Description of the drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a virtual reality-based data processing method according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of another virtual reality-based data processing method according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of still another virtual reality-based data processing method according to an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of a virtual reality-based data processing system according to an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of another virtual reality-based data processing system according to an embodiment of the present invention.
Detailed description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

The terms "comprising" and "having" and any variations thereof in the description and claims of the present invention and in the above drawings are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units not listed, or optionally also includes other steps or units inherent to the process, method, product, or device.

The execution of the virtual reality-based data processing method mentioned in the embodiments of the present invention depends on a computer program and can run on a computer system with a von Neumann architecture. The computer program can be integrated into an application or run as a standalone tool application. The computer system can be a terminal device such as a personal computer, tablet computer, notebook computer, or smart phone.

Detailed descriptions are given below.
Referring to FIG. 1, which is a schematic flowchart of a virtual reality-based data processing method according to an embodiment of the present invention, as shown in FIG. 1, the data processing method includes at least:

Step S101, a first virtual reality device captures first environment information through a camera and records a first capture time;

Specifically, the first virtual reality device may capture the environment data of the user's current surroundings by using a front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also supports functions such as video calling and projection.

The first environment information is obtained by the first virtual reality device detecting the user's head rotation angle in real time, capturing the spatial data within the corresponding camera region in real time according to the rotation angle, and integrating the spatial data corresponding to each angle to generate the environment data, which is then used as the first environment information.
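The per-angle capture and integration just described can be sketched as a simple loop: detect each head angle, capture the camera-region data at that angle, and merge the results into one structure that serves as the first environment information. The data shapes and the capture callback below are illustrative assumptions.

```python
def capture_environment(head_angles, capture_at_angle):
    """Capture spatial data at each detected head angle and integrate it."""
    environment_data = {}
    for angle in head_angles:
        environment_data[angle] = capture_at_angle(angle)  # one camera region
    # The integrated data serves as the first environment information.
    return environment_data


first_env_info = capture_environment(
    head_angles=[0, 45, 90],
    capture_at_angle=lambda angle: f"frame@{angle}deg",
)
```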
Step S102: The first virtual reality device receives a setting instruction for a virtual teaching application and sends the setting instruction to a backend server.
Specifically, the first virtual reality device may be a head-mounted device, such as virtual reality glasses or a virtual reality helmet. The first virtual reality device may receive the user's setting instruction for the screen area corresponding to the virtual teaching application and, after receiving the setting instruction, send the setting instruction to the backend server.
The setting instruction refers to a click operation performed by the user on the virtual screen area of the first virtual reality device. The click operation includes, but is not limited to, touch-screen operations of various types, such as a press operation, a double-click operation, or a swipe operation. Generally, in a terminal with a touch-screen function, the touch screen comprises at least three layers: a cover-glass layer, a touch panel layer, and a display panel layer. The cover-glass layer is a protective layer, the touch panel layer senses the user's touch operations, and the display panel layer displays images. Technologies that integrate the touch panel layer with the display panel layer already exist.
Step S103: The backend server obtains teaching environment parameters according to the setting instruction and sends the teaching environment parameters back to the first virtual reality device.
Specifically, upon receiving the setting request information sent by the first virtual reality device, the backend server returns setting response information according to the setting request information, obtains the teaching environment parameters, and sends the teaching environment parameters back to the first virtual reality device. The teaching environment parameters include a virtual projector, a virtual desk, a virtual seat, and a virtual classroom.
Step S104: The first virtual reality device caches the received teaching environment parameters.
Specifically, upon receiving the teaching environment parameters returned by the backend server, the first virtual reality device may set up the teaching scene of the virtual teaching application according to the teaching environment parameters.
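The request/response/cache exchange of steps S102-S104 can be sketched as in-process stand-ins. The class names, parameter values, and plain function-call transport are all illustrative assumptions; the text does not specify a protocol.

```python
class BackendServer:
    """Stand-in for the backend server of steps S102-S104."""

    def handle_setting_instruction(self, instruction):
        # S103: obtain the teaching environment parameters for the request.
        # The concrete values here are illustrative assumptions.
        return {
            "virtual_projector": {"screen": "front-center"},
            "virtual_desk": {"count": 40},
            "virtual_seat": {"count": 40},
            "virtual_classroom": {"layout": "standard"},
        }


class FirstVRDevice:
    """Stand-in for the first virtual reality device."""

    def __init__(self, server):
        self.server = server
        self.cached_params = None

    def apply_setting_instruction(self, instruction):
        # S102: send the setting instruction to the backend server.
        params = self.server.handle_setting_instruction(instruction)
        # S104: cache the returned teaching environment parameters.
        self.cached_params = params
        return params
```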
Step S105: The first virtual reality device fuses the cached teaching environment parameters with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene.
Specifically, the first virtual reality device may capture, through the camera, the environment data of the real environment in which the user is currently located, and estimate and build a model of the corresponding teaching environment parameters from the environment data, so as to retrieve the corresponding teaching environment parameters from the backend server. It then fuses the teaching environment parameters with the captured first environment information to generate the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene.
For example, if the first virtual reality device captures through the camera that the user is currently located in the middle of room A and detects that the user's head is tilted slightly to the left, the first virtual reality device takes the user's position and head offset angle as the environment data and retrieves the teaching environment parameters corresponding to that environment data, so that the environment data and the teaching environment parameters are fused. The virtual teaching environment parameters are thereby applied to the real teaching environment, so that the real environment data and the virtual teaching scene are superimposed onto the same picture or space in real time and coexist. In this case, the user is placed in the middle of the classroom in the virtual teaching environment, and the projection display interface is positioned slightly to the left of the user.
Step S106: The first virtual reality device receives an operation instruction for a target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and presents the teaching content of the target subject based on the character model.
Specifically, the first virtual reality device receives, through speech recognition, the operation instruction for the target subject in the first virtual teaching scene. The operation instruction includes a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction.
The first virtual reality device may then select the corresponding character teaching model according to the model selection operation instruction, present the teaching content of the target subject according to the teaching content invocation operation instruction, and assign a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
For example, taking the model selection operation instruction, the first virtual reality device may receive through speech recognition a model selection operation instruction for a target subject (for example, mathematics) in the first virtual teaching scene, so as to select the user's favorite character teaching model. The character teaching model may be a cartoon animal model or a lifelike celebrity simulation model.
As another example, taking the seat selection operation instruction, after the first virtual reality device receives through speech recognition a seat selection instruction for the English subject in the first virtual teaching scene, it reads the first capture time and assigns a virtual target seat among the teaching environment parameters (for example, a position slightly left of center, directly facing the projection screen) according to the seat selection operation instruction and the first capture time. Moreover, the first capture time may be used to record the learning duration of the target subject and to select the virtual target seat, so that other users who subsequently join the English subject cannot occupy the already selected target seat during that learning duration.
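The seat-selection rule above (a seat chosen at a given capture time stays held for the learning duration) can be sketched as follows. Seat identifiers and durations in seconds are illustrative assumptions; the text does not give a concrete data model.

```python
class SeatScheduler:
    """Assigns virtual target seats so that a seat chosen at a given
    capture time stays occupied for the learning duration and cannot
    be taken by later users during that window."""

    def __init__(self):
        self.reservations = {}  # seat_id -> (start_time, duration)

    def assign(self, seat_id, capture_time, duration):
        held = self.reservations.get(seat_id)
        if held is not None:
            start, held_duration = held
            if capture_time < start + held_duration:
                # Seat is still occupied for the earlier learning duration.
                return False
        self.reservations[seat_id] = (capture_time, duration)
        return True
```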
As can be seen from the above, the first virtual reality device first captures the first environment information through the camera and records the first capture time; second, it receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameters returned by the backend server according to the setting instruction; then it fuses the cached teaching environment parameters with the captured first environment information, generates the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, it receives the operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and presents the teaching content of the target subject based on the character model. The present invention therefore not only provides the user with a realistic simulated teaching scene, but also creates a richer and more diverse visual experience, fully engages the user's senses and thinking, and greatly improves the user's learning efficiency.
Further, referring to FIG. 2, a schematic flowchart of another virtual reality-based data processing method according to an embodiment of the present invention, the data processing method includes at least the following steps:
Step S201: A first virtual reality device captures first environment information through a camera and records a first capture time.
Specifically, the first virtual reality device may capture, through a front or rear camera, the environment data of the environment in which the current user is located, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information. In addition, the camera may also provide functions such as video calling and projection.
The first environment information is obtained as follows: the first virtual reality device detects the user's head rotation angle in real time, captures in real time the spatial data within the corresponding camera coverage according to the rotation angle, integrates the spatial data corresponding to each angle to generate the environment data, and uses the environment data as the first environment information.
Step S202: The first virtual reality device receives a setting instruction for a virtual teaching application, and receives and caches teaching environment parameters returned by a backend server according to the setting instruction.
Specifically, the first virtual reality device receives the setting instruction for the virtual teaching application and sends the setting instruction to the backend server, so that the backend server obtains the teaching environment parameters according to the setting instruction and sends the teaching environment parameters back to the first virtual reality device. The teaching environment parameters include a virtual projector, a virtual desk, a virtual seat, and a virtual classroom. After receiving the teaching environment parameters, the first virtual reality device caches them.
The first virtual reality device may be a head-mounted device, such as virtual reality glasses or a virtual reality helmet. The first virtual reality device may receive the user's setting instruction for the screen area corresponding to the virtual teaching application and, after receiving the setting instruction, send the setting instruction to the backend server.
The setting instruction may be used to send setting request information to the backend server and cause the backend server to return setting response information according to the setting request information, so as to retrieve the teaching environment parameters stored on the backend server; the teaching scene of the virtual teaching application may then be set up according to the teaching environment parameters.
In addition, the setting instruction refers to a click operation performed by the user on the virtual screen area of the first virtual reality device. The click operation includes, but is not limited to, touch-screen operations of various types, such as a press operation, a double-click operation, or a swipe operation. Generally, in a terminal with a touch-screen function, the touch screen comprises at least three layers: a cover-glass layer, a touch panel layer, and a display panel layer. The cover-glass layer is a protective layer, the touch panel layer senses the user's touch operations, and the display panel layer displays images. Technologies that integrate the touch panel layer with the display panel layer already exist.
Step S203: Based on an active split-screen technique, the first virtual reality device converts the teaching environment parameters cached in the graphics card buffer into three-dimensional classroom interface data, and fuses the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene; the first capture time is used to record the learning duration of the target subject.
Specifically, according to the active split-screen technique, the first virtual reality device performs split-screen processing on the teaching environment parameters cached in the system's underlying graphics card buffer, so that after equal-ratio split-screen processing the teaching environment parameters displayed by the system can be converted into three-dimensional teaching interface data. The first virtual reality device then fuses the captured first environment information with the three-dimensional teaching interface data to generate the first virtual teaching scene, and displays the first virtual teaching scene.
The active split-screen technique implements split-screen processing in the system's underlying driver: the split is carried out from the system's low-level display buffer, where a proprietary algorithm performs equal-ratio split-screen processing at the FrameBuffer layer, so that all content displayed by the system can be split and, viewed through virtual reality glasses, produces a 3D display effect. In addition, the first virtual reality device may capture, through the camera, the environment data of the real environment in which the user is currently located, and estimate and build the corresponding teaching environment parameter model from the environment data, so as to retrieve the corresponding teaching environment parameters from the backend server and fuse them with the captured first environment information to generate and display the first virtual teaching scene according to the first capture time.
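A toy model of equal-ratio split-screen at the framebuffer level follows. The actual driver algorithm is described only as proprietary, so this sketch merely illustrates the idea of producing a side-by-side layout from one rendered frame; the flat-list frame representation and the naive downscale are assumptions.

```python
def active_split_screen(frame, width):
    """Model of equal-ratio split-screen: each scanline is downscaled
    to half width (here by dropping every other pixel) and repeated in
    the left and right halves, giving the side-by-side layout that VR
    glasses present as a 3D view. `frame` is a flat, row-major list of
    pixels; `width` is the pixels per row."""
    rows = [frame[i:i + width] for i in range(0, len(frame), width)]
    split_rows = []
    for row in rows:
        half = row[::2]                 # naive 2:1 horizontal downscale
        split_rows.append(half + half)  # same content for both eyes
    return split_rows
```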
Step S204: The first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and presents the teaching content of the target subject based on the character model.
Specifically, the first virtual reality device receives, through speech recognition, the operation instruction for the target subject in the first virtual teaching scene. The operation instruction includes a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction. The first virtual reality device may then select the corresponding character teaching model according to the model selection operation instruction, present the teaching content of the target subject according to the teaching content invocation operation instruction, and assign a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
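Routing the three recognized instruction kinds to their respective actions can be sketched as a simple dispatch table. The device class, handler names, and instruction dictionary format are illustrative assumptions; only the three instruction kinds come from the description above.

```python
class VRTeachingDevice:
    """Minimal stand-in for the first virtual reality device."""

    def select_teaching_model(self, instruction):
        return "model:" + instruction["value"]

    def show_teaching_content(self, instruction):
        return "content:" + instruction["value"]

    def assign_target_seat(self, instruction):
        return "seat:" + instruction["value"]


def dispatch_instruction(device, instruction):
    """Route a recognized voice instruction to the matching handler."""
    handlers = {
        "model_selection": device.select_teaching_model,
        "content_invocation": device.show_teaching_content,
        "seat_selection": device.assign_target_seat,
    }
    kind = instruction["kind"]
    if kind not in handlers:
        raise ValueError("unknown operation instruction: " + kind)
    return handlers[kind](instruction)
```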
For example, taking the model selection operation instruction, the first virtual reality device may receive through speech recognition a model selection operation instruction for a target subject (for example, physics) in the first virtual teaching scene, so as to select the user's favorite character teaching model. The character teaching model may be a cartoon animal model or a lifelike celebrity simulation model.
As another example, taking the teaching content invocation operation instruction, after the first virtual reality device receives through speech recognition a teaching content invocation operation instruction for the target subject in the first virtual teaching scene, it can overlay the virtual images retrieved from the backend server onto the real-world picture; that is, within a certain projection distance, the computer-generated virtual image is fused into and registered with the real-world picture at the user's location through the active split-screen technique, and the superimposed three-dimensional image is finally output. For example, a model of a dinosaur extinct for many years can be displayed to the user and placed around the user's location, or the user can be shown a simulated scene of the launch and ascent of the Shenzhou-11 spacecraft, presenting vivid 3D images that let more users learn about the technology around them in the most realistic way.
Optionally, after steps S201-S204 are performed, the data processing method further includes the following step:
Step S205: Based on a wireless video transmission technique, the first virtual reality device may further send the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device, so that the user terminal displays the first virtual teaching scene.
The user terminal includes a smart TV, a notebook computer, a handheld computer, a gaming peripheral, or a tablet computer.
As can be seen, the first virtual reality device first captures the first environment information through the camera and records the first capture time; second, it receives the setting instruction for the virtual teaching application, and receives and caches the teaching environment parameters returned by the backend server according to the setting instruction; then, based on the active split-screen technique, it converts the teaching environment parameters cached in the graphics card buffer into three-dimensional classroom interface data and fuses the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene, the first capture time being used to record the learning duration of the target subject; finally, it receives the operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and presents the teaching content of the target subject based on the character model. With the present invention, the teaching content displayed by the system is split in equal ratio in the underlying driver, so that application interfaces in the system achieve an active split-screen effect. This fundamentally improves the 3D display effect, enriches the user's learning resources, and provides the user with diverse and realistic virtual teaching scenes.
Further, referring to FIG. 3, a schematic flowchart of yet another virtual reality-based data processing method according to an embodiment of the present invention, the data processing method includes at least the following steps:
Step S301: A first virtual reality device captures first environment information through a camera and records a first capture time.
Specifically, the first virtual reality device may capture, through a front or rear camera, the environment data of the environment in which the current user is located, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information. In addition, the camera may also provide functions such as video calling and projection.
The first environment information is obtained as follows: the first virtual reality device detects the user's head rotation angle in real time, captures in real time the spatial data within the corresponding camera coverage according to the rotation angle, integrates the spatial data corresponding to each angle to generate the environment data, and uses the environment data as the first environment information.
Step S302: The first virtual reality device receives a setting instruction for the virtual teaching application and sends the setting instruction to the backend server.
Step S303: The backend server obtains teaching environment parameters according to the setting instruction and sends the teaching environment parameters back to the first virtual reality device. The teaching environment parameters include a virtual projector, a virtual desk, a virtual seat, and a virtual classroom.
Step S304: The first virtual reality device caches the received teaching environment parameters.
Specifically, the first virtual reality device may be a head-mounted device, such as virtual reality glasses or a virtual reality helmet. The first virtual reality device may receive the user's setting instruction for the screen area corresponding to the virtual teaching application and, after receiving the setting instruction, send the setting instruction to the backend server.
The setting instruction may be used to send setting request information to the backend server and cause the backend server to return setting response information according to the setting request information, so as to retrieve the teaching environment parameters stored on the backend server; the teaching scene of the virtual teaching application may then be set up according to the teaching environment parameters.
In addition, the setting instruction refers to a click operation performed by the user on the virtual screen area of the first virtual reality device. The click operation includes, but is not limited to, touch-screen operations of various types, such as a press operation, a double-click operation, or a swipe operation. Generally, in a terminal with a touch-screen function, the touch screen comprises at least three layers: a cover-glass layer, a touch panel layer, and a display panel layer. The cover-glass layer is a protective layer, the touch panel layer senses the user's touch operations, and the display panel layer displays images. Technologies that integrate the touch panel layer with the display panel layer already exist.
Step S305: The first virtual reality device fuses the cached teaching environment parameters with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene.
Specifically, the first virtual reality device may capture, through the camera, the environment data of the real environment in which the user is currently located, and estimate and build a model of the corresponding teaching environment parameters from the environment data, so as to retrieve the corresponding teaching environment parameters from the backend server. It then fuses the teaching environment parameters with the captured first environment information to generate the first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene.
Step S306: The first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects a corresponding character teaching model according to the operation instruction, and presents the teaching content of the target subject based on the character model.
Specifically, the first virtual reality device receives, through speech recognition, the operation instruction for the target subject in the first virtual teaching scene. The operation instruction includes a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction. The first virtual reality device may then select the corresponding character teaching model according to the model selection operation instruction, present the teaching content of the target subject according to the teaching content invocation operation instruction, and assign a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
Step S307: A second virtual reality device sends a join request to the backend server.
Step S308: The backend server forwards the received join request to the first virtual reality device.
Step S309: The first virtual reality device generates an acknowledgement response message corresponding to the join request and sends the acknowledgement response message to the second virtual reality device.
Step S310: The second virtual reality device uploads second environment information to the backend server according to the acknowledgement response message and records a second capture time.
具体地,所述第二虚拟现实设备可通过前置或后置摄像头捕捉当前环境数据,并将所述环境数据作为第二环境信息,并记录所述第二环境信息对应的第二捕捉时间,此外,所述摄像头还具有视频通话和投影等功能。在所述第二虚拟现实设备接收到所述第一虚拟现实设备发送来的所述确认响应消息后,向所述后台服务器上传所述第二环境信息;Specifically, the second virtual reality device may capture the current environment data by using the front or rear camera, and use the environment data as the second environment information, and record the second capture time corresponding to the second environment information. In addition, the camera also has functions such as video calling and projection. After the second virtual reality device receives the confirmation response message sent by the first virtual reality device, uploading the second environment information to the background server;
其中，所述第二环境信息的获得是通过所述第二虚拟现实设备实时检测该用户的头部转动角度，捕捉对应摄像区域范围内的空间数据，并将所述空间数据进行整合而生成所述环境数据，并将所述环境数据作为所述第二环境信息上传给所述后台服务器以获取所述第二捕捉时间对应的目标科目的第二教学场景数据。The second environment information is obtained by the second virtual reality device detecting the user's head rotation angle in real time, capturing spatial data within the corresponding imaging area, and integrating the spatial data to generate the environment data; the environment data is then uploaded to the background server as the second environment information, so as to obtain the second teaching scene data of the target subject corresponding to the second capture time.
步骤S311，所述后台服务器将所述第一虚拟教学场景与所述第二环境信息进行融合，生成与所述第一虚拟现实设备对应的第一教学场景数据，并生成与所述第二虚拟现实设备对应的第二教学场景数据；Step S311, the background server fuses the first virtual teaching scene with the second environment information, generates first teaching scene data corresponding to the first virtual reality device, and generates second teaching scene data corresponding to the second virtual reality device;
具体地，所述后台服务器在接收到所述第二虚拟现实设备所上传的第二环境信息后，将所述第二环境信息与所述第一虚拟教学场景进行数据融合，以生成与所述第一虚拟现实设备对应的第一教学场景数据，并发送给所述第一虚拟现实设备；与此同时，将生成的与所述第二虚拟现实设备对应的第二教学场景数据发送给所述第二虚拟现实设备，以使两虚拟现实设备在相应的虚拟教学场景中能相互显示不同方位或视角下的模拟教学场景。Specifically, after receiving the second environment information uploaded by the second virtual reality device, the background server fuses the second environment information with the first virtual teaching scene to generate first teaching scene data corresponding to the first virtual reality device, and sends it to the first virtual reality device; at the same time, the generated second teaching scene data corresponding to the second virtual reality device is sent to the second virtual reality device, so that the two virtual reality devices can each display the simulated teaching scene from a different orientation or viewing angle within the corresponding virtual teaching scene.
步骤S312,所述第一虚拟现实设备接收所述后台服务器发送的所述第一教学场景数据,并结合所述第二捕捉时间,更新显示所述第一虚拟教学场景;Step S312, the first virtual reality device receives the first teaching scene data sent by the background server, and updates and displays the first virtual teaching scene according to the second capturing time;
步骤S313,所述第二虚拟现实设备接收所述后台服务器发送的所述第二教学场景数据,并结合所述第二捕捉时间,生成并显示第二虚拟教学场景。Step S313, the second virtual reality device receives the second teaching scene data sent by the background server, and generates and displays a second virtual teaching scene according to the second capturing time.
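The message flow of steps S307-S313 can be summarized in a minimal sketch. The disclosure defines no concrete interfaces, so every class, method, and field name below is an illustrative assumption, not part of the claimed method:

```python
# Illustrative sketch of the S307-S313 exchange between the second device,
# the background server, and the first device. All names are assumptions.

class FirstDevice:
    """Plays the role of the first virtual reality device."""
    def __init__(self, first_scene):
        self.scene = first_scene

    def confirm(self, join_request):
        # S309: generate a confirmation response for the forwarded join request
        return {"ack": True, "request": join_request}

    def update_scene(self, first_scene_data):
        # S312: update and display the first virtual teaching scene
        self.scene = first_scene_data


class BackgroundServer:
    def __init__(self, first_device):
        self.first_device = first_device

    def handle_join(self, join_request):
        # S308: forward the join request to the first device, return its ack
        return self.first_device.confirm(join_request)

    def fuse(self, second_env, second_capture_time):
        # S311: fuse the first virtual teaching scene with the second
        # device's environment info, producing one data set per device
        # that differs only in viewpoint.
        base = {"scene": self.first_device.scene,
                "env": second_env,
                "time": second_capture_time}
        return dict(base, view="first"), dict(base, view="second")
```

Usage mirrors the step order: the server forwards a join request (S307/S308) and relays the acknowledgement (S309); after the second device uploads its environment info (S310), `fuse` yields the two per-device scene data sets (S311), which each device then renders (S312/S313).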
可选地，在执行完上述步骤S301-S313后，所述数据处理方法还包括执行如下步骤，所述第一虚拟现实设备还可基于无线视频传输技术，将所述第一虚拟教学场景发送至与所述第一虚拟现实设备具有无线连接关系的用户终端，以使所述用户终端显示所述第一虚拟教学场景；Optionally, after the foregoing steps S301-S313 are performed, the data processing method further includes the following step: the first virtual reality device may further send, based on wireless video transmission technology, the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device, so that the user terminal displays the first virtual teaching scene;
其中，所述用户终端包括：智能电视、笔记本电脑、掌上电脑、游戏外设和平板电脑。The user terminal includes: a smart TV, a laptop, a handheld computer, a gaming peripheral, and a tablet.
由上可见，所述第一虚拟现实设备首先通过摄像头捕捉第一环境信息，并记录第一捕捉时间；其次，第一虚拟现实设备接收对虚拟教学应用的设置指令，并接收和缓存后台服务器根据所述设置指令返回的教学环境参数；紧接着，所述第一虚拟现实设备将缓存处理后的所述教学环境参数与捕捉到的所述第一环境信息进行融合，并根据所述第一捕捉时间生成第一虚拟教学场景，并显示所述第一虚拟教学场景；随后，所述第一虚拟现实设备接收对所述第一虚拟教学场景中目标科目的操作指令，并根据所述操作指令选择相应的人物教学模型，并基于所述人物模型展示所述目标科目的教学内容；然后，所述后台服务器在接收到所述第二虚拟现实设备的加入请求后，转发所述加入请求给所述第一虚拟现实设备，以使所述第二虚拟现实设备在接收到所述确认响应消息后，上传所述第二环境信息；最后，所述后台服务器将所述第一虚拟教学场景与所述第二环境信息进行数据融合，生成与所述第一虚拟现实设备对应的第一教学场景数据，并生成与所述第二虚拟现实设备对应的第二教学场景数据，并分别发送给所述第一虚拟现实设备和所述第二虚拟现实设备。因此，采用本发明，不仅能为用户提供贴合真实的模拟教学场景，还能为多个用户提供丰富的虚拟互动平台，进而为用户打造更加丰富和更多样化的视觉体验，为用户提供贴合现实的教学场景，极大地帮助用户理解教学内容，提高学习效率。As can be seen from the above, the first virtual reality device first captures the first environment information through the camera and records the first capture time; next, the first virtual reality device receives a setting instruction for the virtual teaching application, and receives and caches the teaching environment parameters returned by the background server according to the setting instruction; the first virtual reality device then fuses the cached teaching environment parameters with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; subsequently, the first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model; then, after receiving the join request of the second virtual reality device, the background server forwards the join request to the first virtual reality device, so that the second virtual reality device uploads the second environment information after receiving the confirmation response message; finally, the background server fuses the first virtual teaching scene with the second environment information, generates first teaching scene data corresponding to the first virtual reality device and second teaching scene data corresponding to the second virtual reality device, and sends them to the first virtual reality device and the second virtual reality device respectively. Therefore, the present invention not only provides users with a realistic simulated teaching scene, but also provides multiple users with a rich virtual interactive platform, thereby creating a richer and more diverse visual experience, greatly helping users understand the teaching content and improving learning efficiency.
进一步地，请参见图4，是本发明实施例提供的一种基于虚拟现实的数据处理系统的结构示意图，如图4所示，所述数据处理系统1包括：第一虚拟现实设备10和后台服务器20；Further, please refer to FIG. 4, which is a schematic structural diagram of a virtual reality-based data processing system according to an embodiment of the present invention. As shown in FIG. 4, the data processing system 1 includes: a first virtual reality device 10 and a background server 20;
所述第一虚拟现实设备10,用于通过摄像头捕捉第一环境信息,并记录第一捕捉时间;The first virtual reality device 10 is configured to capture first environment information by using a camera, and record a first capture time;
具体地，所述第一虚拟现实设备10可通过前置或后置摄像头捕捉当前用户所处的环境数据，并将所述环境数据作为第一环境信息，并记录所述第一环境信息对应的第一捕捉时间，此外，所述摄像头还具有视频通话和投影等功能。Specifically, the first virtual reality device 10 may capture the environment data of the user's current surroundings by using its front or rear camera, use the environment data as the first environment information, and record the first capture time corresponding to the first environment information; in addition, the camera also has functions such as video calling and projection.
其中，所述第一环境信息的获得是通过所述第一虚拟现实设备10实时检测用户的头部转动角度，并根据所述转动角度实时捕捉到相应摄像区域范围内的空间数据，并将各个角度所对应的空间数据进行整合而生成所述环境数据，并将所述环境数据作为所述第一环境信息。The first environment information is obtained by the first virtual reality device 10 detecting the user's head rotation angle in real time, capturing in real time the spatial data within the corresponding imaging area according to the rotation angle, and integrating the spatial data corresponding to each angle to generate the environment data, which is used as the first environment information.
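The per-angle integration step described above can be sketched in a few lines. The disclosure specifies no data format, so the `(angle, data)` sample representation and the keyed merge below are purely illustrative assumptions:

```python
# Minimal sketch of integrating per-angle spatial captures into one
# environment-data record. The (head_rotation_angle, data) sample format
# is an assumption; the patent gives no concrete representation.
def integrate_spatial_data(samples):
    """Merge spatial data captured at each head-rotation angle into one
    environment-data record; for a real-time stream, a later capture at
    the same angle replaces the earlier one."""
    environment = {}
    for angle, data in samples:
        environment[angle] = data
    return environment
```

The resulting record, keyed by rotation angle, stands in for the "environment data" that the device uploads as the first environment information.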
所述第一虚拟现实设备10，还用于接收对虚拟教学应用的设置指令，并发送所述设置指令到后台服务器；The first virtual reality device 10 is further configured to receive a setting instruction for the virtual teaching application, and send the setting instruction to the background server;
所述后台服务器20，用于根据所述设置指令获取教学环境参数，并将所述教学环境参数发送回所述第一虚拟现实设备；其中，所述教学环境参数包括：虚拟投影仪、虚拟课桌、虚拟座位和虚拟教室；The background server 20 is configured to acquire teaching environment parameters according to the setting instruction, and send the teaching environment parameters back to the first virtual reality device; the teaching environment parameters include: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
所述第一虚拟现实设备10,还用于根据接收到的所述教学环境参数,对所述教学环境参数进行缓存处理;The first virtual reality device 10 is further configured to cache the teaching environment parameter according to the received teaching environment parameter;
具体地，所述第一虚拟现实设备10接收对虚拟教学应用的设置指令，并发送所述设置指令到后台服务器20，以使所述后台服务器20根据所述设置指令获取教学环境参数，并将所述教学环境参数发送回所述第一虚拟现实设备10；随后，所述第一虚拟现实设备10根据接收到的所述教学环境参数，对所述教学环境参数进行缓存处理。Specifically, the first virtual reality device 10 receives a setting instruction for the virtual teaching application and sends the setting instruction to the background server 20, so that the background server 20 obtains the teaching environment parameters according to the setting instruction and sends the teaching environment parameters back to the first virtual reality device 10; subsequently, the first virtual reality device 10 caches the received teaching environment parameters.
其中，所述教学环境参数包括：虚拟投影仪、虚拟课桌、虚拟座位和虚拟教室；The teaching environment parameters include: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
此外，所述第一虚拟现实设备10可为头戴式设备，包括：虚拟现实眼镜或虚拟现实头盔；所述第一虚拟现实设备可用于接收用户对所述虚拟教学应用对应的屏幕区域的设置指令，并在所述第一虚拟现实设备接收到所述设置指令后，向所述后台服务器发送所述设置指令；In addition, the first virtual reality device 10 may be a head-mounted device, including virtual reality glasses or a virtual reality helmet; the first virtual reality device may be configured to receive a user's setting instruction for the screen area corresponding to the virtual teaching application, and after receiving the setting instruction, send the setting instruction to the background server;
其中，所述第一虚拟现实设备10向所述后台服务器20发送设置请求信息，并使所述后台服务器20根据所述设置请求信息返回设置响应信息，以提取所述后台服务器20上存储的教学环境参数，并可根据所述教学环境参数对所述虚拟教学应用的教学场景进行设置；其中，所述教学环境参数包括：虚拟投影仪、虚拟课桌、虚拟座位和虚拟教室；The first virtual reality device 10 sends setting request information to the background server 20, and the background server 20 returns setting response information according to the setting request information, so as to extract the teaching environment parameters stored on the background server 20; the teaching scene of the virtual teaching application can then be set according to the teaching environment parameters, which include: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
其中，所述设置指令是指用户对所述第一虚拟现实设备的虚拟屏幕区域执行点击操作。其中，所述点击操作包括但不限于：按压操作、双击操作或者滑屏操作等各类型触摸触控屏的操作。通常，在具有触控屏功能的终端中，其触控屏的结构包括至少三层：屏幕玻璃层、触控面板层和显示面板层。其中屏幕玻璃层为保护层，触控面板层用于感知用户的触控操作，显示面板层用于显示图像。且目前已有相关技术能使触控面板层和显示面板层融合。The setting instruction means that the user performs a click operation on the virtual screen area of the first virtual reality device. The click operation includes, but is not limited to, any type of touch-screen operation such as a press operation, a double-click operation, or a swipe operation. Generally, in a terminal with a touch screen, the touch screen comprises at least three layers: a screen glass layer, a touch panel layer, and a display panel layer. The screen glass layer is a protective layer, the touch panel layer senses the user's touch operations, and the display panel layer displays images. Existing technology already allows the touch panel layer and the display panel layer to be merged.
所述第一虚拟现实设备10，还用于将缓存处理后的所述教学环境参数与捕捉到的所述第一环境信息进行融合，并根据所述第一捕捉时间生成第一虚拟教学场景，并显示所述第一虚拟教学场景；The first virtual reality device 10 is further configured to fuse the cached teaching environment parameters with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;
具体地，所述第一虚拟现实设备10，可用于通过所述摄像头捕捉用户当前所处真实环境下的所述环境数据，并根据所述环境数据估算和形成相应教学环境参数的模型，以从所述后台服务器20上提取到相应的所述教学环境参数，并将所述教学环境参数与捕捉到的所述第一环境信息进行数据融合，以根据所述第一捕捉时间生成第一虚拟教学场景，并显示所述第一虚拟教学场景；Specifically, the first virtual reality device 10 may capture, through the camera, the environment data of the real environment where the user is currently located, and estimate and form a model of the corresponding teaching environment parameters according to the environment data, so as to extract the corresponding teaching environment parameters from the background server 20; it then fuses the teaching environment parameters with the captured first environment information to generate a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
所述第一虚拟现实设备10，还用于接收对所述第一虚拟教学场景中目标科目的操作指令，并根据所述操作指令选择相应的人物教学模型，并基于所述人物模型展示所述目标科目的教学内容；The first virtual reality device 10 is further configured to receive an operation instruction for the target subject in the first virtual teaching scene, select a corresponding character teaching model according to the operation instruction, and display the teaching content of the target subject based on the character model;
具体地，所述第一虚拟现实设备10，具体用于通过语音识别接收对所述第一虚拟教学场景中目标科目的操作指令；其中，所述操作指令包括：模型选择操作指令、教学内容调用操作指令以及座位选择操作指令；此时，所述第一虚拟现实设备10，还可用于根据所述模型选择操作指令选择相应的人物教学模型，并根据所述教学内容调用操作指令展示所述目标科目的教学内容，并根据所述座位选择操作指令以及所述第一捕捉时间，安排教学环境参数中的虚拟目标座位。Specifically, the first virtual reality device 10 is configured to receive, by voice recognition, an operation instruction for the target subject in the first virtual teaching scene, where the operation instruction includes a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction. The first virtual reality device 10 may then select the corresponding character teaching model according to the model selection operation instruction, display the teaching content of the target subject according to the teaching content invocation operation instruction, and arrange a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
由上可见，所述第一虚拟现实设备10，首先通过摄像头捕捉第一环境信息，并记录第一捕捉时间；其次，所述第一虚拟现实设备10接收对虚拟教学应用的设置指令，并接收和缓存后台服务器20根据所述设置指令返回的教学环境参数；然后，所述第一虚拟现实设备10将缓存处理后的所述教学环境参数与捕捉到的所述第一环境信息进行融合，并根据所述第一捕捉时间生成第一虚拟教学场景，并显示所述第一虚拟教学场景；最后，所述第一虚拟现实设备10接收对所述第一虚拟教学场景中目标科目的操作指令，并根据所述操作指令选择相应的人物教学模型，并基于所述人物模型展示所述目标科目的教学内容。因此，采用本发明，不仅能为用户提供贴合真实的模拟教学场景，还能为用户打造更加丰富和更多样化的视觉体验，充分调动用户的感觉和思维，极大地提高用户的学习效率。As can be seen from the above, the first virtual reality device 10 first captures the first environment information through the camera and records the first capture time; next, the first virtual reality device 10 receives a setting instruction for the virtual teaching application, and receives and caches the teaching environment parameters returned by the background server 20 according to the setting instruction; then, the first virtual reality device 10 fuses the cached teaching environment parameters with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; finally, the first virtual reality device 10 receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model. Therefore, the present invention not only provides users with a realistic simulated teaching scene, but also creates a richer and more diverse visual experience, fully engaging the user's senses and thinking and greatly improving learning efficiency.
进一步地，请参见图5，是本发明实施例提供的另一种基于虚拟现实的数据处理系统，所述数据处理系统1包括上述图4对应的具体实施例中的所述第一虚拟现实设备10和后台服务器20，进一步地，所述数据处理系统1还包括第二虚拟现实设备30；Further, please refer to FIG. 5, which shows another virtual reality-based data processing system according to an embodiment of the present invention. The data processing system 1 includes the first virtual reality device 10 and the background server 20 of the specific embodiment corresponding to FIG. 4; further, the data processing system 1 also includes a second virtual reality device 30;
所述第二虚拟现实设备30,用于向所述后台服务器20发送加入请求;The second virtual reality device 30 is configured to send a join request to the background server 20;
所述后台服务器20,还用于将接收到的所述加入请求转发给所述第一虚拟现实设备10;The background server 20 is further configured to forward the received join request to the first virtual reality device 10;
所述第一虚拟现实设备10,还用于生成与所述加入请求对应的确认响应消息,并将所述确认响应消息发送到所述第二虚拟现实设备30;The first virtual reality device 10 is further configured to generate an acknowledgment response message corresponding to the join request, and send the acknowledgment response message to the second virtual reality device 30;
所述第二虚拟现实设备30,还用于根据所述确认响应消息向所述后台服务器20上传第二环境信息,并记录第二捕捉时间; The second virtual reality device 30 is further configured to upload the second environment information to the background server 20 according to the confirmation response message, and record the second capture time;
所述后台服务器20，还用于将所述第一虚拟教学场景与所述第二环境信息进行融合，生成与所述第一虚拟现实设备10对应的第一教学场景数据，并生成与所述第二虚拟现实设备30对应的第二教学场景数据；The background server 20 is further configured to fuse the first virtual teaching scene with the second environment information, generate first teaching scene data corresponding to the first virtual reality device 10, and generate second teaching scene data corresponding to the second virtual reality device 30;
所述第一虚拟现实设备10,还用于接收所述后台服务器20发送的所述第一教学场景数据,并结合所述第二捕捉时间,更新显示所述第一虚拟教学场景;The first virtual reality device 10 is further configured to receive the first teaching scene data sent by the background server 20, and update and display the first virtual teaching scene according to the second capturing time;
所述第二虚拟现实设备30,还用于接收所述后台服务器20发送的所述第二教学场景数据,并结合所述第二捕捉时间,生成并显示第二虚拟教学场景。The second virtual reality device 30 is further configured to receive the second teaching scene data sent by the background server 20, and generate and display a second virtual teaching scene according to the second capturing time.
可选地，在图4或图5给出的具体实施例中，所述第一虚拟现实设备10，还可用于基于无线视频传输技术，将所述第一虚拟教学场景发送至与所述第一虚拟现实设备10具有无线连接关系的用户终端，以使所述用户终端显示所述第一虚拟教学场景。Optionally, in the specific embodiments shown in FIG. 4 or FIG. 5, the first virtual reality device 10 may further send, based on wireless video transmission technology, the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device 10, so that the user terminal displays the first virtual teaching scene.
由此可见，第一虚拟现实设备10首先接收对虚拟教学应用的设置指令，并发送所述设置指令到后台服务器20；其次，所述后台服务器20根据所述设置指令获取教学环境参数，并将所述教学环境参数发送回所述第一虚拟现实设备10；然后，所述第一虚拟现实设备10将接收到的所述教学环境参数进行缓存处理；并将缓存处理后的所述教学环境参数与捕捉到的所述第一环境信息进行融合，并根据所述第一捕捉时间生成第一虚拟教学场景，并显示所述第一虚拟教学场景；紧接着，所述第一虚拟现实设备10接收对所述第一虚拟教学场景中目标科目的操作指令，并根据所述操作指令选择相应的人物教学模型，并基于所述人物模型展示所述目标科目的教学内容；最后，所述后台服务器20在接收到所述第二虚拟现实设备30的加入请求后，转发所述加入请求给所述第一虚拟现实设备10，以使所述第二虚拟现实设备30在接收到所述确认响应消息后，上传所述第二环境信息；随后，所述后台服务器20将所述第一虚拟教学场景与所述第二环境信息进行融合，生成与所述第一虚拟现实设备10对应的第一教学场景数据，并生成与所述第二虚拟现实设备30对应的第二教学场景数据。可见，采用本发明，可为多个用户提供贴合真实的虚拟互动平台，丰富用户的学习资源，让用户充分地享受到沉浸式的教学体验，以提高用户的学习效率。It can be seen that the first virtual reality device 10 first receives a setting instruction for the virtual teaching application and sends the setting instruction to the background server 20; next, the background server 20 acquires the teaching environment parameters according to the setting instruction and sends them back to the first virtual reality device 10; the first virtual reality device 10 then caches the received teaching environment parameters, fuses the cached teaching environment parameters with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene; subsequently, the first virtual reality device 10 receives an operation instruction for the target subject in the first virtual teaching scene, selects the corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model; finally, after receiving the join request of the second virtual reality device 30, the background server 20 forwards the join request to the first virtual reality device 10, so that the second virtual reality device 30 uploads the second environment information after receiving the confirmation response message; the background server 20 then fuses the first virtual teaching scene with the second environment information, generates first teaching scene data corresponding to the first virtual reality device 10, and generates second teaching scene data corresponding to the second virtual reality device 30. It can be seen that the present invention provides multiple users with a realistic virtual interactive platform, enriches users' learning resources, and lets users fully enjoy an immersive teaching experience, thereby improving learning efficiency.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程，是可以通过计算机程序来指令相关的硬件来完成，所述的程序可存储于一计算机可读取存储介质中，该程序在执行时，可包括如上述各方法的实施例的流程。其中，所述的存储介质可为磁碟、光盘、只读存储记忆体（Read-Only Memory，ROM）或随机存储记忆体（Random Access Memory，RAM）等。A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
以上所揭露的仅为本发明较佳实施例而已,当然不能以此来限定本发明之权利范围,因此依本发明权利要求所作的等同变化,仍属本发明所涵盖的范围。 The above is only the preferred embodiment of the present invention, and the scope of the present invention is not limited thereto, and thus equivalent changes made in the claims of the present invention are still within the scope of the present invention.

Claims (10)

  1. 一种基于虚拟现实的数据处理方法,其特征在于,包括:A data processing method based on virtual reality, characterized in that it comprises:
    第一虚拟现实设备通过摄像头捕捉第一环境信息,并记录第一捕捉时间;The first virtual reality device captures the first environment information through the camera, and records the first capture time;
    所述第一虚拟现实设备接收对虚拟教学应用的设置指令,并发送所述设置指令到后台服务器;The first virtual reality device receives a setting instruction for the virtual teaching application, and sends the setting instruction to the background server;
所述后台服务器根据所述设置指令获取教学环境参数，并将所述教学环境参数发送回所述第一虚拟现实设备；其中，所述教学环境参数包括：虚拟投影仪、虚拟课桌、虚拟座位和虚拟教室；The background server acquires teaching environment parameters according to the setting instruction, and sends the teaching environment parameters back to the first virtual reality device; the teaching environment parameters include: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
    所述第一虚拟现实设备根据接收到的所述教学环境参数,对所述教学环境参数进行缓存处理;The first virtual reality device caches the teaching environment parameter according to the received teaching environment parameter;
所述第一虚拟现实设备将缓存处理后的所述教学环境参数与捕捉到的所述第一环境信息进行融合，并根据所述第一捕捉时间生成第一虚拟教学场景，并显示所述第一虚拟教学场景；The first virtual reality device fuses the cached teaching environment parameters with the captured first environment information, generates a first virtual teaching scene according to the first capture time, and displays the first virtual teaching scene;
    所述第一虚拟现实设备接收对所述第一虚拟教学场景中目标科目的操作指令,并根据所述操作指令选择相应的人物教学模型,并基于所述人物模型展示所述目标科目的教学内容。The first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene, and selects a corresponding character teaching model according to the operation instruction, and displays the teaching content of the target subject based on the character model .
2. 根据权利要求1所述的方法，其特征在于，所述第一虚拟现实设备将缓存处理后的所述教学环境参数与捕捉到的所述第一环境信息进行融合，并根据所述第一捕捉时间生成第一虚拟教学场景，并显示所述第一虚拟教学场景，包括：The method according to claim 1, wherein the first virtual reality device fusing the cached teaching environment parameters with the captured first environment information, generating a first virtual teaching scene according to the first capture time, and displaying the first virtual teaching scene comprises:
基于主动分屏技术，将显卡缓存中所缓存的所述教学环境参数转换为三维教室界面数据，并根据所述第一捕捉时间将所述三维教室界面数据和捕捉到的所述第一环境信息进行融合，生成所述第一虚拟教学场景；其中，所述第一捕捉时间用于记录所述目标科目的学习时长。Based on active split-screen technology, converting the teaching environment parameters cached in the graphics card buffer into three-dimensional classroom interface data, and fusing the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene; wherein the first capture time is used to record the learning duration of the target subject.
3. 根据权利要求1所述的方法，其特征在于，所述第一虚拟现实设备接收对所述第一虚拟教学场景中目标科目的操作指令，并根据所述操作指令选择相应的人物教学模型，并基于所述人物模型展示所述目标科目的教学内容，包括：The method according to claim 1, wherein the first virtual reality device receiving an operation instruction for the target subject in the first virtual teaching scene, selecting a corresponding character teaching model according to the operation instruction, and displaying the teaching content of the target subject based on the character model comprises:
    所述第一虚拟现实设备通过语音识别接收对所述第一虚拟教学场景中目标科目的操作指令;其中,所述操作指令包括:模型选择操作指令、教学内容调用操作指令以及座位选择操作指令;The first virtual reality device receives an operation instruction for the target subject in the first virtual teaching scene by using voice recognition; wherein the operation instruction includes: a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction;
所述第一虚拟现实设备根据所述模型选择操作指令选择相应的人物教学模型，并根据所述教学内容调用操作指令展示所述目标科目的教学内容，并根据所述座位选择操作指令以及所述第一捕捉时间，安排教学环境参数中的虚拟目标座位。The first virtual reality device selects the corresponding character teaching model according to the model selection operation instruction, displays the teaching content of the target subject according to the teaching content invocation operation instruction, and arranges a virtual target seat among the teaching environment parameters according to the seat selection operation instruction and the first capture time.
4. 根据权利要求1所述的方法，其特征在于，在所述第一虚拟现实设备接收对所述第一虚拟教学场景中目标科目的操作指令，并根据所述操作指令选择相应的人物教学模型，以使所述人物模型通过语音识别掌控所述目标科目的教学进度之后，还包括：The method according to claim 1, wherein after the first virtual reality device receives the operation instruction for the target subject in the first virtual teaching scene and selects the corresponding character teaching model according to the operation instruction, so that the character model controls the teaching progress of the target subject through voice recognition, the method further comprises:
    第二虚拟现实设备向所述后台服务器发送加入请求;The second virtual reality device sends a join request to the background server;
    所述后台服务器将接收到的所述加入请求转发给所述第一虚拟现实设备;The background server forwards the received join request to the first virtual reality device;
    所述第一虚拟现实设备生成与所述加入请求对应的确认响应消息,并将所述确认响应消息发送到所述第二虚拟现实设备;The first virtual reality device generates an acknowledgement response message corresponding to the join request, and sends the acknowledgement response message to the second virtual reality device;
    所述第二虚拟现实设备根据所述确认响应消息向所述后台服务器上传第二环境信息,并记录第二捕捉时间;The second virtual reality device uploads second environment information to the background server according to the confirmation response message, and records a second capture time;
所述后台服务器将所述第一虚拟教学场景与所述第二环境信息进行融合，生成与所述第一虚拟现实设备对应的第一教学场景数据，并生成与所述第二虚拟现实设备对应的第二教学场景数据；The background server fuses the first virtual teaching scene with the second environment information, generates first teaching scene data corresponding to the first virtual reality device, and generates second teaching scene data corresponding to the second virtual reality device;
所述第一虚拟现实设备接收所述后台服务器发送的所述第一教学场景数据，并结合所述第二捕捉时间，更新显示所述第一虚拟教学场景；The first virtual reality device receives the first teaching scene data sent by the background server, and, in combination with the second capture time, updates and displays the first virtual teaching scene;
    所述第二虚拟现实设备接收所述后台服务器发送的所述第二教学场景数据,并结合所述第二捕捉时间,生成并显示第二虚拟教学场景。 The second virtual reality device receives the second teaching scene data sent by the background server, and generates and displays a second virtual teaching scene in combination with the second capturing time.
  5. 根据权利要求1至4任一项所述的方法,其特征在于,还包括:The method according to any one of claims 1 to 4, further comprising:
所述第一虚拟现实设备基于无线视频传输技术，将所述第一虚拟教学场景发送至与所述第一虚拟现实设备具有无线连接关系的用户终端，以使所述用户终端显示所述第一虚拟教学场景。The first virtual reality device sends, based on wireless video transmission technology, the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device, so that the user terminal displays the first virtual teaching scene.
  6. 一种基于虚拟现实的数据处理系统,其特征在于,所述数据处理系统包括:第一虚拟现实设备和后台服务器;A data processing system based on virtual reality, wherein the data processing system comprises: a first virtual reality device and a background server;
    所述第一虚拟现实设备,用于通过摄像头捕捉第一环境信息,并记录第一捕捉时间;The first virtual reality device is configured to capture first environment information by using a camera, and record a first capture time;
    所述第一虚拟现实设备,还用于接收对虚拟教学应用的设置指令,并发送所述设置指令到后台服务器;The first virtual reality device is further configured to receive a setting instruction for the virtual teaching application, and send the setting instruction to the background server;
所述后台服务器，用于根据所述设置指令获取教学环境参数，并将所述教学环境参数发送回所述第一虚拟现实设备；其中，所述教学环境参数包括：虚拟投影仪、虚拟课桌、虚拟座位和虚拟教室；The background server is configured to acquire teaching environment parameters according to the setting instruction, and send the teaching environment parameters back to the first virtual reality device; the teaching environment parameters include: a virtual projector, a virtual desk, a virtual seat, and a virtual classroom;
    所述第一虚拟现实设备,还用于根据接收到的所述教学环境参数,对所述教学环境参数进行缓存处理;The first virtual reality device is further configured to cache the teaching environment parameter according to the received teaching environment parameter;
所述第一虚拟现实设备，还用于将缓存处理后的所述教学环境参数与捕捉到的所述第一环境信息进行融合，并根据所述第一捕捉时间生成第一虚拟教学场景，并显示所述第一虚拟教学场景；The first virtual reality device is further configured to fuse the cached teaching environment parameters with the captured first environment information, generate a first virtual teaching scene according to the first capture time, and display the first virtual teaching scene;
所述第一虚拟现实设备，还用于接收对所述第一虚拟教学场景中目标科目的操作指令，并根据所述操作指令选择相应的人物教学模型，并基于所述人物模型展示所述目标科目的教学内容。The first virtual reality device is further configured to receive an operation instruction for the target subject in the first virtual teaching scene, select a corresponding character teaching model according to the operation instruction, and display the teaching content of the target subject based on the character model.
  7. 根据权利要求6所述的数据处理系统,其特征在于,A data processing system according to claim 6 wherein:
所述第一虚拟现实设备，还具体用于基于主动分屏技术，将显卡缓存中所缓存的所述教学环境参数转换为三维教室界面数据，并根据所述第一捕捉时间将所述三维教室界面数据和捕捉到的所述第一环境信息进行融合，生成所述第一虚拟教学场景；The first virtual reality device is further specifically configured to, based on active split-screen technology, convert the teaching environment parameters cached in the graphics card buffer into three-dimensional classroom interface data, and fuse the three-dimensional classroom interface data with the captured first environment information according to the first capture time to generate the first virtual teaching scene;
    其中,所述第一捕捉时间用于记录所述目标科目的学习时长。 The first capture time is used to record the learning duration of the target subject.
  8. 根据权利要求6所述的数据处理系统,其特征在于,A data processing system according to claim 6 wherein:
所述第一虚拟现实设备，还用于通过语音识别接收对所述第一虚拟教学场景中目标科目的操作指令；其中，所述操作指令包括：模型选择操作指令、教学内容调用操作指令以及座位选择操作指令；The first virtual reality device is further configured to receive, by voice recognition, an operation instruction for the target subject in the first virtual teaching scene; the operation instruction includes: a model selection operation instruction, a teaching content invocation operation instruction, and a seat selection operation instruction;
    所述第一虚拟现实设备,还用于根据所述模型选择操作指令选择相应的人物教学模型,并根据所述教学内容调用操作指令展示所述目标科目的教学内容,并根据所述座位选择操作指令以及所述第一捕捉时间,安排教学环境参数中的虚拟目标座位。The first virtual reality device is further configured to select a corresponding character teaching model according to the model selection operation instruction, and invoke an operation instruction to display the teaching content of the target subject according to the teaching content, and select an operation according to the seat selection The instruction and the first capture time schedule virtual target seats in the teaching environment parameters.
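The three instruction kinds of claim 8 amount to a small dispatch table. The sketch below is a hypothetical illustration (the `handle_instruction` name and the dict shapes are not from the patent); it shows how a model selection, a teaching-content invocation, and a seat selection instruction might each update session state, with seat assignment also depending on the first capture time as the claim requires.

```python
def handle_instruction(instruction: dict, state: dict) -> dict:
    """Dispatch one voice-recognized operation instruction (sketch only)."""
    kind = instruction["type"]
    if kind == "model_select":
        # Select the corresponding character teaching model.
        state["teacher_model"] = instruction["model"]
    elif kind == "content_call":
        # Invoke the teaching content of the target subject.
        state["content"] = instruction["subject"]
    elif kind == "seat_select":
        # Seat assignment uses both the chosen seat and the first capture
        # time, so late joiners can be placed accordingly.
        state["seat"] = (instruction["seat"], state["first_capture_time"])
    else:
        raise ValueError(f"unknown instruction type: {kind}")
    return state
```

A real device would sit this dispatcher behind a speech-recognition front end that maps recognized phrases onto these instruction dictionaries.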
  9. The data processing system according to claim 6, wherein the data processing system further comprises a second virtual reality device and a background server;
    The second virtual reality device is configured to send a join request to the background server;
    The background server is configured to forward the received join request to the first virtual reality device;
    The first virtual reality device is further configured to generate a confirmation response message corresponding to the join request and send the confirmation response message to the second virtual reality device;
    The second virtual reality device is further configured to upload second environment information to the background server according to the confirmation response message, and to record a second capture time;
    The background server is further configured to fuse the first virtual teaching scene with the second environment information, generating first teaching scene data corresponding to the first virtual reality device and second teaching scene data corresponding to the second virtual reality device;
    The first virtual reality device is further configured to receive the first teaching scene data sent by the background server and, in combination with the second capture time, update the displayed first virtual teaching scene;
    The second virtual reality device is further configured to receive the second teaching scene data sent by the background server and, in combination with the second capture time, generate and display a second virtual teaching scene.
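The join handshake of claim 9 (join request, forwarded by the server, confirmed by the first device, followed by environment upload, server-side fusion, and per-device scene data) can be simulated end to end. All class and method names below are assumptions introduced for illustration, and for brevity the server here sends the same fused payload to both devices rather than producing genuinely distinct first and second teaching scene data.

```python
class FirstVRDevice:
    def __init__(self, scene: dict):
        self.scene = scene
    def on_join_request(self, request: dict) -> dict:
        # Generate the confirmation response message for the join request.
        return {"type": "join_ack", "request_id": request["id"]}
    def on_scene_data(self, data: dict, second_capture_time: float) -> None:
        # Update the displayed first virtual teaching scene.
        self.scene = {**self.scene, **data, "updated_at": second_capture_time}


class SecondVRDevice:
    def __init__(self, environment: dict):
        self.environment = environment  # second environment information
        self.scene = None
    def on_scene_data(self, data: dict, second_capture_time: float) -> None:
        # Generate and display the second virtual teaching scene.
        self.scene = {**data, "generated_at": second_capture_time}


class BackgroundServer:
    def __init__(self, first: FirstVRDevice):
        self.first = first
    def handle_join(self, second: SecondVRDevice, request: dict,
                    second_capture_time: float) -> None:
        # Forward the join request to the first device and relay its ack.
        ack = self.first.on_join_request(request)
        assert ack["type"] == "join_ack"
        # After the ack, the second device uploads its environment info;
        # the server fuses it with the first scene into per-device data.
        fused = {**self.first.scene, "remote_env": second.environment}
        self.first.on_scene_data(fused, second_capture_time)
        second.on_scene_data(fused, second_capture_time)
```

Routing every exchange through the server object mirrors the claim's topology: the two headsets never talk to each other directly.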
  10. The data processing system according to any one of claims 6 to 9, wherein:
    The first virtual reality device is further configured to send, based on wireless video transmission technology, the first virtual teaching scene to a user terminal that has a wireless connection with the first virtual reality device, so that the user terminal displays the first virtual teaching scene.
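A minimal sketch of claim 10's mirroring step, assuming a JSON payload as a stand-in for the wireless video stream (a real device would encode and stream rendered frames to the paired terminal); `UserTerminal` and `mirror_scene` are hypothetical names, not part of the patent.

```python
import json


class UserTerminal:
    """Stand-in for a phone or tablet wirelessly paired with the headset."""
    def __init__(self):
        self.frames = []
    def display(self, payload: str) -> None:
        # Decode the received payload and record it as a displayed frame.
        self.frames.append(json.loads(payload))


def mirror_scene(scene: dict, terminal: UserTerminal) -> None:
    # Serialize the first virtual teaching scene and push it to the
    # wirelessly connected user terminal for display.
    terminal.display(json.dumps(scene))
```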
PCT/CN2016/108118 2016-11-30 2016-11-30 Virtual reality-based data processing method and system WO2018098720A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/108118 WO2018098720A1 (en) 2016-11-30 2016-11-30 Virtual reality-based data processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/108118 WO2018098720A1 (en) 2016-11-30 2016-11-30 Virtual reality-based data processing method and system

Publications (1)

Publication Number Publication Date
WO2018098720A1 (en) 2018-06-07

Family

ID=62241079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/108118 WO2018098720A1 (en) 2016-11-30 2016-11-30 Virtual reality-based data processing method and system

Country Status (1)

Country Link
WO (1) WO2018098720A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105573592A (en) * 2016-01-29 2016-05-11 北京宝贝星球科技有限公司 Preschool education smart interaction system and method
CN105654800A (en) * 2016-04-05 2016-06-08 瞿琛 Simulation teaching system based on immersive VR (virtual reality) technology
CN105872575A (en) * 2016-04-12 2016-08-17 乐视控股(北京)有限公司 Live broadcasting method and apparatus based on virtual reality
CN106023693A (en) * 2016-05-25 2016-10-12 北京九天翱翔科技有限公司 Education system and method based on virtual reality technology and pattern recognition technology

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110794952B (en) * 2018-08-01 2023-11-28 北京鑫媒世纪科技发展有限公司 Virtual reality cooperative processing method, device and system
CN110794952A (en) * 2018-08-01 2020-02-14 北京鑫媒世纪科技发展有限公司 Virtual reality cooperative processing method, device and system
CN112740280A (en) * 2018-09-28 2021-04-30 苹果公司 Computationally efficient model selection
CN109460482A (en) * 2018-11-15 2019-03-12 平安科技(深圳)有限公司 Courseware methods of exhibiting, device, computer equipment and computer readable storage medium
CN109460482B (en) * 2018-11-15 2024-05-28 平安科技(深圳)有限公司 Courseware display method and device, computer equipment and computer readable storage medium
CN111464577B (en) * 2019-01-21 2022-05-27 阿里巴巴集团控股有限公司 Equipment control method and device
CN111464577A (en) * 2019-01-21 2020-07-28 阿里巴巴集团控股有限公司 Equipment control method and device
CN109817052A (en) * 2019-03-19 2019-05-28 河南理工大学 Coal seam gas-bearing capacity measurement experiment system and method based on virtual reality technology
CN110413112A (en) * 2019-07-11 2019-11-05 安徽皖新研学教育有限公司 A kind of safety experience educational system and its method based on virtual reality technology
CN110413130B (en) * 2019-08-15 2024-01-26 泉州师范学院 Virtual reality sign language learning, testing and evaluating method based on motion capture
CN110413130A (en) * 2019-08-15 2019-11-05 泉州师范学院 Virtual reality sign language study, test and evaluation method based on motion capture
CN111538412A (en) * 2020-04-21 2020-08-14 北京恒华伟业科技股份有限公司 Safety training method and device based on VR
CN111538412B (en) * 2020-04-21 2023-12-15 北京恒华伟业科技股份有限公司 VR-based safety training method and device
CN111540057A (en) * 2020-04-24 2020-08-14 湖南翰坤实业有限公司 VR scene action display method and system based on servo electric cylinder technology
CN111540057B (en) * 2020-04-24 2023-07-28 湖南翰坤实业有限公司 VR scene action display method and system based on servo electric cylinder technology
CN111862346A (en) * 2020-07-29 2020-10-30 重庆邮电大学 Teaching method for preparing oxygen from potassium permanganate based on virtual reality and internet
CN111862346B (en) * 2020-07-29 2023-11-07 重庆邮电大学 Experimental teaching method for preparing oxygen from potassium permanganate based on virtual reality and Internet
CN112286354A (en) * 2020-10-28 2021-01-29 上海盈赞通信科技有限公司 Education and teaching method and system based on virtual reality
CN112347507A (en) * 2020-10-29 2021-02-09 北京市商汤科技开发有限公司 Online data processing method, electronic device and storage medium
CN112445808A (en) * 2020-11-18 2021-03-05 傲普(上海)新能源有限公司 Method for updating monitoring system based on remote sensing data
CN113507599B (en) * 2021-07-08 2022-07-08 四川纵横六合科技股份有限公司 Education cloud service platform based on big data analysis
CN113507599A (en) * 2021-07-08 2021-10-15 四川纵横六合科技股份有限公司 Education cloud service platform based on big data analysis
CN113936516A (en) * 2021-09-30 2022-01-14 国能神东煤炭集团有限责任公司 Collaborative drilling system based on virtual reality technology
CN114170859B (en) * 2021-10-22 2024-01-26 青岛虚拟现实研究院有限公司 Online teaching system and method based on virtual reality
CN114170859A (en) * 2021-10-22 2022-03-11 青岛虚拟现实研究院有限公司 Online teaching system and method based on virtual reality
CN114327220A (en) * 2021-12-24 2022-04-12 软通动力信息技术(集团)股份有限公司 Virtual display system and method
CN114327220B (en) * 2021-12-24 2023-10-17 软通动力信息技术(集团)股份有限公司 Virtual display system and method
CN114697755A (en) * 2022-03-31 2022-07-01 北京百度网讯科技有限公司 Virtual scene information interaction method, device, equipment and storage medium
CN114779942A (en) * 2022-05-23 2022-07-22 广州芸荟数字软件有限公司 Virtual reality immersive interaction system, equipment and method
CN115035278B (en) * 2022-06-06 2023-06-27 北京新唐思创教育科技有限公司 Teaching method, device, equipment and storage medium based on virtual image
CN115035278A (en) * 2022-06-06 2022-09-09 北京新唐思创教育科技有限公司 Teaching method, device, equipment and storage medium based on virtual image
CN116506559A (en) * 2023-04-24 2023-07-28 江苏拓永科技有限公司 Virtual reality panoramic multimedia processing system and method thereof

Similar Documents

Publication Publication Date Title
WO2018098720A1 (en) Virtual reality-based data processing method and system
US11899900B2 (en) Augmented reality computing environments—immersive media browser
US11403595B2 (en) Devices and methods for creating a collaborative virtual session
JP3212833U (en) Interactive education support system
US10200654B2 (en) Systems and methods for real time manipulation and interaction with multiple dynamic and synchronized video streams in an augmented or multi-dimensional space
US7840638B2 (en) Participant positioning in multimedia conferencing
US20120192088A1 (en) Method and system for physical mapping in a virtual world
JP2017522682A (en) Handheld browsing device and method based on augmented reality technology
EP3776146A1 (en) Augmented reality computing environments
JP6683864B1 (en) Content control system, content control method, and content control program
WO2019028855A1 (en) Virtual display device, intelligent interaction method, and cloud server
CN110174950B (en) Scene switching method based on transmission gate
US20240155074A1 (en) Movement Tracking for Video Communications in a Virtual Environment
WO2022255262A1 (en) Content provision system, content provision method, and content provision program
WO2022151882A1 (en) Virtual reality device
WO2020248682A1 (en) Display device and virtual scene generation method
TWI652582B (en) File sharing system and method based on virtual reality/amplification reality combined with instant messaging service
US20180160078A1 (en) System and Method for Producing Three-Dimensional Images from a Live Video Production that Appear to Project Forward of or Vertically Above an Electronic Display
US20230334790A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
US20230334792A1 (en) Interactive reality computing experience using optical lenticular multi-perspective simulation
US20220417449A1 (en) Multimedia system and multimedia operation method
US20230334791A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth
US20240185546A1 (en) Interactive reality computing experience using multi-layer projections to create an illusion of depth
KR101816446B1 (en) Image processing system for processing 3d contents displyed on the flat display and applied telepresence, and method of the same
Haider et al. Towards Representation of Real Entities using Holographic Technology

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 16922988
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 16922988
    Country of ref document: EP
    Kind code of ref document: A1