CN114664138A - Teaching resource interaction method and system based on data stream pushing - Google Patents

Teaching resource interaction method and system based on data stream pushing

Info

Publication number
CN114664138A
CN114664138A (application CN202210546261.8A)
Authority
CN
China
Prior art keywords: data, teacher, student, server, course
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210546261.8A
Other languages
Chinese (zh)
Other versions
CN114664138B (en)
Inventor
王忍
张惠冬
Current Assignee
Jiangsu Liren Technology Co ltd
Original Assignee
Jiangsu Liren Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Liren Technology Co ltd filed Critical Jiangsu Liren Technology Co ltd
Priority to CN202210546261.8A
Publication of CN114664138A
Application granted
Publication of CN114664138B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/08: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B 5/14: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21: Server components or server architectures
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2183: Cache memory
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433: Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4331: Caching operations, e.g. of an advertisement for later insertion during playback
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Databases & Information Systems (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a teaching resource interaction method and system based on data stream pushing. Required resources are downloaded locally in advance during idle bandwidth periods, and during teaching only data streams containing virtual position information, movement direction and rotation angle are transmitted, enabling interaction among the server, the student end and the teacher end. In addition, when the signal is unstable, interrupted, or packets are briefly lost, the method preprocesses the data, predicts the next several frames in advance from the head-motion trend before packet loss, and continues rendering them to the students until the signal recovers and new data packets are received, achieving smooth continuity of the picture before and after packet loss.

Description

Teaching resource interaction method and system based on data stream pushing
Technical Field
The invention relates to the field of G09B (educational or demonstration appliances), and in particular to a teaching resource interaction method and system based on data stream pushing.
Background
At present, learning course content with VR glasses has become a new teaching mode, making it necessary to develop a teacher-end system for managing students' VR lessons. In a conventional teacher management system, the teacher logs in and then monitors and manages the learning content of students' VR glasses by viewing the screens of the student devices.
In the prior art, the method for monitoring what students are viewing is mainly to transmit screenshots from the VR glasses end in real time and display them in the classroom management system. As the number of students grows, the teacher management system becomes increasingly prone to abnormal lag, crashes and similar problems.
Disclosure of Invention
Object of the invention: to provide a teaching resource interaction method based on data stream pushing, and further a system for implementing the method.
In a first aspect, a teaching resource interaction method based on data stream pushing is provided, the method comprising the following steps:
s1, accessing the course resource data and analyzing the course resource data;
s2, vectorizing course resource data: dividing the course resources analyzed by the S1 into N primary categories Xi according to safety education, science popularization education, basic disciplines and quality education; each of the first-level classes Xi includes M curriculum resources Yj;
constructing the N primary classifications Xi and the M course resources Yj into an N-dimensional vector, wherein each element XiYj in the N-dimensional vector represents one course resource;
the N-dimensional vector is represented as follows:
[ X1Y1  X1Y2  ⋯  X1YM ]
[ X2Y1  X2Y2  ⋯  X2YM ]
[  ⋮     ⋮    ⋱   ⋮   ]
[ XNY1  XNY2  ⋯  XNYM ]
s3, the developer compresses and packs the course resources XiYj into a plurality of blocks with the same size, and the packed course resources are stored under folders of the first-level classification Xi in the server in a combined mode of binary compression files of the course resource contents and bag body description files.
S4, the teacher end selects corresponding course resources according to teaching requirements and pushes the course resources to the student end; when the compressed bag is read, the bag body description file of the compressed bag is loaded firstly, and then the corresponding compressed bag is decompressed according to the content described by the bag body description file, so that the efficiency is further improved compared with the case of directly decompressing the whole bag body.
S5, downloading course resources by the student terminal, and storing the course resources locally according to the format of primary classification/secondary classification; the chaotic problem of client storage can be effectively reduced, the continuous development of later courses is facilitated, and the local data management is facilitated; and the teacher can freely combine different course contents, customize the teaching scheme in an individualized way, and push the teaching scheme to the VR glasses end of the student by one key, thereby being convenient for the teacher to flexibly customize the teaching scheme.
S6, after all students are ready, decompressing preset curriculum resources stored in the local through the student end, and starting on-line lectures through VR glasses; and the student end, the server and the teacher end transmit data streams including virtual position information, moving direction and rotation angle, and the student end renders pictures according to the data streams and outputs the pictures. Compared with the traditional use platform for managing helmet contents, when a plurality of people use the helmet contents concurrently, the helmet contents are transmitted in a picture and video streaming mode, the scheme can effectively reduce the pressure of the server, and improve the teaching smoothness and the reliability.
In some implementations of the first aspect, the lecture modes are divided into a unified picture mode and a free mode:
in the unified picture mode, pictures received by the student end are controlled by the teacher end, and the pictures of the student end and the pictures of the teacher end are kept synchronous; the teacher end can adopt the mode of key mouse operation or carrying VR glasses to remove the visual angle, and the picture content that shows in student end VR glasses is the picture content that the teacher end shows. In this mode, the student arbitrarily moves his head without causing a change in the screen.
In the free mode, the student end can freely watch the content in the glasses, the visual angle of the picture is automatically changed through the head action and is not controlled by the teacher end, and the picture received by the student end is automatically controlled by the student end through the head action. The teacher end can display the real-time pictures seen by each student through the monitoring screen.
In some implementations of the first aspect, in the free mode:
the VR glasses at the student end record virtual position information-Vector 3(x1, y1, z1), a moving direction-Vector 3(x2, y2, z2) and a rotating angle-Vector 3(x3, y3, z3) at intervals of preset time, the data streams are transmitted to the server in real time and then pushed to the teacher end by the server in real time, and pictures seen by the student end are rendered on a monitoring screen at the teacher end in real time;
each student corresponds to one sub-picture and is arranged and displayed on the monitoring screen of the teacher end in the order of the student numbers.
In some implementations of the first aspect, in the unified picture mode:
the teacher end records virtual position information-Vector 3(x1, y1, z1), moving direction-Vector 3(x2, y2, z2) and rotating angle-Vector 3(x3, y3, z3) at preset time intervals through VR glasses or keyboard and mouse operation pictures through a GUI interface, the data streams are transmitted to the server in real time and then pushed to the student end by the server in real time, and pictures seen by the teacher end are rendered in the VR glasses of the student end.
In some implementations of the first aspect, the teacher end requests from the server, at predetermined intervals, the data streams of the students in the current class, and the server returns them to the teacher end in the format "student number + virtual position information + movement direction + rotation angle". In the student-card interface at the teacher end, each student card corresponds to one student by student number. In the VR scene of the lesson being taught, as many cameras as there are current students are deployed, each holding an ID corresponding to a student number.
In some implementations of the first aspect, as many cameras as there are current students are deployed during the teacher's lesson, each camera holding an ID corresponding to a student number; the transmitted data stream is cut with string.Split, the Vector3(x1, y1, z1) read as the virtual position information is assigned to the localPosition of the camera corresponding to the current student number, and the Vector3(x3, y3, z3) read as the rotation angle is assigned to the localRotation of that camera;
wherein localPosition denotes the camera's local position information and localRotation its local rotation information;
with localPosition and localRotation set, the camera pose is fixed and unique.
In some implementations of the first aspect, before decompressing the predetermined course resources stored locally, the student end first establishes communication with the server, obtains the package description file corresponding to the teaching content, locates the corresponding binary compressed package file according to the description file, and decompresses it.
In some implementations of the first aspect, the data stream sent by the student end to the server at a given time is denoted n; the server forwards the received student-end data stream n to the teacher end, and the next data stream sent by the student end is denoted n+1.
To solve the problem of client and teacher-end pictures falling out of sync due to network delay or fluctuation, after the teacher end has received the two data streams n and n+1, it calculates from the camera rotation angles Vector3(x3, y3, z3) recorded in them the angular velocities Wx, Wy and Wz at which the student-end camera rotates around the x, y and z axes between n and n+1.
From this result, the teacher end locally deduces the camera rotation angle of data stream n+2 on the basis of the camera rotation data obtained from n+1, and caches the deduced data locally.
For student ends whose VR glasses have strong computing performance, the picture trend can be predicted in real time throughout the data transmission among the student end, the teacher end and the server:
when the teacher end receives several data streams n+2, n+3, n+4 and n+5 from the server at once, it parses and checks them one by one. If data stream n+2 contains data other than camera position information, such as scoring points, experiment operations or experiment results, or if it is inconsistent with the locally deduced n+2, the deduced n+2 and all later deduced data are discarded; the teacher end rolls back to the state at which data stream n+1 was received, and applies the newly received n+2 data for the next data application and picture presentation.
If the locally cached data are consistent, checking continues with n+3, n+4 and n+5.
If none of the received data streams contains data other than camera position information, the teacher end applies data stream n+5 to the camera directly and clears the locally deduced cache, completing the teacher-end picture presentation.
Meanwhile, if the server receives no data stream from a student end within a predetermined time, it sends alarm information to the teacher end.
In some implementations of the first aspect, for student ends whose VR glasses have weak computing performance, a compromise can be adopted: the network transmission state is sensed during the communication among the student end, the server and the teacher end, and if the fluctuation of network transmission within a predetermined time exceeds a threshold (the threshold is set as a network delay fluctuation of 300 ms; beyond it the network is judged unstable), the student end starts picture prediction calculation.
The calculation process is the same in both of the methods described above.
The rotation angular velocities Wx, Wy and Wz of the VR glasses around x, y and z are calculated from the change in rotation angle between the nth and (n+1)th records;
the angle data of record n is denoted rot1; the angle data of record n+1 is denoted rot2; d is the time interval at which the data stream is transmitted:
Wx = (rot2.x - rot1.x) / d; Wy = (rot2.y - rot1.y) / d; Wz = (rot2.z - rot1.z) / d;
t is the time elapsed since the last received record n+1.
At this time, the predicted VR glasses angle = angle at packet loss + Vector3(Wx*t, Wy*t, Wz*t).
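The extrapolation above can be sketched as follows (a minimal sketch; the function name `predict_angle` and the use of plain tuples for Vector3 are illustrative assumptions, not part of the patent):

```python
def predict_angle(rot1, rot2, d, t):
    """Extrapolate the VR glasses rotation angle after packet loss.

    rot1: angle data of record n, as (x3, y3, z3)
    rot2: angle data of record n+1 (the angle at packet loss)
    d:    time interval at which the data stream is transmitted
    t:    time elapsed since the last received record n+1
    """
    # angular velocities around x, y, z: W = (rot2 - rot1) / d
    wx = (rot2[0] - rot1[0]) / d
    wy = (rot2[1] - rot1[1]) / d
    wz = (rot2[2] - rot1[2]) / d
    # predicted angle = angle at packet loss + Vector3(Wx*t, Wy*t, Wz*t)
    return (rot2[0] + wx * t, rot2[1] + wy * t, rot2[2] + wz * t)
```

Rendering this predicted angle each frame until a new packet arrives is what bridges the picture across the loss.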
In a second aspect, a teaching resource interaction system based on data stream pushing is provided, comprising a server, a teacher end and a student end.
The server accesses the course resource data, and parses and vectorizes it.
The teacher end establishes two-way communication with the server, and is used to read the course resources on the server and select predetermined course resources to push to the student end.
The student end establishes two-way communication with the teacher end and the server, and is used to receive the course resources pushed by the teacher end and download them locally in an OTA (over-the-air) manner. Data streams containing virtual position information, movement direction and rotation angle are transmitted among the student end, the server and the teacher end.
Beneficial effects: according to the teaching resource interaction method and system based on data stream pushing, required resources are downloaded locally in advance during idle bandwidth periods, and only data streams containing virtual position information, movement direction and rotation angle are transmitted during teaching, enabling interaction among the server, the student end and the teacher end. Compared with the conventional transmission of pictures and video streams, the scheme greatly reduces the bandwidth requirement for the same functionality, effectively avoiding picture stutter, frame loss and similar problems caused by insufficient bandwidth or signal fluctuation. In addition, when the signal is unstable, interrupted, or packets are briefly lost, the method preprocesses the data, predicts the next several frames in advance from the head-motion trend before packet loss, and continues rendering them to the students until the signal recovers and new data packets are received, achieving smooth continuity of the picture before and after packet loss.
Drawings
FIG. 1 is a classification diagram of the course resources XiYj of the present invention.
FIG. 2 is a diagram illustrating the definition of the virtual position information Vector3(x1, y1, z1) in the present invention.
FIG. 3 is a diagram illustrating the definition of the movement direction Vector3(x2, y2, z2) in the present invention.
FIG. 4 is a diagram illustrating the definition of the rotation angle Vector3(x3, y3, z3) in the present invention.
FIG. 5 is a diagram of the data format returned by the server to the teacher end in the present invention.
FIG. 6 is a schematic diagram of the camera deflection on the X, Y and Z axes during advance packet-loss prediction in the present invention.
FIG. 7 is a schematic diagram of a triangle constructed for advance packet-loss prediction in the present invention.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
The applicant has found, over the last two years of online teaching, the following problems in urgent need of solution: in the prior art, the method for monitoring what students are viewing is mainly to transmit screenshots from the VR glasses end in real time and display them in the classroom management system. As the number of students grows, the teacher management system becomes increasingly prone to abnormal lag, crashes and similar problems.
To this end, the applicant proposes a teaching resource interaction system based on data stream pushing, composed of a server, a teacher end and a student end, and uses this system to carry out a series of teaching resource interaction methods based on data stream pushing.
To solve the lack of unified management of teaching courses in prior-art schemes, where teachers cannot customize their own teaching plans, the invention provides the following solution:
The server accesses the course resource data, and parses and vectorizes it. The course resources are classified in two levels, Xi and Yj: the first level Xi comprises safety education (denoted X1), science popularization education (denoted X2), basic disciplines (denoted X3) and quality education (denoted X4), and the second level is the set of Yj subdivided under X1, X2, X3 and X4; see FIG. 1 for details. The teacher logs into the management system, views the currently teachable content and course details on the home page, composes a custom course plan, and issues it to the students' VR glasses in one step; the teacher can then start class through device management, unify the students' pictures, and view and interact with the content observed in the students' VR glasses in real time. The teacher's account is associated with the class account, so class student information can be managed.
To further help teachers customize their own teaching plans, the developer packs and compresses each course resource XiYj in advance using block compression, ensuring that each course resource is divided into blocks of equal size, each compressed separately. Compared with the traditional LZMA stream-based compression, this block-based packing reads extremely efficiently and causes no stutter or extra memory occupation. The packed resources are stored on the server under folders classified by Xi, as a combination of a binary compressed file of the resource content and a package description file. When decompressing after download, loading the header of the compressed package achieves the effect of "decompress only what is used", with lower memory occupation, faster reading and higher efficiency.
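The block packing just described can be sketched roughly as follows (a minimal sketch under assumptions: the chunk size, file names, JSON description format and the `pack_resource` helper are all illustrative, not taken from the patent; zlib stands in for the unspecified block compressor):

```python
import json
import os
import zlib

CHUNK_SIZE = 64 * 1024  # assumed fixed block size; the patent only requires equal-size blocks

def pack_resource(data: bytes, out_dir: str, name: str) -> dict:
    """Split a course resource into equal-size blocks, compress each block
    separately, and write the binary payload plus a package description file."""
    os.makedirs(out_dir, exist_ok=True)
    blocks, entries, offset = [], [], 0
    for i in range(0, len(data), CHUNK_SIZE):
        comp = zlib.compress(data[i:i + CHUNK_SIZE])
        blocks.append(comp)
        entries.append({"index": i // CHUNK_SIZE, "offset": offset, "size": len(comp)})
        offset += len(comp)
    with open(os.path.join(out_dir, name + ".bin"), "wb") as f:
        for b in blocks:
            f.write(b)
    desc = {"name": name, "chunk_size": CHUNK_SIZE, "blocks": entries}
    with open(os.path.join(out_dir, name + ".desc.json"), "w") as f:
        json.dump(desc, f)
    return desc
```

Reading then needs only the description file to locate and decompress the specific blocks in use, which is the "decompress only what is used" effect mentioned above.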
The teacher end establishes two-way communication with the server, reads the course resources on the server, and selects predetermined course resources to push to the student end. After viewing a course introduction at the client, the teacher clicks to purchase/download; the corresponding "primary + secondary" class requested for download, i.e. the specific value of XiYj (such as the traffic safety course represented by X1Y2), is sent to the server, and the client starts downloading the X1Y2 resource stored under the server's X1 folder classification. After downloading, the data are stored locally in the "primary classification/secondary classification" format. This effectively reduces storage clutter on the client, facilitates continued course development later, and simplifies local data management; the teacher can also freely combine different course contents, customize a personalized teaching scheme, and push it to the students' VR glasses with one key, making it easy for the teacher to tailor the teaching scheme flexibly.
The student end establishes two-way communication with the teacher end and the server, and is used to receive the course resources pushed by the teacher end and download them locally in an OTA (over-the-air) manner.
In view of the current situation in existing products, where student-monitoring pictures are "screenshots of the students' VR glasses pictures uploaded to a server, then downloaded from the server to the teacher end", the invention proposes a completely different idea:
required resources are downloaded locally in advance during idle bandwidth periods, and only data streams containing virtual position information, movement direction and rotation angle are transmitted during teaching, enabling interaction among the server, the student end and the teacher end. The virtual position information Vector3(x1, y1, z1) of the camera at the student's VR glasses end is read (by reading the Position of the camera in the actual scene), together with the movement direction Vector3(x2, y2, z2) and the rotation angle Vector3(x3, y3, z3). See FIG. 2, FIG. 3 and FIG. 4; the definitions are as follows:
Virtual position information Vector3(x1, y1, z1):
x1 is the value along the x axis in three-dimensional coordinates;
y1 is the value along the y axis in three-dimensional coordinates;
z1 is the value along the z axis in three-dimensional coordinates.
Movement direction Vector3(x2, y2, z2):
x2 takes the three values 0, 1 and -1: x2 = 0 represents no movement in the left-right direction; x2 = -1 represents leftward movement; x2 = 1 represents rightward movement.
y2 takes the three values 0, 1 and -1: y2 = 0 represents no movement in the front-back direction; y2 = -1 represents forward movement; y2 = 1 represents backward movement.
z2 takes the three values 0, 1 and -1: z2 = 0 represents no movement in the up-down direction; z2 = -1 represents downward movement; z2 = 1 represents upward movement.
The left-right component of the camera movement is found by checking whether Vector3.Dot(transform.right, vector directly in front of the camera) is greater than 0.
The front-back component of the camera movement is found by checking whether Vector3.Dot(transform.forward, vector directly in front of the camera) is greater than 0.
The up-down component of the camera movement is found by checking whether Vector3.Dot(transform.up, vector directly in front of the camera) is greater than 0.
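The three dot-product tests can be sketched as follows (a minimal sketch with plain tuples standing in for Unity's Vector3 and transform.right/forward/up; the zero-threshold `eps` and the direct sign-to-flag mapping are assumptions, and the patent's own encoding fixes which sign means which direction, e.g. forward is -1 for y2):

```python
def dot(a, b):
    """Dot product of two 3-vectors, the counterpart of Vector3.Dot."""
    return sum(x * y for x, y in zip(a, b))

def direction_flags(move_vec, right, forward, up, eps=1e-6):
    """Project a camera movement vector onto the right/forward/up axes and
    reduce each projection to a -1/0/1 flag by the sign of the dot product."""
    def flag(axis):
        d = dot(axis, move_vec)
        if abs(d) < eps:
            return 0          # no movement along this axis
        return 1 if d > 0 else -1
    return (flag(right), flag(forward), flag(up))
```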
Rotation angle Vector3(x3, y3, z3):
x3 is the rotation angle about the x axis, read from the camera's transform;
y3 is the rotation angle about the y axis, read from the camera's transform;
z3 is the rotation angle about the z axis, read from the camera's transform.
Every 1 s, a record is made strictly in the format "student number, Position, Vector3(x1, y1, z1), Direction, Vector3(x2, y2, z2), Angle, Vector3(x3, y3, z3)" and transmitted to the server. Compared with the prior art, this requires less bandwidth (character strings transmit faster than pictures) and less server disk space, and the server's storage does not need periodic cleaning.
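The per-second record can be serialized as a short character string, for example (a minimal sketch; `format_record` and the "|"/":"-style separators are assumptions, as the text does not fix the exact delimiters):

```python
def format_record(student_no, position, direction, angle):
    """Serialize one record in the 'student number, Position, Direction, Angle'
    layout described in the text; separators are illustrative assumptions."""
    fmt = lambda vec: ",".join(f"{v:g}" for v in vec)
    return (f"{student_no}"
            f"|Position:{fmt(position)}"
            f"|Direction:{fmt(direction)}"
            f"|Angle:{fmt(angle)}")
```

A record like this is a few dozen bytes per student per second, which is the source of the bandwidth saving claimed over picture transmission.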
The teacher end requests the camera data of the students in the current class from the server at 1 s intervals, and the server returns it to the teacher end in the format "student number + Position + Direction + Angle", as shown in FIG. 5. In the student-card interface at the teacher end, each student card corresponds to one student by student number. In the VR scene of the lesson being taught, as many cameras as there are current students are deployed, each holding an ID corresponding to a student number. The transmitted data is cut with string.Split; the Position value Vector3(x1, y1, z1) is read and assigned to the localPosition of the camera corresponding to the current student number, and the Angle value Vector3(x3, y3, z3) is read and assigned to that camera's localRotation. With the position information of localPosition and the rotation information of localRotation, the camera pose is fixed and unique.
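The cut-and-assign step can be sketched as follows (a minimal sketch; the `Camera` dataclass stands in for the Unity camera's localPosition/localRotation, and the "|"/":" separators are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Camera:
    # stands in for the Unity camera mentioned in the text
    local_position: tuple = (0.0, 0.0, 0.0)  # localPosition
    local_rotation: tuple = (0.0, 0.0, 0.0)  # localRotation

def apply_stream(line: str, cameras: dict) -> None:
    """Cut one 'student number + Position + Direction + Angle' record and
    assign position/rotation to the camera whose ID matches the student number."""
    fields = line.split("|")                  # counterpart of the string.Split call
    sid = fields[0]
    vec = lambda f: tuple(float(v) for v in f.split(":")[1].split(","))
    cam = cameras[sid]
    cam.local_position = vec(fields[1])       # Vector3(x1, y1, z1) -> localPosition
    cam.local_rotation = vec(fields[3])       # Vector3(x3, y3, z3) -> localRotation
```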
By creating a Render Texture and a Material ball corresponding to each student number, setting the Material of the Image in the student card in the UI to the Material corresponding to that student number, and setting the camera's Target Texture to the Render Texture corresponding to that student number, the Image on the canvas in the student card synchronously displays the picture captured by the camera of that student number.
The above flow fully solves the problems of the prior art, but it brings an unavoidable problem in engineering implementation and product experience: if the signal is unstable, interrupted, or packets are lost for a short period, the picture must be bridged for a certain time so that frame loss and stutter are effectively avoided.
To solve this user-experience problem, the invention proposes the following solution; see FIG. 6. Because the performance of student-end VR glasses varies, two alternatives can be adopted.
For student ends whose VR glasses have strong computing performance, the picture trend can be predicted in real time throughout the data transmission among the student end, the teacher end and the server. Real-time processing in this preprocessing mode gives the strongest performance and the best effect, but places high demands on local computation.
For student ends whose VR glasses have weak computing performance, a compromise can be adopted: the network transmission state is sensed during the communication among the student end, the server and the teacher end, and if the fluctuation of network transmission within a predetermined time exceeds a threshold, the student end starts picture prediction calculation. This way, picture prediction is restarted only when the network fluctuates noticeably, effectively reducing local computation pressure.
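The fluctuation check can be sketched as follows (a minimal sketch; measuring "fluctuation" as the max-min delay spread over a sampling window is an assumption, with the 300 ms threshold taken from the text):

```python
def network_unstable(delays_ms, threshold_ms=300.0):
    """Judge the network unstable when the delay fluctuation within the
    sampling window exceeds the threshold (300 ms in the text); the
    student end then starts picture prediction calculation."""
    if len(delays_ms) < 2:
        return False  # not enough samples to measure fluctuation
    return max(delays_ms) - min(delays_ms) > threshold_ms
```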
Two calculation methods are possible:
Method one: the data stream sent to the server by the student end at a certain time is denoted n; the server forwards the received student-end data stream n to the teacher end, and the next data stream sent by the student end is denoted n+1.
To solve the problem that the pictures at the student end and the teacher end become asynchronous due to network delay or fluctuation, after the teacher end has received the two data streams n and n+1, it calculates, from the camera rotation angle Vector3(x3, y3, z3) recorded in streams n and n+1, the angular velocities Wx, Wy and Wz at which the student-end camera rotates around the x, y and z axes from n to n+1; see fig. 6.
From this result, the teacher end locally deduces the camera rotation angle of data stream n+2 on the basis of the camera rotation data obtained in stream n+1, and caches the deduced data locally.
When the teacher end receives several data streams n+2, n+3, n+4 and n+5 from the server at once, it parses and checks them one by one. If stream n+2 contains data other than the camera position information, such as scoring points, experiment operations or experiment result presentation, and the received n+2 data are inconsistent with the locally deduced and cached n+2, the cached n+2 and all subsequent deduced data are discarded; the teacher end rolls back to the state at which stream n+1 was received, and performs the next data application and picture presentation according to the newly received n+2 data.
If the data are consistent with the local cache, checking continues with n+3, n+4 and n+5.
If none of the received data streams contains content other than the camera position information, the teacher end applies the n+5 data stream to the camera directly and clears the locally deduced cache, completing the teacher-end picture presentation.
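The burst check and rollback just described can be sketched as below. The in-memory representation of a data stream (a dict with a `rotation` tuple and an optional `extra` list) and the helper name are assumptions for illustration.

```python
def reconcile_burst(received, deduced, rollback_state):
    """Reconcile a burst of real data streams with locally deduced ones.

    received: list of (seq, stream) pairs in order, e.g. n+2 .. n+5.
    deduced:  dict seq -> locally deduced camera rotation for that seq.
    rollback_state: the teacher-end state saved when stream n+1 arrived.

    Returns (state_to_restore, stream_to_apply); state_to_restore is
    None when no rollback is needed. 'extra' marks contents other than
    the camera pose (scoring points, experiment operations, results).
    """
    for seq, stream in received:
        has_extra = bool(stream.get("extra"))
        mismatch = seq in deduced and deduced[seq] != stream["rotation"]
        if has_extra and mismatch:
            # Discard the deduced seq and everything after it, roll back
            # to the n+1 state, then apply the freshly received stream.
            for k in [k for k in deduced if k >= seq]:
                del deduced[k]
            return rollback_state, stream
    # No stream forced a rollback: apply the last one directly and
    # clear the deduction cache.
    deduced.clear()
    return None, received[-1][1]
```

A short usage example: a consistent burst applies the last stream and empties the cache, while a mismatching stream carrying extra content triggers the rollback.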
The rotation angular velocities Wx, Wy and Wz of the VR glasses around the x, y and z axes are calculated from the change in rotation angle between the nth and the (n+1)th records.
The angle data of record n is denoted rot1; the angle data of record n+1 is denoted rot2; d is the transmission interval of the data stream.
Wx = (rot2.x - rot1.x)/d; Wy = (rot2.y - rot1.y)/d; Wz = (rot2.z - rot1.z)/d.
t is the time elapsed since the last record n+1.
At this time, the predicted VR-glasses angle = the angle at packet loss + Vector3(Wx*t, Wy*t, Wz*t).
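The formulas of method one can be sketched directly. Representing Vector3 as a plain (x, y, z) tuple and the function name are assumptions for illustration; the arithmetic follows the equations above.

```python
def predict_rotation(rot1, rot2, d, t):
    """Method one: extrapolate the VR-glasses rotation angle.

    rot1: rotation (x, y, z) recorded in data stream n.
    rot2: rotation recorded in data stream n+1.
    d: transmission interval of the data stream.
    t: time elapsed since record n+1.

    Per axis: W = (rot2 - rot1) / d, predicted = rot2 + W * t.
    """
    w = tuple((b - a) / d for a, b in zip(rot1, rot2))
    return tuple(r + wi * t for r, wi in zip(rot2, w))

# A 10-degree change per axis over a 0.5 s interval gives 20 deg/s;
# 0.5 s after n+1 the glasses are predicted to have turned another 10 deg.
assert predict_rotation((0, 0, 0), (10, 10, 10), 0.5, 0.5) == (20.0, 20.0, 20.0)
```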
Method two: after receiving the data returned by the server for the first time, the teacher end casts a ray from the camera corresponding to the school number toward the point directly in front of it, and records the coordinates Vector3(x0, y0, z0) where the ray hits the scene; on the second return it records Vector3(x1, y1, z1). Over the transmission interval, the final speed Vt = V0 + at and the viewing-angle acceleration a = (Vt - V0)/t can be obtained, taking V0 as the positive direction (a > 0 when the acceleration points the same way as V0). If the student with that school number suffers brief packet loss or abnormal data transmission, the residual moving speed Vm of the VR glasses is calculated from the acceleration and the final speed, and the deduced hit point of the camera ray is obtained from Vm and the previous moving direction. Connecting the deduced point, the previous ray point and the camera yields a triangle; see fig. 7.
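The first half of method two can be sketched as follows. The patent gives a = (Vt - V0)/t but not the initial speed, so V0 = 0 for the first sample and the exact extrapolation Vm = Vt + a*dt are assumptions made here; the function name is also illustrative.

```python
import math

def deduce_hit_point(p0, p1, t, dt):
    """Method two, first half: extrapolate the camera-ray hit point.

    p0, p1: ray hit coordinates from the first and second server returns.
    t: interval between the two returns.
    dt: how long the packets stay lost.
    """
    disp = [b - a for a, b in zip(p0, p1)]
    dist = math.sqrt(sum(c * c for c in disp))
    if dist == 0:
        return tuple(p1)          # the view did not move; keep the last point
    vt = dist / t                 # last observed speed
    v0 = 0.0                      # assumed initial speed
    a = (vt - v0) / t             # viewing-angle acceleration, a = (Vt - V0)/t
    vm = vt + a * dt              # assumed residual speed during the outage
    direction = [c / dist for c in disp]   # previous moving direction
    return tuple(b + u * vm * dt for b, u in zip(p1, direction))

# Moving 1 unit along x per second and accelerating at 1 unit/s^2,
# one second of packet loss carries the point from x=1 to x=3.
assert deduce_hit_point((0, 0, 0), (1, 0, 0), 1.0, 1.0) == (3.0, 0.0, 0.0)
```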
Connecting the VR-glasses position with the previous camera observation point gives a unique vector, denoted fromVector; connecting the VR-glasses position with the deduced coordinate gives another unique vector, denoted toVector. The included angle between the two vectors is then calculated as angle = Vector3.Angle(fromVector, toVector); vector cross multiplication gives the normal vector, normal = Vector3.Cross(fromVector, toVector); finally, the dot product of the normal with the glasses' up vector upVector determines the sign that corrects the rotation direction. The VR-glasses angle = the angle before packet loss + angle, using the coordinates recorded before the packet loss. The observation content of the VR glasses is rendered onto the student card, and at the same time warning information is sent to the teacher end, prompting it to check the connection.
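The vector construction above can be sketched with plain math standing in for Unity's Vector3.Angle, Vector3.Cross and the dot product. The exact form of the sign correction by the up vector is an interpretation of the original text, and the function name is illustrative.

```python
import math

def signed_turn_angle(eye, last_point, deduced_point, up=(0.0, 1.0, 0.0)):
    """Method two, second half: signed angle from fromVector to toVector.

    eye: VR-glasses position; last_point: last observed ray point;
    deduced_point: extrapolated ray point; up: the glasses' up vector.
    """
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return math.sqrt(dot(a, a))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    from_v = sub(last_point, eye)        # fromVector
    to_v = sub(deduced_point, eye)       # toVector
    cos_a = max(-1.0, min(1.0, dot(from_v, to_v) / (norm(from_v) * norm(to_v))))
    angle = math.degrees(math.acos(cos_a))        # Vector3.Angle equivalent
    normal = cross(from_v, to_v)                  # Vector3.Cross equivalent
    sign = 1.0 if dot(normal, up) >= 0 else -1.0  # direction correction
    return sign * angle
```

For example, with the glasses at the origin looking along +x, a deduced point behind the scene plane at (0, 0, -1) yields +90 degrees, while (0, 0, 1) yields -90 degrees: the same magnitude, distinguished only by the up-vector sign correction.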
The flow of the method in a practical application scenario is explained below:
Step 1, the teacher enters the login module. If registration is needed, an account is created after the school and basic information pass verification; the teacher logs in through the account or a mobile phone number, and a forgotten password can be reset with a verification code. After login, go to step 2.
Step 2, on the homepage interface, the teacher can freely search and view course summaries, click a course to view its details, choose to purchase/download it and add it to a course group; go to step 3.
Step 3, the teacher can check the purchased/downloaded course list, select a course and edit the teaching scheme (go to step 4), or directly click to start the course (go to step 5).
Step 4, after entering the My Courses module, the teacher clicks to select a course group. If a course group exists, the teacher clicks to select the class and then the course, and goes to step 5; if not, the teacher creates a course group, selects a purchased/downloaded course, clicks to select the class, clicks to start the course, and goes to step 5.
Step 5, during class, the teacher end can operate one-key black screen, calibration and starting the class; observe in real time the student information, device battery level and live pictures shown on the student cards; control the viewing angle in the teacher card, unify the pictures, and pause or play the course for the whole class; and interact with students through the course list and the question list. After class, go to step 6.
Step 6, in My Classes, the teacher can add students, create and manage classes, with class student data updated in real time.
Take the traditional-culture course "Temura village in Germany submerged by flood" as an example. After teacher A logs into the teacher management system and, while browsing courses under the science-popularization-education classification, finds this traditional-culture course, teacher A sends a request to the server to download "science popularization education + traditional culture + Temura village in Germany submerged by flood". Once the download and storage are complete, teacher A can add the course to a course group and choose to start it. Student B and the other VR-glasses ends then receive the pushed course, and timing starts once the course is entered. When student B rotates the viewing angle to observe the village gate, the virtual position information Vector3(x1, y1, z1) of the camera at the VR-glasses end, the rotation angle information Vector3(x3, y3, z3) of the current camera, and the direction information Vector3(x2, y2, z2) of the vector directly ahead, compared with the previous camera position and angle, are stored; after 500 ms the data are transmitted to the server, and in the student card corresponding to student B's school number the teacher end can see the content currently observed by student B's VR glasses.
As the VR-glasses end continues timing and student B's field of view turns from the village gate to the buildings in the village, the VR glasses keep recording the camera's virtual position information Vector3(x1, y1, z1), the current camera's rotation angle information Vector3(x3, y3, z3), and the direction information Vector3(x2, y2, z2) of the vector directly ahead compared with the previous camera, and transmit them to the server; the teacher end synchronously refreshes and obtains student B's data. After teacher A clicks "return", student B's camera viewing angle returns to teacher A's angle. During this process, teacher A can send questions to the students' helmets for interaction. At the end of the lesson, teacher A can click to end the course.
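The data stream in this example carries the fields "school number + virtual position + moving direction + rotation angle", sent every 500 ms. A parsing sketch is given below; the "|" and "," separators are assumptions made for illustration, since the patent specifies only the field order, not the wire encoding.

```python
def parse_stream(raw):
    """Parse one student data stream in the assumed
    'school number|position|direction|rotation' text encoding."""
    school_no, pos, direction, rot = raw.split("|")
    to_vec = lambda s: tuple(float(c) for c in s.split(","))
    return {
        "school_no": school_no,
        "position": to_vec(pos),        # Vector3(x1, y1, z1)
        "direction": to_vec(direction), # Vector3(x2, y2, z2)
        "rotation": to_vec(rot),        # Vector3(x3, y3, z3)
    }

sample = "20230007|1.0,1.6,2.0|0.0,0.0,1.0|0.0,45.0,0.0"
msg = parse_stream(sample)
assert msg["school_no"] == "20230007"
assert msg["rotation"] == (0.0, 45.0, 0.0)
```

On the teacher end, the parsed position and rotation are what get assigned to the camera matching the school number, as described in claim 6.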
As noted above, while the present invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limited thereto. Various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. The teaching resource interaction method based on data stream pushing is characterized by comprising the following steps:
s1, accessing the course resource data and analyzing the course resource data;
s2, vectorizing course resource data: dividing the course resources analyzed by the S1 into N primary categories Xi according to safety education, science popularization education, basic disciplines and quality education; each of the first-level classes Xi includes M curriculum resources Yj;
constructing the N primary classifications Xi and the M course resources Yj into an N-dimensional vector, wherein each element XiYj in the N-dimensional vector represents one course resource;
S3, the developer compresses and packages the course resources XiYj into a plurality of packages of the same size, and the packaged course resources are stored under the folder of their primary classification Xi in the server as a combination of a binary compressed file of the course resource content and a package description file;
s4, the teacher end selects corresponding course resources according to teaching requirements and pushes the course resources to the student end;
s5, downloading the course resource by the student terminal, and storing the course resource locally according to the format of 'primary classification/secondary classification';
s6, after all students are ready, decompressing preset curriculum resources stored in the local through the student end, and starting on-line lectures through VR glasses;
and the student end, the server and the teacher end transmit data streams including virtual position information, moving direction and rotation angle, and the student end renders pictures according to the data streams and outputs the pictures.
2. The teaching resource interaction method of claim 1, wherein in step S6 the teaching mode is divided into a unified picture mode and a free mode:
in the unified picture mode, the pictures received by the student end are controlled by the teacher end, and the learning content at the student end is consistent with the content at the teacher end; the picture can be viewed freely or synchronized under teacher-end control, and the synchronized picture content is specified by the teacher end;
in the free mode, the student end changes the viewing angle of the picture by head movement and is not controlled by the teacher end.
3. A teaching resource interaction method as claimed in claim 2, wherein in the free mode:
the VR glasses of the student end record virtual position information-Vector 3(x1, y1, z1), moving direction-Vector 3(x2, y2, z2) and rotating angle-Vector 3(x3, y3, z3) at preset time intervals, the data streams are transmitted to the server in real time and then pushed to the teacher end by the server in real time, and a picture seen by the student end is rendered on a monitoring screen of the teacher end in real time;
each student corresponds to one sub-picture and is arranged and displayed on the monitoring screen of the teacher end in the order of the student numbers.
4. The teaching resource interaction method of claim 1, wherein in the unified picture mode:
the teacher end records virtual position information-Vector 3(x1, y1, z1), moving direction-Vector 3(x2, y2, z2) and rotating angle-Vector 3(x3, y3, z3) at preset time intervals through VR glasses or keyboard and mouse operation pictures through a GUI interface, the data streams are transmitted to the server in real time and then pushed to the student end by the server in real time, and pictures seen by the teacher end or pictures of appointed preset students are rendered in VR glasses of the student end.
5. The teaching resource interaction method of claim 1, wherein the teacher end requests the server for the data stream of the current class of students at predetermined time intervals, and the server returns the data stream to the teacher end in a format of student number + virtual position information + movement direction + rotation angle.
6. The teaching resource interaction method as claimed in claim 5, wherein during the teacher's lecture, a number of cameras equal to the current number of students is deployed, each camera holding an ID corresponding to a student's school number;
the transmitted data stream is split by string cutting; after the virtual position information is read, Vector3(x1, y1, z1) is assigned to the localPosition of the camera corresponding to the current school number, and after the rotation angle is read, Vector3(x3, y3, z3) is assigned to the localRotation of the camera corresponding to the current school number;
wherein localPosition represents the local position information of the camera, and localRotation represents the local rotation information of the camera;
with the local position information and local rotation information, the camera's angle of view is fixed and unique.
7. The teaching resource interaction method as claimed in claim 1, wherein in step S6, before decompressing the predetermined course resources stored locally, the student end first establishes communication with the server, obtains the package description file corresponding to the teaching content, locates the corresponding binary compressed package file according to the package description file, and decompresses it.
8. The teaching resource interaction method of claim 5, wherein the network data stream transmitted to the server by the student end at a certain time is denoted n; the server transmits the received student-end data stream n to the teacher end, and the next network data stream transmitted by the student end is denoted n+1;
after the teacher end receives at least two data streams, namely n data streams and n +1 data streams, the teacher end calculates angular velocities Wx, Wy and Wz of rotation of a student end camera from the n data streams to the n +1 data streams around three axes of x, y and z according to camera rotation angles-Vector 3(x3, y3 and z3) recorded in the n data streams and the n +1 data streams;
according to the calculation result, on the basis of the camera rotation data obtained in the n +1 data stream, the teacher end locally deduces the camera rotation angle in the n +2 data stream, and the teacher end caches the deduced data to the local.
9. The teaching resource interaction method of claim 8, wherein when the teacher side receives a plurality of data streams n +2, n +3, n +4, n +5 transmitted from the server at a time, parsing the data streams one by one and judging:
if the actually received n+2 data stream contains data other than the camera position information, and the actually received n+2 data stream is inconsistent with the locally deduced and cached n+2 data stream, the locally deduced and cached n+2 data stream and the subsequent data are discarded, the teacher end rolls back to the state at which the n+1 data stream was received, and the next data application and picture presentation are performed according to the newly received n+2 data;
if they are consistent, the data streams after the n+2 data stream continue to be checked until all locally cached data have been checked;
and if none of the received data streams contains data other than the camera position information, the teacher end applies the n+5 data stream to the camera directly and clears the locally deduced cached data, completing the teacher-end picture presentation.
10. A teaching resource interaction system based on data stream pushing, for executing the teaching resource interaction method according to any one of claims 1 to 9, the system comprising:
the server is used for accessing the course resource data, analyzing the course resource data and vectorizing the course resource data;
the teacher end establishes two-way communication with the server and is used for reading the course resources in the server and selecting the preset course resources to push the course resources to the student end;
the student end establishes two-way communication with the teacher end and the server, is used for receiving the course resources pushed by the teacher end and downloads the course resources to the local in an OTA (over the air) mode;
and data streams including virtual position information, moving directions and rotating angles are transmitted among the student terminal, the server and the teacher terminal.
CN202210546261.8A 2022-05-20 2022-05-20 Teaching resource interaction method and system based on data stream pushing Active CN114664138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210546261.8A CN114664138B (en) 2022-05-20 2022-05-20 Teaching resource interaction method and system based on data stream pushing


Publications (2)

Publication Number Publication Date
CN114664138A true CN114664138A (en) 2022-06-24
CN114664138B CN114664138B (en) 2022-08-16

Family

ID=82037653

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210546261.8A Active CN114664138B (en) 2022-05-20 2022-05-20 Teaching resource interaction method and system based on data stream pushing

Country Status (1)

Country Link
CN (1) CN114664138B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180301048A1 (en) * 2017-04-12 2018-10-18 Age Of Learning, Inc. Remote live tutoring platform
CN109032362A (en) * 2018-08-31 2018-12-18 苏州竹原信息科技有限公司 A kind of tutoring system and its control method based on VR
CN110533968A (en) * 2019-09-06 2019-12-03 昆明中经网络有限公司 VR teaching unified control system
CN110727351A (en) * 2019-10-22 2020-01-24 黄智勇 Multi-user collaboration system for VR environment
CN112489507A (en) * 2020-11-23 2021-03-12 广西水利电力职业技术学院 Big data fusion type intelligent teaching method based on VR and holographic projection
CN113706944A (en) * 2021-09-15 2021-11-26 安徽工业大学 Primary school science VR classroom teaching device based on STEAM education theory
CN114025147A (en) * 2021-11-01 2022-02-08 华中师范大学 Data transmission method and system for VR teaching, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114664138B (en) 2022-08-16

Similar Documents

Publication Publication Date Title
US9485493B2 (en) Method and system for displaying multi-viewpoint images and non-transitory computer readable storage medium thereof
CN107633441A (en) Commodity in track identification video image and the method and apparatus for showing merchandise news
CN111131904B (en) Video playing method and head-mounted electronic equipment
Feng et al. LiveDeep: Online viewport prediction for live virtual reality streaming using lifelong deep learning
CN106060515A (en) Panoramic media file push method and apparatus
WO2018000609A1 (en) Method for sharing 3d image in virtual reality system, and electronic device
CN105791977A (en) Virtual reality data processing method and system based on cloud service and devices
US20230285854A1 (en) Live video-based interaction method and apparatus, device and storage medium
CN110516749A (en) Model training method, method for processing video frequency, device, medium and calculating equipment
CN109460482B (en) Courseware display method and device, computer equipment and computer readable storage medium
CN110947177A (en) Method, system and equipment for cloud game teaching interaction and computer readable storage medium thereof
CN106412614A (en) Electronic gift playing method and device
CN111405314B (en) Information processing method, device, equipment and storage medium
CN114664138B (en) Teaching resource interaction method and system based on data stream pushing
CN110544316B (en) Virtual reality playback method, system, equipment and storage medium
EP4344234A1 (en) Live broadcast room presentation method and apparatus, and electronic device and storage medium
CN107197339A (en) Display control method, device and the head-mounted display apparatus of film barrage
WO2017125899A1 (en) Modular content deployment and playback control system for educational application
CN112423035A (en) Method for automatically extracting visual attention points of user when watching panoramic video in VR head display
CN115578779B (en) Training of face changing model, video-based face changing method and related device
CN111539978B (en) Method, apparatus, electronic device and medium for generating comment information
CN114765692B (en) Live broadcast data processing method, device, equipment and medium
CN115242980B (en) Video generation method and device, video playing method and device and storage medium
CN117894068A (en) Motion data processing method, training method and device for key frame extraction model
Manfredi et al. LSTM-based Viewport Prediction for Immersive Video Systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant