CN115909825A - System, method and teaching end for realizing remote education - Google Patents


Info

Publication number
CN115909825A
Authority
CN
China
Prior art keywords: teaching, projection, user, human body, dimensional parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110924590.7A
Other languages
Chinese (zh)
Inventor
吴增程
Current Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Guangzhou Shiyuan Artificial Intelligence Innovation Research Institute Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202110924590.7A priority Critical patent/CN115909825A/en
Publication of CN115909825A publication Critical patent/CN115909825A/en
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a system, a method and a teaching end for realizing remote education. The system comprises a teaching display end and a teaching projection end. The teaching display end is provided with an image acquisition device aimed at a user, and is used for acquiring the three-dimensional parameters of the user through the image acquisition device and transmitting them to the teaching projection end. The teaching projection end is provided with a projection device, and is used for receiving the three-dimensional parameters transmitted by the teaching display end, generating projection data according to them, and presenting the projection data through the projection device. The user of the teaching display end does not need to wear a bulky head-mounted holographic display; the teaching display content is reproduced through the system alone. Because the transmitted data is not the finally presented projection data but the three-dimensional parameters, the data volume is small, the transmission speed and burden are low, and the design difficulty and cost of the whole system are both low.

Description

System, method and teaching end for realizing distance education
Technical Field
The invention relates to the field of distance education, in particular to a system, a method and a teaching end for realizing distance education.
Background
Remote online education is a powerful way to balance educational resources. In one current mode of remote online education, students watch remote real-time or recorded video on a classroom display. This achieves the purpose of remote education, but the mode is simple and monotonous: students and teachers lack interaction, so students cannot experience the teaching process as if they were present.
In addition, a slightly improved mode of remote online education arranges at least one holographic display terminal in both the teacher's lecture hall and the students' lecture hall, so as to construct a holographic imaging environment that fuses the virtual and the real. However, a holographic head-mounted display with an augmented-reality function must be provided for the lecturer, a cloud-rendered holographic picture must be transmitted to the remote lecture hall through a 5G network, and the teaching activities in the lecture hall are reproduced in the remote hall in the form of a holographic LED screen or the like. This mode uses complicated equipment, its design difficulty and cost are high, and for the teacher, wearing a holographic head-mounted display is not only heavy and affects the teaching state, but also reduces the students' attention in class.
Disclosure of Invention
The invention aims to overcome at least one defect of the prior art, provides a system, a method and a teaching end for realizing distance education, and is used for solving the problems of high design difficulty, high cost and poor classroom effect of the conventional distance education mode.
The technical scheme adopted by the invention comprises the following steps:
a system for implementing distance education, comprising a teaching display end and a teaching projection end. The teaching display end is provided with an image acquisition device aimed at a user, and is used for acquiring the three-dimensional parameters of the user through the image acquisition device and transmitting the three-dimensional parameters to the teaching projection end. The teaching projection end is provided with a projection device, and is used for receiving the three-dimensional parameters transmitted by the teaching display end, generating projection data according to the three-dimensional parameters, and presenting the projection data through the projection device.
Further, the three-dimensional parameters of the user comprise human skeleton three-dimensional parameters; the projection data includes human projection data; the teaching demonstration end includes: the human skeleton parameter processing unit is used for acquiring the human skeleton three-dimensional parameters of the user through the image acquisition device and transmitting the human skeleton three-dimensional parameters to the teaching projection end; the teaching projection end includes: the human body parameter assignment unit is used for assigning the human body skeleton three-dimensional parameters to a human body model; and the human body data conversion unit is used for converting the assigned human body model into the human body projection data, and presenting the human body projection data through the projection device.
Further, the human skeleton three-dimensional parameters are the three-dimensional parameters of a plurality of joint points of the user.
In a specific embodiment, the three-dimensional parameters of the user further comprise three-dimensional parameters of the face of the user, and the projection data further includes facial projection data. The teaching display end further comprises: a face parameter processing unit, used for acquiring the facial three-dimensional parameters of the user through the image acquisition device and transmitting them to the teaching projection end. The teaching projection end further comprises: a face parameter assignment unit, used for assigning the received facial three-dimensional parameters of the user to a face model; and a face data conversion unit, used for converting the assigned face model into the facial projection data and presenting the facial projection data through the projection device.
Further, the teaching display end further comprises: a face request receiving and transmitting unit, used for receiving a virtual face request and transmitting it to the teaching projection end. The teaching projection end further comprises: a face request receiving unit for receiving the virtual face request; and a face model selection unit, configured to use a preset virtual face model as the face model when the virtual face request is received, and a preset real face model as the face model when it is not.
In another specific embodiment, the teaching display end further comprises: a human body request receiving and transmitting unit, used for receiving a virtual human body request and transmitting it to the teaching projection end. The teaching projection end further comprises: a human body request receiving unit for receiving the virtual human body request; and a human body model selection unit, used for taking a preset virtual human body model as the human body model when the virtual human body request is received, and a preset real human body model as the human body model when it is not.
Further, the teaching demonstration end further comprises: and the real human body model acquisition unit is used for scanning the user through the image acquisition device to acquire scanning data, generating a human body model proportional to the user according to the scanning data, and taking the generated model as the preset real human body model.
Further, the teaching display end is a teaching tablet device and the image acquisition device is arranged on the frame of the teaching display end, and/or the teaching projection end is a teaching tablet device and the projection device is arranged on the frame of the teaching projection end.
A teaching display end is provided with an image acquisition device aimed at a user. The teaching display end is used for acquiring the three-dimensional parameters of the user through the image acquisition device and transmitting them to a teaching projection end, so that the teaching projection end generates projection data according to the three-dimensional parameters and presents the projection data through a projection device arranged on the teaching projection end.
A teaching projection end is provided with a projection device. The teaching projection end is used for receiving the three-dimensional parameters of a user transmitted by a teaching display end, generating projection data according to the three-dimensional parameters, and presenting the projection data through the projection device. The three-dimensional parameters of the user are acquired by the teaching display end through an image acquisition device arranged on the teaching display end.
A method for realizing remote education is applied to a teaching display end provided with an image acquisition device aimed at a user. The method comprises: acquiring the three-dimensional parameters of the user through the image acquisition device and transmitting them to a teaching projection end, so that the teaching projection end generates projection data according to the three-dimensional parameters and presents the projection data through a projection device arranged on the teaching projection end.
A method for realizing remote education is applied to a teaching projection end provided with a projection device. The method comprises: receiving the three-dimensional parameters of a user transmitted by a teaching display end, generating projection data according to the three-dimensional parameters, and presenting the projection data through the projection device. The three-dimensional parameters of the user are acquired by the teaching display end through an image acquisition device arranged on the teaching display end.
Compared with the prior art, the invention has the following beneficial effects:
in the system provided by the invention, the content displayed by the user at the teaching display end is presented to the user at the teaching projection end in the form of projection data, realizing remote education between the two ends. The user at the teaching display end does not need to wear a heavy head-mounted holographic display; the teaching display content is reproduced through only the image acquisition device and the projection device at the teaching projection end. Moreover, the transmitted data is not the finally presented projection data but the three-dimensional parameters of the user, so the data volume is small, the transmission speed and burden are low, the design difficulty of the whole system is low, and the cost is also low.
Drawings
Fig. 1 is a schematic diagram of the system of embodiment 1 of the present invention.
Fig. 2 is a schematic position diagram of 25 joint points explained as an example in embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of the apparatus and the unit components of the teaching demonstration end 11 in embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of the apparatus and the unit components of the teaching demonstration end 21 in embodiment 2 of the present invention.
Fig. 5 is a schematic diagram of the apparatus and the unit components of the teaching projection terminal 22 in embodiment 2 of the present invention.
Fig. 6 is a schematic diagram of the apparatus and the unit components of the teaching demonstration end 31 in embodiment 3 of the present invention.
Fig. 7 is a schematic diagram illustrating the positions of 68 key points in embodiment 3 of the present invention.
Fig. 8 is a schematic diagram of the apparatus and the unit components of the teaching projection terminal 32 according to embodiment 3 of the present invention.
Fig. 9 is a schematic diagram of the device and the unit components of the teaching demonstration terminal 41 in embodiment 4 of the present invention.
Fig. 10 is a schematic diagram of the apparatus and the unit components of the teaching projection end 42 in embodiment 4 of the present invention.
Fig. 11 is a schematic diagram of the apparatus and the unit components of the teaching demonstration end 51 in embodiment 5 of the present invention.
Fig. 12 is a schematic diagram of the apparatus and the unit components of the teaching projection terminal 52 in embodiment 5 of the present invention.
Detailed Description
The drawings are only for purposes of illustration and are not to be construed as limiting the invention. For a better understanding of the following embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
Example 1
This embodiment provides a system for realizing remote education between at least two teaching ends. The at least two teaching ends are generally located in different places, and teaching is realized through a propagation medium such as the internet. Remote education is therefore also called network education, i.e. education carried out using network technology and environments.
As shown in fig. 1, in this embodiment, the teaching terminals in the system for implementing distance education may be specifically divided into a teaching display terminal 11 and a teaching projection terminal 12, and the number of the teaching display terminal 11 and the teaching projection terminal 12 may be one or more.
A teaching demonstration end 11 provided with an image acquisition device 13 for aiming at the user A1. The teaching display terminal 11 is used for acquiring the three-dimensional parameters of the user A1 through the image acquisition device 13, and transmitting the three-dimensional parameters to the teaching projection terminal 12.
In this embodiment, the user A1 of the teaching display end 11 is the person giving the interactive teaching demonstration, and may be a teacher or a student. The three-dimensional parameters of the user A1 are composed of the three-dimensional parameters of a plurality of positions on the body of the user A1, and the combination of these parameters can represent key information such as the actions and body posture of the user A1 during the actual demonstration. Specifically, a three-dimensional parameter may be a three-dimensional position parameter, a three-dimensional size parameter, or a parameter representing shape, deformation or direction, where a three-dimensional position parameter is generally expressed as three-dimensional coordinates. Preferably, in this embodiment, the three-dimensional parameters of the user A1 at least include three-dimensional position parameters, so as to clearly represent the three-dimensional coordinates of several positions on the body of the user A1. Preferably, the image acquisition device 13 is placed at the top of the central axis of the teaching display end 11 so as to better capture the three-dimensional parameters of the user A1. Preferably, to make the interactive demonstration more convenient for the user A1, the teaching display end 11 is a teaching tablet device (a small, portable personal computer that generally takes a touch screen as its basic input device), and the image acquisition device 13 is then arranged on the tablet frame of the teaching display end 11, most preferably at the top of the central axis of the frame.
Specifically, the image acquisition device 13 of the teaching display end 11 is a depth image acquisition device, such as a depth camera, i.e. a device capable of measuring the distance from an object to the camera or other depth information. The depth image acquisition device 13 acquires image data in the area it is aimed at and obtains the three-dimensional parameters of objects in that area directly from the image data. In this embodiment, when the user A1 enters the aligned area of the image acquisition device 13, the depth image acquisition device 13 can obtain the three-dimensional parameters of the user A1 directly from the real-time image data.
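As an illustrative sketch only (the patent specifies no particular camera model or intrinsics), the way a depth camera yields three-dimensional position parameters directly can be seen from pinhole back-projection: a pixel (u, v) with measured depth z maps to a camera-space point. The focal lengths and principal point below are hypothetical values:

```python
def backproject(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z into camera-space (x, y, z).

    fx, fy: focal lengths in pixels; cx, cy: principal point (pinhole model).
    """
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Hypothetical intrinsics, for illustration only.
FX = FY = 500.0
CX, CY = 320.0, 240.0

# A pixel at the principal point lies on the optical axis: (0, 0, z).
print(backproject(320.0, 240.0, 2.0, FX, FY, CX, CY))  # (0.0, 0.0, 2.0)
```

A non-depth camera (embodiment 2 below) lacks z and must recover it by model fitting instead.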
A teaching projection terminal 12 provided with a projection device 14. The teaching projection terminal 12 is configured to receive the three-dimensional parameters of the user A1 transmitted by the teaching display terminal 11, generate projection data according to the received three-dimensional parameters of the user A1, and present the projection data through the projection device 14.
The user of the teaching projection end 12 is the person receiving the content displayed by the user of the teaching display end, and may be a teacher or a student. The projection device 14 may be any suitable projection lens or apparatus, and the projection data is data suitable for the projection device 14 to project. Preferably, so that the user of the teaching projection end 12 has a more realistic interactive experience, the projection device 14 is a holographic projection device, such as a holographic projection lens; the projection data converted by the teaching projection end 12 is then holographic projection data, projected onto a suitable medium such as a holographic projection film, so as to present the interactive teaching content of the user A1 more clearly. Preferably, the projection device 14 is arranged at the top of the central axis of the teaching projection end 12 so as to better present the projection data. Preferably, to make receiving remote education more convenient for the user, the teaching projection end 12 is a teaching tablet device, and the projection device 14 is arranged on the tablet frame of the teaching projection end 12, most preferably at the top of the central axis of the frame.
Specifically, the three-dimensional parameters of the user A1 at the teaching display end 11 include human skeleton three-dimensional parameters, i.e. parameters capable of representing the overall actions and postures of the head, trunk and limbs of the user A1. More specifically, since the joint points on the head, trunk and limbs can represent the posture and motion of the user more accurately, the human skeleton three-dimensional parameters are composed of the three-dimensional parameters of a plurality of joint points of the user A1, including joint points of the limbs, the trunk and the fingers. Preferably, the acquired human skeleton three-dimensional parameters are the three-dimensional parameters of 25 joint points of the user A1. As an example, as shown in fig. 2, the 25 joint points are the spine base point, spine middle point, neck point, head point, spine shoulder point, left shoulder point, left elbow point, left wrist point, left hand point, right shoulder point, right elbow point, right wrist point, right hand point, left hip point, left knee point, left ankle point, left foot point, right hip point, right knee point, right ankle point, right foot point, left finger tip point, left thumb tip point, right finger tip point and right thumb tip point of the human body.
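The advantage claimed by the invention, transmitting three-dimensional parameters rather than rendered frames, can be illustrated with a minimal serialization sketch. The 25-joint count follows the list above; the binary format itself is an assumption for illustration, not part of the patent:

```python
import struct

NUM_JOINTS = 25  # the 25 joint points enumerated in the text

def pack_skeleton(joints):
    """Serialize 25 (x, y, z) float triples into one compact binary frame."""
    assert len(joints) == NUM_JOINTS
    flat = [c for joint in joints for c in joint]
    return struct.pack(f"<{NUM_JOINTS * 3}f", *flat)

def unpack_skeleton(payload):
    """Recover the list of (x, y, z) triples from a binary frame."""
    flat = struct.unpack(f"<{NUM_JOINTS * 3}f", payload)
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

frame = pack_skeleton([(0.0, 0.0, 2.0)] * NUM_JOINTS)
print(len(frame))  # 300 bytes per skeleton frame
```

At 25 joints x 3 float32 coordinates, a skeleton frame is 300 bytes, orders of magnitude below a rendered video or hologram frame, which is the basis of the low transmission burden described above.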
Based on this, as shown in fig. 3, the teaching demonstration end 11 specifically includes:
the human skeleton parameter processing unit 111 is configured to acquire three-dimensional parameters of a plurality of joint points of the user A1 through the depth image acquisition device 13, and transmit the three-dimensional parameters of the plurality of joint points of the user A1 to the teaching projection terminal 12;
specifically, as shown in the figure, the teaching projection terminal 12 needs to assign the three-dimensional parameters of the plurality of joint points of the user A1 to a human body model after receiving the three-dimensional parameters, and then converts the assigned human body model into projection data.
Based on this, as shown in fig. 3, the teaching projection terminal 12 specifically includes:
a human body parameter assignment unit 121, configured to assign the received three-dimensional parameters of the plurality of joint points of the user A1 to a human body model;
the human body model can be a preset three-dimensional model, the model can form a main trunk and an appearance of a human body or other figures, when three-dimensional parameters of a plurality of joint points of a user A1 are assigned to a plurality of positions of the human body model according to corresponding positions of the three-dimensional parameters, the assigned human body model can present postures and actions of the user A1 during teaching interactive display, and teaching display contents of the user A1 can be reproduced through the human body model.
And the human body data conversion unit 122 is configured to convert the assigned human body model into human body projection data, and present the projection data through the projection device 14.
Specifically, when the human body data conversion unit 122 converts the assigned human body model into the human body projection data, the assigned human body model is imported into any suitable three-dimensional rendering software, and the human body model is converted into the projection data by using a three-dimensional rendering technology.
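The conversion step can be reduced to its core geometry in a short sketch: a hypothetical pinhole projection of one model vertex into two-dimensional image coordinates. As the text notes, a real system would delegate this to three-dimensional rendering software; this is only the underlying operation:

```python
def project_vertex(x, y, z, f=1.0):
    """Pinhole projection of one camera-space vertex to 2D image coordinates."""
    if z <= 0.0:
        raise ValueError("vertex must lie in front of the camera")
    return (f * x / z, f * y / z)

# A vertex at (1, 1, 2) projects to (0.5, 0.5) with unit focal length.
print(project_vertex(1.0, 1.0, 2.0))  # (0.5, 0.5)
```

Applying this to every vertex of the assigned model yields the two-dimensional content that the projection device 14 ultimately presents.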
In this embodiment, the content displayed by the user A1 of the teaching display end 11 is presented to the user of the teaching projection end 12 in the form of projection data, or even holographic projection data, thereby realizing remote education between the teaching display end 11 and the teaching projection end 12. The user A1 does not need to wear a heavy head-mounted holographic display; the teaching display content is reproduced through only the image acquisition device 13 and the projection device 14. Moreover, the transmitted data is not the finally presented projection data but the three-dimensional parameters of the user, so the data volume is small, the transmission speed and burden are low, the design difficulty of the whole system is low, and the cost is also low.
Example 2
Based on the same idea as embodiment 1, this embodiment provides a system for realizing remote education between a teaching display end 21 and a teaching projection end 22. The difference from embodiment 1 is that the image acquisition device 23 of the teaching display end 21 is a non-depth image acquisition device, such as a color camera, which acquires image data in the area it is aimed at but cannot obtain the three-dimensional parameters of objects directly from that image data. The three-dimensional parameters of the user A2 of the teaching display end 21 must therefore be calculated from the image data by analysis.
The teaching display end 21 is equipped with the non-depth image acquisition device 23 and, as shown in fig. 4, specifically includes:
a human body image data acquisition unit 211 for acquiring image data directed to the user A2 by the non-depth image capturing device 23;
the human skeleton parameter processing unit 212 is configured to obtain three-dimensional parameters of a plurality of joint points of the user A2 according to the image data, and transmit the three-dimensional parameters of the plurality of joint points of the user A2 to the teaching projection terminal 22.
Specifically, the human skeleton parameter processing unit 212 obtains the three-dimensional parameters of the plurality of joint points of the user from the image data by using a posture, action and gesture recognition algorithm. The acquisition process is explained below taking the common recognition model SMPL-X as an example:
the SMPL model refers to a skinned multi-person linear model (skinned multi-person linear model), and the SMPL-X model is a model improved on the basis of the SMPL model.
The SMPL-X model is denoted by S, which is defined as a function: M_w = S(φ_w, θ_w, β_w).

Wherein φ_w ∈ R^3 is the orientation parameter of the whole body; R is a joint point regression function;

θ_w is the pose parameter of the body and the left and right hands, θ_w ∈ R^((21+15×2)×3), where 21 is the number of joint points on the user's body and 15 is the number of joint points in one hand of the user;

β_w is the shape parameter;

M_w ∈ R^(10475×3) is the deformed vertex information obtained by S.
After M_w is calculated, the whole-body orientation parameter, pose parameter and shape parameter are available, and the three-dimensional position parameters of the joint points can then be obtained through the joint point regression function R:

J_w = R(M_w)

where J_w collects the three-dimensional coordinates of the joint points.
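In SMPL-family models the joint point regression function R is a linear map from mesh vertices to joint positions. A toy numpy sketch with made-up dimensions (the real SMPL-X mesh has 10475 vertices, not 4) illustrates the operation J_w = R(M_w):

```python
import numpy as np

# Toy regressor: 2 joints from 4 mesh vertices. Each row holds convex
# weights over the vertices; rows sum to 1.
R = np.array([
    [0.5, 0.5, 0.0, 0.0],   # joint 0 = midpoint of vertices 0 and 1
    [0.0, 0.0, 0.5, 0.5],   # joint 1 = midpoint of vertices 2 and 3
])

M = np.array([               # deformed vertex positions M_w, shape (4, 3)
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [0.0, 4.0, 0.0],
    [0.0, 0.0, 6.0],
])

J = R @ M                    # joint 3D positions J_w, shape (2, 3)
print(J)  # [[1. 0. 0.], [0. 2. 3.]]
```

Because J depends on M linearly, joint positions follow directly from the deformed mesh that S produces, which is why only the compact parameters need to be transmitted.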
The teaching projection end 22 is equipped with the projection device 24 and, as shown in fig. 5, specifically includes:
a human body parameter assigning unit 221, configured to assign the received three-dimensional parameters of the plurality of joint points to a human body model;
when the three-dimensional parameters of the plurality of joint points received by the human body parameter assignment unit 221 are assigned to the human body model and converted into projection data, in the conversion process, parameters related to the joint points in the projection data may have certain errors with parameters of the joint points in the image data acquired by the teaching display terminal 21, and in order to reduce the errors as much as possible, preferably, the human body parameter assignment unit 221 optimizes the received three-dimensional parameters of the joint points before assigning the three-dimensional parameters of the joint points of the user A2 to the human body model, so that a human body grid which is more aligned with and more accurate to the image data acquired by the teaching display terminal 21 is obtained.
Taking the three-dimensional parameters of the joint points calculated by the SMPL-X model as an example: the received three-dimensional parameters of the joint points are projected to obtain two-dimensional parameters, and a reprojection error F_2d exists between these and the two-dimensional parameters of the joint points in the image data acquired by the teaching display end 21. F_pri is a prior term that keeps θ_w and β_w of the SMPL-X model within a reasonable range. Adding the error and the prior term gives the objective function to be optimized: F([φ_w, θ_w, β_w, c_w]) = F_2d + F_pri, where c_w = (t_w, s_w) is the weak-perspective projection parameter. The obtained three-dimensional parameters of the joint points are optimized by fitting this objective function.
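As a deliberately simplified sketch of the fitting idea, the reprojection error F_2d can be minimized over the weak-perspective parameters c_w = (t_w, s_w) alone by closed-form least squares. The full objective would also optimize pose and shape and add the prior F_pri; all names and values here are illustrative:

```python
import numpy as np

def fit_weak_perspective(joints_3d, joints_2d):
    """Least-squares fit of c_w = (t_w, s_w) so that
    joints_2d ~= s_w * joints_3d[:, :2] + t_w (weak-perspective camera)."""
    P = joints_3d[:, :2]
    Q = joints_2d
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    s = (Pc * Qc).sum() / (Pc * Pc).sum()   # optimal scale
    t = Q.mean(0) - s * P.mean(0)           # optimal translation
    return s, t

def f_2d(joints_3d, joints_2d, s, t):
    """Reprojection error F_2d under the fitted camera."""
    return np.sum((s * joints_3d[:, :2] + t - joints_2d) ** 2)

X = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
Y = 2.0 * X[:, :2] + np.array([3.0, 4.0])   # synthetic 2D observations
s, t = fit_weak_perspective(X, Y)
print(s, t)  # s ~= 2.0, t ~= [3, 4]; residual F_2d ~= 0
```

With the camera fixed, the remaining parameters would then be refined by gradient-based optimization of F_2d + F_pri, which is the step the patent describes.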
The human body data conversion unit 222 is configured to convert the assigned human body model into projection data, and present the projection data through the projection apparatus 24.
Except for the above differences, the explanations of the remaining definitions, the descriptions of the specific and preferred schemes, and the like, of the system for implementing remote education provided in embodiment 2 are the same as those in embodiment 1, and therefore, the technical effects brought by the same definitions, the specific and preferred schemes are the same as those of the system for implementing remote education provided in embodiment 1, and specific contents can be referred to the description of embodiment 1, and are not described again here.
Example 3
Based on the same idea as that of embodiment 1, this embodiment provides a system for implementing distance education, which is used for implementing distance education between a teaching demonstration terminal 31 and a teaching projection terminal 32, and embodiment 3 is different from embodiment 1 in that the three-dimensional parameters of a user A3 of the teaching demonstration terminal 31 acquired by the system provided by this embodiment include three-dimensional parameters of a plurality of joint points of the user A3, and three-dimensional parameters of a face of the user A3.
The teaching demonstration end 31 is provided with a depth image acquisition device 33 and, as shown in fig. 6, specifically includes:
the human skeleton parameter processing unit 311 is configured to acquire three-dimensional parameters of a plurality of joint points of the user A3 through the depth image acquisition device 33, and transmit the three-dimensional parameters of the plurality of joint points of the user A3 to the teaching projection terminal 32.
The face parameter processing unit 312 is configured to obtain the three-dimensional parameters of the face of the user A3 through the depth image collecting device 33, and transmit the three-dimensional parameters of the face of the user A3 to the teaching projection terminal 32.
Specifically, the three-dimensional parameters of the face of the user A3 are obtained by first acquiring image data through the depth image acquisition device 33 and then extracting the parameters from the image data. The three-dimensional parameters of the face of the user A3 consist of three-dimensional parameters of a plurality of positions on the face of the user A3, and the combination of these positions should be able to represent key information such as the expression and facial movements of the user A3 during the actual teaching interactive display. More specifically, the three-dimensional parameters of the face of the user A3 consist of three-dimensional parameters of a plurality of key points of the face of the user A3, including a plurality of key points of the five sense organs of the user A3. Preferably, as shown in fig. 7, the acquired facial three-dimensional parameters are the three-dimensional parameters of 68 key points of the face of the user A3, wherein key points 1 to 17 lie on the contour of the face, key points 18 to 22 on the left eyebrow, key points 23 to 27 on the right eyebrow, key points 28 to 31 on the nose bridge, key points 32 to 36 on the nose, key points 37 to 42 on the left eye, key points 43 to 48 on the right eye, and key points 49 to 68 on the mouth.
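The grouping of the 68 key points into facial regions can be expressed directly as a lookup table; the following sketch (region names are chosen for illustration) uses the 1-based index ranges given above:

```python
# Index ranges (1-based, inclusive upper bound via range stop) of the 68
# facial key points, grouped by region as described in the text.
FACE_REGIONS = {
    "jaw_contour":   range(1, 18),   # key points 1-17
    "left_eyebrow":  range(18, 23),  # key points 18-22
    "right_eyebrow": range(23, 28),  # key points 23-27
    "nose_bridge":   range(28, 32),  # key points 28-31
    "nose":          range(32, 37),  # key points 32-36
    "left_eye":      range(37, 43),  # key points 37-42
    "right_eye":     range(43, 49),  # key points 43-48
    "mouth":         range(49, 69),  # key points 49-68
}

def region_of(keypoint_index):
    """Return the facial region a 1-based key-point index belongs to."""
    for name, idx_range in FACE_REGIONS.items():
        if keypoint_index in idx_range:
            return name
    raise ValueError(f"key point {keypoint_index} out of range 1..68")
```

The ranges cover exactly 68 key points in total, matching the preferred landmark set above.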
Preferably, the facial three-dimensional parameters of the user A3 further include a pose coefficient and an expression coefficient of the user A3, which can be calculated by iteratively computing the differences between the plurality of key points of the face of the user A3 and preset standard face key points. The pose coefficient consists of 3 Euler angles and can be represented by the triple (pitch, yaw, roll), where pitch is the pitch angle of the face rotating about the x-axis, yaw is the yaw angle of the face rotating about the y-axis, and roll is the roll angle of the face rotating about the z-axis, each determined from the three-dimensional parameters of the plurality of key points on the face of the user A3. The expression coefficients simulate the movement of the muscle groups of the face of the user A3 and can thereby describe most expressions of the face of the user A3.
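Once a head rotation matrix has been fitted from the facial key points, the (pitch, yaw, roll) triple can be recovered by a standard decomposition. The sketch below assumes the common Rz·Ry·Rx (roll·yaw·pitch) convention; other libraries use different conventions, so this is one illustrative choice rather than the method specified by the embodiments:

```python
import math

def euler_from_rotation(R):
    """Decompose a 3x3 rotation matrix R = Rz(roll) @ Ry(yaw) @ Rx(pitch)
    into the (pitch, yaw, roll) Euler angles, in radians."""
    pitch = math.atan2(R[2][1], R[2][2])                        # rotation about x
    yaw = math.atan2(-R[2][0], math.hypot(R[2][1], R[2][2]))    # rotation about y
    roll = math.atan2(R[1][0], R[0][0])                         # rotation about z
    return pitch, yaw, roll
```

This decomposition is well defined away from the gimbal-lock case yaw = ±90°, which is far outside the head poses expected during teaching.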
The teaching projection end 32 is provided with a projection device 34, and the projection data converted by the teaching projection end 32 consist of human body projection data and face projection data. As shown in fig. 8, the teaching projection end 32 specifically includes:
a human body parameter assigning unit 321, configured to assign the received three-dimensional parameters of the plurality of joint points to a human body model;
and the human body data conversion unit 322 is configured to convert the assigned human body model into human body projection data, and present the human body projection data through the projection device 34.
A face parameter assignment unit 323 for assigning the received three-dimensional parameters of the face of the user to a face model;
the face model can be a preset three-dimensional model capable of forming the entire head and face of a human body or of any other character. After the three-dimensional parameters of the plurality of key points of the face of the user A3 are assigned to the corresponding positions of the face model, the assigned face model, combined with the pose coefficient and expression coefficient of the face of the user A3, can present the expression, head movement and facial movement of the user A3 during teaching interaction, so that the teaching display content of the user A3 is reproduced through the face model.
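One common way to drive a face model with expression coefficients, assumed here purely for illustration, is a linear blendshape formulation V = V0 + Σk wk·Bk, where each basis Bk displaces the neutral template toward one muscle-group movement:

```python
import numpy as np

def apply_expression(template_vertices, blendshapes, expression_coeffs):
    """Deform a neutral face model with expression coefficients.

    template_vertices: (V, 3) neutral mesh vertices.
    blendshapes: list of (V, 3) displacement bases, one per muscle-group motion.
    expression_coeffs: list of scalar weights wk (the expression coefficients).
    """
    V = template_vertices.copy()            # keep the neutral template intact
    for w, B in zip(expression_coeffs, blendshapes):
        V += w * B                          # add each weighted displacement
    return V
```

The pose coefficient would then be applied on top of the deformed vertices as a rigid rotation of the whole head.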
A face data conversion unit 324, configured to convert the assigned face model into face projection data, and render the face projection data through the projection device 34.
Specifically, when the projection device 34 projects the face projection data, the face projection data is combined with the human body projection data to form a whole, so as to more accurately reproduce the teaching demonstration content of the user A3.
In this embodiment, the three-dimensional parameters of the user A3 acquired by the teaching demonstration end 31 further include facial three-dimensional parameters, so that the projection data presented in front of the user of the teaching projection end 32 further include facial projection data, and the facial expression, facial movement and head movement of the user of the teaching demonstration end 31 can be clearly reproduced for the user of the teaching projection end 32, so that the remote education effect is better.
Except for the above differences, the explanations of the remaining definitions, the descriptions of the specific and preferred schemes, and the like, of the system for implementing remote education provided in embodiment 3 are the same as those of embodiment 1, and therefore, the technical effects brought by the same definitions, the specific and preferred schemes are the same as those of the system for implementing remote education provided in embodiment 1, and specific contents can be referred to the descriptions of embodiment 1, and are not described herein again.
Example 4
Based on the same idea as embodiment 1, this embodiment provides a system for implementing remote education between a teaching demonstration end 41 and a teaching projection end 42. Embodiment 4 differs from embodiment 1 in that the system provided by this embodiment additionally obtains a virtual human body request from the user A4 of the teaching demonstration end 41 and performs different data processing accordingly.
The teaching demonstration end 41 is provided with a depth image acquisition device 43 and, as shown in fig. 9, specifically includes:
the human skeleton parameter processing unit 411 is configured to acquire three-dimensional parameters of a plurality of joint points of the user A4 through the image acquisition device 43, and transmit the three-dimensional parameters of the plurality of joint points of the user A4 to the teaching projection end 42;
and a human body request receiving and transmitting unit 412, configured to receive the virtual human body request and transmit the virtual human body request to the teaching projection terminal 42.
The virtual human body request is sent by the user A4 of the teaching demonstration end 41 and indicates that the user A4 requires the acquired three-dimensional parameters of the plurality of joint points of the user A4 to be assigned to a human body model of a virtual character, where the virtual character may be a fictional character such as a cartoon character, or any real or fictional character other than the user A4.
The teaching projection end 42 is provided with a projection device 44, as shown in fig. 10, specifically including:
a human body request receiving unit 421, configured to receive the virtual human body request;
a human body model selecting unit 422, configured to use a preset virtual human body model as a human body model when the virtual human body request is received, and use a preset real human body model as a human body model when the virtual human body request is not received;
the virtual human body model is a preset model capable of forming the main torso and outline of the virtual character. The real human body model is a preset model capable of forming/reproducing the real main torso and shape of the user A4. The real human body model may also be generated by acquisition in advance: as shown in fig. 9, the teaching demonstration end 41 further includes a real human body model acquisition unit 413, configured to scan the whole body of the user A4 through the image acquisition device 43 to obtain scan data, generate a model proportional to the user A4 from the scan data, and use the generated model as the real human body model. A real human body model obtained by real-time scanning is closer to the current actual state of the user A4 and can therefore reproduce more realistic teaching display content in front of the user of the teaching projection end 42.
When the human body model selecting unit 422 receives a virtual human body request from the user A4, this indicates that the user A4 requires the three-dimensional parameters to be assigned to the human body model of a virtual character, so the human body model selecting unit 422 uses the virtual human body model as the model for subsequent assignment. If no virtual human body request is received from the user A4, this indicates that the user A4 has not specified the model to which the three-dimensional parameters are assigned; in order to reproduce the teaching display content of the user A4 more realistically, the human body model selecting unit 422 in this embodiment defaults to using the real human body model as the model for subsequent assignment.
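The selection rule of unit 422 can be summarized in a short sketch (class and attribute names are invented for illustration and do not appear in the embodiments):

```python
from dataclasses import dataclass

@dataclass
class BodyModelSelector:
    """Mirrors the rule of unit 422: use the virtual character model only when
    the user explicitly requested it; otherwise default to the real human body
    model so the teaching content is reproduced faithfully."""
    virtual_model: str
    real_model: str
    virtual_requested: bool = False

    def receive_virtual_request(self):
        self.virtual_requested = True

    def select(self):
        # Default to the real model when no virtual request has arrived.
        return self.virtual_model if self.virtual_requested else self.real_model
```

The same rule is reused for the face model selection in embodiment 5.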
A human body parameter assigning unit 423, configured to assign the received three-dimensional parameters of the plurality of joint points to a human body model;
and the human body data conversion unit 424 is used for converting the assigned human body model into projection data, and presenting the projection data through the projection device 44.
In this embodiment, the user A4 may, by means of the virtual human body request, appear in front of the user of the teaching projection end 42 in the image of a virtual character, so as to protect the privacy of the user A4.
Except for the above differences, the explanations of the remaining definitions, the descriptions of the specific and preferred schemes, and the like, of the system for implementing remote education provided in embodiment 4 are the same as those of embodiment 1, and therefore, the technical effects brought by the same definitions, the specific and preferred schemes are the same as those of the system for implementing remote education provided in embodiment 1, and specific contents can be referred to the descriptions of embodiment 1, and are not described herein again.
Example 5
Based on the same idea as embodiment 3, this embodiment provides a system for implementing remote education between a teaching demonstration end 51 and a teaching projection end 52. Embodiment 5 differs from embodiment 3 in that the system provided by this embodiment additionally obtains a virtual human body request and a virtual face request from the user A5 of the teaching demonstration end 51 and performs different data processing accordingly.
The teaching demonstration end 51 is provided with a depth image acquisition device 53 and, as shown in fig. 11, specifically includes:
the human skeleton parameter processing unit 511 is configured to acquire three-dimensional parameters of a plurality of joint points of the user A5 through the image acquisition device 53, and transmit the three-dimensional parameters of the plurality of joint points of the user to the teaching projection terminal 52;
the human body request receiving and transmitting unit 512 is configured to receive the virtual human body request and transmit the virtual human body request to the teaching projection terminal 52.
The virtual human body request is sent by the user A5 of the teaching demonstration end 51 and indicates that the user A5 requires the acquired three-dimensional parameters of the plurality of joint points of the user A5 to be assigned to a human body model of a virtual character, where the virtual character may be a fictional character such as a cartoon character, or any real or fictional character other than the user A5.
A face request receiving and transmitting unit 513, configured to receive the virtual face request and transmit the virtual face request to the teaching projection terminal 52;
the virtual face request is sent by the user A5 at the teaching display end 51, and is used to indicate that the user A5 requires to assign the three-dimensional parameters, the pose coefficients, and the expression coefficients of the acquired key points of the face of the user A5 to the face model of the virtual character.
The facial parameter processing unit 514 is configured to obtain three-dimensional parameters of a plurality of key points on the face of the user A5 through the image capturing device 53, obtain pose coefficients and expression coefficients of the face of the user A5, and transmit the three-dimensional parameters, the pose coefficients and the expression coefficients of the plurality of key points on the face of the user A5 to the teaching projection terminal 52.
The teaching projection end 52 is provided with a projection device 54 and, as shown in fig. 12, specifically includes:
a human body request receiving unit 521, configured to receive the virtual human body request;
a human body model selecting unit 522, configured to take the preset virtual human body model as a human body model when the virtual human body request is received, and take the preset real human body model as a human body model when the virtual human body request is not received;
the virtual human body model is a preset model capable of forming the main torso and outline of the virtual character. The real human body model is a preset model capable of forming/reproducing the real main torso and shape of the user. The real human body model may also be generated by acquisition in advance: as shown in fig. 11, the teaching demonstration end 51 further includes a real human body model acquisition unit 515, configured to scan the whole body of the user A5 through the image acquisition device 53 to obtain scan data, generate a model proportional to the user A5 from the scan data, and use the generated model as the real human body model. A real human body model obtained by real-time scanning is closer to the current actual state of the user A5 and can reproduce more realistic teaching display content in front of the user of the teaching projection end 52.
When the human body model selecting unit 522 receives a virtual human body request from the user A5, this indicates that the user A5 requires the three-dimensional parameters to be assigned to the human body model of a virtual character, so the human body model selecting unit 522 uses the virtual human body model as the model for subsequent assignment. If no virtual human body request is received from the user A5, this indicates that the user A5 has not specified the model to which the three-dimensional parameters of the human skeleton are assigned; in order to reproduce the teaching display content of the user A5 more realistically, the human body model selecting unit 522 in this embodiment defaults to using the real human body model as the model for subsequent assignment.
A human body parameter assigning unit 523 configured to assign the received three-dimensional parameters of the plurality of joint points to a human body model;
a human body data conversion unit 524, configured to convert the assigned human body model into human body projection data, and present the human body projection data through the projection apparatus 54.
A face request receiving unit 525 configured to receive the virtual face request;
a face model selecting unit 526, configured to use a preset virtual face model as the face model when the virtual face request is received, and use a preset real face model as the face model when the virtual face request is not received;
the virtual face model is a preset model capable of forming the entire head and face of the virtual character. The real face model is a preset model capable of forming/reproducing the real head and face of the user A5. The real face model may also be generated by acquisition in advance: as shown in fig. 11, the teaching demonstration end 51 further includes a real face model acquisition unit 516, configured to scan the head of the user A5 through the image acquisition device 53 to obtain scan data, generate a face model proportional to the user A5 from the scan data, and use the generated model as the real face model. A real face model obtained by real-time scanning is closer to the current actual facial expression and movement of the user A5 and can therefore reproduce more realistic teaching display content in front of the user of the teaching projection end 52.
When the face model selection unit 526 receives a virtual face request from the user A5, this indicates that the user A5 requires the three-dimensional parameters to be assigned to the face model of a virtual character, so the face model selection unit 526 uses the virtual face model as the model for subsequent assignment. If no virtual face request is received from the user A5, this indicates that the user A5 has not specified the model to which the three-dimensional parameters of the face are assigned; in order to reproduce the teaching demonstration content of the user A5 more realistically, the face model selection unit 526 in this embodiment defaults to using the real face model as the model for subsequent assignment.
The face parameter assignment unit 527 is used for assigning the received three-dimensional parameters of the key points of the face of the user A5 to a face model and adjusting the assigned face model by using the pose coefficients and the expression coefficients;
a face data conversion unit 528, configured to convert the assigned face model into face projection data, and render the face projection data through the projection device 54.
In this embodiment, the user A5 may, by means of the virtual human body request and the virtual face request, appear in front of the user of the teaching projection end 52 with the body and face of a virtual character, so as to protect the privacy of the user A5.
Except for the above differences, the explanations of the remaining definitions, the descriptions of the specific and preferred schemes, and the like, of the system for implementing remote education provided in embodiment 5 are the same as those of embodiment 3, and therefore, the technical effects brought by the same definitions, the specific and preferred schemes are the same as those of the system for implementing remote education provided in embodiment 3; specific contents can be referred to the description of embodiment 3, and are not described herein again.
Example 6
Based on the same ideas as embodiments 1 to 5, this embodiment provides a method for implementing remote education, which is applied to a teaching demonstration end provided with an image acquisition device aimed at a user;
the method comprises the following steps:
s1: acquiring three-dimensional parameters of the user through the image acquisition device;
s2: and transmitting the three-dimensional parameters to a teaching projection end so that the teaching projection end generates projection data according to the received three-dimensional parameters of the user, and displaying the projection data through a projection device arranged on the teaching projection end.
Specifically, the three-dimensional parameters of the user comprise human skeleton three-dimensional parameters; the projection data comprises human body projection data;
the specific implementation procedure of step S1 is as follows: acquiring human skeleton three-dimensional parameters of the user through the image acquisition device; the specific execution process of step S2 is: and transmitting the human skeleton three-dimensional parameters to a teaching projection end so that the teaching projection end assigns the human skeleton three-dimensional parameters to a human body model, converting the assigned human body model into human body projection data, and displaying the human body projection data through the projection device.
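Steps S1 and S2 above can be sketched as a minimal acquire-transmit-assign-project pipeline. This is an illustrative Python sketch only; all class and field names are invented and do not appear in the embodiments:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TeachingProjectionEnd:
    """Receives skeleton parameters, assigns them to a body model, and
    records the converted projection data (stubbed as frame snapshots)."""
    body_model: dict = field(default_factory=dict)
    frames: list = field(default_factory=list)

    def receive(self, joint_params):
        self.body_model.update(joint_params)     # assign parameters to the model
        self.frames.append(dict(self.body_model))  # convert to projection data (stub)

@dataclass
class TeachingDemonstrationEnd:
    """Step S1/S2 sketch: capture skeleton parameters and transmit them."""
    capture_device: Callable[[], dict]   # stands in for the image acquisition device
    projection_end: TeachingProjectionEnd

    def run_once(self):
        params = self.capture_device()        # S1: acquire 3D skeleton parameters
        self.projection_end.receive(params)   # S2: transmit to the projection end
```

In a real system the transmission would of course cross a network and the "frames" would be rendered by the projection device rather than stored.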
Preferably, the human three-dimensional bone parameters are three-dimensional parameters of a plurality of joint points of the user.
Preferably, the three-dimensional parameters of the user further comprise three-dimensional parameters of the face of the user; the projection data further comprises face projection data;
the specific implementation procedure of step S1 is as follows: acquiring human skeleton three-dimensional parameters and face three-dimensional parameters of the user through the image acquisition device; the specific execution process of step S2 is: and transmitting the human skeleton three-dimensional parameters and the face three-dimensional parameters to a teaching projection end, so that the teaching projection end assigns the human skeleton three-dimensional parameters to a human body model, assigns the face three-dimensional parameters to the face model, converts the assigned human body model into the human body projection data, converts the assigned face model into the face projection data, and presents the human body projection data and the face projection data through the projection device.
Preferably, the method further comprises: receiving a virtual human body request and transmitting the virtual human body request to the teaching projection end, so that the teaching projection end takes a preset virtual human body model as the human body model when receiving the virtual human body request, and takes a preset real human body model as the human body model when not receiving the virtual human body request;
preferably, the method further comprises: receiving a virtual face request and transmitting the virtual face request to the teaching projection end, so that the teaching projection end takes a preset virtual face model as the face model when receiving the virtual face request, and takes a preset real face model as the face model when not receiving the virtual face request.
Preferably, the method further comprises: scanning the user through the image acquisition device to obtain scanning data, generating a human body model proportional to the user according to the scanning data, and taking the generated model as the preset real human body model.
Based on the same idea as the method, the embodiment further provides a method for implementing distance education, which is applied to a teaching projection terminal and provided with a projection device, and the method comprises the following steps:
t1: receiving the three-dimensional parameters of the user transmitted by the teaching display end;
t2: generating projection data according to the three-dimensional parameters, and presenting the projection data through the projection device.
The three-dimensional parameters of the user transmitted by the teaching display end are acquired by the teaching display end through an image acquisition device arranged on the teaching display end.
Based on the same idea as the above-mentioned method applied to the teaching demonstration end, the present embodiment provides a teaching demonstration end, which is provided with an image acquisition device for aligning a user.
The teaching display end is used for acquiring the three-dimensional parameters of the user through the image acquisition device and transmitting the three-dimensional parameters to the teaching projection end, so that the teaching projection end generates projection data according to the three-dimensional parameters, and the projection data is presented through the projection device arranged on the teaching projection end.
Based on the same idea as the method applied to the teaching demonstration end, the embodiment provides a teaching projection end, which is provided with a projection device;
the teaching projection end is used for receiving the three-dimensional parameters of the user transmitted by the teaching display end, generating projection data according to the three-dimensional parameters, and presenting the projection data through the projection device;
the three-dimensional parameters of the user transmitted by the teaching display end are acquired by the teaching display end through an image acquisition device arranged on the teaching display end.
The definition explanation, the specific description and the preferred scheme in the method for implementing remote education, the teaching demonstration end and the teaching projection end provided in embodiment 6 are the same as the corresponding contents mentioned in embodiments 1 to 5, so the technical effects brought by the same definition, the specific description and the preferred scheme are the same as the systems for implementing remote education provided in embodiments 1 to 5, and the specific contents can refer to the corresponding descriptions in embodiments 1 to 5, and are not repeated herein.
It should be understood that the above-mentioned embodiments of the present invention are only examples for clearly illustrating the technical solutions of the present invention, and are not intended to limit the specific embodiments of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention claims should be included in the protection scope of the present invention claims.

Claims (12)

1. A system for implementing distance education, comprising: the teaching display terminal and the teaching projection terminal;
the teaching display end is provided with an image acquisition device for aligning with a user; the teaching display end is used for acquiring the three-dimensional parameters of the user through the image acquisition device and transmitting the three-dimensional parameters to the teaching projection end;
the teaching projection end is provided with a projection device; the teaching projection end is used for receiving the three-dimensional parameters transmitted by the teaching display end, generating projection data according to the three-dimensional parameters, and displaying the projection data through the projection device.
2. The distance educational system of claim 1, wherein the three-dimensional parameters of the user comprise human skeletal three-dimensional parameters; the projection data comprises human body projection data;
the teaching display end comprises:
the human skeleton parameter processing unit is used for acquiring human skeleton three-dimensional parameters of the user through the image acquisition device and transmitting the human skeleton three-dimensional parameters to the teaching projection end;
the teaching projection end includes:
the human body parameter assignment unit is used for assigning the human body skeleton three-dimensional parameters to a human body model;
and the human body data conversion unit is used for converting the assigned human body model into the human body projection data and presenting the human body projection data through the projection device.
3. The distance education system of claim 2, wherein the human skeleton three-dimensional parameters are three-dimensional parameters of a plurality of joint points of the user.
4. The distance education system of claim 2 or 3 wherein the three dimensional parameters of the user further include three dimensional parameters of the user's face; the projection data further comprises face projection data;
the teaching display end further comprises:
the face parameter processing unit is used for acquiring the face three-dimensional parameters of the user through the image acquisition device and transmitting the face three-dimensional parameters to the teaching projection end;
teaching projection end still includes:
the face parameter assignment unit is used for assigning the received three-dimensional face parameters of the user to the face model;
and the face data conversion unit is used for converting the assigned face model into the face projection data and presenting the face projection data through the projection device.
5. The distance education system of claim 2 or 3, wherein the teaching demonstration terminal further includes:
the human body request receiving and transmitting unit is used for receiving a virtual human body request and transmitting the virtual human body request to the teaching projection terminal;
teaching projection end still includes:
a human body request receiving unit for receiving the virtual human body request;
and the human body model selection unit is used for taking a preset virtual human body model as the human body model when the virtual human body request is received, and taking a preset real human body model as the human body model when the virtual human body request is not received.
6. The distance education system of claim 4 wherein the teaching demonstration terminal further comprises:
the face request receiving and transmitting unit is used for receiving a virtual face request and transmitting the virtual face request to the teaching projection end;
teaching projection end still includes:
a face request receiving unit for receiving the virtual face request;
a face model selection unit, configured to use a preset virtual face model as the face model when the virtual face request is received, and use a preset real face model as the face model when the virtual face request is not received.
7. The distance education system of claim 5 wherein the teaching display terminal further includes:
and the real human body model acquisition unit is used for scanning the user through the image acquisition device to acquire scanning data, generating a human body model proportional to the user according to the scanning data, and taking the generated model as the preset real human body model.
8. The distance education system according to any one of claims 1-3 or 6-7 wherein the teaching demonstration end is an education tablet device and the image acquisition device is disposed on a frame of the teaching demonstration end and/or the teaching projection end is an education tablet device and the projection device is disposed on a frame of the teaching projection end.
9. The teaching demonstration end is characterized in that an image acquisition device which is aligned with a user is arranged;
the teaching display end is used for acquiring the three-dimensional parameters of the user through the image acquisition device and transmitting the three-dimensional parameters to the teaching projection end, so that the teaching projection end generates projection data according to the three-dimensional parameters, and the projection data is presented through the projection device arranged on the teaching projection end.
10. A teaching projection end, characterized in that the teaching projection end is provided with a projection device;
the teaching projection end is configured to receive three-dimensional parameters of a user transmitted by a teaching display end, generate projection data according to the three-dimensional parameters, and present the projection data through the projection device;
wherein the three-dimensional parameters of the user transmitted by the teaching display end are acquired by the teaching display end through an image acquisition device provided on the teaching display end.
11. A method for realizing remote education, characterized in that the method is applied to a teaching display end provided with an image acquisition device aimed at a user;
the method comprises:
acquiring three-dimensional parameters of the user through the image acquisition device and transmitting the three-dimensional parameters to a teaching projection end, so that the teaching projection end generates projection data according to the three-dimensional parameters and presents the projection data through a projection device provided on the teaching projection end.
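The display-end method of claim 11 amounts to a capture-then-transmit step. The sketch below illustrates that flow under assumed interfaces: the camera driver, the key-point representation, and the transport callable are all hypothetical, not taken from the patent.

```python
# Sketch of the display-end method of claim 11 (illustrative): read the
# user's three-dimensional parameters from the image acquisition device and
# serialize them for transmission to the teaching projection end.

import json

def capture_three_dimensional_parameters(camera) -> dict:
    """Ask the (assumed) camera driver for the user's 3D key points."""
    return {"keypoints": camera.read_keypoints()}

def transmit_to_projection_end(params: dict, send) -> bytes:
    """Serialize the parameters; `send` is an assumed transport callable."""
    payload = json.dumps(params).encode("utf-8")
    send(payload)
    return payload
```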
12. A method for realizing remote education, characterized in that the method is applied to a teaching projection end provided with a projection device;
the method comprises:
receiving three-dimensional parameters of a user transmitted by a teaching display end, generating projection data according to the three-dimensional parameters, and presenting the projection data through the projection device;
wherein the three-dimensional parameters of the user transmitted by the teaching display end are acquired by the teaching display end through an image acquisition device provided on the teaching display end.
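The projection-end method of claim 12 is the mirror image: decode the received parameters and turn them into projection data. In this sketch the "projection" is a trivial drop of the depth coordinate, standing in for whatever rendering the real projection device performs; the wire format matches the assumed transport in the display-end sketch above but is not specified by the patent.

```python
# Sketch of the projection-end method of claim 12 (illustrative): decode the
# three-dimensional parameters transmitted by the teaching display end and
# map each (x, y, z) key point onto the display plane as (x, y).

import json

def receive_three_dimensional_parameters(payload: bytes) -> dict:
    """Decode the parameters transmitted by the teaching display end."""
    return json.loads(payload.decode("utf-8"))

def generate_projection_data(params: dict) -> list:
    """Project each 3D key point onto the 2D display plane."""
    return [(x, y) for x, y, _z in params["keypoints"]]
```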
CN202110924590.7A 2021-08-12 2021-08-12 System, method and teaching end for realizing remote education Pending CN115909825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110924590.7A CN115909825A (en) 2021-08-12 2021-08-12 System, method and teaching end for realizing remote education


Publications (1)

Publication Number Publication Date
CN115909825A 2023-04-04

Family

ID=86471378



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740476A (en) * 2018-12-25 2019-05-10 北京琳云信息科技有限责任公司 Instant communication method, device and server
CN210091423U (en) * 2019-12-30 2020-02-18 杭州赛鲁班网络科技有限公司 Remote teaching interactive system based on holographic projection
CN111880709A (en) * 2020-07-31 2020-11-03 北京市商汤科技开发有限公司 Display method and device, computer equipment and storage medium
CN112230772A (en) * 2020-10-14 2021-01-15 华中师范大学 Virtual-actual fused teaching aid automatic generation method
CN112562433A (en) * 2020-12-30 2021-03-26 华中师范大学 5G strong interaction remote delivery teaching system based on holographic terminal and working method thereof


Similar Documents

Publication Publication Date Title
Kruger et al. The responsive workbench: A virtual work environment
Patel et al. The effects of fully immersive virtual reality on the learning of physical tasks
Cheng et al. Construction of interactive teaching system for course of mechanical drawing based on mobile augmented reality technology
Hoang et al. Augmented studio: Projection mapping on moving body for physiotherapy education
JP6683864B1 (en) Content control system, content control method, and content control program
KR20120113058A (en) Apparatus and method for tutoring in the fusion space of real and virtual environment
CN111986334A (en) Hololens and CAVE combined virtual experience system and method
Faridan et al. Chameleoncontrol: Teleoperating real human surrogates through mixed reality gestural guidance for remote hands-on classrooms
Pandzic et al. Towards natural communication in networked collaborative virtual environments
JP2739444B2 (en) Motion generation device using three-dimensional model
CN115909825A (en) System, method and teaching end for realizing remote education
JP6892478B2 (en) Content control systems, content control methods, and content control programs
CN115880954A (en) Teaching interactive system based on meta universe
Ponton et al. Fitted avatars: automatic skeleton adjustment for self-avatars in virtual reality
JP2021009351A (en) Content control system, content control method, and content control program
Mujumdar Augmented Reality
Chessa et al. Insert your own body in the oculus rift to improve proprioception
Horst et al. Avatar2Avatar: Augmenting the Mutual Visual Communication between Co-located Real and Virtual Environments.
Li et al. Application of virtual reality and augmented reality technology in Teaching
Murnane et al. Learning from human-robot interactions in modeled scenes
CN114205640B (en) VR scene control system is used in teaching
Schäfer Improving Essential Interactions for Immersive Virtual Environments with Novel Hand Gesture Authoring Tools
EP4303824A1 (en) System and method for monitoring a body pose of a user
Hirabayashi et al. Development of learning support equipment for sign language and fingerspelling by mixed reality
Liu et al. PianoSyncAR: Enhancing Piano Learning through Visualizing Synchronized Hand Pose Discrepancies in Augmented Reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination