CN117270681A - Panoramic teaching method based on visual gesture recognition - Google Patents
- Publication number
- CN117270681A (application number CN202311092208.6A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- trigger
- discussion
- recognition
- panoramic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Abstract
The invention provides a panoramic teaching method based on visual gesture recognition, comprising at least the following steps: a locking step, in which a first trigger gesture object is acquired and locked for focused shooting, so as to obtain a panoramic image containing the gesture actions; a gesture recognition step, in which, once the ending gesture is acquired, all of the preceding gestures are recognized by combining key point recognition with action recognition, and the speech content obtained by analysis is displayed on a panoramic ring screen; and a classroom discussion step, in which, when the gesture recognition step detects that a second trigger gesture is also present, the gesture of the second trigger gesture object is recognized to obtain the discussion sign language content, which is displayed on the panoramic ring screen. By recognizing and analyzing gestures and displaying the result on the panoramic ring screen, the method provides better classroom discussion conditions for deaf-mute students, allows the teacher to follow and integrate all of the discussion and communication among the students, and improves the classroom teaching effect.
Description
Technical Field
The invention relates to the technical field of gesture recognition, and in particular to a panoramic teaching method based on visual gesture recognition.
Background
In panoramic teaching, students may enter a simulated virtual environment, such as a virtual laboratory, a historic scene, or a geographic environment, through virtual reality devices in order to better understand and learn the related knowledge. Panoramic teaching typically incorporates multimedia elements such as audio, video, and text to provide richer teaching resources and a more interactive experience. Panoramic teaching technology creates an immersive teaching environment by using panoramic virtual reality (VR) devices or large-screen display devices: students enter the virtual teaching scene through a headset or a screen and experience learning as if they were on site. The technology thus provides more visual information and interactivity and helps students better understand the teaching content. Visual gesture recognition technology uses a camera or a depth sensor to convert a student's gesture motion into a signal the computer can understand. Such technology enables natural interaction with the computer: through gestures, a student can control a device or perform virtual interactions. Visual gesture recognition therefore offers a more intuitive and flexible mode of interaction and strengthens students' participation and motivation to learn.
Throughout panoramic teaching, classroom discussion among students is indispensable for better simulating a real classroom and for consolidating what students have learned. In an ordinary classroom, a student who speaks quickly draws the attention of the other students because of the sound, so the others can actively join the discussion. In special classrooms, however, such as classrooms for deaf-mute students, the students can communicate only through sign language gestures and not through speech, which further obstructs communication; as a result, classroom discussion in remote classes for deaf-mute students works poorly or cannot take place at all. Moreover, since deaf-mute students can only communicate face to face, students who wish to discuss with one another must sign face to face, and when they do so it is difficult for the teacher to take part in, or even see, all of the exchange between the students, unless the teacher leaves the podium and enters the students' small face-to-face circle; only then can the teacher follow everything the students contribute to the discussion.
Therefore, a panoramic teaching method based on visual gesture recognition needs to be designed that supports the discussion link of panoramic teaching for deaf-mute students.
Disclosure of Invention
In order to solve the above problems, the invention provides a panoramic teaching method based on visual gesture recognition that is suitable for the panoramic teaching link for deaf-mute students.
In order to achieve the above purpose, the panoramic teaching method based on visual gesture recognition provided by the invention comprises the following steps:
a locking step, namely acquiring a first trigger gesture object and locking the first trigger gesture object for focused shooting, so as to acquire a panoramic image containing the gesture actions;
a gesture recognition step, namely, when the ending gesture is acquired, performing gesture recognition on all preceding gestures by combining key point recognition with action recognition, obtaining the speech content by analysis, and displaying it on a panoramic ring screen;
and a classroom discussion step, namely performing gesture recognition when the gesture recognition step detects that a second trigger gesture is also present, obtaining the discussion sign language content from the recognized gesture of the second trigger gesture object, and displaying it on the panoramic ring screen.
Further, before the locking step, the method further includes:
and a triggering step, wherein after the triggering gesture is acquired, the locking step is started.
Further, in the locking step, when a plurality of first trigger gestures are acquired, the video frames containing the first trigger gestures are extracted according to the time axis, the frame of the first trigger gesture earliest on the time axis is selected and image-processed to obtain the first trigger gesture object information, and the priority of the vision acquisition device corresponding to that first trigger gesture object is raised to the highest, or the vision acquisition device is locked, so as to capture a limb image of the first trigger gesture object containing the hands and arms of its upper body.
Further, image action recognition is performed on the limb image of the first trigger gesture object, a key point detection algorithm is used to locate the key point positions of the hand, the continuous gesture motion is obtained and matched against a gesture content translation library, and the speech content corresponding to the continuous gesture motion is obtained by analysis and displayed on the panoramic ring screen.
Further, in the gesture recognition step, after the speech-ending gesture is recognized, a speech content confirmation instruction of the first trigger gesture object is acquired.
Further, on the basis of repeated speech contents and speech content confirmation instructions, recognition and sign language analysis are optimized with a deep learning algorithm, gestures are classified, the different meanings of the same gesture in different surrounding gesture contexts are learned, and translation accuracy is improved.
Further, the second trigger gesture comprises a discussion trigger gesture and/or a supplemental trigger gesture, and the trigger priority of the discussion trigger gesture is higher than that of the supplemental trigger gesture.
Further, when the second trigger gesture is determined to be the discussion trigger gesture:
a discussion locking sub-step, in which the video frames containing the second trigger gesture within a preset discussion judgement period are extracted, image analysis is performed on all extracted frames, and the corresponding discussion students are locked according to the second trigger gestures;
a sorting sub-step, in which the discussion students are ordered according to the time axis information of their second trigger gestures;
and a discussion speaking sub-step, in which the discussion students speak in turn according to the ordering information, and the discussion speech content obtained by analysis is displayed on the panoramic ring screen.
Further, all of the first trigger gesture objects and second trigger gesture objects participating in the discussion are displayed on the panoramic ring screen: the first or second trigger gesture object currently selected to speak is shown at the speaking position, the remaining first or second trigger gesture objects are shown at the positions to be discussed, and at the same time the speech content or discussion speech content is shown at the discussion text position.
Compared with the prior art, the invention has the following beneficial effects:
1. By recognizing and analyzing gestures and displaying the result on the panoramic ring screen, the method provides better classroom discussion conditions for deaf-mute students, allows the teacher to follow and integrate all of the discussion and communication among the students, and improves the classroom teaching effect.
2. The invention first acquires a first trigger gesture object and locks it for focused shooting to obtain a panoramic image containing the gesture actions, recognizes the gestures, and displays the analyzed speech content on the panoramic ring screen. When the gesture recognition step also detects a second trigger gesture, the gesture of the second trigger gesture object is recognized to obtain the discussion sign language content, which after analysis is displayed as text on the panoramic ring screen, so that every student taking part in the discussion can read the other side's speech without turning around, improving the classroom teaching effect.
3. Because the analyzed speech content and discussion sign language content are displayed on the panoramic ring screen, all students can follow the classroom discussion through the text shown on the screen, which improves the classroom teaching effect.
For a better understanding and implementation, the present invention is described in detail below with reference to the drawings.
Drawings
FIG. 1 is a flow chart of a visual gesture recognition-based panoramic teaching method of the present invention;
FIG. 2 is a flow chart of a visual gesture recognition-based panoramic teaching method of the present invention;
FIG. 3 is a flowchart showing a class discussion procedure when a second trigger gesture is determined to be a discussion trigger gesture in a panoramic teaching method based on visual gesture recognition according to the present invention;
FIG. 4 is a flattened (unrolled) view of the panoramic ring screen in the visual gesture recognition-based panoramic teaching method of the present invention;
fig. 5 is a top view of the whole classroom in an on-line classroom environment according to the visual gesture recognition-based panoramic teaching method of the present invention.
In the drawings: 1. panoramic ring screen; 2. vision acquisition device; 3. podium; 4. student seat; a. speaking position; b. position to be discussed; c. discussion text position.
Detailed Description
For a clearer understanding of the technical features, objects, and effects of the present invention, specific embodiments of the invention are described below with reference to the accompanying drawings, but the scope of the invention is not limited to the following.
Referring to fig. 1 to 5, a panoramic teaching method based on visual gesture recognition according to preferred embodiment 1 of the present invention comprises at least the following steps:
a locking step, in which, if a plurality of trigger gestures exist at the same time, their order is obtained from the front-to-back sequence of the video image frames, the first trigger gesture object at the front of that order is selected, and it is locked for focused shooting so as to obtain a panoramic image containing the gesture actions.
A lock display step, in which the panoramic image is displayed in the speaking area (namely the discussion text position c) on the panoramic ring screen 1.
A gesture recognition step, in which, when the ending gesture is acquired, gesture recognition is performed on all preceding gestures by combining key point recognition with action recognition, and the speech content obtained by analysis is displayed on the panoramic ring screen 1;
and a classroom discussion step, in which gesture recognition is performed when the gesture recognition step detects that a second trigger gesture is also present, and the discussion sign language content obtained from the recognized gesture of the second trigger gesture object is displayed on the panoramic ring screen 1.
The locking step is preceded by:
and a triggering step, wherein after the triggering gesture is acquired, the locking step is started.
The vision acquisition device 2, for example a camera, captures video of the students at the remote student end or on site in the panoramic classroom, and images are acquired to obtain the students' limb actions. If, after image processing, recognition, and comparison, a limb action of a student at the remote student end or on site in the panoramic classroom matches a preset trigger gesture, the locking step is started.
In the locking step, when a plurality of first trigger gestures are acquired, the video frames containing the first trigger gestures are extracted according to the time axis, the frame of the first trigger gesture earliest on the time axis is selected and image-processed to obtain the first trigger gesture object information, and the priority of the vision acquisition device 2 corresponding to that first trigger gesture object is raised to the highest, or the vision acquisition device 2 is locked, so as to capture a limb image of the first trigger gesture object containing the hands and arms of its upper body.
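The earliest-on-the-time-axis selection in the locking step can be sketched as follows. This is a minimal illustrative Python sketch, not part of the disclosure; `TriggerEvent`, its field names, and the sample data are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TriggerEvent:
    """One detected first trigger gesture (field names are illustrative)."""
    student_id: str
    camera_id: int
    timestamp: float  # position of the trigger frame on the video time axis, seconds

def lock_first_trigger(events):
    """Select the trigger event whose frame is earliest on the time axis;
    the matching vision acquisition device would then be given top priority."""
    if not events:
        return None
    return min(events, key=lambda e: e.timestamp)

events = [
    TriggerEvent("student_B", camera_id=2, timestamp=12.40),
    TriggerEvent("student_A", camera_id=1, timestamp=11.95),
]
locked = lock_first_trigger(events)  # student_A triggered first
```

In a real deployment the timestamps would come from the video frames themselves; here they are supplied directly to keep the sketch self-contained.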
Image action recognition is performed on the limb image of the first trigger gesture object, and a key point detection algorithm is used to locate the key point positions of the hand. In this application the key points are preferably the palm centre, the fingertip of each finger, the joint points of each finger, the root of each finger, and the wrist; specifically, for one hand, the palm centre contributes 1 feature point, the fingertips contribute 5 feature points, the finger joint points contribute 9 feature points, the finger roots contribute 5 feature points, and the wrist contributes 1 feature point, for a total of 21 feature points. Using a feature point detection or joint detection algorithm, the feature points are connected along the gesture motion on the time axis to obtain the motion track of the connected feature points, yielding the continuous gesture motion; the arm motion features are then matched, and, in combination with a gesture content translation library, the speech content corresponding to the continuous gesture motion is obtained by analysis and displayed on the panoramic ring screen 1. The gesture content translation library is established in advance.
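The 21-feature-point layout and the connection of one feature point across frames on the time axis can be sketched as follows. This is an illustrative Python sketch under stated assumptions: the key point names and the toy two-frame clip are invented, not the output of a real detection algorithm.

```python
import math

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

# 21 key points per hand, as enumerated in the description: palm centre (1),
# wrist (1), one fingertip per finger (5), one finger root per finger (5),
# and 9 intermediate joints (one on the thumb, two on each other finger).
KEYPOINTS = (
    ["palm_center", "wrist"]
    + [f"{f}_tip" for f in FINGERS]
    + [f"{f}_root" for f in FINGERS]
    + ["thumb_joint"]
    + [f"{f}_joint{i}" for f in FINGERS[1:] for i in (1, 2)]
)

def trajectory_length(frames, keypoint):
    """Connect one key point across consecutive frames on the time axis and
    return the length of the resulting motion track."""
    pts = [frame[keypoint] for frame in frames]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

# Toy two-frame clip: the index fingertip moves from (0, 0) to (3, 4).
clip = [{"index_tip": (0.0, 0.0)}, {"index_tip": (3.0, 4.0)}]
index_track = trajectory_length(clip, "index_tip")  # 5.0
```

A full implementation would run this over all 21 points per frame and feed the resulting tracks to the translation library; the sketch only shows the bookkeeping for one point.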
Recognition and sign language analysis are optimized with a deep learning algorithm on the basis of repeated speech contents and speech content confirmation instructions; gestures are classified, the different meanings of the same gesture in different surrounding gesture contexts are learned, and translation accuracy is improved. When the post-statistics accuracy of speech content confirmation exceeds 99%, the speech content confirmation step can be cancelled. For example, in the educational field, sign language may be used for questions, discussions, and answers in class: extending the index finger of the right hand and pointing forward indicates readiness to answer a question, whereas in daily life the same gesture indicates or points out an object or direction. Gesture classification can therefore be made more systematic by distinguishing the usage environment or by combining the preceding and following gestures.
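The context-dependent classification described above can be sketched as a lookup keyed by both the gesture and its usage environment. This is a minimal Python sketch; the lexicon entries mirror the example in the text but are invented illustrations, not a real sign language mapping.

```python
def translate_gesture(gesture, environment="classroom", previous=None):
    """Resolve a gesture's meaning from its usage environment; `previous`
    is a hook for disambiguating by the preceding gesture, unused here."""
    lexicon = {
        ("index_point_forward", "classroom"): "ready to answer a question",
        ("index_point_forward", "daily_life"): "indicating an object or direction",
    }
    return lexicon.get((gesture, environment), "unrecognized gesture")
```

In the method itself this lookup would be replaced by the learned classifier; the sketch only shows why the environment must be part of the key.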
The second trigger gesture comprises a discussion trigger gesture and/or a supplemental trigger gesture, and the trigger priority of the discussion trigger gesture is higher than that of the supplemental trigger gesture.
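The trigger-priority rule can be sketched as follows (illustrative Python; the priority values, field names, and sample data are assumptions, and the time-axis tie-break is an assumed refinement consistent with the sorting sub-step):

```python
TRIGGER_PRIORITY = {"discussion": 2, "supplemental": 1}  # discussion outranks supplemental

def pick_second_trigger(triggers):
    """Among simultaneously detected second trigger gestures, handle the one
    with the highest trigger priority first; among equal priorities, the
    gesture earlier on the time axis wins."""
    return max(triggers, key=lambda t: (TRIGGER_PRIORITY[t["kind"]], -t["timestamp"]))

triggers = [
    {"student": "student_E", "kind": "supplemental", "timestamp": 3.0},
    {"student": "student_F", "kind": "discussion", "timestamp": 7.5},
]
chosen = pick_second_trigger(triggers)  # the discussion trigger wins despite being later
```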
When the second trigger gesture is determined to be the discussion trigger gesture:
a discussion locking sub-step, in which the video frames containing the second trigger gestures within a preset discussion judgement period (which can be preset to 1 minute) are extracted, image analysis is performed on all of them, and the corresponding discussion students are locked according to the second trigger gestures. If the class is an offline panoramic class, the computer analyzes the image positions and locks the discussion students corresponding to the second trigger gestures; if it is an online panoramic class, the video frames corresponding to all second trigger gestures within the preset discussion judgement period are examined, the network IP contained in each frame is extracted, the student-end information is obtained from the network IP, and the corresponding discussion students are locked;
a sorting sub-step of sequentially sorting the discussion students according to time axis information contained in the second trigger gesture;
and a discussion speaking sub-step, in which the discussion students speak in turn according to the ordering information, and the discussion speech content obtained by analysis is displayed on the panoramic ring screen 1.
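The online-class variant of the discussion locking and sorting sub-steps can be sketched as follows. This is a minimal Python sketch; the roster, IP addresses, and field names are invented for illustration and the real system would take them from the student-end connections.

```python
DISCUSSION_WINDOW_S = 60.0  # the preset discussion judgement period (1 minute)

def lock_discussants(events, window_start, roster_by_ip):
    """Keep the second trigger gestures whose frames fall inside the
    judgement window, order them along the time axis, and resolve each
    frame's network IP to a student via the class roster."""
    in_window = [e for e in events
                 if window_start <= e["timestamp"] < window_start + DISCUSSION_WINDOW_S]
    in_window.sort(key=lambda e: e["timestamp"])
    return [roster_by_ip[e["ip"]] for e in in_window if e["ip"] in roster_by_ip]

roster = {"10.0.0.5": "student_C", "10.0.0.9": "student_D"}
events = [
    {"ip": "10.0.0.9", "timestamp": 130.0},  # inside the window
    {"ip": "10.0.0.5", "timestamp": 95.0},   # inside the window, earlier
    {"ip": "10.0.0.5", "timestamp": 200.0},  # outside the window, discarded
]
discussants = lock_discussants(events, window_start=90.0, roster_by_ip=roster)
```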
All of the first trigger gesture objects and second trigger gesture objects participating in the discussion are displayed on the panoramic ring screen 1: the first or second trigger gesture object currently selected to speak is shown at the speaking position a, the remaining first or second trigger gesture objects are shown at the positions to be discussed b, and at the same time the speech content or discussion speech content is shown at the discussion text position c. It should further be noted that when several first or second trigger gesture objects are displayed at the positions to be discussed b, they are ordered according to the order of their second trigger gestures, and a classmate who has finished speaking is moved to the last position.
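The ordering of the speaking position and the positions to be discussed, including moving a finished speaker to the last place, can be sketched with a rotating queue (illustrative Python; the class and method names are assumptions, not part of the disclosure):

```python
from collections import deque

class DiscussionQueue:
    """Orders discussants by the time-axis position of their trigger gesture.
    The front of the queue occupies speaking position a, the rest occupy the
    positions to be discussed b, and a finished speaker moves to the end."""

    def __init__(self, triggers):  # triggers: (student, trigger_time) pairs
        self.queue = deque(s for s, _ in sorted(triggers, key=lambda t: t[1]))

    def speaking_position(self):
        return self.queue[0] if self.queue else None

    def waiting_positions(self):
        return list(self.queue)[1:]

    def finish_speaking(self):
        self.queue.rotate(-1)  # current speaker goes to the last position

q = DiscussionQueue([("student_B", 5.0), ("student_A", 2.0), ("student_C", 9.0)])
```

Calling `finish_speaking()` once would promote `student_B` to the speaking position and append `student_A` to the end of the waiting positions.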
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The preferred embodiments of the invention disclosed above are intended only to help explain the invention. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand and use the invention. The invention is limited only by the claims and their full scope and equivalents.
Claims (9)
1. A panoramic teaching method based on visual gesture recognition, characterized by comprising at least the following steps:
a locking step, namely acquiring a first trigger gesture object and locking the first trigger gesture object for focused shooting, so as to acquire a panoramic image containing the gesture actions;
a gesture recognition step, namely, when the ending gesture is acquired, performing gesture recognition on all preceding gestures by combining key point recognition with action recognition, obtaining the speech content by analysis, and displaying it on a panoramic ring screen;
and a classroom discussion step, namely performing gesture recognition when the gesture recognition step detects that a second trigger gesture is also present, obtaining the discussion sign language content from the recognized gesture of the second trigger gesture object, and displaying it on the panoramic ring screen.
2. The visual gesture recognition-based panorama teaching method according to claim 1, wherein said locking step is preceded by:
and a triggering step, wherein after the triggering gesture is acquired, the locking step is started.
3. The panoramic teaching method based on visual gesture recognition according to claim 1, wherein, in the locking step, when a plurality of first trigger gestures are acquired, the video frames containing the first trigger gestures are extracted according to the time axis, the frame of the first trigger gesture earliest on the time axis is selected and image-processed to obtain the first trigger gesture object information, and the priority of the vision acquisition device corresponding to that first trigger gesture object is raised to the highest, or the vision acquisition device is locked, so as to capture a limb image of the first trigger gesture object containing the hands and arms of its upper body.
4. The panoramic teaching method based on visual gesture recognition according to claim 3, wherein image action recognition is performed on the limb image of the first trigger gesture object, a key point detection algorithm is used to locate the key point positions of the hand, the continuous gesture motion is obtained and matched against a gesture content translation library, and the speech content corresponding to the continuous gesture motion is obtained by analysis and displayed on the panoramic ring screen.
5. The panoramic teaching method based on visual gesture recognition according to claim 4, wherein, in the gesture recognition step, after the speech-ending gesture is recognized, a speech content confirmation instruction of the first trigger gesture object is acquired.
6. The panoramic teaching method based on visual gesture recognition according to claim 5, wherein, on the basis of repeated speech contents and speech content confirmation instructions, recognition and sign language analysis are optimized with a deep learning algorithm, gestures are classified, the different meanings of the same gesture in different surrounding gesture contexts are learned, and translation accuracy is improved.
7. The panoramic teaching method based on visual gesture recognition according to claim 1, wherein the second trigger gesture comprises a discussion trigger gesture and/or a supplemental trigger gesture, and the trigger priority of the discussion trigger gesture is higher than that of the supplemental trigger gesture.
8. The visual gesture recognition-based panorama teaching method according to claim 7, wherein when the second trigger gesture is determined to be the discussion trigger gesture:
a discussion locking sub-step, in which the video frames containing the second trigger gesture within a preset discussion judgement period are extracted, image analysis is performed on all extracted frames, and the corresponding discussion students are locked according to the second trigger gestures;
a sorting sub-step, in which the discussion students are ordered according to the time axis information of their second trigger gestures;
and a discussion speaking sub-step, in which the discussion students speak in turn according to the ordering information, and the discussion speech content obtained by analysis is displayed on the panoramic ring screen.
9. The panoramic teaching method based on visual gesture recognition according to claim 8, wherein all of the first trigger gesture objects and second trigger gesture objects participating in the discussion are displayed on the panoramic ring screen: the first or second trigger gesture object currently selected to speak is shown at the speaking position, the remaining first or second trigger gesture objects are shown at the positions to be discussed, and the speech content or discussion speech content is shown at the discussion text position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311092208.6A CN117270681A (en) | 2023-08-29 | 2023-08-29 | Panoramic teaching method based on visual gesture recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117270681A true CN117270681A (en) | 2023-12-22 |
Family
ID=89205257
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311092208.6A Pending CN117270681A (en) | 2023-08-29 | 2023-08-29 | Panoramic teaching method based on visual gesture recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117270681A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||