CN112995132B - Online learning interaction method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112995132B
CN112995132B (application CN202110139622.2A)
Authority
CN
China
Prior art keywords
interaction
action
student
teacher
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110139622.2A
Other languages
Chinese (zh)
Other versions
CN112995132A (en)
Inventor
侯在鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110139622.2A priority Critical patent/CN112995132B/en
Publication of CN112995132A publication Critical patent/CN112995132A/en
Application granted granted Critical
Publication of CN112995132B publication Critical patent/CN112995132B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40Support for services or applications
    • H04L65/4061Push-to services, e.g. push-to-talk or push-to-video
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/08Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B5/14Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2407Monitoring of transmitted content, e.g. distribution time, number of downloads
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The disclosure provides an online learning interaction method and apparatus, an electronic device, and a storage medium, and relates to the field of computer technology, in particular to artificial intelligence technologies such as computer vision. The specific implementation scheme is as follows: performing action recognition on a teacher video stream collected by a teacher client to obtain teacher action features; performing action recognition on a student video stream collected by a student client to obtain student action features; and determining whether the teacher action features and/or the student action features hit an interaction scene, and, if an interaction scene is hit, displaying the interaction effect of that scene on the teacher client and/or the student client. The method and apparatus enrich the interaction modes of online learning and improve the interaction efficiency of online learning.

Description

Online learning interaction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to the field of artificial intelligence such as computer vision.
Background
With the development of computer technology, users can learn through the internet in an electronic environment composed of communication technology, microcomputer technology, computer technology, artificial intelligence, network technology, multimedia technology, and the like.
Because offline learning is easily disrupted in some emergency situations, and online learning is not limited by time, place, or space, online learning has become a mainstream mode of learning.
Disclosure of Invention
The disclosure provides an interaction method, an interaction device, electronic equipment and a storage medium for online learning.
According to an aspect of the present disclosure, there is provided an interactive method of online learning, including:
performing action recognition on the teacher video stream acquired by the teacher client to obtain teacher action characteristics;
performing action recognition on student video streams acquired by student clients to obtain student action characteristics;
determining whether the teacher action feature and/or the student action feature hit an interaction scene, and displaying the interaction effect of the interaction scene on the teacher client and/or the student client under the condition of hitting the interaction scene.
According to another aspect of the present disclosure, there is provided an interactive apparatus for online learning, including:
the teacher action recognition module is used for carrying out action recognition on the teacher video stream acquired by the teacher client to obtain teacher action characteristics;
the student action recognition module is used for performing action recognition on the student video stream acquired by the student client to obtain student action characteristics;
and the interaction processing module is used for determining whether the teacher action characteristic and/or the student action characteristic hit an interaction scene, and displaying the interaction effect of the interaction scene on the teacher client side and/or the student client side under the condition of hitting the interaction scene.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the interactive method of online learning provided by any embodiment of the present application.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the interactive method of online learning provided by any embodiment of the present application.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the interactive method of online learning provided by any embodiment of the present application.
According to the technology, the interactive mode of online learning is enriched, and the interactive efficiency of online learning is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1a is a schematic diagram of an interactive method of online learning according to an embodiment of the present disclosure;
FIG. 1b is a schematic illustration of an animation effect of online learning according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of another interactive method of online learning according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of yet another interactive method of online learning according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an interactive device for online learning according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device for implementing an interactive method of online learning of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1a is a schematic diagram of an interaction method for online learning according to an embodiment of the present application, which may be applicable to the case of interaction between a teacher and students during online learning. The method can be performed by an online learning interaction device, which can be implemented in hardware and/or software and can be configured in an electronic device. Referring to fig. 1a, the method specifically comprises the following steps:
s110, performing action recognition on the teacher video stream acquired by the teacher client to obtain teacher action characteristics.
And S120, performing action recognition on the student video stream acquired by the student client to obtain student action characteristics.
S130, determining whether the teacher action feature and/or the student action feature hit an interaction scene, and displaying the interaction effect of the interaction scene on the teacher client side and/or the student client side under the condition that the interaction scene is hit.
In the embodiment of the application, the teacher client and the student clients may be smart terminals such as tablet computers and smartphones running an online learning application. After the teacher and students enter a live class using the online learning application, the teacher video stream can be collected in real time through the image collection device of the teacher client, and the student video streams can be collected in real time through the image collection devices of the student clients; action recognition is then performed on the teacher video stream and the student video streams respectively, i.e., the action image features of each stream are computed.
In the embodiment of the application, an interaction scene can be associated with action features and interaction effects. The action features of the interaction scene are matched against the teacher action features and/or the student action features, and whether the teacher and/or student action features hit the interaction scene is determined from the matching result; specifically, the interaction scene is hit when the action features match. When the teacher and/or student action features hit the interaction scene, the interaction effect of the scene is displayed on the client of the object to be interacted with, so that that party can learn the other party's action through the effect displayed on its client. That is, the interaction initiator performs an actual action, action recognition is performed on the video stream collected by the initiator's client to obtain the actual action feature, and if that feature hits an interaction scene, the scene's interaction effect is displayed on the client of the object to be interacted with. The other party can thus learn the initiator's action without watching the initiator's video stream, realizing interaction in which actual actions and virtual interaction effects cooperate. Compared with a teacher interacting only by voice throughout online learning, this enriches the interaction modes; and compared with voice interaction, gesture interaction reduces the impact of the interaction process on teaching, since the teaching process need not be interrupted, so the interaction efficiency of online learning can be improved.
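The scene-hit logic described above can be sketched as a simple lookup from a recognized action feature to an associated interaction scene. This is an illustrative sketch only; the class, field, and scene names are assumptions, not taken from the patent.

```python
# Hypothetical sketch of matching a recognized action feature against the
# registered interaction scenes; names and values are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionScene:
    name: str
    action_feature: str   # gesture feature that hits this scene
    effect: str           # animation effect shown on the clients

SCENES = [
    InteractionScene("applause", "clap", "clap_animation"),
    InteractionScene("praise", "thumbs_up", "thumbs_up_animation"),
]

def match_scene(action_feature: str, scenes=SCENES) -> Optional[InteractionScene]:
    """Return the interaction scene hit by a recognized action feature, if any."""
    for scene in scenes:
        if scene.action_feature == action_feature:
            return scene
    return None  # no scene hit: no effect is displayed
```

On a hit, the matched scene's `effect` would be pushed to the teacher client and/or student client for display.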
In an alternative embodiment, the interactive effect of the interaction scene is a gesture animation effect of that scene. The action feature of the interaction scene may be at least one of gestures such as a heart sign, clapping, a thumbs-up, or hand raising, and the corresponding gesture animation effect displays a heart, clapping, thumbs-up, or hand-raising animation on the client screen. Referring to fig. 1b, the clapping animation effect 11 may be presented on the teacher client and/or student client during teaching. Interacting through gesture animation effects displayed on the client can improve the interactivity and interest of the interaction.
According to the technical scheme, in the live broadcast learning process, the teacher action characteristic and the student action characteristic are obtained by respectively carrying out action recognition on the teacher video stream and the student video stream in real time, and under the condition that the teacher action characteristic and the student action characteristic hit an interaction scene, the interaction effect of the interaction scene is displayed on the teacher client and the student client, a new interaction mode of mutually matching with the virtual interaction effect through the actual action characteristic is provided, the convenience of teacher and student interaction can be improved, the teaching process is not required to be interrupted, and the interaction efficiency of online learning can be improved.
The embodiment of the application also provides an optional implementation mode of the online learning interaction method, and the online learning interaction efficiency can be further improved. Fig. 2 is a flow chart of another interactive method for online learning according to an embodiment of the present application. Referring to fig. 2, the method specifically includes the following steps:
s210, performing action recognition on the teacher video stream acquired by the teacher client to obtain teacher action characteristics.
S220, performing action recognition on the student video stream acquired by the student client to obtain student action characteristics.
S230, determining that the teacher action feature and the student action feature hit the interaction scene when the teacher action feature is a trigger action feature in the interaction scene, the student action feature is a response action feature in the interaction scene, and the duration between the teacher action feature and the student action feature is smaller than a first duration threshold.
And S240, under the condition of hitting an interaction scene, displaying the interaction effect of the interaction scene at the teacher client and/or the student client.
In the embodiment of the application, the interaction scene may be a multiparty interaction scene which at least needs to be participated by a teacher and a student together, such as a clapping scene, a questioning scene and the like. At least a trigger action feature and a response action feature may be included in the multiparty interaction scenario, the trigger action feature may be performed by a teacher, the response action feature may be performed by a student, and a duration between the trigger action feature and the response action feature is less than a first duration threshold. The first time length threshold may be set according to the service requirement, for example, may be 10 seconds.
Specifically, in the online learning process, under the condition that the action characteristics of the teacher are detected to be trigger action characteristics in the multiparty interaction scene, timing can be started, and whether the action characteristics of the students of each student client in the first time length threshold range are response action characteristics in the multiparty interaction scene or not is detected; if the student action characteristic of any student client is the response action characteristic, determining that the teacher client and the student client hit the interaction scene, and displaying the interaction effect of the interaction scene on the teacher client and the student client. If the student action characteristics of any student client in the first time length range are not response action characteristics, determining that the teacher client and the student client miss the interaction scene, and not displaying the interaction effect of the interaction scene on the student client.
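The trigger/response timing check described above can be sketched as follows. This is a minimal sketch under assumed names; the 10-second value is the example threshold given in the text.

```python
# Sketch of the multiparty-scene hit test: the scene is hit only if a
# student's response action arrives within the first duration threshold
# of the teacher's trigger action. Function and constant names are assumed.
FIRST_DURATION_THRESHOLD = 10.0  # seconds, settable per service requirement

def hits_multiparty_scene(trigger_time: float, response_time: float,
                          threshold: float = FIRST_DURATION_THRESHOLD) -> bool:
    """True if the response follows the trigger within the threshold."""
    elapsed = response_time - trigger_time
    return 0.0 <= elapsed < threshold
```

If the check fails for a student client, the response interaction effect is simply not displayed on that client.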
In the multiparty interaction scene, the triggering action features can be associated with triggering interaction effects, the response action features can be associated with response interaction results, the triggering action features and the response action features can be the same or different, and the triggering interaction effects and the response interaction effects can be different. Taking the clapping scene as an example, the triggering action feature and the responding action feature can be clapping action features, the triggering interaction effect can be a unilateral clapping animation effect, and the responding interaction effect can be a bilateral clapping animation effect.
Specifically, when the teacher action feature is detected to be a trigger action feature, the trigger interaction effect can be displayed on the teacher client and the student clients; when any student action feature is a response action feature, the response interaction effect can be displayed on that student client; when a student action feature is not a response action feature, the response interaction effect is not displayed on that student client. The new interaction mode in which real action features cooperate with virtual interaction effects is thus also applicable to multiparty interaction scenes, which further widens its application range and increases the interactivity and interest of online learning.
In an alternative embodiment, the method further comprises: and counting the times that the student action characteristics of the student client are response action characteristics in the interaction scene, and determining the learning concentration of the student client according to the times.
Specifically, the number of times that the student action feature of each student client hits the response action feature in the multi-round interaction scene can be counted, and the learning concentration of the student client is positively correlated with the counted number of times. The learning concentration degree is determined by the times that the student client responds to the multi-round interaction scene, and the learning concentration degree is simple, easy to calculate and high in accuracy.
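The concentration metric described above can be sketched as a simple ratio; the positive correlation with the hit count is preserved. The function name and the choice of a 0-1 ratio are illustrative assumptions.

```python
# Sketch of the learning-concentration metric: count how often a student
# client's action feature was the expected response across interaction
# rounds, and derive a score positively correlated with that count.
def learning_concentration(response_hits: int, total_rounds: int) -> float:
    """Fraction of interaction rounds the student responded to (0.0-1.0)."""
    if total_rounds == 0:
        return 0.0
    return response_hits / total_rounds
```

A student who responded in 3 of 4 interaction rounds would score 0.75, for example.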
According to the technical scheme, in the live broadcast learning process, multiparty interaction can be performed through mutual cooperation of the real action characteristics and the virtual interaction effect, the application range of a new interaction mode is further widened, and the interactivity and the interestingness of online learning are improved.
The embodiment of the application also provides an optional implementation mode of the online learning interaction method, and the online learning interaction efficiency can be further improved. Fig. 3 is a flow chart of another interactive method for online learning according to an embodiment of the present application. The method specifically comprises the following steps:
s310, performing action recognition on the teacher video stream acquired by the teacher client to obtain teacher action characteristics.
S320, performing action recognition on the student video stream acquired by the student client to obtain student action characteristics.
S330, determining whether the teacher action feature and/or the student action feature hit an interaction scene.
And S340, under the condition of hitting the multi-round interaction scene, determining the interaction client of the current action characteristic in the multi-round interaction scene.
S350, displaying the interaction effect of the current action characteristic at the interaction client.
A scene is a multi-round interaction scene when at least two consecutive rounds of interaction share the same intent. The intent of the multiple interactions may be determined from the voice interaction information during at least two rounds of interaction, the touch interaction information (i.e., the touch behavior information on the teacher client and/or student client), and the action features of at least two rounds of interaction. The current action features are the teacher action features and/or student action features that currently hit the multi-round interaction scene, and the interactive client of the current action feature is the client on which that feature's effect is to be displayed.
Specifically, the interactive client of the current action feature can be determined according to at least one of the current action feature, the touch interaction information, and the voice interaction information. For example, in a multi-round question scenario, the interactive client of the current action feature may be determined from at least one of the touch behavior data of the teacher client, the touch behavior data of each student client, the teacher's voice information, and each student's voice information. Taking the multi-round question scenario as an example: when the teacher raises a question, at least one student can participate by raising a hand; the teacher can select a student to answer from the participating students through voice information and/or touch behavior data; the selected student can answer through voice information and/or touch behavior data on the student client; and the teacher can comment on the answer through an action. The current action feature of the multi-round interaction scene can be the teacher's question-raising action, a student's hand-raising action, or the teacher's comment action, and the corresponding client to display is each student client, the teacher client, or the student client participating in the answer, respectively. In the multi-round interaction scene, determining the interactive client of the current action feature from the voice interaction information, the touch interaction information, and the action features of at least two rounds of interaction, and displaying the effect of the current action feature on that client, can further improve interaction efficiency and the accuracy of the interactive client, and reduces unnecessary interference compared with displaying the effect on every client.
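The routing of each step of the question scenario to its display client can be sketched as a small dispatch table. The step names and the mapping below are illustrative assumptions based on the example in the text, not the patent's actual identifiers.

```python
# Sketch of choosing the client(s) on which to show the current action's
# effect, per step of the multi-round question scenario described above.
def interactive_clients(current_action: str, all_student_clients: list,
                        teacher_client: str, answering_client: str) -> list:
    """Map the current action feature to the clients that display its effect."""
    if current_action == "teacher_raise_question":
        return list(all_student_clients)   # every student sees the prompt
    if current_action == "student_raise_hand":
        return [teacher_client]            # teacher sees who raised a hand
    if current_action == "teacher_comment":
        return [answering_client]          # the answering student sees feedback
    return []                              # unrecognized step: show nothing
```

Routing the effect only to the relevant client, rather than broadcasting to all clients, is what reduces the unnecessary interference mentioned above.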
In an alternative embodiment, in a case that the hit interaction scenario is a multi-round interaction scenario, the method further includes: and under the condition that the teacher action characteristic is detected to be a forced exit action, determining that the multi-round interaction scene is ended.
Specifically, the teacher can trigger the multi-round interaction scene and can control the end of the multi-round interaction scene through forced exit actions, so that the interaction efficiency and the interaction convenience are further improved.
In an alternative embodiment, in a case that the hit interaction scenario is a multi-round interaction scenario, the method further includes: and when the time length of the multi-round interaction scene is larger than a second time length threshold value, determining that the multi-round interaction scene is ended.
The second duration threshold may be set according to the service requirement, for example, 15 seconds. Specifically, timing can start when the teacher action feature is the trigger action feature of a multi-round interaction scene and the multi-round interaction scene is entered. When the duration of the multi-round interaction scene exceeds the second duration threshold, the multi-round interaction scene is determined to have ended. During the multi-round interaction scene, teacher action features and student action features are matched preferentially against the multi-round interaction scene, which can further improve interaction efficiency; after the multi-round interaction scene ends, teacher action features and student action features are matched against each interaction scene, which can improve interaction accuracy.
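The timeout-based end condition can be sketched as a single comparison against the scene's start time. The names are assumptions; the 15-second value is the example threshold given in the text.

```python
# Sketch of the multi-round scene lifetime check: the scene starts when the
# teacher's trigger action is detected, and ends once its duration exceeds
# the second duration threshold.
SECOND_DURATION_THRESHOLD = 15.0  # seconds, settable per service requirement

def multi_round_scene_ended(start_time: float, now: float,
                            threshold: float = SECOND_DURATION_THRESHOLD) -> bool:
    """True once the multi-round interaction scene has exceeded its lifetime."""
    return now - start_time > threshold
```

In practice this check would run alongside the other end conditions (the teacher's forced-exit action, or a hit on a different interaction scene).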
In an alternative embodiment, in a case that the hit interaction scenario is a multi-round interaction scenario, the method further includes: and determining that the multi-round interaction scene is ended under the condition that the teacher action characteristic and/or the student action characteristic hit other interaction scenes except the multi-round interaction scene.
Specifically, during the multi-round interaction scene, the teacher action features and student action features are matched not only against the action features of the multi-round interaction scene but also against the action features of other interaction scenes, thereby improving the hit accuracy of interaction scenes.
According to the technical scheme, in the multi-round interaction scene of live broadcast learning, the interaction effect of the current action feature is displayed on the interaction client through the interaction client capable of automatically determining the current action feature, so that the interaction efficiency can be further improved. And by determining whether the multi-round interaction scene is finished, the interaction accuracy can be further improved.
Fig. 4 is a schematic diagram of an online learning interaction device according to an embodiment of the present application, where the embodiment may be applicable to a situation of interaction between a teacher and a student in an online learning process, where the device is configured in an electronic device, and may implement the online learning interaction method according to any embodiment of the present application. The interactive device 400 for online learning specifically includes the following:
the teacher action recognition module 401 is configured to perform action recognition on a teacher video stream collected by a teacher client to obtain teacher action characteristics;
the student action recognition module 402 is configured to perform action recognition on a student video stream collected by a student client to obtain student action characteristics;
the interaction processing module 403 is configured to determine whether the teacher action feature and/or the student action feature hits an interaction scene, and to display the interaction effect of the interaction scene at the teacher client and/or the student client if an interaction scene is hit.
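The three modules can be pictured as a small pipeline. The sketch below is a hypothetical simplification: the function names and the frame/scene representations are assumptions, and real action recognition would run a vision model on video frames rather than a dictionary lookup.

```python
# Hypothetical end-to-end sketch of the Fig. 4 pipeline.

def recognize_action(frame):
    """Stand-in for action recognition on one video frame (modules 401/402)."""
    return frame.get("action")


def process_interaction(teacher_frame, student_frame, scenes):
    """Match recognized teacher/student features against the registered
    interaction scenes (module 403) and return the hit scene name, if any."""
    teacher_feature = recognize_action(teacher_frame)
    student_feature = recognize_action(student_frame)
    for scene in scenes:
        if (teacher_feature == scene["trigger"]
                and student_feature == scene["response"]):
            return scene["name"]  # effect would be shown at both clients
    return None
```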
The interaction processing module 403 is specifically configured to:
determining that the teacher action feature and the student action feature hit the interaction scene when the teacher action feature is a trigger action feature in the interaction scene, the student action feature is a response action feature in the interaction scene, and the duration between the teacher action feature and the student action feature is less than a first duration threshold.
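The hit condition can be written as a single predicate; the timestamps, field names, and the 5-second default below are illustrative assumptions, not values from the disclosure:

```python
# Sketch of the scene-hit test: the teacher feature is the scene's trigger
# feature, the student feature is its response feature, and the gap between
# them is below the first duration threshold. All names are hypothetical.

FIRST_DURATION_THRESHOLD = 5.0  # seconds; illustrative default


def hits_scene(teacher_feature, teacher_time, student_feature, student_time,
               scene, threshold=FIRST_DURATION_THRESHOLD):
    return (teacher_feature == scene["trigger"]
            and student_feature == scene["response"]
            and abs(student_time - teacher_time) < threshold)
```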
Wherein, the interactive device 400 for online learning further comprises:
the concentration degree determining module is configured to count the number of times that the student action features of the student client are response action features in the interaction scene, and to determine the learning concentration of the student client according to that number.
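A minimal sketch of such a concentration score follows; the disclosure does not specify the mapping from counts to concentration, so the response-rate formula below is an assumption:

```python
from collections import Counter

# Hypothetical concentration metric: the fraction of interaction scenes in
# which a student client produced the response action feature.

def learning_concentration(response_counts: Counter, total_scenes: int) -> dict:
    """Map each student client to a score in [0, 1]."""
    if total_scenes == 0:
        return {}
    return {student: count / total_scenes
            for student, count in response_counts.items()}
```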
The interaction processing module 403 is specifically configured to:
under the condition of hitting a multi-round interaction scene, determining an interaction client of the current action characteristic in the multi-round interaction scene;
and displaying the interactive effect of the current action characteristic at the interactive client.
The online learning interaction device 400 further includes a first interaction ending module, configured to:
determining that the multi-round interaction scene has ended when the teacher action feature is detected to be a forced-exit action; or
determining that the multi-round interaction scene has ended when the duration of the multi-round interaction scene exceeds a second duration threshold.
The online learning interaction device 400 further includes a second interaction ending module, configured to:
determining that the multi-round interaction scene has ended when the teacher action feature and/or the student action feature hits an interaction scene other than the multi-round interaction scene.
The interactive effect of the interactive scene is a gesture animation effect of the interactive scene.
According to the above technical scheme, in the live-broadcast learning process, action recognition is performed in real time on the teacher video stream and the student video stream to obtain teacher action features and student action features, and when those features hit an interaction scene, the interaction effect of the interaction scene is displayed at the teacher client and the student client. This provides a new interaction mode in which actual action features are matched with virtual interaction effects, improving the convenience of teacher-student interaction without interrupting the teaching process, and thus improving the efficiency of online learning interaction.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a read-only memory (ROM) 502 or loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 can also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the respective methods and processes described above, such as the online learning interaction method. For example, in some embodiments, the online learning interaction method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the online learning interaction method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the online learning interaction method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described here may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (12)

1. An interactive method of online learning, comprising:
performing action recognition on the teacher video stream acquired by the teacher client to obtain teacher action characteristics;
performing action recognition on student video streams acquired by student clients to obtain student action characteristics;
determining whether the teacher action feature and the student action feature hit an interaction scene, and determining an interaction client of the current action feature in the multi-round interaction scene according to at least one of the current action feature, touch interaction information and voice interaction information under the condition of hitting the multi-round interaction scene;
displaying the interactive effect of the current action characteristic at the interactive client;
the interaction scene is used for matching with the teacher action feature and the student action feature, and determining whether the teacher action feature and the student action feature hit the interaction scene or not according to a matching result; the interaction effect of the interaction scene is used for displaying the interaction effect of the interaction scene on the teacher client side and the student client side under the condition that the teacher action characteristic and the student action characteristic hit the interaction scene;
determining that the multi-round interaction scene has ended under the condition that the teacher action feature is detected to be a forced exit action; or
determining that the multi-round interaction scene has ended when the duration of the multi-round interaction scene is greater than a second duration threshold.
2. The method of claim 1, wherein the determining whether the teacher action feature and the student action feature hit an interaction scenario comprises:
determining that the teacher action feature and the student action feature hit the interaction scene under the condition that the teacher action feature is a trigger action feature in the interaction scene, the student action feature is a response action feature in the interaction scene, and the duration between the teacher action feature and the student action feature is less than a first duration threshold.
3. The method of claim 2, further comprising:
counting the number of times that the student action features of the student client are response action features in the interaction scene, and determining the learning concentration of the student client according to that number.
4. The method of claim 1, further comprising:
determining that the multi-round interaction scene has ended under the condition that the teacher action feature and the student action feature hit an interaction scene other than the multi-round interaction scene.
5. The method of claim 1, wherein the interactive effect of the interactive scene is a gesture animation effect of the interactive scene.
6. An interactive apparatus for online learning, comprising:
the teacher action recognition module is used for carrying out action recognition on the teacher video stream acquired by the teacher client to obtain teacher action characteristics;
the student action recognition module is used for performing action recognition on the student video stream acquired by the student client to obtain student action characteristics;
the interaction processing module is used for determining whether the teacher action feature and the student action feature hit an interaction scene or not, and displaying the interaction effect of the interaction scene on the teacher client side and the student client side under the condition of hitting the interaction scene; the interaction scene is used for matching with the teacher action feature and the student action feature, and determining whether the teacher action feature and the student action feature hit the interaction scene or not according to a matching result; the interaction effect of the interaction scene is used for displaying the interaction effect of the interaction scene on the teacher client side and the student client side under the condition that the teacher action characteristic and the student action characteristic hit the interaction scene;
the interaction processing module is specifically configured to:
under the condition of hitting a multi-round interaction scene, determining an interaction client of the current action characteristic in the multi-round interaction scene according to at least one of the current action characteristic, touch interaction information and voice interaction information;
displaying the interactive effect of the current action characteristic at the interactive client;
a first interaction ending module, configured to:
determining that the multi-round interaction scene has ended under the condition that the teacher action feature is detected to be a forced exit action; or
determining that the multi-round interaction scene has ended when the duration of the multi-round interaction scene is greater than a second duration threshold.
7. The apparatus of claim 6, wherein the interaction processing module is specifically configured to:
determining that the teacher action feature and the student action feature hit the interaction scene under the condition that the teacher action feature is a trigger action feature in the interaction scene, the student action feature is a response action feature in the interaction scene, and the duration between the teacher action feature and the student action feature is less than a first duration threshold.
8. The apparatus of claim 7, further comprising:
the concentration degree determining module is configured to count the number of times that the student action features of the student client are response action features in the interaction scene, and to determine the learning concentration of the student client according to that number.
9. The apparatus of claim 6, further comprising a second interaction termination module to:
determining that the multi-round interaction scene has ended under the condition that the teacher action feature and the student action feature hit an interaction scene other than the multi-round interaction scene.
10. The apparatus of claim 6, wherein the interactive effect of the interactive scene is a gesture animation effect of the interactive scene.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-5.
CN202110139622.2A 2021-02-01 2021-02-01 Online learning interaction method and device, electronic equipment and storage medium Active CN112995132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110139622.2A CN112995132B (en) 2021-02-01 2021-02-01 Online learning interaction method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112995132A CN112995132A (en) 2021-06-18
CN112995132B true CN112995132B (en) 2023-05-02

Family

ID=76346729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110139622.2A Active CN112995132B (en) 2021-02-01 2021-02-01 Online learning interaction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112995132B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107316520A (en) * 2017-08-17 2017-11-03 广州视源电子科技股份有限公司 Video teaching interactive approach, device, equipment and storage medium
CN109413002A (en) * 2017-08-16 2019-03-01 Tcl集团股份有限公司 A kind of classroom interaction live broadcasting method, system and terminal
CN111063339A (en) * 2019-11-11 2020-04-24 珠海格力电器股份有限公司 Intelligent interaction method, device, equipment and computer readable medium
CN111274910A (en) * 2020-01-16 2020-06-12 腾讯科技(深圳)有限公司 Scene interaction method and device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10872535B2 (en) * 2009-07-24 2020-12-22 Tutor Group Limited Facilitating facial recognition, augmented reality, and virtual reality in online teaching groups


Also Published As

Publication number Publication date
CN112995132A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN112527115B (en) User image generation method, related device and computer program product
EP3828868A2 (en) Method and apparatus for determining key learning content, device, storage medium, and computer program product
CN112241715A (en) Model training method, expression recognition method, device, equipment and storage medium
CN113359995B (en) Man-machine interaction method, device, equipment and storage medium
US20220076476A1 (en) Method for generating user avatar, related apparatus and computer program product
CN113596488B (en) Live broadcast room display method and device, electronic equipment and storage medium
CN113242358A (en) Audio data processing method, device and system, electronic equipment and storage medium
CN112995132B (en) Online learning interaction method and device, electronic equipment and storage medium
CN117033587A (en) Man-machine interaction method and device, electronic equipment and medium
CN114363704B (en) Video playing method, device, equipment and storage medium
CN113327311B (en) Virtual character-based display method, device, equipment and storage medium
CN113742581B (en) Method and device for generating list, electronic equipment and readable storage medium
CN113784217A (en) Video playing method, device, equipment and storage medium
CN111741250A (en) Method, device and equipment for analyzing participation degree of video conversation scene and storage medium
CN113961132B (en) Interactive processing method and device, electronic equipment and storage medium
CN114979471B (en) Interface display method, device, electronic equipment and computer readable storage medium
CN117931022A (en) Interface control method and device, electronic equipment and storage medium
CN113840177B (en) Live interaction method and device, storage medium and electronic equipment
CN116233045A (en) Cross-scene chat construction method and device and electronic equipment
CN114283227B (en) Virtual character driving method and device, electronic equipment and readable storage medium
CN116546159A (en) Conference control method, conference control device, online conference system, online conference equipment and online conference medium
CN116360726A (en) Audio playing method and device, electronic equipment and storage medium
CN116841498A (en) Audio playing method and device, electronic equipment and storage medium
CN117311810A (en) Task auxiliary execution method and related equipment
CN114691922A (en) Session processing method, device and equipment based on virtual object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant