CN117876170A - Online training method and device based on multi-mode large model, storage medium and server - Google Patents


Info

Publication number
CN117876170A
Authority
CN
China
Prior art keywords: data, user, course, character, public
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311840966.1A
Other languages
Chinese (zh)
Inventor
张海璇
赵峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yuanguang Software Co Ltd
Original Assignee
Yuanguang Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yuanguang Software Co Ltd filed Critical Yuanguang Software Co Ltd
Priority to CN202311840966.1A priority Critical patent/CN117876170A/en
Publication of CN117876170A publication Critical patent/CN117876170A/en
Pending legal-status Critical Current

Landscapes

  • Electrically Operated Instructional Devices (AREA)

Abstract

The embodiment of the application discloses an online training method and device based on a multi-modal large model, a storage medium, and a terminal device, and relates to the field of online education. In the method, a chat group is created from the user and at least one selected public character; the learning material data of a course is parsed by the multi-modal large model to generate a knowledge text sequence; and the knowledge text sequence is converted into voice messages or text messages using the personality data of the public character and output to the chat group. The user can thus flexibly select a favorite public character to explain any course according to the user's own needs, which improves the fun and immersive atmosphere of learning and increases user stickiness.

Description

Online training method and device based on multi-modal large model, storage medium and server
Technical Field
The application relates to the field of online education, and in particular to an online training method and device, storage medium, and server based on a multi-modal large model.
Background
In current online training courses, a user subscribes to a course through the Internet, and a server then assigns a teacher to the user according to the teacher's schedule. When the course starts, the user enters an online classroom, and the teacher uses the learning materials of the course to teach the user. Each teacher's teaching style is fixed, and if the user does not like the teacher's style, replacing the teacher on short notice is very troublesome.
Disclosure of Invention
The embodiments of the application provide an online training method, device, storage medium and server based on a multi-modal large model, which can solve the problem in the prior art that the teaching style is relatively fixed. The technical scheme is as follows:
In a first aspect, an embodiment of the present application provides an online training method based on a multi-modal large model, where the method includes:
receiving a selection instruction of a user; the selection instruction carries a character ID and a course ID;
querying the personality data of the corresponding public character in a character database according to the character ID, and querying the corresponding learning material data in a course database according to the course ID; the personality data includes: idioms, voiceprint information, and mood information;
creating a chat group according to the selected public character and the user, and parsing the learning material data by using a pre-trained multi-modal large model to generate a knowledge text sequence;
and outputting, in the chat group, chat messages corresponding to the knowledge text sequence according to the personality data.
In a second aspect, embodiments of the present application provide an online training apparatus based on a multi-modal large model, the apparatus including:
a receiving unit for receiving a selection instruction of a user; the selection instruction carries a character ID and a course ID;
a querying unit for querying the personality data of the corresponding public character in the character database according to the character ID, and querying the corresponding learning material data in the course database according to the course ID; the personality data includes: idioms, voiceprint information, and mood information;
a generating unit for creating a chat group according to the selected public character and the user, and parsing the learning material data by using a pre-trained multi-modal large model to generate a knowledge text sequence;
and an output unit for outputting, in the chat group, chat messages corresponding to the knowledge text sequence according to the personality data.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-described method steps.
In a fourth aspect, embodiments of the present application provide a server, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical schemes provided by some embodiments of the present application have at least the following beneficial effects:
The user selects at least one public character and a required course; a chat group is then created from the user and the selected public character(s); the learning material data of the course is parsed by the multi-modal large model to generate a knowledge text sequence; and the knowledge text sequence is converted into voice messages or text messages using the personality data of the public character and output to the chat group. The user can thus flexibly select a favorite public character to explain any course according to the user's own needs.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present application, and that a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a network architecture provided in an embodiment of the present application;
FIG. 2 is a flow diagram of an online training method based on a multi-modal large model provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an online training device based on a multi-modal large model provided by the present application;
fig. 4 is a schematic structural diagram of a server provided in the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings.
It should be noted that the online training method based on the multi-modal large model provided by the application is generally executed by a server; correspondingly, the online training device based on the multi-modal large model is generally arranged in the server.
FIG. 1 illustrates an exemplary system architecture to which the multi-modal large model-based online training method or the multi-modal large model-based online training apparatus of the present application may be applied.
As shown in fig. 1, the system architecture may include a terminal device 101 and a server 102. The terminal device 101 and the server 102 may communicate through a network, which serves as the medium providing the communication link between them. The network may include various types of wired or wireless communication links; for example, the wired communication links include optical fiber, twisted pair, coaxial cable, and the like, and the wireless communication links include Bluetooth communication links, Wireless Fidelity (Wi-Fi) communication links, microwave communication links, and the like.
The server 102 is deployed with a trained multi-modal large model. On the course configuration interface, the terminal device 101 selects at least one public character and a course; the server 102 queries the learning material data of the course and the personality data of the public character, and uses the personality data to output, in the chat group, chat messages generated after parsing the learning material data.
The terminal device 101 and the server 102 may each be hardware or software. When they are hardware, each may be implemented as a distributed cluster composed of a plurality of devices, or as a single device. When they are software, each may be implemented as a plurality of software programs or software modules (for example, to provide distributed services), or as a single software program or software module, which is not particularly limited herein.
Various communication client applications can be installed on the terminal device of the present application, for example: video recording applications, video playing applications, voice interaction applications, search class applications, instant messaging tools, mailbox clients, social platform software, and the like.
The terminal device may be hardware or software. When the terminal device is hardware, it may be various terminal devices with a display screen including, but not limited to, smartphones, tablet computers, laptop and desktop computers, and the like. When the terminal device is software, the terminal device may be installed in the above-listed terminal device. Which may be implemented as multiple software or software modules (e.g., to provide distributed services), or as a single software or software module, without limitation.
When the terminal device is hardware, a display device and a camera may be arranged on it; the display device may be any device capable of realizing a display function, and the camera is used to collect video streams. For example, the display device may be a cathode ray tube display (CRT), a light-emitting diode display (LED), an electronic ink screen, a liquid crystal display (LCD), a plasma display panel (PDP), or the like. The user can view displayed text, pictures, videos and other information using the display device on the terminal device.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. Any number of terminal devices, networks, and servers may be used as desired for implementation.
The online training method based on the multi-modal large model provided in the embodiments of the application will be described in detail below with reference to fig. 2. The online training device based on the multi-modal large model in the embodiments of the application may be the server shown in fig. 1.
Referring to fig. 2, a flow chart of an online training method based on a multi-modal large model is provided for an embodiment of the present application. As shown in fig. 2, the method of the embodiment of the present application may include the following steps:
s201, receiving a selection instruction of a user.
After the user inputs account information on the terminal device, a login instruction is sent to the server; after verifying that the account information in the login instruction is correct, the server allows the user to log in and displays a course configuration interface on the user's terminal device. The course configuration interface displays a candidate character set and a candidate course set. The character set includes the IDs of one or more public characters authorized for use, together with, for example, the avatar and name of each public character; a public character may be a star, a scholar, a professor, and so on. The course set includes the IDs of one or more courses. Through the input unit of the terminal device, the user performs an operation of selecting at least one character ID from the character set and an operation of selecting one course ID from the course set; the terminal device generates a selection instruction according to these operations and sends the selection instruction carrying the character ID(s) and the course ID to the server, where the number of character IDs may be one or more.
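The patent gives no code, so as an illustration only, the selection instruction of S201 might be modeled as a small validated data structure. All names here (`SelectionInstruction`, `parse_selection`, the field names) are assumptions, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class SelectionInstruction:
    # Illustrative fields; the patent only says the instruction
    # carries one or more character IDs and a course ID.
    user_id: str
    character_ids: list
    course_id: str

def parse_selection(payload: dict) -> SelectionInstruction:
    """Validate a raw payload from the terminal device and build the instruction."""
    character_ids = list(payload["character_ids"])
    if not character_ids:
        raise ValueError("at least one character ID must be selected")
    return SelectionInstruction(
        user_id=payload["user_id"],
        character_ids=character_ids,
        course_id=payload["course_id"],
    )

instr = parse_selection(
    {"user_id": "u1", "character_ids": ["star_A", "star_B"], "course_id": "c42"}
)
print(instr.character_ids)  # ['star_A', 'star_B']
```

The validation reflects the statement that the number of character IDs may be one or more, but never zero.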
S202, querying the personality data of the corresponding public character in the character database according to the character ID, and querying the corresponding learning material data in the course database according to the course ID.
The character database stores the personality data of each public character. The personality data includes idioms, voiceprint information, and mood information, and may be extracted from the audio data, video data, and image data of the public character by the trained multi-modal large model. For example, in a case where the company has obtained authorization from public character A, in order to extract the personality data of public character A, personal speech audio data, interview program audio data, movie clip video data, and still whole-body photographs of public character A may be input into the trained multi-modal large model, which outputs personality data such as the idioms, voiceprint information, and mood information of public character A. Further, the terminal device may acquire the audio data, video data, and image data of public character A using a search engine. For example, the terminal device invokes a search engine and sends it a search instruction carrying the public character's name, the data types (audio files, video files, and image files), and movie names; the search engine uses a web crawler to acquire, from each network server, target data meeting the search conditions and returns the target data to the terminal device; the terminal device de-duplicates the acquired audio, video, and picture data and then inputs it into the trained multi-modal large model.
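The two lookups of S202 can be sketched with in-memory stand-ins for the character and course databases. The dictionaries, sample IDs, and function names below are purely illustrative; a real deployment would use persistent storage:

```python
# Hypothetical in-memory stand-ins for the character and course databases.
CHARACTER_DB = {
    "star_A": {
        "idioms": ["as I always say"],
        "voiceprint": "vp_star_A",   # placeholder for real voiceprint features
        "mood": "cheerful",
    },
}
COURSE_DB = {
    "c42": {"materials": ["lesson1.docx", "lesson2.pdf"]},
}

def query_personality(character_id: str) -> dict:
    """First half of S202: look up personality data by character ID."""
    if character_id not in CHARACTER_DB:
        raise LookupError(f"unknown character ID: {character_id}")
    return CHARACTER_DB[character_id]

def query_materials(course_id: str) -> list:
    """Second half of S202: look up learning material data by course ID."""
    if course_id not in COURSE_DB:
        raise LookupError(f"unknown course ID: {course_id}")
    return COURSE_DB[course_id]["materials"]

print(query_personality("star_A")["mood"])   # cheerful
print(query_materials("c42"))                # ['lesson1.docx', 'lesson2.pdf']
```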
The course database stores the learning material data of each course. The learning material data represents the course-related files that the user studies, including Word files, PDF files, image files, and the like. Further, if a course has a plurality of lessons, the server obtains the learning progress information of the user, which indicates the lessons the user has already learned, and then obtains the learning material data corresponding to the course ID in the course database according to the learning progress information. For example, if course A has 10 lessons and the learning progress information of the user shows that lesson 5 has been learned, the server acquires the learning material data of lesson 6 of course A from the course database.
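The progress-based lookup in the example above (lesson 5 learned, so lesson 6 is served) can be sketched as follows; the function name and data shapes are assumptions for illustration:

```python
def next_lesson_materials(lessons: dict, learned_lessons: int) -> list:
    """Pick the materials of the lesson following the last learned one.

    `lessons` maps lesson number -> list of material files; `learned_lessons`
    is taken from the user's learning progress information."""
    next_no = learned_lessons + 1
    return lessons.get(next_no, [])  # empty list once the course is finished

# Course A with 10 lessons; the user has learned lesson 5, so lesson 6 is next.
course_a = {n: [f"courseA_lesson{n}.pdf"] for n in range(1, 11)}
print(next_lesson_materials(course_a, 5))   # ['courseA_lesson6.pdf']
print(next_lesson_materials(course_a, 10))  # []
```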
Further, in order to consolidate the user's learning outcomes, after the user finishes the current lesson, the server arranges a test paper for the current lesson on the user's terminal device. Before the next lesson starts, the server acquires the user's answer situation for the test paper; the answer situation information indicates whether each question was answered correctly. The server queries the learning material data of the next lesson in the course database, acquires the learning material data corresponding to the wrongly answered questions according to the answer situation information, and fuses the two sets of learning material data to obtain the learning materials finally used in the next lesson, so that the learning results are consolidated while the user studies the next lesson.
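The fusion step above can be sketched as prepending remedial material for each wrongly answered question to the next lesson's materials. The mapping from question to remedial material (`remedial_index`) and all names are illustrative assumptions:

```python
def fuse_materials(next_lesson: list, answer_record: dict, remedial_index: dict) -> list:
    """Combine next-lesson materials with remedial material for wrong answers.

    answer_record:   question ID -> True if answered correctly, else False.
    remedial_index:  question ID -> material file covering that question."""
    wrong = [q for q, correct in answer_record.items() if not correct]
    remedial = [remedial_index[q] for q in wrong if q in remedial_index]
    # Remedial material first, then the regular next-lesson material.
    return remedial + list(next_lesson)

fused = fuse_materials(
    next_lesson=["lesson6.pdf"],
    answer_record={"q1": True, "q2": False},
    remedial_index={"q2": "lesson5_section3.pdf"},
)
print(fused)  # ['lesson5_section3.pdf', 'lesson6.pdf']
```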
S203, creating a chat group according to the selected public character(s) and the user, and parsing the learning material data by using the pre-trained multi-modal large model to generate a knowledge text sequence.
The chat group is composed of the user and the at least one public character selected by the user. For example, if the user selects star A and star B, the server creates a chat group containing the user, star A, and star B. The server parses the learning material data in S202 by using the pre-trained multi-modal large model to generate a knowledge text sequence; that is, the multi-modal large model splits the learning material data into a plurality of time-ordered knowledge texts, each representing a knowledge point.
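As a minimal sketch of S203's splitting step: the patent performs the split with the pre-trained multi-modal large model, so `split_fn` below is a stand-in hook for that model call, and the default (blank-line-separated paragraphs as knowledge points) exists only so the pipeline can be demonstrated end to end:

```python
def build_knowledge_sequence(material_text: str, split_fn=None) -> list:
    """Split learning material into a time-ordered knowledge text sequence.

    split_fn is a placeholder for the multi-modal large model; the default
    naively treats each blank-line-separated paragraph as one knowledge point."""
    if split_fn is None:
        split_fn = lambda text: [p.strip() for p in text.split("\n\n") if p.strip()]
    return split_fn(material_text)

material = "Knowledge point 1: debits.\n\nKnowledge point 2: credits."
print(build_knowledge_sequence(material))
# ['Knowledge point 1: debits.', 'Knowledge point 2: credits.']
```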
S204, outputting, in the chat group, chat messages corresponding to the knowledge text sequence according to the personality data.
The chat message may be a text message or a voice message. If the chat message is a text message, the knowledge text is processed using the idioms and mood information of the public character to generate the text message, which is then pushed in the chat group; if the chat message is a voice message, the knowledge text is converted into a voice message using the voiceprint information, idioms, and mood information of the public character, and the voice message is then pushed in the chat group. When the user has selected a plurality of public characters, each knowledge text in the knowledge text sequence may be allocated to the public characters according to a certain allocation rule, for example an even allocation, so that the plurality of public characters take turns outputting voice messages in the chat group.
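One concrete reading of the "even allocation" rule above is a round-robin assignment of knowledge texts to the selected characters; this is an illustrative sketch, and the patent does not fix a specific rule:

```python
def allocate_knowledge_texts(knowledge_texts: list, character_ids: list) -> list:
    """Evenly assign knowledge texts to the selected characters, round-robin.

    Returns (character_id, knowledge_text) pairs in output order, so the
    characters take turns speaking in the chat group."""
    if not character_ids:
        raise ValueError("at least one public character is required")
    return [
        (character_ids[i % len(character_ids)], text)
        for i, text in enumerate(knowledge_texts)
    ]

pairs = allocate_knowledge_texts(["k1", "k2", "k3"], ["star_A", "star_B"])
print(pairs)  # [('star_A', 'k1'), ('star_B', 'k2'), ('star_A', 'k3')]
```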
Furthermore, in order to enhance interactivity during learning, the user may also ask questions in the chat group. When the server detects a question text from the user, it uses the trained multi-modal large model to detect whether the question text belongs to the knowledge range of the current course. If so, an answer text is output and then converted into a reply message using the personality data of a public character; the reply message may be a text message or a voice message. If not, a preset response message indicating that the question is out of scope is output, for example: "Your question is beyond the scope of this course."
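The branching logic above can be sketched as follows. In the patent both the scope check and the answer come from the trained multi-modal large model; here a naive keyword test and a callback stand in for those model calls, purely for illustration:

```python
OUT_OF_SCOPE_REPLY = "Your question is beyond the scope of this course."

def reply_to_question(question: str, course_keywords: set, answer_fn) -> str:
    """Answer a question only if it falls within the course's knowledge range.

    course_keywords: naive stand-in for the model's scope detection.
    answer_fn:       stand-in for the model's answer generation."""
    in_scope = any(kw in question.lower() for kw in course_keywords)
    if not in_scope:
        return OUT_OF_SCOPE_REPLY
    return answer_fn(question)

print(reply_to_question("What is a debit?", {"debit", "credit"},
                        lambda q: "A debit is an entry on the left side."))
print(reply_to_question("Who won the match?", {"debit", "credit"},
                        lambda q: "..."))  # prints the out-of-scope reply
```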
In order to further improve the fun of learning, the learning material data may include movie and television drama clips of the public character that are related to the knowledge points of the course. By parsing the movie and television drama clips to output the knowledge text sequence, the user can learn the knowledge points of the course while watching the clips, which improves the fun of learning and achieves immersive learning.
In the embodiments of the application, in the course of learning, the user selects at least one public character and a required course; a chat group is then created from the user and the selected public character(s); the learning material data of the course is parsed by the multi-modal large model to generate a knowledge text sequence; and the knowledge text sequence is converted into voice messages or text messages using the personality data of the public character and output to the chat group. The user can thus flexibly select a favorite public character to explain any course according to the user's own needs, which improves the fun and immersive atmosphere of learning and increases user stickiness.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 3, a schematic structural diagram of an online training device based on a multi-modal large model according to an exemplary embodiment of the present application is shown; the device is hereinafter referred to as device 3. The device 3 may be implemented as all or part of a server by software, hardware, or a combination of both. The device 3 comprises: a receiving unit 301, a query unit 302, a generating unit 303, and an output unit 304.
A receiving unit 301, configured to receive a selection instruction of a user; the selection instruction carries a character ID and a course ID;
a query unit 302, configured to query the personality data of the corresponding public character in the character database according to the character ID, and to query the corresponding learning material data in the course database according to the course ID; the personality data includes: idioms, voiceprint information, and mood information;
a generating unit 303, configured to create a chat group according to the selected public character and the user, and to parse the learning material data by using a pre-trained multi-modal large model to generate a knowledge text sequence;
and the output unit 304 is configured to output, in the chat group, a chat message corresponding to the knowledge text sequence according to the personality data.
In one or more possible embodiments, further comprising:
an extraction unit for acquiring audio data, video data, and image data of a public character;
extracting the audio data, the video data, and the image data by using a pre-trained multi-modal large model, and outputting the personality data;
the personality data is stored in a personality database.
In one or more possible embodiments, the acquiring the audio data, the video data, and the image data of the public character includes:
a search engine is invoked to query the internet for audio data, video data, and image data of public characters.
In one or more possible embodiments, further comprising:
the interaction unit is used for receiving the question text input by the user in the chat group;
processing the question text by using a trained multi-modal large model to output an answer text;
converting the answer text into a reply voice message according to the personality data of the public character, and outputting the reply voice message in the chat group.
In one or more possible embodiments, the learning material data includes movie episodes of the public character, the movie episodes being related to knowledge points of the lesson.
In one or more possible embodiments, further comprising:
a test unit for outputting a test paper of the course from the test question library after the course has been learned;
and for scoring the answer text submitted by the user, and recording the answer situation information and score of each question in the test paper.
In one or more possible embodiments, the lesson comprises a plurality of lessons;
the querying of the corresponding learning material data in the course database according to the course ID includes:
determining the learned lessons according to the learning progress information of the user;
obtaining the answer situation information of the user's last test paper; and
querying the learning material data of the next lesson in the course database according to the learned lessons and the answer situation information.
It should be noted that when the device 3 provided in the foregoing embodiment performs the online training method based on the multi-modal large model, the division into the above functional modules is only used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the online training device based on the multi-modal large model provided in the above embodiment belongs to the same concept as the embodiments of the online training method based on the multi-modal large model; its detailed implementation process is embodied in the method embodiments and is not repeated here.
The serial numbers of the foregoing embodiments of the present application are for description only and do not represent the advantages or disadvantages of the embodiments.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are adapted to be loaded by a processor and execute the method steps of the embodiment shown in fig. 2, and the specific execution process may refer to the specific description of the embodiment shown in fig. 2, which is not repeated herein.
The present application also provides a computer program product storing at least one instruction that is loaded and executed by the processor to implement the multi-modal large model-based online training method as described in various embodiments above.
Referring to fig. 4, a schematic structural diagram of a server is provided in an embodiment of the present application. As shown in fig. 4, the server 400 may include: at least one processor 401, at least one network interface 404, a user interface 403, a memory 405, and at least one communication bus 402.
The communication bus 402 is used to implement connection and communication between these components.
The user interface 403 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 403 may further include a standard wired interface and a standard wireless interface.
The network interface 404 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
The processor 401 may include one or more processing cores. The processor 401 connects the various parts of the entire server 400 using various interfaces and lines, and performs the various functions of the server 400 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 405 and invoking data stored in the memory 405. Alternatively, the processor 401 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor 401 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed by the display screen; and the modem is used to handle wireless communication. It will be appreciated that the modem may also not be integrated into the processor 401 and may instead be implemented by a separate chip.
The memory 405 may include a random access memory (RAM) or a read-only memory (ROM). Optionally, the memory 405 includes a non-transitory computer-readable storage medium. The memory 405 may be used to store instructions, programs, code sets, or instruction sets. The memory 405 may include a stored-program area and a stored-data area, where the stored-program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, and the like), instructions for implementing the above method embodiments, and the like; and the stored-data area may store the data referred to in the above method embodiments. The memory 405 may optionally also be at least one storage device located remotely from the processor 401. As shown in fig. 4, the memory 405, which is a computer storage medium, may include an operating system, a network communication module, a user interface module, and application programs.
In the server 400 shown in fig. 4, the user interface 403 is mainly used for providing an input interface for a user, and acquiring data input by the user; the processor 401 may be configured to invoke an application program stored in the memory 405, and specifically execute the method shown in fig. 2, and the specific process may be shown in fig. 2, which is not repeated herein.
Those skilled in the art will appreciate that all or part of the procedures of the methods of the above embodiments may be implemented by a computer program stored on a computer-readable storage medium; when executed, the program may include the procedures of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
The foregoing disclosure is only illustrative of the preferred embodiments of the present application and is not intended to limit the scope of the claims; equivalent changes made according to the claims of the present application shall still fall within the scope covered by the claims.

Claims (10)

1. An online training method based on a multi-modal large model, characterized in that the method comprises:
receiving a selection instruction of a user; the selection instruction carries a character ID and a course ID;
querying the personality data of the corresponding public character in a character database according to the character ID, and querying the corresponding learning material data in a course database according to the course ID; the personality data includes: idioms, voiceprint information, and mood information;
creating a chat group according to the selected public character and the user, and parsing the learning material data by using a pre-trained multi-modal large model to generate a knowledge text sequence;
and outputting chat messages corresponding to the knowledge text sequences in the chat group according to the personality data.
2. The method of claim 1, further comprising, prior to receiving the user selection instruction:
acquiring audio data, video data and image data of a public character;
extracting the audio data, the video data, and the image data by using a pre-trained multi-modal large model, and outputting the personality data;
and storing the personality data into a personality database.
3. The method of claim 2, wherein the acquiring the audio data, the video data, and the image data of the public character comprises:
a search engine is invoked to query the internet for audio data, video data, and image data of public characters.
4. The method according to claim 1 or 2, further comprising:
receiving a question text input by the user in the chat group;
processing the question text by using a trained multi-modal large model to output an answer text;
converting the question text into a reply voice message according to personality data of the public character, and outputting the reply voice message in a chat group.
5. The method of claim 4, wherein the learning material data includes movie episodes of the public character, the movie episodes being related to knowledge points of the lesson.
6. The method as recited in claim 5, further comprising:
after the course learning is completed, outputting a test paper of the course in a test question library;
and scoring the answer text submitted by the user, and recording answer condition information and scores of each question in the test paper.
7. The method of claim 6, wherein the lesson comprises a plurality of lessons;
the step of inquiring corresponding learning material data in a course database according to the course ID comprises the following steps:
determining a learned lesson time according to the learning progress information of the user;
obtaining answer condition information of the last test paper of the user;
inquiring learning material data in the next lesson in the course database according to the learned lesson time and the answering situation information.
8. An on-line training device based on a multi-mode large model is characterized by comprising:
a receiving unit for receiving a selection instruction of a user; the selection instruction carries a character ID and a course ID;
the inquiring unit is used for inquiring the personality data of the corresponding public personage in the personage database according to the personage ID and inquiring the corresponding learning material data in the course database according to the course ID; the personality data includes: idioms, voiceprint information, and mood information;
the generation unit is used for creating a chat group according to the selected public character and the user, and analyzing the learning material data by utilizing a pre-trained multi-mode big model to generate a knowledge text sequence;
and the output unit is used for outputting chat messages corresponding to the knowledge text sequences in the chat group according to the personality data.
9. A computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the method steps of any one of claims 1 to 7.
10. A server, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1-7.
CN202311840966.1A 2023-12-29 2023-12-29 Online training method and device based on multi-mode large model, storage medium and server Pending CN117876170A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311840966.1A CN117876170A (en) 2023-12-29 2023-12-29 Online training method and device based on multi-mode large model, storage medium and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311840966.1A CN117876170A (en) 2023-12-29 2023-12-29 Online training method and device based on multi-mode large model, storage medium and server

Publications (1)

Publication Number Publication Date
CN117876170A true CN117876170A (en) 2024-04-12

Family

ID=90591343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311840966.1A Pending CN117876170A (en) 2023-12-29 2023-12-29 Online training method and device based on multi-mode large model, storage medium and server

Country Status (1)

Country Link
CN (1) CN117876170A (en)

Similar Documents

Publication Publication Date Title
CN110033659B (en) Remote teaching interaction method, server, terminal and system
CN110570698B (en) Online teaching control method and device, storage medium and terminal
CN110673777A (en) Online teaching method and device, storage medium and terminal equipment
CN110600033B (en) Learning condition evaluation method and device, storage medium and electronic equipment
CN110491218A (en) A kind of online teaching exchange method, device, storage medium and electronic equipment
CN111651497B (en) User tag mining method and device, storage medium and electronic equipment
US10796592B2 (en) User generated content within an online education platform
CN110569364A (en) online teaching method, device, server and storage medium
CN110880324A (en) Voice data processing method and device, storage medium and electronic equipment
CN111343507A (en) Online teaching method and device, storage medium and electronic equipment
CN109326151A (en) Implementation method, client and server based on semantics-driven virtual image
CN111107442A (en) Method and device for acquiring audio and video files, server and storage medium
CN111260975B (en) Method, device, medium and electronic equipment for multimedia blackboard teaching interaction
CN113992929A (en) Virtual digital human interaction method, system, equipment and computer program product
US20170270812A1 (en) Method for learning assessment
CN110046290B (en) Personalized autonomous teaching course system
KR20070006742A (en) Language teaching method
CN113850898A (en) Scene rendering method and device, storage medium and electronic equipment
CN110867187B (en) Voice data processing method and device, storage medium and electronic equipment
CN113257060A (en) Question answering solving method, device, equipment and storage medium
CN112991848A (en) Remote education method and system based on virtual reality
CN112447073A (en) Explanation video generation method, explanation video display method and device
CN116881412A (en) Chinese character multidimensional information matching training method and device, electronic equipment and storage medium
CN116962787A (en) Interaction method, device, equipment and storage medium based on video information
CN117876170A (en) Online training method and device based on multi-mode large model, storage medium and server

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination