CN114173191B - Multi-language answering method and system based on artificial intelligence - Google Patents

Multi-language answering method and system based on artificial intelligence

Info

Publication number
CN114173191B
Authority
CN
China
Prior art keywords
language
video
teacher
words
teaching
Prior art date
Legal status (assumption, not a legal conclusion)
Active
Application number
CN202111496800.3A
Other languages
Chinese (zh)
Other versions
CN114173191A (en)
Inventor
肖君
白庆春
王腊梅
臧宏
盛海龙
Current Assignee (the listed assignees may be inaccurate)
Shanghai Open University
Original Assignee
Shanghai Open University
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Shanghai open university filed Critical Shanghai open university
Priority to CN202111496800.3A priority Critical patent/CN114173191B/en
Publication of CN114173191A publication Critical patent/CN114173191A/en
Application granted granted Critical
Publication of CN114173191B publication Critical patent/CN114173191B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43Querying
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The application discloses an artificial-intelligence-based multilingual question-answering method and system. The method comprises: acquiring a question described in a first language; extracting words and sentences from the question; searching predetermined content using the words and sentences as keywords, where the predetermined content is the text of the teacher's teaching content in the language course the user studied most recently, and each paragraph of the text corresponds to time information of the teacher's lecture; after a paragraph including the words and sentences is retrieved from the text, obtaining the time information corresponding to that paragraph; extracting the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course; and presenting the multimedia clip as the answer to the question. This solves the prior-art problem of low answering efficiency in online learning caused by teachers answering questions manually, thereby improving answering efficiency to a certain extent.

Description

Multi-language answering method and system based on artificial intelligence
Technical Field
The application relates to the field of online education, and in particular to an artificial-intelligence-based multilingual question-answering method and system.
Background
Online courses generally include a question-answering session, and some of the questions students raise in that session concern content the teacher has already taught in class.
At present, students ask their questions over the network, and the teacher then answers them by voice or in writing.
Although the questions are submitted online, the teacher still answers them manually, which increases the teacher's burden and slows down how quickly students' questions are answered.
Disclosure of Invention
The embodiments of the present application provide an artificial-intelligence-based multilingual question-answering method and system, at least to solve the prior-art problem of low answering efficiency in online learning caused by teachers answering questions manually.
According to one aspect of the present application, an artificial-intelligence-based multilingual question-answering method is provided, comprising: acquiring a question described in a first language, where the first language is the native language of the user submitting the question and is one of a plurality of preconfigured supported languages; extracting words and sentences from the question, where the words and sentences are expressed in a second language different from the first language; searching predetermined content using the words and sentences as keywords, where the predetermined content is the text of the teacher's teaching content in the language course the user studied most recently, and each paragraph of the text corresponds to time information of the teacher's lecture; after a paragraph including the words and sentences is retrieved from the text, obtaining the time information corresponding to that paragraph; extracting the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course; and displaying the multimedia clip as the answer to the question.
Further, the method further comprises: acquiring the video of the teacher's lecture; extracting the audio from the video and converting the audio into text, where paragraphs are split wherever a pause in the audio exceeds a threshold, and the time span each paragraph of text occupies in the video is marked using the time points of the two pauses surrounding it.
Further, the method further comprises: recording the teacher's teaching process to obtain the teacher's lecture video.
Further, extracting the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course comprises: acquiring the course to which the question belongs; and, when acquiring the course to which the question belongs fails, extracting the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course.
According to another aspect of the present application, an artificial-intelligence-based multilingual question-answering system is also provided, comprising: a first acquisition module configured to acquire a question described in a first language, where the first language is the native language of the user submitting the question and is one of a plurality of preconfigured supported languages; an extraction module configured to extract words and sentences from the question, where the words and sentences are expressed in a second language different from the first language; a retrieval module configured to search predetermined content using the words and sentences as keywords, where the predetermined content is the text of the teacher's teaching content in the language course the user studied most recently, and each paragraph of the text corresponds to time information of the teacher's lecture; a second acquisition module configured to obtain, after a paragraph including the words and sentences is retrieved from the text, the time information corresponding to that paragraph; a clipping module configured to extract the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course; and a display module configured to display the multimedia clip as the answer to the question.
Further, the system further comprises: a third acquisition module configured to acquire the video of the teacher's lecture; and a conversion module configured to extract the audio from the video and convert the audio into text, where paragraphs are split wherever a pause in the audio exceeds a threshold, and the time span each paragraph of text occupies in the video is marked using the time points of the two pauses surrounding it.
Further, the system further comprises: a recording module configured to record the teacher's teaching process and obtain the teacher's lecture video.
Further, the clipping module is configured to: acquire the course to which the question belongs; and, when acquiring the course to which the question belongs fails, extract the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course.
In the embodiments of the present application, a question described in a first language is acquired, where the first language is the native language of the user submitting the question; words and sentences are extracted from the question, where the words and sentences are expressed in a second language different from the first language; predetermined content is searched using the words and sentences as keywords, where the predetermined content is the text of the teacher's teaching content in the language course the user studied most recently, and each paragraph of the text corresponds to time information of the teacher's lecture; after a paragraph including the words and sentences is retrieved from the text, the time information corresponding to that paragraph is obtained; the multimedia clip corresponding to the time information is extracted from the teacher's lecture video recorded in the most recently studied language course; and the multimedia clip is displayed as the answer to the question. This solves the prior-art problem of low answering efficiency in online learning caused by teachers answering questions manually, thereby improving answering efficiency to a certain extent.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
Fig. 1 is a flowchart of an artificial-intelligence-based multilingual question-answering method according to an embodiment of the present application.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
In this embodiment, an artificial-intelligence-based multilingual question-answering method is provided. Fig. 1 is a flowchart of the method according to an embodiment of the present application; as shown in Fig. 1, the flow comprises the following steps:
step S102, acquiring a question described in a first language, where the first language is the native language of the user submitting the question and is one of a plurality of preconfigured supported languages;
step S104, extracting words and sentences from the question, where the words and sentences are expressed in a second language different from the first language;
step S106, searching predetermined content using the words and sentences as keywords, where the predetermined content is the text of the teacher's teaching content in the language course the user studied most recently, and each paragraph of the text corresponds to time information of the teacher's lecture;
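The retrieval over timestamped transcript paragraphs described above can be sketched as a keyword scan: each paragraph carries the time span it occupies in the lecture video, and the span of the first matching paragraph is returned. This is an illustrative sketch; the `Paragraph` structure and the function name are assumptions, not terms from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Paragraph:
    text: str     # transcribed text of one lecture paragraph
    start: float  # seconds into the lecture video where the paragraph begins
    end: float    # seconds into the lecture video where it ends

def find_time_span(paragraphs: List[Paragraph], keyword: str) -> Optional[Tuple[float, float]]:
    """Return the (start, end) span of the first paragraph containing the
    keyword, or None when no paragraph matches."""
    for p in paragraphs:
        if keyword in p.text:
            return (p.start, p.end)
    return None
```

The returned span is exactly the "time information" that the later clipping step consumes.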
As an alternative, the text is extracted from the audio/video of the teacher's lecture. There are various extraction methods; this embodiment provides one alternative: capture the audio data and image of the current speaker and record the time at which the current speaker starts speaking; process the captured audio data and convert it into text; process the captured image and recognize the current speaker's facial expression to obtain the speaker's emotion; process the captured audio data and/or image to identify the current speaker and assign each speaker an identity tag; and generate a text record from the text, the recognized start time, the current speaker's identity tag, and the current speaker's emotion. Preferably, the identity tag is stored in the storage module in association with the speaker's voiceprint feature data and/or facial feature data; before an identity tag is assigned to a speaker, the storage module is searched for an identity tag already matching that speaker, and a new tag is assigned only if none is found.
Step S108, after a preset paragraph comprising the words and sentences is retrieved in the text, time information corresponding to the preset paragraph is obtained;
step S110, intercepting a multimedia fragment corresponding to the time information from the video of the teacher teaching recorded in the language course of the last learning;
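Cutting the matching time span out of the lecture recording could be delegated to a video tool. The sketch below only builds an ffmpeg command string rather than executing it; ffmpeg and the stream-copy flags are assumed tooling, not anything the patent names.

```python
import shlex

def ffmpeg_clip_command(video_path: str, start: float, end: float, out_path: str) -> str:
    """Build an ffmpeg invocation that copies the [start, end] span out of the
    lecture recording without re-encoding. The command is only constructed
    here, not run."""
    return ("ffmpeg -ss {:.2f} -to {:.2f} -i {} -c copy {}"
            .format(start, end, shlex.quote(video_path), shlex.quote(out_path)))
```

Stream copy (`-c copy`) keeps clipping fast, at the cost of cutting only on keyframe boundaries; re-encoding would be needed for frame-exact cuts.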
As an alternative embodiment, the teacher selected by the user (or student) is obtained; the language course to which the multimedia clip belongs is determined from the clip, and it is judged whether the selected teacher teaches that language course; if so, the selected teacher's lecture video is obtained and the multimedia clip corresponding to the time information is extracted from that lecture video.
And step S112, displaying the multimedia clip as the answer to the question.
As an alternative implementation, after the answer to the question is displayed, the student's evaluation of how well the answer matches the question is received; when the matching degree exceeds a preset value, a multimedia library is searched for multimedia files matching the words, where each multimedia file is extracted from a film and carries a tag comprising one or more words. This can serve as a further learning mode for students and can enhance their learning effect.
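The tag-based lookup in the multimedia library described above can be sketched as follows, assuming a hypothetical library of (clip id, tag set) pairs; the structure and function name are illustrative, not from the patent.

```python
from typing import Iterable, List, Set, Tuple

def find_tagged_clips(library: Iterable[Tuple[str, Set[str]]], word: str) -> List[str]:
    """Return the ids of all clips whose tag set contains the queried word.
    Each library entry pairs a clip id with the words tagged on that clip."""
    return [clip_id for clip_id, tags in library if word in tags]
```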
As an optional implementation, the predetermined content further comprises the texts corresponding to the recorded video files of all courses for the language in question. When several texts containing the words are found, the paragraph including the words and its time information are obtained from each text; the multimedia clip corresponding to each piece of time information is extracted from the corresponding video file, yielding several multimedia clips; the clips are displayed; the clip the user selected last is obtained, and, when the teacher in that clip differs from the teacher of the course the user is studying, that teacher's lecture video is offered to the user for viewing.
Through this optional implementation, courses taught by other teachers can be recommended to the user, matching the user's preferences and improving the learning effect.
As another alternative implementation, a video may be generated from the text paragraph, where the generated video features a virtual lecturer; both the generated video and the extracted multimedia clip are displayed as answers to the question. There are many ways to generate such a video. For example: parse the text and filter the video data set according to the parsed text; match the filtered candidate video clips against the input scene description, compute a matching degree, rank by it, and output the clips with high matching degree; produce a text description of those clips using an encoder and a decoder; compare the similarity between the text descriptions of the selected clips and the natural-language description of the scene, and output the set of key frames whose content fits that description; recognize and extract the objects in the key-frame set to generate an object set; convert the text into a scene graph whose nodes represent objects and whose edges represent the relationships between them; generate key frames from the scene graph and the object set, producing a continuous key-frame set suitable for synthesizing video textures; and finally find the key-frame-set transition points, determine the playback order, and generate the video.
Through the above steps, a characteristic of question answering in online learning is exploited: the questions students commonly ask have usually already been covered by the teacher in class. By analyzing the second-language words in students' questions and automatically matching them against the teacher's lecture video, the prior-art problem of low answering efficiency caused by teachers answering manually is solved, and answering efficiency is improved to a certain extent.
There are many ways to obtain the text. For example, the video of the teacher's lecture can be obtained; the audio is extracted from the video and converted into text, where paragraphs are split wherever a pause in the audio exceeds a threshold, and the time span each paragraph of text occupies in the video is marked using the time points of the two pauses surrounding it.
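The pause-based paragraph split described above can be sketched over word-level timestamps of the kind a speech recognizer typically emits. The (word, start, end) triple format and the 1.5-second default threshold are assumptions for illustration; the patent specifies only that the pause must exceed a threshold.

```python
from typing import List, Tuple

def split_on_pauses(words: List[Tuple[str, float, float]],
                    pause_threshold: float = 1.5) -> List[Tuple[str, float, float]]:
    """Group (word, start, end) triples into paragraphs: a new paragraph
    begins whenever the silence between consecutive words exceeds
    pause_threshold seconds. Each paragraph is returned with the time span
    it occupies in the video."""
    paragraphs: List[Tuple[str, float, float]] = []
    current: List[str] = []
    seg_start = words[0][1] if words else 0.0
    prev_end = seg_start
    for word, start, end in words:
        if current and start - prev_end > pause_threshold:
            # Pause exceeded the threshold: close the current paragraph.
            paragraphs.append((" ".join(current), seg_start, prev_end))
            current, seg_start = [], start
        current.append(word)
        prev_end = end
    if current:
        paragraphs.append((" ".join(current), seg_start, prev_end))
    return paragraphs
```

The (start, end) pair attached to each paragraph is the time information later used to clip the answer from the lecture video.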
In this embodiment, a preferred method for converting video into text is provided: a subtitle file is extracted from the video; if extraction succeeds, the subtitle file is split into paragraphs according to the pauses in the video's sound, and the time span each paragraph occupies in the video is marked using the time points of the two surrounding pauses. If subtitle extraction fails, the audio is extracted from the video and converted into text, with paragraphs split wherever a pause in the audio exceeds a threshold and time spans marked in the same way.
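The subtitle-first strategy with speech-to-text fallback can be sketched as follows. Both helpers are injected as callables here because the patent names no concrete subtitle-extraction or speech-recognition tools; everything about the wiring is an illustrative assumption.

```python
from typing import Callable, Optional

def lecture_text(video_path: str,
                 extract_subtitles: Callable[[str], Optional[str]],
                 transcribe_audio: Callable[[str], str]) -> str:
    """Preferred path: use the subtitle track when extraction succeeds
    (returns non-None); otherwise fall back to speech-to-text on the audio."""
    subtitles = extract_subtitles(video_path)
    return subtitles if subtitles is not None else transcribe_audio(video_path)
```

Preferring subtitles avoids recognition errors entirely when a caption track already exists, and the fallback keeps the pipeline usable for raw recordings.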
Preferably, the method may further comprise: recording the teacher's teaching process to obtain the teacher's lecture video.
Preferably, extracting the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language lesson comprises: acquiring the course to which the question belongs; and, when acquiring the course to which the question belongs fails, extracting the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course.
As an optional implementation, if the course to which the question belongs is acquired, the lecture video of that course's teacher is obtained, and the multimedia clip corresponding to the time information is extracted from the lecture video recorded in that course.
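The choice of source recording described in the two preceding paragraphs can be sketched as follows; the callable-based wiring and all names are illustrative assumptions, not the patent's own interfaces.

```python
from typing import Callable, Optional

def pick_source_video(question_id: str,
                      course_of: Callable[[str], Optional[str]],
                      video_of_course: Callable[[str], str],
                      last_lesson_video: str) -> str:
    """Clip from the recording of the course the question belongs to when
    that course can be resolved; otherwise fall back to the recording of
    the most recently studied lesson."""
    course = course_of(question_id)
    if course is not None:
        return video_of_course(course)
    return last_lesson_video
```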
In this embodiment, an electronic device is also provided, comprising a memory in which a computer program is stored and a processor configured to run the computer program so as to perform the method of the above embodiment.
The above programs may be run on a processor or stored in memory (also referred to as computer-readable media), which includes permanent and non-permanent, removable and non-removable media; information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
These computer programs may also be loaded onto a computer or other programmable data-processing apparatus so that a series of operational steps is performed on it to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart and/or block-diagram blocks; corresponding steps may be implemented in different modules. Also provided in this embodiment is a system, which may be referred to as an artificial-intelligence-based multilingual question-answering system, comprising: a first acquisition module configured to acquire a question described in a first language, where the first language is the native language of the user submitting the question and is one of a plurality of preconfigured supported languages; an extraction module configured to extract words and sentences from the question, where the words and sentences are expressed in a second language different from the first language; a retrieval module configured to search predetermined content using the words and sentences as keywords, where the predetermined content is the text of the teacher's teaching content in the language course the user studied most recently, and each paragraph of the text corresponds to time information of the teacher's lecture; a second acquisition module configured to obtain, after a paragraph including the words and sentences is retrieved from the text, the time information corresponding to that paragraph; a clipping module configured to extract the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course; and a display module configured to display the multimedia clip as the answer to the question.
The modules in the system are software modules that perform one or more steps of the methods described above and are not described again here.
For example, the system may further comprise: a third acquisition module configured to acquire the video of the teacher's lecture; and a conversion module configured to extract the audio from the video and convert it into text, where paragraphs are split wherever a pause in the audio exceeds a threshold, and the time span each paragraph occupies in the video is marked using the time points of the two surrounding pauses.
For another example, the system may further comprise: a recording module configured to record the teacher's teaching process and obtain the teacher's lecture video.
For another example, the clipping module is configured to acquire the course to which the question belongs and, when acquiring that course fails, to extract the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course.
Through this embodiment, the fact that the questions students ask have usually already been covered by the teacher in class is exploited: by analyzing the second-language words in students' questions and automatically matching them against the teacher's lecture video, the prior-art problem of low answering efficiency caused by teachers answering manually in online learning is solved, and answering efficiency is improved to a certain extent.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (6)

1. An artificial-intelligence-based multilingual question-answering method, characterized by comprising the following steps:
acquiring the video of a teacher's lecture;
extracting audio from the video and converting the audio into text, wherein paragraphs are split wherever a pause in the audio exceeds a threshold, and the time span each paragraph of text occupies in the video is marked using the time points of the two pauses surrounding it;
acquiring a question described using a first language and a second language, wherein the first language is the native language of the user submitting the question and is one of a plurality of preconfigured supported languages;
extracting words and sentences from the question, wherein the words and sentences are expressed in the second language, and the second language is different from the first language;
searching predetermined content using the words and sentences as keywords, wherein the predetermined content is the text of the teacher's teaching content in the language course the user studied most recently, and each paragraph of the text corresponds to the time information;
after a paragraph including the words and sentences is retrieved from the text, obtaining the time information corresponding to that paragraph;
extracting the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course;
and displaying the multimedia clip as the answer to the question.
2. The method according to claim 1, characterized in that the method further comprises:
recording the teacher's teaching process to obtain the teacher's lecture video.
3. The method according to any one of claims 1 to 2, characterized in that extracting the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language lesson comprises:
acquiring the course to which the question belongs;
and, when acquiring the course to which the question belongs fails, extracting the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course.
4. An artificial-intelligence-based multilingual question-answering system, characterized by comprising:
a third acquisition module configured to acquire the video of a teacher's lecture;
a conversion module configured to extract audio from the video and convert the audio into text, wherein paragraphs are split wherever a pause in the audio exceeds a threshold, and the time span each paragraph of text occupies in the video is marked using the time points of the two pauses surrounding it;
a first acquisition module configured to acquire a question described using a first language and a second language, wherein the first language is the native language of the user submitting the question and is one of a plurality of preconfigured supported languages;
an extraction module configured to extract words and sentences from the question, wherein the words and sentences are expressed in the second language, and the second language is different from the first language;
a retrieval module configured to search predetermined content using the words and sentences as keywords, wherein the predetermined content is the text of the teacher's teaching content in the language course the user studied most recently, and each paragraph of the text corresponds to the time information;
a second acquisition module configured to obtain, after a paragraph including the words and sentences is retrieved from the text, the time information corresponding to that paragraph;
a clipping module configured to extract the multimedia clip corresponding to the time information from the teacher's lecture video recorded in the most recently studied language course;
and a display module configured to display the multimedia clip as the answer to the question.
5. The system of claim 4, wherein the system further comprises:
a recording module, configured to record the teacher's lecture and obtain the video of the teacher's teaching.
6. The system of any one of claims 4 to 5, wherein the clipping module is configured to:
acquire the course to which the question belongs;
and, in the case that acquiring the course to which the question belongs fails, clip the multimedia segment corresponding to the time information from the video of the teacher's lecture recorded in the most recently studied language course.
CN202111496800.3A 2021-12-09 2021-12-09 Multi-language answering method and system based on artificial intelligence Active CN114173191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111496800.3A CN114173191B (en) 2021-12-09 2021-12-09 Multi-language answering method and system based on artificial intelligence


Publications (2)

Publication Number Publication Date
CN114173191A CN114173191A (en) 2022-03-11
CN114173191B true CN114173191B (en) 2024-03-19

Family

ID=80484772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111496800.3A Active CN114173191B (en) 2021-12-09 2021-12-09 Multi-language answering method and system based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN114173191B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445734A (en) * 2019-01-17 2020-07-24 上海语朵教育科技有限公司 Chinese learning system based on education reservation

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101281534A (en) * 2008-05-28 2008-10-08 叶睿智 Method for searching multimedia resource based on audio content retrieval
CN103956166A (en) * 2014-05-27 2014-07-30 华东理工大学 Multimedia courseware retrieval system based on voice keyword recognition
CN106126619A (en) * 2016-06-20 2016-11-16 中山大学 A kind of video retrieval method based on video content and system
CN106294764A (en) * 2016-08-11 2017-01-04 乐视控股(北京)有限公司 A kind of video platform word and search method and apparatus
WO2020224362A1 (en) * 2019-05-07 2020-11-12 华为技术有限公司 Video segmentation method and video segmentation device
CN112017487A (en) * 2019-05-29 2020-12-01 深圳市希科普股份有限公司 Flat Flash learning system based on artificial intelligence




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant