CN112309183A - Interactive listening and speaking exercise system suitable for foreign language teaching - Google Patents


Info

Publication number
CN112309183A
CN112309183A
Authority
CN
China
Prior art keywords
module
feedback
user
speaking
user terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011263720.9A
Other languages
Chinese (zh)
Inventor
陈昕昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Institute of Economic and Trade Technology
Original Assignee
Jiangsu Institute of Economic and Trade Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Institute of Economic and Trade Technology filed Critical Jiangsu Institute of Economic and Trade Technology
Priority to CN202011263720.9A priority Critical patent/CN112309183A/en
Publication of CN112309183A publication Critical patent/CN112309183A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The application discloses an interactive listening and speaking exercise system suitable for foreign language teaching, comprising a user terminal and an application server. The user terminal includes: a speaker; a microphone; a processor for processing voice information; and a first-type communication module for connecting the processor to the outside. The application server includes: a field analysis module for analyzing the voice information and obtaining statement fields; a semantic analysis module for outputting a feedback field corresponding to the statement field according to the semantic analysis result; a sentence combination module for generating a corresponding feedback sentence according to the feedback field and grammatical relations; a voice generation module for generating feedback voice from the feedback sentence; and a second-type communication module for sending the feedback voice to the user terminal. The system has the beneficial effect of effectively simulating real conversation scenes for foreign language teaching.

Description

Interactive listening and speaking exercise system suitable for foreign language teaching
Technical Field
The application relates to a listening and speaking exercise system, in particular to an interactive listening and speaking exercise system suitable for foreign language teaching.
Background
Existing listening and speaking practice systems applied to foreign language teaching usually play a recorded sound template and have the user read along with it. This cannot achieve interactive training: the user can only practice pronunciation against a fixed template, and the ability to compose words and sentences goes unpracticed.
In other prior art schemes, spoken-language dialogues are conducted through human-to-human interaction.
Real-person interaction places requirements on both parties: if the partner is a fellow student rather than a teacher, the training effect is poor; if the partner is an experienced teacher, the cost of the exercise is high, and there is no way to provide every student with a teacher.
At present, there is no listening and speaking practice system that can effectively improve users' spoken-language ability.
Disclosure of Invention
In order to solve the defects in the prior art, the application provides an interactive listening and speaking exercise system suitable for foreign language teaching.
The interactive listening and speaking exercise system suitable for foreign language teaching comprises: a user terminal, operated by the user to carry out listening and speaking exercises; and an application server, which exchanges data with the user terminal to provide the data required for the exercises. The user terminal includes: a speaker for outputting voice information; a microphone for collecting voice information; a processor for processing the voice information collected by the microphone and outputting voice information through the speaker; and a first-type communication module for connecting the processor to the outside. The application server includes: a field analysis module for analyzing the voice information and obtaining statement fields; a semantic analysis module for outputting a feedback field corresponding to the statement field according to the semantic analysis result; a sentence combination module for generating a corresponding feedback sentence according to the feedback field and grammatical relations; a voice generation module for generating feedback voice from the feedback sentence; and a second-type communication module for sending the feedback voice to the user terminal.
Further, the application server further comprises: and the pairing module is used for pairing the two user terminals so as to enable the voice information of the two user terminals to be interacted online.
Further, the application server pairs the two user terminals through the pairing module, and the voice information of the paired user terminals is processed by the application server and then sent to the other paired user terminal.
Furthermore, the semantic analysis module comprises a feedback artificial neural network; the feedback artificial neural network comprises a plurality of semantic analysis models, and the semantic analysis models are trained by taking fields as input and output.
Further, the feedback artificial neural network outputs a corresponding feedback field and confidence when analyzing the statement field.
Further, the application server includes: a self-adaptive module for adaptively outputting a response sentence when the confidence of the feedback artificial neural network's output is lower than a preset value; the self-adaptive module finds the closest historical statement field stored in the database according to the statement field, and then outputs the feedback field corresponding to that historical statement field.
Further, the user terminal includes: the display module is used for displaying image information; and the simulation module is used for generating a virtual portrait for realizing conversation with a user in the display module.
Further, the user terminal further includes: a camera for acquiring a face image of the user; and the application server further comprises: an expression recognition module for generating user expression data from the face image acquired by the camera; the expression recognition module inputs the user expression data into the semantic analysis module as input data for the feedback artificial neural network.
Further, the application server further comprises: a lip shape recognition module for recognizing the user's lip shape from the face image collected by the camera and generating lip recognition data; the lip shape recognition module inputs the lip recognition data into the semantic analysis module as input data for the feedback artificial neural network.
Further, the application server further comprises: a data analysis module for analyzing and summarizing the user's voice information as analyzed by the semantic analysis module; the data analysis module sends the analysis data to the user terminal, which displays it to the user through the display module.
The application has the advantages that: an interactive listening and speaking practice system suitable for foreign language teaching is provided, which can effectively realize the simulation of real dialogue scenes.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
FIG. 1 is a schematic diagram of a system architecture of an interactive practice system for foreign language instruction, according to an embodiment of the present application;
FIG. 2 is a block diagram of an interactive practice system for listening and speaking for foreign language instruction according to one embodiment of the present application;
fig. 3 is a schematic diagram of a semantic analysis module in an interactive listening and speaking practice system for foreign language teaching according to an embodiment of the present application.
The meaning of the reference symbols in the figures:
a system 100 for an interactive practice system for listening and speaking for foreign language instruction;
a user terminal 200;
an application server 300.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1 to 3, an interactive listening and speaking practice system for foreign language teaching according to the present application includes: a plurality of user terminals and an application server.
Specifically, the user terminal is used for the user to operate for listening and speaking exercises. As a specific scheme, the user terminal includes: speaker, microphone, camera, treater and first type communication module.
The loudspeaker is used for outputting voice information; the microphone is used for collecting voice information; the processor is used for processing the voice information collected by the microphone and outputting the voice information through the loudspeaker. The camera is used for collecting face images of the user.
As a specific implementation scheme, the user terminal of the present application may be a smart phone, a smart tablet, or a PC. Of course, the user terminal of the present application may also be configured as a dedicated learning device.
As another part of the technical solution of the application, the application server establishes data interaction with the user terminal to provide it with the data required for listening and speaking exercises. The application server has the data-processing and storage capacity of a general server. Specifically, the application server comprises: a field analysis module, a semantic analysis module, a sentence combination module, a voice generation module, and a second-type communication module.
The field analysis module is used for analyzing the voice information and obtaining statement fields; the semantic analysis module is used for outputting a feedback field corresponding to the statement field according to a semantic analysis result; the sentence combination module is used for generating a corresponding feedback sentence according to the feedback field and the grammatical relation; the voice generating module is used for generating feedback voice according to the feedback statement; and the second communication module is used for sending the feedback voice to the user terminal.
Specifically, the field analysis module generates corresponding statement fields from the voice information, that is, it recognizes the words in the audio file. As an extension, the field analysis module may be placed in the user terminal, which then completes the field analysis and sends the result, i.e. the field data, to the application server for processing.
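The field analysis step described above can be sketched in Python. This is not part of the patent: it assumes the speech has already been converted to text by an upstream recognizer, and the tokenization is a deliberately minimal illustration.

```python
def parse_fields(recognized_text):
    """Split recognized speech text into statement fields (word tokens).

    Minimal sketch of the field analysis module: a real system would run
    speech recognition on the audio first; here we assume the text is
    already recognized and only illustrate the field extraction.
    """
    fields = [w.strip(".,!?").lower() for w in recognized_text.split()]
    return [f for f in fields if f]  # drop empty tokens


def needs_reprompt(fields):
    """Terminal-side check: when no meaningful field was recognized,
    the user should be prompted to re-input the voice."""
    return len(fields) == 0
```

Usage: `parse_fields("How are you?")` yields the statement fields `["how", "are", "you"]`, and `needs_reprompt([])` signals the re-input prompt described above.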
Preferably, when no meaningful field can be recognized from the user's voice information, the user terminal prompts the user to re-input the voice, either through an anthropomorphic dialogue or through an on-screen image.
The semantic analysis module is used for analyzing the meaning of the statement field and outputting a feedback field which can correspond to the statement field, and specifically, the semantic analysis module comprises a feedback artificial neural network; the feedback artificial neural network comprises a plurality of semantic analysis models, and the semantic analysis models are trained by taking fields as input and output.
As a preferred scheme of the present application, a feedback artificial neural network is constructed and trained with field data from corresponding question-answer dialogues, the two sides of each exchange serving respectively as input data and output data, so that the network can output a corresponding feedback field, i.e. a response sentence, for the input field data. This scheme is best suited to applications where the listening and speaking practice scene is explicitly defined. The trained feedback network gives intelligent feedback for an input field, outputting both the feedback field and its confidence. The application server judges whether the confidence exceeds a preset value: if so, the feedback field is used for feedback; if not, the self-adaptive module takes over.
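The confidence-threshold dispatch just described can be sketched as follows. This is an illustration, not the patent's implementation; the threshold value and function names are assumptions.

```python
CONFIDENCE_THRESHOLD = 0.7  # the "preset value"; the actual number is an assumption


def choose_feedback(ann_output, adaptive_fallback):
    """Dispatch between the feedback network's output and the self-adaptive
    module, per the confidence rule described above.

    ann_output: (feedback_field, confidence) pair from the feedback network.
    adaptive_fallback: zero-argument callable standing in for the
    self-adaptive module.
    """
    field, confidence = ann_output
    if confidence >= CONFIDENCE_THRESHOLD:
        return field          # confident enough: use the network's feedback
    return adaptive_fallback()  # otherwise fall back to the adaptive module
```

For example, `choose_feedback(("glad to hear it", 0.9), fallback)` returns the network's feedback field, while a confidence of 0.2 would invoke the fallback.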
When the confidence of the feedback field exceeds the preset value, the sentence combination module uses the feedback field to generate a corresponding feedback sentence according to the field's meaning, part of speech, and grammatical relations; the voice generation module generates feedback voice from the feedback sentence; the feedback voice is then sent to the user terminal, whose speaker plays it.
As an extension, the voice generation module may be placed in the user terminal. When the confidence of the feedback field is lower than the preset value, the current feedback field is unsuitable for the current dialogue exercise. The self-adaptive module then takes over: it searches for the stored historical statement field closest to the current statement field and outputs the feedback field corresponding to that historical field. If the history database contains no sufficiently close statement field, a counter-question sentence is retrieved from the template library for feedback, the event is noted in the system, and an administrator is prompted to handle it manually.
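The self-adaptive module's nearest-history lookup and template fallback can be sketched as below. The similarity metric (field overlap) and the data representation are illustrative assumptions, not the patent's method.

```python
def adaptive_feedback(statement_fields, history,
                      template_counter_question="Could you say that again?"):
    """Self-adaptive module sketch: find the stored historical statement
    whose fields overlap most with the current one and return its feedback
    field. When nothing close exists, fall back to a template-library
    counter-question and flag the case for manual (administrator) review.

    history: list of (historical_statement_fields, feedback_field) pairs.
    Returns (feedback, needs_manual_review).
    """
    current = set(statement_fields)
    best, best_score = None, 0
    for hist_fields, hist_feedback in history:
        score = len(current & set(hist_fields))  # simple overlap similarity
        if score > best_score:
            best, best_score = hist_feedback, score
    if best is None:
        # No close historical statement field: template fallback + flag.
        return template_counter_question, True
    return best, False
```

With `history = [(["how", "are", "you"], "i am fine")]`, the input fields `["how", "are", "you", "today"]` recover the stored feedback, while an unrelated input triggers the flagged counter-question.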
Preferably, the application server further comprises: an expression recognition module and a lip shape recognition module.
The expression recognition module generates user expression data from the face image acquired by the camera, and inputs the user expression data into the semantic analysis module as input data for the feedback artificial neural network.
The lip shape recognition module recognizes the user's lip shape from the face image collected by the camera, generates lip recognition data, and inputs the lip recognition data into the semantic analysis module as input data for the feedback artificial neural network.
The expression recognition module mainly recognizes the user's current expression in order to judge the user's emotion and assist context analysis. As a further scheme, the expression recognition module may itself use a feedback artificial neural network for recognition, with the recognized data input into the semantic analysis module. The user expression data may be divided into categories such as happy, neutral, and sad, and each category may be given a score as a degree of distinction.
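The category-to-score mapping mentioned above might look like this in code. The specific score values are illustrative assumptions; the patent only says that categories may be given scores.

```python
# Illustrative scores for the expression categories named in the text.
EXPRESSION_SCORES = {"happy": 1.0, "neutral": 0.5, "sad": 0.0}


def expression_feature(label):
    """Map a recognized expression label to a numeric score that can be
    supplied to the semantic analysis network alongside the statement
    fields. Unknown labels default to the neutral score."""
    return EXPRESSION_SCORES.get(label, 0.5)
```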
The lip shape recognition module analyzes the user's pronunciation from the lip shape, to assist in judging the content of the user's voice information. The lip recognition data are synchronously fed to the neural network for learning, so that speech recognition accuracy can be improved according to the user's lip habits.
As a preferred scheme of the present application, the user terminal includes: a display module for displaying image information; and a simulation module for generating, on the display module, a virtual portrait that converses with the user.
As an extended technical solution, the display module may further display the statement field corresponding to the user's voice information and the statement field of the feedback voice played by the speaker, that is, display the conversation content.
This is the application's stand-alone practice scheme, i.e. one in which a single user practices listening and speaking in a learning mode.
Although this technical scheme realizes interactive man-machine conversation exercises, the characteristics of the machine make it better suited to a beginner's practice mode, i.e. one in which the scope of the dialogue and the application scenario are preset.
As an extension, the application server further includes: and a pairing module.
The pairing module is used for pairing the two user terminals so as to enable the voice information of the two user terminals to be interacted online. The application server pairs the two user terminals through the pairing module, and the voice information of the paired user terminals is processed by the application server and then sent to the other paired user terminal.
The pairing module is used for enabling two users needing spoken language practice to carry out interactive training.
As an extension, after the pairing module pairs two users, the users do not converse directly through their terminals. Instead, each user's voice information is processed by the semantic analysis module and the other modules in the application server and then fed to the other user's terminal, which plays the conversation voice to its user through the virtual portrait. That user then answers by voice through his or her own terminal, and the collected voice information is likewise processed by the application server before being fed back. Through this processing, two beginners can hold a fluent conversation, each believing the partner to be a more proficient speaker. In effect, the application server analyzes what each user means to express and then, through the corresponding algorithms, produces sentences that conform to spoken-language expression, so that the spoken-language level of both users gradually improves.
To match the working mode of the pairing module, the semantic analysis module may additionally contain a group of artificial neural networks distinct from the feedback artificial neural network introduced earlier, defined here as the forward artificial neural network. The forward network outputs spoken statements, rather than feedback fields, from statement fields. As a preferred scheme, the application server judges whether the statement field produced by the field analysis module meets an expression standard pre-stored in the server. If not, the field is input into the forward network, which outputs statement information expressing the user's meaning, together with a confidence; as before, the confidence is checked against a threshold. The forward network can learn from preset English sentences: a large number of sentences are split into single words, the words are fed as unordered input items, and the corresponding sentences are used as output items for training.
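The training-data preparation just described (unordered words in, original sentence out) can be sketched as below. This only illustrates the data preparation, not the network itself; the function name and seeding are assumptions.

```python
import random


def make_forward_training_pairs(sentences, seed=0):
    """Build (unordered-words -> sentence) training pairs for the forward
    network, following the description: each sentence is split into single
    words, the words are shuffled to serve as unordered input items, and
    the original sentence is the output item (target)."""
    rng = random.Random(seed)  # seeded for reproducible shuffles
    pairs = []
    for sentence in sentences:
        words = sentence.split()
        shuffled = words[:]
        rng.shuffle(shuffled)
        pairs.append((shuffled, sentence))
    return pairs
```

Each pair keeps exactly the words of its source sentence, only in scrambled order, with the intact sentence as the training target.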
As a preferred scheme, the forward network may serve as part of the feedback network: the forward statement, i.e. the spoken-language expression obtained from the statement field, is obtained first and then input into a sub-artificial-neural-network for processing, which yields the feedback statement.
Preferably, the networks may be trained in a loop: each feedback statement is split and then used as further training data for the forward network, which is thus trained continuously.
As an extension, in pairing mode, the sentences generated from users' conscious expression and the sentences answering them are used respectively as input data and output data for training the feedback network (or its sub-network). That is, the sentences generated from the users' statement fields serve as feedback learning material, so the networks are trained during paired learning.
As a further extension, the pairing module may adopt three pairing modes. In the first, the application server matches together users who select the same voice conversation scene. In the second, a user actively adds friends for voice practice; the friends are virtual, and although a real user stands behind each virtual friend, the application server acts as an intermediary, so a fixed partner can be chosen for practice under a virtual identity and friend mode. In the third, the application server analyzes the topic of one user's ongoing conversation from the semantically analyzed data, searches the server for other users with similar topics, and then converts the machine's self-answering into answering by the matched user.
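The first pairing mode (match users who selected the same scene) can be sketched as a per-scene FIFO queue. This is an illustrative design, not the patent's implementation; class and method names are assumptions.

```python
from collections import defaultdict, deque


class ScenePairing:
    """Sketch of the first pairing mode: users who pick the same voice
    conversation scene are matched together, first come, first served."""

    def __init__(self):
        self.waiting = defaultdict(deque)  # scene -> queue of waiting users

    def request(self, user_id, scene):
        """Return a matched pair (earlier_user, user_id), or None if the
        user must wait for a partner in this scene."""
        queue = self.waiting[scene]
        if queue:
            partner = queue.popleft()
            return (partner, user_id)
        queue.append(user_id)
        return None
```

Usage: the first user requesting the "cafe" scene waits; the second request for the same scene immediately yields the pair.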
The pairing module can use the first and third modes together.
As an alternative, the application server further comprises a data analysis module. The data analysis module analyzes and summarizes the user's voice information as analyzed by the semantic analysis module, sends the analysis data to the user terminal, and displays it to the user through the display module.
The data analysis module analyzes the sentence data in the user's voice information and compares it with the sentence data processed by the artificial neural networks, thereby feeding the user's spoken-language practice situation back to the user. The practice situation can also be analyzed through statistics on question-answer validity.
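The question-answer validity statistic mentioned above might be computed as follows. The data representation (one boolean per exchange) is an assumption made for illustration.

```python
def practice_summary(exchanges):
    """Summarize a practice session as described in the text: count how
    many of the user's utterances produced a valid question-answer
    exchange. `exchanges` is a list of booleans, True meaning the exchange
    was judged valid (an assumed representation)."""
    total = len(exchanges)
    valid = sum(1 for ok in exchanges if ok)
    return {
        "total": total,
        "valid": valid,
        "validity_rate": valid / total if total else 0.0,
    }
```

The resulting summary is the kind of analysis data the data analysis module could send to the user terminal for display.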
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An interactive listening and speaking practice system suitable for foreign language teaching is characterized in that:
the interactive listening and speaking exercise system suitable for foreign language teaching comprises:
the user terminal is used for being operated by a user to carry out listening and speaking exercises;
the application server is used for forming data interaction with the user terminal so as to provide data required by listening and speaking exercises for the user terminal;
wherein the user terminal comprises:
a speaker for outputting voice information;
the microphone is used for collecting voice information;
the processor is used for processing the voice information collected by the microphone and outputting the voice information through the loudspeaker;
the first type communication module is used for enabling the processor to form communication connection with the outside;
the application server includes:
the field analysis module is used for analyzing the voice information and obtaining statement fields;
the semantic analysis module is used for outputting a feedback field corresponding to the statement field according to the semantic analysis result;
the sentence combination module is used for generating a corresponding feedback sentence according to the feedback field and the grammatical relation;
the voice generating module is used for generating feedback voice according to the feedback statement;
and the second communication module is used for sending the feedback voice to the user terminal.
2. The interactive listening and speaking exercise system for foreign language teaching according to claim 1, wherein:
the application server further comprises:
and the pairing module is used for pairing the two user terminals so as to enable the voice information of the two user terminals to be interacted online.
3. The interactive listening and speaking exercise system for foreign language teaching according to claim 2, wherein:
the application server pairs the two user terminals through the pairing module, and the paired voice information of the user terminals is processed by the application server and then is sent to the other paired user terminal.
4. The interactive listening and speaking exercise system for foreign language teaching according to claim 3 wherein:
the semantic analysis module comprises a feedback artificial neural network; the feedback artificial neural network comprises a plurality of semantic analysis models, and the semantic analysis models are trained by taking fields as input and output.
5. The interactive listening and speaking exercise system for foreign language teaching according to claim 4 wherein:
and the feedback artificial neural network outputs a corresponding feedback field and confidence when analyzing the statement field.
6. The interactive listening and speaking exercise system for foreign language teaching according to claim 5, wherein:
the application server includes:
the self-adaptive module is used for self-adaptively outputting a response sentence when the confidence coefficient of the output of the feedback artificial neural network is lower than a preset value;
the self-adapting module finds the closest historical statement field stored in the database according to the statement field, and then correspondingly outputs the feedback field corresponding to the historical statement field.
7. The interactive listening and speaking exercise system for foreign language teaching according to claim 6, wherein:
the user terminal includes:
the display module is used for displaying image information;
and the simulation module is used for generating a virtual portrait for realizing conversation with a user in the display module.
8. The interactive listening and speaking exercise system for foreign language teaching according to claim 7, wherein:
the user terminal further comprises:
the camera is used for acquiring a face image of a user;
the application server further comprises:
the expression recognition module is used for generating the user expression data according to the face image acquired by the camera;
and the expression recognition module inputs the user expression data into the semantic analysis module as input data for the feedback artificial neural network.
9. The interactive listening and speaking exercise system for foreign language teaching according to claim 8, wherein:
the application server further comprises:
the lip recognition module is used for recognizing the lip shape of the user according to the face image acquired by the camera and generating lip recognition data;
and the lip recognition module inputs the lip recognition data into the speech analysis module as input data for its feedback artificial neural network.
10. The interactive listening and speaking exercise system for foreign language teaching according to claim 9, wherein:
the application server further comprises:
the data analysis module is used for analyzing and summarizing the user's voice information as analyzed by the semantic analysis module;
and the data analysis module sends the resulting analysis data to the user terminal, where it is displayed to the user through the display module.
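The aggregation performed by claim 10's data analysis module can be sketched as below. The per-utterance record fields (`score`, `error_type`) and the summary statistics are hypothetical; the claim only says the module analyzes and summarizes the semantic-analysis results for display on the user terminal.

```python
from statistics import mean
from collections import Counter

def summarize(records):
    """Aggregate per-utterance semantic-analysis results into a session summary.
    records: list of dicts with 'score' (0-100) and 'error_type' (str or None)."""
    errors = Counter(r["error_type"] for r in records if r["error_type"])
    return {
        "utterances": len(records),
        "average_score": round(mean(r["score"] for r in records), 1),
        "most_common_error": errors.most_common(1)[0][0] if errors else None,
    }

session = [
    {"score": 72, "error_type": "pronunciation"},
    {"score": 85, "error_type": None},
    {"score": 64, "error_type": "pronunciation"},
]
print(summarize(session))
```

The resulting dictionary is what would be sent to the user terminal and rendered by the display module.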
CN202011263720.9A 2020-11-12 2020-11-12 Interactive listening and speaking exercise system suitable for foreign language teaching Pending CN112309183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011263720.9A CN112309183A (en) 2020-11-12 2020-11-12 Interactive listening and speaking exercise system suitable for foreign language teaching


Publications (1)

Publication Number Publication Date
CN112309183A true CN112309183A (en) 2021-02-02

Family

ID=74326707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011263720.9A Pending CN112309183A (en) 2020-11-12 2020-11-12 Interactive listening and speaking exercise system suitable for foreign language teaching

Country Status (1)

Country Link
CN (1) CN112309183A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117275456A (en) * 2023-10-18 2023-12-22 南京龙垣信息科技有限公司 Intelligent listening and speaking training device supporting multiple languages

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW516009B (en) * 2001-09-28 2003-01-01 Inventec Corp On-line virtual community system for human oral foreign language pairing instruction and the method thereof
TW200516518A (en) * 2003-11-07 2005-05-16 Inventec Corp On-line life spoken language learning system combining local computer learning and remote training and method thereof
CN101551947A (en) * 2008-06-11 2009-10-07 俞凯 Computer system for assisting spoken language learning
CN106203490A (en) * 2016-06-30 2016-12-07 江苏大学 Based on attribute study and the image ONLINE RECOGNITION of interaction feedback, search method under a kind of Android platform
CN106875940A (en) * 2017-03-06 2017-06-20 吉林省盛创科技有限公司 A kind of Machine self-learning based on neutral net builds knowledge mapping training method
CN107578004A (en) * 2017-08-30 2018-01-12 苏州清睿教育科技股份有限公司 Learning method and system based on image recognition and interactive voice
CN110444087A (en) * 2019-07-26 2019-11-12 深圳市讯呼信息技术有限公司 A kind of intelligent language teaching machine device people
CN110853429A (en) * 2019-12-17 2020-02-28 陕西中医药大学 Intelligent English teaching system



Similar Documents

Publication Publication Date Title
CN107203953B (en) Teaching system based on internet, expression recognition and voice recognition and implementation method thereof
CN108000526B (en) Dialogue interaction method and system for intelligent robot
CN110648690B (en) Audio evaluation method and server
JP6705956B1 (en) Education support system, method and program
CN104795065A (en) Method for increasing speech recognition rate and electronic device
KR20160008949A (en) Apparatus and method for foreign language learning based on spoken dialogue
KR102410110B1 (en) How to provide Korean language learning service
CN114821744A (en) Expression recognition-based virtual character driving method, device and equipment
CN117332072B (en) Dialogue processing, voice abstract extraction and target dialogue model training method
KR20200002141A (en) Providing Method Of Language Learning Contents Based On Image And System Thereof
CN114841841A (en) Intelligent education platform interaction system and interaction method for teaching interaction
KR20210123545A (en) Method and apparatus for conversation service based on user feedback
CN101739852B (en) Speech recognition-based method and device for realizing automatic oral interpretation training
CN112309183A (en) Interactive listening and speaking exercise system suitable for foreign language teaching
CN117635383A (en) Virtual teacher and multi-person cooperative talent training system, method and equipment
CN111078010B (en) Man-machine interaction method and device, terminal equipment and readable storage medium
KR100898104B1 (en) Learning system and method by interactive conversation
KR102684930B1 (en) Video learning systems for enable learners to be identified through artificial intelligence and method thereof
JP2015060056A (en) Education device and ic and medium for education device
CN111897434A (en) System, method, and medium for signal control of virtual portrait
CN113053186A (en) Interaction method, interaction device and storage medium
KR20020024828A (en) Language study method by interactive conversation on Internet
CN109147418A (en) A kind of substep guiding Chinese wisdom learning method, device and system
CN116226411B (en) Interactive information processing method and device for interactive project based on animation
CN116741143B (en) Digital-body-based personalized AI business card interaction method and related components

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination