CN112507294A - English teaching system and teaching method based on human-computer interaction - Google Patents

English teaching system and teaching method based on human-computer interaction

Info

Publication number
CN112507294A
CN112507294A (application CN202011146792.5A)
Authority
CN
China
Prior art keywords
module
english
user
teaching
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011146792.5A
Other languages
Chinese (zh)
Other versions
CN112507294B (en)
Inventor
李航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yuanhao Information Technology Service Co.,Ltd.
Qingdao Fruit Science And Technology Service Platform Co ltd
Original Assignee
Chongqing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Jiaotong University filed Critical Chongqing Jiaotong University
Priority to CN202011146792.5A priority Critical patent/CN112507294B/en
Publication of CN112507294A publication Critical patent/CN112507294A/en
Application granted granted Critical
Publication of CN112507294B publication Critical patent/CN112507294B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/40 - Processing or translation of natural language
    • G06F40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393 - Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 - Document-oriented image-based pattern recognition
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 - Teaching not covered by other main groups of this subclass
    • G09B19/06 - Foreign languages
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 - Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00 - Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02 - Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G10L21/0216 - Noise filtering characterised by the method used for estimating noise
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition

Abstract

The invention belongs to the technical field of English teaching and discloses an English teaching system and teaching method based on human-computer interaction. The English teaching system based on human-computer interaction comprises a registration module, an identity verification and login module, a course selection module, a central control module, a video teaching module, a labeling module, a translation module, a voice recording module, a voice analysis module, a question answering module, a storage module and a comprehensive evaluation module. The login verification method provided by the invention enhances the login security of the user and solves the problem of the low security of existing login modes. An interactive question and answer module is added on top of teaching video playback, so that the user can perform timely self-checks while watching a video or after finishing it. Audio information is collected and analyzed, and wrong pronunciations are corrected. During translation, sentences conveniently marked by the teacher in the video can be recognized, which improves translation accuracy and makes learning more convenient for the user.

Description

English teaching system and teaching method based on human-computer interaction
Technical Field
The invention belongs to the technical field of English teaching, and particularly relates to an English teaching system and a teaching method based on human-computer interaction.
Background
At present, English ability is vital in society and, for most people, largely determines their career path. English teaching refers to the process of teaching English to people for whom English is or is not the first language. It involves a great deal of specialised theoretical knowledge, including linguistics, second language acquisition, lexicology, syntax, literature, corpus theory and cognitive psychology. English teaching is a progressive process, and in today's era of globalisation and rapid development English learning is crucial, both for people whose first language is English and for those learning it as a foreign language. However, the existing English teaching systems have poor translation quality and cannot provide a comprehensive evaluation of students' learning.
Through the above analysis, the problems and defects of the prior art are as follows: the existing English teaching system has poor translation quality and cannot realize comprehensive evaluation of student learning.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an English teaching system and a teaching method based on human-computer interaction.
The invention is realized in such a way that the English teaching method based on human-computer interaction comprises the following steps:
step one, receiving registration interface information by the registration module through a registration program; acquiring a face image of a user to be registered and user registration data, and associating the user registration data with the related data of the user's face image; correspondingly storing the user registration data and the related data of the user's face image to complete user registration; receiving, by an identity verification program through the identity verification and login module, a login request sent by a client, and acquiring the identity of the client and the account name and combined password carried by the login request;
step two, receiving the login request sent by the client, and acquiring the identity of the client and the account name and combined password carried by the login request; splitting the combined password according to a preset combination rule to obtain a login password and a check code; verifying the check code according to the identity of the client and a preset check code record; verifying the login password according to the account name and a pre-configured login password database; if both the check code and the login password pass the verification, judging that the login request passes the verification, and allowing the client to log in; otherwise, refusing the login;
step three, selecting a beginner, intermediate or advanced English course by using a course selection program through the course selection module; playing a teaching video according to the selected course by using a video teaching program through the video teaching module; intercepting, by using a marking program through the marking module, the video images to be marked by the user, adding a data interaction interface for each frame of image to form the images to be marked by the current user, and displaying the images to be marked by the current user;
step four, after the data interaction interface receives an annotation request, executing the preset interactive processing associated with the annotation request of each frame of image, and annotating the complex information during the playing of the teaching video; translating the marked complex information by using a translator through the translation module; recording the user's voice by using a voice recording program through the voice recording module to obtain the spoken English information of the user; acquiring, through the voice analysis module, the feature combinations corresponding to the spoken English information of the user, acquiring the association relations between the features in each feature combination, and calculating the discrimination of each feature combination according to its features and the association relations between them;
step five, screening each feature combination according to a preset discrimination threshold to obtain initial feature combinations; screening the initial feature combinations with a preset evaluation index to obtain the available feature combinations that meet the preset evaluation index; acquiring the spoken English information corresponding to the available feature combinations, and generating first initial spoken language data based on the discrimination; and, based on a deep learning noise reduction model, performing noise reduction processing on the first initial spoken language data to generate noise-reduced spoken language data of the evaluated user;
step six, randomly dividing the evaluated spoken English audio into equal-length slices; carrying out short-time Fourier transform on the segmented audio slices to generate corresponding two-dimensional time-frequency graphs, and then carrying out high-level abstraction on the two-dimensional time-frequency graphs one by one to obtain high-level abstract characteristics of the audio slices; analyzing the high-level abstract features of the audio slices one by one through a machine learning model to obtain the score of each audio slice, and averaging all the scores to obtain the final oral English evaluation score;
step seven, asking questions about the teaching content through a dialog box of the question answering module, with other users answering the questions; storing, by the storage module, the annotation information or the video clip in which the annotation information is located; and performing comprehensive evaluation through the comprehensive evaluation module by using a comprehensive evaluation program according to the course selected by the user, the learning duration, the labeling information, the spoken English analysis result and the question and answer information.
Further, in step two, the preset combination rule is predetermined by the server and the client and specifies the arrangement order of the check code and the login password within the combined password.
Further, the arrangement order of the check code and the login password includes a simple front-back ordering, or a recombination of the characters or character groups obtained by splitting the check code and the login password.
Further, in step four, translating the labeled complex information by the translation module using the translator includes:
(1) acquiring marking information, and identifying the marking information to obtain English fields;
(2) judging whether English characters contained in the English field can be identified or not, and when the English characters cannot be identified, horizontally projecting the English characters which cannot be identified and obtaining a horizontal projection curve of the English characters;
(3) identifying English characters according to the horizontal projection curve;
(4) processing the recognized English characters to obtain a character string, and translating the character string.
Further, in step (2), obtaining the horizontal projection curve of the English character includes:
horizontally projecting the English characters which cannot be identified; taking the height of the English character as an x coordinate, taking the upper edge of the English character as the origin of the x coordinate, and taking the number of pixels obtained by horizontal projection under the height of the English character as a y coordinate; and obtaining a horizontal projection curve of the English character according to the x coordinate, the y coordinate and the origin.
Further, in step five, performing noise reduction processing on the first initial spoken language data based on the deep learning noise reduction model includes:
slicing the first initial spoken language data according to a preset length; generating a to-be-processed voiceprint atlas of the first initial spoken language data according to the sliced first initial spoken language data, and extracting to-be-processed voiceprint parameters of the first initial spoken language data from the to-be-processed voiceprint atlas; and inputting the voiceprint parameters to be processed into the deep learning noise reduction model to obtain noise-reduced voice data.
Further, in step six, the time duration of the random audio slice is 5 seconds.
Another object of the present invention is to provide a human-computer interaction-based English teaching system for implementing the human-computer interaction-based English teaching method, the system including:
the system comprises a registration module, an identity verification and login module, a course selection module, a central control module, a video teaching module, a labeling module, a translation module, a voice recording module, a voice analysis module, a question answering module, a storage module and a comprehensive evaluation module;
the registration module is connected with the central control module and is used for registering the user through a registration program;
the identity authentication and login module is connected with the central control module and is used for authenticating the identity of the user through an identity authentication program and logging in after the authentication is passed;
the course selection module is connected with the central control module and is used for selecting beginner, intermediate and advanced English courses through a course selection program;
the central control module is connected with the registration module, the identity verification and login module, the course selection module, the video teaching module, the labeling module, the translation module, the voice recording module, the voice analysis module, the question answering module, the storage module and the comprehensive evaluation module and is used for controlling the normal operation of each module through a main control computer;
the video teaching module is connected with the central control module and is used for playing teaching videos according to the selected courses through a video teaching program;
the marking module is connected with the central control module and is used for marking the complex information in the playing process of the teaching video through a marking program;
the translation module is connected with the central control module and is used for translating the marked complex information through the translator;
the voice recording module is connected with the central control module and is used for recording the voice of the user through a voice recording program to obtain the oral information of the user;
the voice analysis module is connected with the central control module and is used for analyzing the acquired spoken English information of the user through a voice analysis program to obtain the spoken English analysis result of the user;
the question-answering module is connected with the central control module and is used for allowing the user to ask questions about the teaching content through a dialog box and allowing other users to answer them;
the storage module is connected with the central control module and is used for storing the annotation information or the video clip where the annotation information is located through a memory;
and the comprehensive evaluation module is connected with the central control module and is used for carrying out comprehensive evaluation according to the course selected by the user, the learning duration, the labeling information, the spoken English analysis result and the question and answer information through a comprehensive evaluation program.
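For illustration only, the following minimal Python sketch shows one way such a comprehensive evaluation could combine the five inputs into a single score; the weights, caps and 0-100 scale are assumptions, since no formula is prescribed here.

```python
# Illustrative sketch only: the weights, caps and 0-100 scale are assumed,
# not taken from the patent.
def comprehensive_evaluation(course_level: int, hours_studied: float,
                             annotation_count: int, spoken_score: float,
                             qa_count: int) -> float:
    """Combine course level (1-3), study time, labeling activity, spoken-English
    score (0-100) and question-and-answer activity into one 0-100 score."""
    level_part = course_level / 3.0 * 100            # harder course, higher credit
    duration_part = min(hours_studied / 40.0, 1.0) * 100
    annotation_part = min(annotation_count / 20.0, 1.0) * 100
    qa_part = min(qa_count / 10.0, 1.0) * 100
    weights = (0.10, 0.25, 0.15, 0.35, 0.15)         # level, duration, labels, spoken, Q&A
    parts = (level_part, duration_part, annotation_part, spoken_score, qa_part)
    return sum(w * p for w, p in zip(weights, parts))

print(round(comprehensive_evaluation(2, 30, 12, 82.5, 6), 1))
```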
By combining all the above technical schemes, the invention has the following advantages and positive effects: the login verification method provided by the invention enhances the login security of the user and solves the problem of the low security of existing login modes; an interactive question answering module is added on top of the teaching video playback of the prior art, so that the user can perform a timely self-check while watching a video or after finishing it, raise questions, judge and reinforce the learning effect on the current knowledge point, and be pushed a personalised learning route; audio information can be collected and analyzed, and wrong pronunciations can be corrected; during translation, sentences conveniently marked by the teacher in the video can be recognized, which improves translation accuracy and makes learning more convenient for the user.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. The drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an english teaching method based on human-computer interaction according to an embodiment of the present invention.
Fig. 2 is a block diagram of an english teaching system based on human-computer interaction according to an embodiment of the present invention.
Fig. 3 is a flowchart of verifying the identity of the user by the identity verification and login module using an identity verification program according to an embodiment of the present invention.
Fig. 4 is a flowchart of translating the labeled complex information by the translation module using a translator according to an embodiment of the present invention.
Fig. 5 is a flowchart of analyzing the acquired spoken English information of the user by the voice analysis module according to an embodiment of the present invention.
In Fig. 2: 1. a registration module; 2. an identity authentication and login module; 3. a course selection module; 4. a central control module; 5. a video teaching module; 6. a labeling module; 7. a translation module; 8. a voice recording module; 9. a voice analysis module; 10. a question-answering module; 11. a storage module; 12. a comprehensive evaluation module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the problems in the prior art, the invention provides an English teaching system and a teaching method based on human-computer interaction, and the invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the English teaching method based on human-computer interaction according to the embodiment of the present invention includes the following steps:
s101, registering a user by a registration program through a registration module; the identity of the user is verified by an identity verification program through an identity verification and login module, and login is performed after the user passes the verification;
s102, selecting English primary, middle and high classes by using a class selection program through a class selection module; playing a teaching video according to the selected course by using a video teaching program through a video teaching module;
s103, marking the complex information in the playing process of the teaching video by using a marking program through a marking module; translating the marked complex information by using a translator through a translation module;
s104, recording the user voice by using a voice recording program through a voice recording module to obtain the spoken English information of the user; analyzing the acquired spoken English information of the user by using a voice analysis module through the voice analysis module to obtain a spoken English analysis result of the user;
s105, questions in teaching are asked by the question answering module through the dialog box, and other users answer the questions; the storage module is used for storing the annotation information or the video clip in which the annotation information is located;
S106, performing comprehensive evaluation through the comprehensive evaluation module by using a comprehensive evaluation program according to the course selected by the user, the learning duration, the labeling information, the spoken English analysis result and the question and answer information.
As shown in Fig. 2, the English teaching system based on human-computer interaction according to the embodiment of the present invention includes:
the system comprises a registration module 1, an identity verification and login module 2, a course selection module 3, a central control module 4, a video teaching module 5, a labeling module 6, a translation module 7, a voice recording module 8, a voice analysis module 9, a question and answer module 10, a storage module 11 and a comprehensive evaluation module 12;
the registration module 1 is connected with the central control module 4 and is used for carrying out user registration through a registration program;
the identity authentication and login module 2 is connected with the central control module 4 and is used for authenticating the identity of the user through an identity authentication program and logging in after the authentication is passed;
the course selection module 3 is connected with the central control module 4 and is used for selecting beginner, intermediate and advanced English courses through a course selection program;
the central control module 4 is connected with the registration module 1, the identity verification and login module 2, the course selection module 3, the video teaching module 5, the labeling module 6, the translation module 7, the voice recording module 8, the voice analysis module 9, the question answering module 10, the storage module 11 and the comprehensive evaluation module 12 and is used for controlling the normal operation of each module through a main control computer;
the video teaching module 5 is connected with the central control module 4 and is used for playing teaching videos according to the selected courses through a video teaching program;
the marking module 6 is connected with the central control module 4 and is used for marking the complex information in the playing process of the teaching video through a marking program;
the translation module 7 is connected with the central control module 4 and is used for translating the marked complex information through the translator;
the voice recording module 8 is connected with the central control module 4 and is used for recording the voice of the user through a voice recording program to obtain the oral information of the user;
the voice analysis module 9 is connected with the central control module 4 and is used for analyzing the acquired spoken English information of the user through a voice analysis program to obtain the spoken English analysis result of the user;
the question-answering module 10 is connected with the central control module 4 and is used for allowing the user to ask questions about the teaching content through a dialog box and allowing other users to answer them;
the storage module 11 is connected with the central control module 4 and is used for storing the annotation information or the video clip where the annotation information is located through a memory;
and the comprehensive evaluation module 12 is connected with the central control module 4 and is used for carrying out comprehensive evaluation according to the course selected by the user, the learning duration, the labeling information, the spoken English analysis result and the question and answer information through a comprehensive evaluation program.
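As an illustration of the module layout in Fig. 2, the sketch below models each functional module as an object registered with and driven by the central control module; the `run`/`control` dispatch interface is an assumption introduced only to make the wiring concrete, not part of the disclosed system.

```python
# Illustrative sketch of Fig. 2: modules 1-12 are registered with, and driven
# by, the central control module 4. The dispatch interface is an assumption.
class Module:
    def __init__(self, name: str):
        self.name = name

    def run(self, **kwargs):
        print(f"[{self.name}] running with {kwargs}")

class CentralControlModule:
    def __init__(self):
        self.modules = {}

    def connect(self, key: str, module: Module):
        self.modules[key] = module

    def control(self, key: str, **kwargs):
        # Drive the normal operation of a connected module.
        self.modules[key].run(**kwargs)

central = CentralControlModule()
for key in ("registration", "identity_verification_login", "course_selection",
            "video_teaching", "labeling", "translation", "voice_recording",
            "voice_analysis", "question_answering", "storage",
            "comprehensive_evaluation"):
    central.connect(key, Module(key))

central.control("video_teaching", course="intermediate English")
```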
The technical solution of the present invention is further illustrated by the following specific examples.
Example 1
As shown in Fig. 1, in the English teaching method based on human-computer interaction according to the embodiment of the present invention, as a preferred embodiment, registering a user by a registration program through the registration module includes:
receiving registration interface information by a registration module using a registration program; acquiring a face image of a user to be registered and user registration data, and associating the user registration data with face image related data of the user; and correspondingly storing the user registration data and the data related to the user face image at the same time to finish user registration.
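A minimal Python sketch of this registration flow is given below; the in-memory dictionary store and the `face_signature` helper are stand-ins (assumptions), since the embodiment does not specify how the face image related data are computed or persisted.

```python
# Illustrative sketch: `face_signature` is a placeholder for whatever
# face-feature extraction the system actually uses.
import hashlib

def face_signature(face_image: bytes) -> str:
    # Placeholder: a real system would extract facial features here.
    return hashlib.sha256(face_image).hexdigest()

def register_user(store: dict, username: str, registration_data: dict,
                  face_image: bytes) -> None:
    """Associate the registration data with the user's face-image data and
    store both records together to complete registration."""
    store[username] = {
        "registration_data": registration_data,
        "face_image_data": face_signature(face_image),
    }

users: dict = {}
register_user(users, "alice", {"course": "beginner English"}, b"<raw image bytes>")
print(users["alice"]["face_image_data"][:16])
```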
Example 2
The English teaching method based on human-computer interaction according to the embodiment of the present invention is shown in Fig. 1. As a preferred embodiment, as shown in Fig. 3, verifying the identity of the user by using an identity verification program through the identity verification and login module includes:
s201, receiving a login request sent by a client, and acquiring an identity of the client, an account name and a combined password carried by the login request;
s202, splitting the combined password according to a preset combination rule to obtain a login password and a check code;
s203, verifying the check code according to the identity of the client and a preset check code record;
s204, verifying the login password according to the account name and a pre-configured login password database; and if the check code passes the verification and the login password passes the verification, judging that the login request passes the verification, and allowing the client to login.
In step S202, the preset combination rule provided in the embodiment of the present invention is predetermined by the server and the client, and specifies the arrangement order of the check code and the login password within the combined password.
The arrangement order of the check code and the login password provided by the embodiment of the present invention includes a simple front-back ordering, or a recombination of the characters or character groups obtained by splitting the check code and the login password.
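The sketch below illustrates S201-S204 under one assumed combination rule (check code first, then login password, split at a fixed length); the embodiment only requires that the server and client agree on the rule in advance, so the rule, the record layout and the plain-text password database here are all assumptions.

```python
# Illustrative sketch of S201-S204. The fixed-length "check code first" rule
# is an assumed example of a preset combination rule.
import hmac

CHECK_CODE_LEN = 6  # assumed length of the check code

def split_combined_password(combined: str) -> tuple:
    """Split the combined password into (check_code, login_password)."""
    return combined[:CHECK_CODE_LEN], combined[CHECK_CODE_LEN:]

def verify_login(client_id: str, account: str, combined: str,
                 check_code_records: dict, password_db: dict) -> bool:
    check_code, login_password = split_combined_password(combined)
    # S203: verify the check code against the record kept for this client identity.
    code_ok = hmac.compare_digest(check_code_records.get(client_id, ""), check_code)
    # S204: verify the login password against the pre-configured password database.
    pwd_ok = hmac.compare_digest(password_db.get(account, ""), login_password)
    # Login is allowed only when both checks pass; otherwise it is refused.
    return code_ok and pwd_ok

records = {"client-001": "483921"}
passwords = {"alice": "s3cret!"}
print(verify_login("client-001", "alice", "483921s3cret!", records, passwords))  # True
print(verify_login("client-001", "alice", "000000s3cret!", records, passwords))  # False
```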
Example 3
As shown in Fig. 1, in the English teaching method based on human-computer interaction according to the embodiment of the present invention, as a preferred embodiment, labeling the complex information in the teaching video playing process by using a labeling program through the labeling module includes:
intercepting, by using a labeling program through the labeling module, the video images to be labeled by the user, adding a data interaction interface for each frame of image to form the images to be labeled by the current user, and displaying the images to be labeled by the current user; and after the data interaction interface receives a labeling request, executing the preset interactive processing associated with the labeling request of each frame of image, and labeling the complex information in the teaching video playing process.
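A minimal sketch of such a per-frame data interaction interface follows; the class and field names are illustrative assumptions and not taken from the embodiment.

```python
# Illustrative sketch: one labeling interface per intercepted video frame.
from dataclasses import dataclass, field

@dataclass
class FrameAnnotationInterface:
    frame_id: int
    annotations: list = field(default_factory=list)

    def handle_annotation_request(self, user: str, region: tuple, note: str) -> None:
        """Execute the preset interactive processing bound to this frame:
        record which region of the frame the user marked and why."""
        self.annotations.append({"user": user, "region": region, "note": note})

# Build the images to be labeled by the current user (here, 3 intercepted frames).
frames = {i: FrameAnnotationInterface(frame_id=i) for i in range(3)}
frames[1].handle_annotation_request("alice", (40, 60, 200, 90), "complex sentence here")
print(frames[1].annotations)
```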
Example 4
The English teaching method based on human-computer interaction according to the embodiment of the present invention is shown in Fig. 1. As a preferred embodiment, as shown in Fig. 4, translating the labeled complex information by using a translator through the translation module includes:
s301, obtaining marking information, and identifying the marking information to obtain English fields;
s302, judging whether English characters contained in the English field can be identified, and when the English characters cannot be identified, horizontally projecting the English characters which cannot be identified and obtaining a horizontal projection curve of the English characters;
s303, identifying English characters according to the horizontal projection curve;
s304, the recognized English characters are processed to obtain character strings, and the character strings are translated.
In step S302, obtaining the horizontal projection curve of the English character according to the embodiment of the present invention includes:
horizontally projecting the English characters which cannot be identified; taking the height of the English character as an x coordinate, taking the upper edge of the English character as the origin of the x coordinate, and taking the number of pixels obtained by horizontal projection under the height of the English character as a y coordinate; and obtaining a horizontal projection curve of the English character according to the x coordinate, the y coordinate and the origin.
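The horizontal projection described here can be sketched with NumPy as follows; the binary-image representation (1 = ink pixel) is an assumption, and the curve is simply the pixel count per row with x starting at the character's upper edge.

```python
# Illustrative sketch of the horizontal projection curve: x runs down from the
# upper edge of the character (row index), y is the ink-pixel count in that row.
import numpy as np

def horizontal_projection_curve(char_image: np.ndarray) -> np.ndarray:
    """Return y(x) for a character bitmap, with x = 0 at the upper edge."""
    binary = (char_image > 0).astype(int)
    return binary.sum(axis=1)  # one pixel count per row, i.e. per x coordinate

glyph = np.array([            # toy 5x4 character bitmap
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
])
print(horizontal_projection_curve(glyph))  # [2 2 4 2 2]
```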
Example 5
The English teaching method based on human-computer interaction according to the embodiment of the present invention is shown in Fig. 1. As a preferred embodiment, as shown in Fig. 5, analyzing the acquired spoken English information of the user through the voice analysis module includes:
s401, obtaining oral English information of a user, and performing noise reduction processing on the information to obtain an evaluated oral English audio;
s402, randomly dividing the evaluated spoken English audio into equal-length slices;
s403, performing short-time Fourier transform on the segmented audio slices to generate corresponding two-dimensional time-frequency graphs, and performing high-level abstraction on the two-dimensional time-frequency graphs one by one to obtain high-level abstract characteristics of the audio slices;
s404, analyzing the high-level abstract features of the audio slices one by one through a machine learning model to obtain the score of each audio slice, and averaging all the scores to obtain the final oral English evaluation score.
In step S401, performing the noise reduction processing on the information according to the embodiment of the present invention includes:
acquiring the feature combinations corresponding to the spoken English information of the user, acquiring the association relations between the features in each feature combination, and calculating the discrimination of each feature combination according to its features and the association relations between them; screening each feature combination according to a preset discrimination threshold to obtain initial feature combinations; screening the initial feature combinations with a preset evaluation index to obtain the available feature combinations that meet the preset evaluation index; acquiring the spoken English information corresponding to the available feature combinations, and generating first initial spoken language data based on the discrimination; and, based on a deep learning noise reduction model, performing noise reduction processing on the first initial spoken language data to generate noise-reduced spoken language data of the evaluated user.
Performing noise reduction processing on the first initial spoken language data based on the deep learning noise reduction model provided by the embodiment of the invention comprises the following steps:
slicing the first initial spoken language data according to a preset length; generating a to-be-processed voiceprint atlas of the first initial spoken language data according to the sliced first initial spoken language data, and extracting to-be-processed voiceprint parameters of the first initial spoken language data from the to-be-processed voiceprint atlas; and inputting the voiceprint parameters to be processed into the deep learning noise reduction model to obtain noise-reduced voice data.
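A minimal sketch of this slicing, voiceprint-extraction and model pipeline follows; `DenoiseModel` is a stand-in for the unspecified deep learning noise reduction model, and the log-energy "voiceprint parameters" are an assumed feature choice.

```python
# Illustrative sketch: slice the first initial spoken data, build a spectrogram
# ("voiceprint atlas") per slice, extract parameters from it, and feed them to
# a (placeholder) deep learning noise reduction model.
import numpy as np
from scipy.signal import spectrogram

class DenoiseModel:
    def denoise(self, voiceprint_params: np.ndarray) -> np.ndarray:
        # Placeholder: a trained network would return denoised speech data here.
        return voiceprint_params

def denoise_spoken_data(spoken_data: np.ndarray, sample_rate: int,
                        slice_seconds: float, model: DenoiseModel) -> list:
    slice_len = int(slice_seconds * sample_rate)
    outputs = []
    for start in range(0, len(spoken_data) - slice_len + 1, slice_len):
        chunk = spoken_data[start:start + slice_len]
        _, _, spec = spectrogram(chunk, fs=sample_rate, nperseg=256)  # voiceprint atlas
        params = np.log1p(spec).mean(axis=1)  # voiceprint parameters to be processed
        outputs.append(model.denoise(params))
    return outputs

rng = np.random.default_rng(1)
cleaned = denoise_spoken_data(rng.standard_normal(16000 * 3), 16000, 1.0, DenoiseModel())
print(len(cleaned), cleaned[0].shape)
```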
Example 6
Fig. 1 shows the English teaching method based on human-computer interaction according to the embodiment of the present invention; as a preferred embodiment, the duration of each randomly divided audio slice is 5 seconds.
The above description covers only the preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereto; any modification, equivalent replacement or improvement made by those skilled in the art within the technical scope disclosed herein and within the spirit and principle of the present invention shall be covered by the present invention.

Claims (10)

1. An English teaching method based on human-computer interaction is characterized by comprising the following steps:
step one, receiving registration interface information by the registration module through a registration program; acquiring a face image of a user to be registered and user registration data, and associating the user registration data with the related data of the user's face image; correspondingly storing the user registration data and the related data of the user's face image to complete user registration; receiving, by an identity verification program through the identity verification and login module, a login request sent by a client, and acquiring the identity of the client and the account name and combined password carried by the login request;
step two, receiving the login request sent by the client, and acquiring the identity of the client and the account name and combined password carried by the login request; splitting the combined password according to a preset combination rule to obtain a login password and a check code; verifying the check code according to the identity of the client and a preset check code record; verifying the login password according to the account name and a pre-configured login password database; if both the check code and the login password pass the verification, judging that the login request passes the verification, and allowing the client to log in; otherwise, refusing the login;
step three, selecting a beginner, intermediate or advanced English course by using a course selection program through the course selection module; playing a teaching video according to the selected course by using a video teaching program through the video teaching module; intercepting, by using a marking program through the marking module, the video images to be marked by the user, adding a data interaction interface for each frame of image to form the images to be marked by the current user, and displaying the images to be marked by the current user;
step four, after the data interaction interface receives an annotation request, executing the preset interactive processing associated with the annotation request of each frame of image, and annotating the complex information during the playing of the teaching video; translating the marked complex information by using a translator through the translation module; recording the user's voice by using a voice recording program through the voice recording module to obtain the spoken English information of the user; acquiring, through the voice analysis module, the feature combinations corresponding to the spoken English information of the user, acquiring the association relations between the features in each feature combination, and calculating the discrimination of each feature combination according to its features and the association relations between them;
step five, screening each feature combination according to a preset discrimination threshold to obtain initial feature combinations; screening the initial feature combinations with a preset evaluation index to obtain the available feature combinations that meet the preset evaluation index; acquiring the spoken English information corresponding to the available feature combinations, and generating first initial spoken language data based on the discrimination; and, based on a deep learning noise reduction model, performing noise reduction processing on the first initial spoken language data to generate noise-reduced spoken language data of the evaluated user;
step six, randomly dividing the evaluated spoken English audio into equal-length slices; carrying out short-time Fourier transform on the segmented audio slices to generate corresponding two-dimensional time-frequency graphs, and then carrying out high-level abstraction on the two-dimensional time-frequency graphs one by one to obtain high-level abstract characteristics of the audio slices; analyzing the high-level abstract features of the audio slices one by one through a machine learning model to obtain the score of each audio slice, and averaging all the scores to obtain the final oral English evaluation score;
step seven, asking questions about the teaching content through a dialog box of the question answering module, with other users answering the questions; storing, by the storage module, the annotation information or the video clip in which the annotation information is located; and performing comprehensive evaluation through the comprehensive evaluation module by using a comprehensive evaluation program according to the course selected by the user, the learning duration, the labeling information, the spoken English analysis result and the question and answer information.
2. The human-computer interaction based English teaching method of claim 1, wherein in step two, the preset combination rule is predetermined by the server and the client and specifies the arrangement order of the check code and the login password within the combined password.
3. The human-computer interaction-based English teaching method according to claim 2, wherein the arrangement order of the check code and the login password includes a simple front-back ordering, or a recombination of the characters or character groups obtained by splitting the check code and the login password.
4. The human-computer interaction based English teaching method of claim 1, wherein in step four, the translation of the labeled complex information by the translation module using the translator comprises:
(1) acquiring marking information, and identifying the marking information to obtain English fields;
(2) judging whether English characters contained in the English field can be identified or not, and when the English characters cannot be identified, horizontally projecting the English characters which cannot be identified and obtaining a horizontal projection curve of the English characters;
(3) identifying English characters according to the horizontal projection curve;
(4) processing the recognized English characters to obtain a character string, and translating the character string.
5. The English teaching method based on human-computer interaction of claim 4, wherein in step (2), obtaining the horizontal projection curve of the English character includes:
horizontally projecting the English characters which cannot be identified; taking the height of the English character as an x coordinate, taking the upper edge of the English character as the origin of the x coordinate, and taking the number of pixels obtained by horizontal projection under the height of the English character as a y coordinate; and obtaining a horizontal projection curve of the English character according to the x coordinate, the y coordinate and the origin.
6. The human-computer interaction-based English teaching method according to claim 1, wherein in step five, performing noise reduction processing on the first initial spoken language data based on the deep learning noise reduction model comprises:
slicing the first initial spoken language data according to a preset length; generating a to-be-processed voiceprint atlas of the first initial spoken language data according to the sliced first initial spoken language data, and extracting to-be-processed voiceprint parameters of the first initial spoken language data from the to-be-processed voiceprint atlas; and inputting the voiceprint parameters to be processed into the deep learning noise reduction model to obtain noise-reduced voice data.
7. The human-computer interaction based English teaching method of claim 1, wherein in step six, the duration of each randomly divided audio slice is 5 seconds.
8. A human-computer interaction-based English teaching system for implementing the human-computer interaction-based English teaching method according to any one of claims 1 to 7, wherein said human-computer interaction-based English teaching system comprises:
the system comprises a registration module, an identity verification and login module, a course selection module, a central control module, a video teaching module, a labeling module, a translation module, a voice recording module, a voice analysis module, a question answering module, a storage module and a comprehensive evaluation module;
the registration module is connected with the central control module and is used for registering the user through a registration program;
the identity authentication and login module is connected with the central control module and is used for authenticating the identity of the user through an identity authentication program and logging in after the authentication is passed;
the course selection module is connected with the central control module and is used for selecting beginner, intermediate and advanced English courses through a course selection program;
the central control module is connected with the registration module, the identity verification and login module, the course selection module, the video teaching module, the labeling module, the translation module, the voice recording module, the voice analysis module, the question answering module, the storage module and the comprehensive evaluation module and is used for controlling the normal operation of each module through a main control computer;
the video teaching module is connected with the central control module and is used for playing teaching videos according to the selected courses through a video teaching program;
the marking module is connected with the central control module and is used for marking the complex information in the playing process of the teaching video through a marking program;
the translation module is connected with the central control module and is used for translating the marked complex information through the translator;
the voice recording module is connected with the central control module and is used for recording the voice of the user through a voice recording program to obtain the oral information of the user;
the voice analysis module is connected with the central control module and is used for analyzing the acquired spoken English information of the user through a voice analysis program to obtain the spoken English analysis result of the user;
the question-answering module is connected with the central control module and is used for allowing the user to ask questions about the teaching content through a dialog box and allowing other users to answer them;
the storage module is connected with the central control module and is used for storing the annotation information or the video clip where the annotation information is located through a memory;
and the comprehensive evaluation module is connected with the central control module and is used for carrying out comprehensive evaluation according to the course selected by the user, the learning duration, the labeling information, the spoken English analysis result and the question and answer information through a comprehensive evaluation program.
9. A computer program product stored on a computer readable medium, comprising a computer readable program which, when executed on an electronic device, provides a user input interface for implementing the human-computer interaction based English teaching method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing instructions which, when executed on a computer, cause the computer to perform the human-computer interaction based English teaching method according to any one of claims 1 to 7.
CN202011146792.5A 2020-10-23 2020-10-23 English teaching system and teaching method based on human-computer interaction Active CN112507294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011146792.5A CN112507294B (en) 2020-10-23 2020-10-23 English teaching system and teaching method based on human-computer interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011146792.5A CN112507294B (en) 2020-10-23 2020-10-23 English teaching system and teaching method based on human-computer interaction

Publications (2)

Publication Number Publication Date
CN112507294A (en) 2021-03-16
CN112507294B (en) 2022-04-22

Family

ID=74956021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011146792.5A Active CN112507294B (en) 2020-10-23 2020-10-23 English teaching system and teaching method based on human-computer interaction

Country Status (1)

Country Link
CN (1) CN112507294B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113593331A (en) * 2021-08-04 2021-11-02 方晓华 Finance English taste teaching aid
CN113658460A (en) * 2021-08-26 2021-11-16 黑龙江工业学院 English teaching platform based on 5G technology
CN115171445A (en) * 2022-08-08 2022-10-11 山东财经大学 English teaching system and teaching method based on human-computer interaction

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578004A (en) * 2017-08-30 2018-01-12 苏州清睿教育科技股份有限公司 Learning method and system based on image recognition and interactive voice
CN107808674A (en) * 2017-09-28 2018-03-16 上海流利说信息技术有限公司 A kind of method, medium, device and the electronic equipment of voice of testing and assessing
CN109119064A (en) * 2018-09-05 2019-01-01 东南大学 A kind of implementation method suitable for overturning the Oral English Teaching system in classroom
WO2019080639A1 (en) * 2017-10-23 2019-05-02 腾讯科技(深圳)有限公司 Object identifying method, computer device and computer readable storage medium
CN109785698A (en) * 2017-11-13 2019-05-21 上海流利说信息技术有限公司 Method, apparatus, electronic equipment and medium for spoken language proficiency evaluation and test
CN110688556A (en) * 2019-09-21 2020-01-14 郑州工程技术学院 Remote Japanese teaching interaction system and interaction method based on big data analysis
CN111489597A (en) * 2020-04-24 2020-08-04 湖南工学院 Intelligent English teaching system for English teaching
CN111598216A (en) * 2020-04-16 2020-08-28 北京百度网讯科技有限公司 Method, device and equipment for generating student network model and storage medium
CN111681143A (en) * 2020-04-27 2020-09-18 平安国际智慧城市科技股份有限公司 Multi-dimensional analysis method, device, equipment and storage medium based on classroom voice
CN111756825A (en) * 2020-06-12 2020-10-09 引智科技(深圳)有限公司 Real-time cloud voice translation processing method and system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107578004A (en) * 2017-08-30 2018-01-12 苏州清睿教育科技股份有限公司 Learning method and system based on image recognition and interactive voice
CN107808674A (en) * 2017-09-28 2018-03-16 上海流利说信息技术有限公司 A kind of method, medium, device and the electronic equipment of voice of testing and assessing
WO2019080639A1 (en) * 2017-10-23 2019-05-02 腾讯科技(深圳)有限公司 Object identifying method, computer device and computer readable storage medium
CN109785698A (en) * 2017-11-13 2019-05-21 上海流利说信息技术有限公司 Method, apparatus, electronic equipment and medium for spoken language proficiency evaluation and test
CN109119064A (en) * 2018-09-05 2019-01-01 东南大学 A kind of implementation method suitable for overturning the Oral English Teaching system in classroom
CN110688556A (en) * 2019-09-21 2020-01-14 郑州工程技术学院 Remote Japanese teaching interaction system and interaction method based on big data analysis
CN111598216A (en) * 2020-04-16 2020-08-28 北京百度网讯科技有限公司 Method, device and equipment for generating student network model and storage medium
CN111489597A (en) * 2020-04-24 2020-08-04 湖南工学院 Intelligent English teaching system for English teaching
CN111681143A (en) * 2020-04-27 2020-09-18 平安国际智慧城市科技股份有限公司 Multi-dimensional analysis method, device, equipment and storage medium based on classroom voice
CN111756825A (en) * 2020-06-12 2020-10-09 引智科技(深圳)有限公司 Real-time cloud voice translation processing method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
OKIM KANG et al.: "Impact of Rater Characteristics and Prosodic Features of Speaker Accentedness on Ratings of International Teaching Assistants' Oral Performance", Language Assessment Quarterly *
周燕: "Research on Lie Detection Based on Sparse Representation of Speech", China Excellent Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113593331A (en) * 2021-08-04 2021-11-02 方晓华 Finance English taste teaching aid
CN113658460A (en) * 2021-08-26 2021-11-16 黑龙江工业学院 English teaching platform based on 5G technology
CN113658460B (en) * 2021-08-26 2022-04-12 黑龙江工业学院 English teaching platform based on 5G technology
CN115171445A (en) * 2022-08-08 2022-10-11 山东财经大学 English teaching system and teaching method based on human-computer interaction

Also Published As

Publication number Publication date
CN112507294B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
CN112507294B (en) English teaching system and teaching method based on human-computer interaction
Labayen et al. Online student authentication and proctoring system based on multimodal biometrics technology
Berke et al. Deaf and hard-of-hearing perspectives on imperfect automatic speech recognition for captioning one-on-one meetings
US11501656B2 (en) Interactive and automated training system using real interactions
JP6172769B2 (en) Understanding support system, understanding support server, understanding support method, and program
CN106558252B (en) Spoken language practice method and device realized by computer
CN107133709B (en) Quality inspection method, device and system for customer service
KR20160008949A (en) Apparatus and method for foreign language learning based on spoken dialogue
CN109462603A (en) Voiceprint authentication method, equipment, storage medium and device based on blind Detecting
CN108806360A (en) Reading partner method, apparatus, equipment and storage medium
CN113486970B (en) Reading capability evaluation method and device
CN114885216A (en) Exercise pushing method and system, electronic equipment and storage medium
CN110852073A (en) Language learning system and learning method for customizing learning content for user
CN113763962A (en) Audio processing method and device, storage medium and computer equipment
CN116403583A (en) Voice data processing method and device, nonvolatile storage medium and vehicle
CN113963306B (en) Courseware title making method and device based on artificial intelligence
CN109410673A (en) Interactive learning methods and system
CN109582971B (en) Correction method and correction system based on syntactic analysis
KR20060087821A (en) System and its method for rating language ability in language learning stage based on l1 acquisition
KR100687441B1 (en) Method and system for evaluation of foring language voice
KR20140075994A (en) Apparatus and method for language education by using native speaker's pronunciation data and thought unit
CN114241835A (en) Student spoken language quality evaluation method and device
CN115206342A (en) Data processing method and device, computer equipment and readable storage medium
KR20220080401A (en) Methord and device of performing ai interview for foreigners
CN112347990A (en) Multimode-based intelligent manuscript examining system and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230330

Address after: Room 318, Tongren Jiayuan Guild Hall, Sandun Town, Xihu District, Hangzhou City, Zhejiang Province, 310000

Patentee after: Hangzhou Yuanhao Information Technology Service Co.,Ltd.

Address before: Room 2319, 23rd Floor, Building B, No.1 Keyuan Weiyi Road, Laoshan District, Qingdao City, Shandong Province, 266100

Patentee before: Qingdao fruit science and technology service platform Co.,Ltd.

Effective date of registration: 20230330

Address after: Room 2319, 23rd Floor, Building B, No.1 Keyuan Weiyi Road, Laoshan District, Qingdao City, Shandong Province, 266100

Patentee after: Qingdao fruit science and technology service platform Co.,Ltd.

Address before: 402247 No. 1 Fuxing Road, Shuang Fu New District, Jiangjin District, Chongqing.

Patentee before: CHONGQING JIAOTONG University