CN111639220A - Spoken language evaluation method and device, electronic equipment and storage medium - Google Patents

Spoken language evaluation method and device, electronic equipment and storage medium

Info

Publication number: CN111639220A
Application number: CN202010408128.7A
Authority: CN (China)
Prior art keywords: evaluation, user, spoken language, evaluation unit
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 崔颖 (Cui Ying)
Current and original assignee: Guangdong Genius Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN202010408128.7A

Classifications

    • G06F16/634 — Information retrieval of audio data; query by example, e.g. query by humming
    • G06F16/683 — Retrieval of audio data using metadata automatically derived from the content
    • G06Q50/20 — ICT specially adapted for education services
    • G09B19/04 — Teaching of speaking
    • G09B5/065 — Electrically-operated educational appliances; combinations of audio and video presentations
    • G10L15/10 — Speech classification or search using distance or distortion measures between unknown speech and reference templates
    • G10L15/25 — Speech recognition using non-acoustical features: position of the lips, movement of the lips, or face analysis


Abstract

The embodiments of this application relate to the field of computer technology and disclose a spoken language evaluation method and device, an electronic device, and a storage medium. The method includes: obtaining evaluation content, where the evaluation content consists of a plurality of content units; capturing the user's mouth from a real-time representation of the user presented on a screen; presenting the evaluation unit currently being read by the user at a designated position close to the user's mouth, where the evaluation unit is any one of the content units; performing spoken evaluation of the user's reading of the evaluation unit according to the picked-up spoken pronunciation, thereby obtaining a spoken evaluation result; and controlling the presented evaluation unit to display a color corresponding to the spoken evaluation result. Implementing the embodiments of this application can better guide students in performing spoken evaluation of the evaluation content (such as words), thereby improving the accuracy of the students' pronunciation of that content.

Description

Spoken language evaluation method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of computers, in particular to a spoken language evaluation method and device, electronic equipment and a storage medium.
Background
At present, students often mispronounce words (such as English words) while learning them. How to better guide students through spoken evaluation of the words they learn, so as to improve the accuracy of their pronunciation, is a question widely discussed by parents and teachers.
Disclosure of Invention
The embodiments of this application disclose a spoken language evaluation method and device, an electronic device, and a storage medium, which can better guide students in performing spoken evaluation of evaluation content (such as words), thereby helping to improve the accuracy of the students' pronunciation of that content.
The first aspect of the embodiment of the present application discloses a spoken language assessment method, including:
obtaining evaluation content, wherein the evaluation content consists of a plurality of content units;
capturing a user's mouth from a real-time representation of the user presented on a screen;
presenting the evaluation unit currently read by the user at a designated position close to the mouth of the user; the evaluation unit belongs to any one of the content units;
according to the picked-up spoken pronunciation when the user reads the evaluation unit, performing spoken evaluation of the user's reading of the evaluation unit to obtain a spoken evaluation result of the user's reading of the evaluation unit;
and controlling the presented evaluation unit to display a color corresponding to the spoken language evaluation result.
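The five steps of the first aspect can be sketched as a single loop. This is a hedged illustration only: the function names (`evaluate_content`, `pick_up`, `score`) are hypothetical placeholders, not APIs named in the patent, and the display logic is reduced to returning a color string.

```python
def evaluate_content(content_units, pick_up, score):
    """Illustrative sketch of the claimed evaluation loop.

    content_units: ordered list of units (e.g. words) making up the content.
    pick_up: callable returning the user's spoken pronunciation for a unit
             (stands in for microphone capture).
    score: callable comparing a pronunciation against a unit's standard form,
           returning True when the pronunciation is accurate.
    """
    results = []
    for unit in content_units:
        pronunciation = pick_up(unit)            # picked-up spoken pronunciation
        accurate = score(pronunciation, unit)    # spoken evaluation result
        color = "green" if accurate else "red"   # color shown beside the mouth
        results.append((unit, accurate, color))
    return results
```

Stub callables make the flow testable without audio hardware, e.g. `evaluate_content(["I", "like"], lambda u: u, lambda p, u: p == u)`.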
With reference to the first aspect of the embodiments of the present application, in some optional embodiments, after the evaluation unit controlling the presentation displays a color corresponding to the spoken language evaluation result, the method further includes:
and controlling the evaluation unit displaying the color corresponding to the spoken evaluation result to slide out of the screen from the designated position in a slide-out direction preset for the screen.
With reference to the first aspect of the embodiments of the present application, in some optional embodiments, after the evaluation unit controlling the presentation displays a color corresponding to the spoken language evaluation result, the method further includes:
identifying, according to the spoken evaluation result of the user's reading of the evaluation unit, whether the user's spoken pronunciation of the evaluation unit is accurate;
if it is accurate, controlling the evaluation unit displaying the color corresponding to the spoken evaluation result to slide out of the screen from the designated position in a slide-out direction preset for the screen;
and if it is not accurate, sliding the evaluation unit displaying the color corresponding to the spoken evaluation result to a selected area of the screen for display.
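The accurate/inaccurate branch above maps naturally to a small decision function. The function name and the action tuples ("slide_out", "review_area") are hypothetical labels chosen for this sketch, not terms from the patent.

```python
def after_display_action(result_accurate, slide_out_direction="up"):
    """Decide what the UI does with an evaluation unit once its color is shown.

    Accurate units leave the screen in the preset slide-out direction;
    inaccurate units are parked in a selected area of the screen for review.
    """
    if result_accurate:
        return ("slide_out", slide_out_direction)
    return ("slide_to", "review_area")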
In combination with the first aspect of the embodiments of the present application, in some optional embodiments, the method further includes:
after the user finishes reading the content units in the evaluation content, detecting whether the evaluation content is associated with an object to be unlocked;
if the evaluation content is associated with the object to be unlocked, acquiring an unlocking permission threshold value configured for the object to be unlocked; wherein the unlock allowance threshold is a specified number of content units that are spoken accurately;
counting the total number of content units among the plurality of content units whose spoken pronunciation is accurate;
and comparing whether the total number exceeds the specified number, and if so, unlocking the object to be unlocked.
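The unlock check above is a count-and-compare. Note the source says the total must *exceed* the specified number, so the comparison below is strict. The function name is illustrative.

```python
def should_unlock(accuracy_flags, specified_number):
    """Return True if the object to be unlocked should unlock.

    accuracy_flags: one boolean per content unit, True when that unit's
                    spoken pronunciation was evaluated as accurate.
    specified_number: the unlock-permission threshold configured for the
                      object to be unlocked.
    """
    accurate_total = sum(1 for flag in accuracy_flags if flag)
    return accurate_total > specified_number  # strict: must exceed the threshold
```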
In combination with the first aspect of the embodiments of the present application, in some optional embodiments, after the obtaining of the evaluation content and before capturing the mouth of the user from the real-time representation of the user displayed on the screen, the method further includes:
displaying the evaluation content horizontally at the bottom of the screen;
and controlling the evaluation unit to be read next in the displayed evaluation content to be highlighted according to the reading order.
The second aspect of the embodiment of the present application discloses a spoken language assessment device, including:
the first acquisition module is used for acquiring the evaluation content, and the evaluation content consists of a plurality of content units;
a capture module for capturing a user's mouth from a real-time representation of the user presented on a screen;
the presentation module is used for presenting the evaluation unit currently being read by the user at a designated position close to the user's mouth; the evaluation unit belongs to any one of the content units;
the evaluation module is used for carrying out oral evaluation on the evaluation unit read by the user according to the picked oral pronunciation when the user reads the evaluation unit to obtain an oral evaluation result of the evaluation unit read by the user;
and the color control module is used for controlling the presented evaluation unit to display the color corresponding to the spoken language evaluation result.
In combination with the second aspect of the embodiments of the present application, in some optional embodiments, the apparatus further includes:
and the sliding control module is used for, after the color control module controls the presented evaluation unit to display the color corresponding to the spoken evaluation result, controlling that evaluation unit to slide out of the screen from the designated position in a slide-out direction preset for the screen.
In combination with the second aspect of the embodiments of the present application, in some optional embodiments, the apparatus further includes:
the triggering module is used for identifying, after the color control module controls the presented evaluation unit to display the color corresponding to the spoken evaluation result, whether the user's spoken pronunciation of the evaluation unit is accurate according to the spoken evaluation result; and, if it is accurate, triggering the sliding control module to control the evaluation unit displaying the corresponding color to slide out of the screen from the designated position in a slide-out direction preset for the screen;
the sliding control module is further used for sliding the evaluation unit displaying the corresponding color to a selected area of the screen for display when the triggering module identifies that the user's spoken pronunciation of the evaluation unit is inaccurate.
In combination with the second aspect of the embodiments of the present application, in some optional embodiments, the apparatus further includes:
the detection module is used for detecting whether the evaluation content is related to an object to be unlocked after the user finishes reading the content units in the evaluation content;
the second obtaining module is used for obtaining an unlocking permission threshold configured for the object to be unlocked when the detection module detects that the evaluation content is associated with the object to be unlocked; wherein the unlock allowance threshold is a specified number of content units that are spoken accurately;
the statistical module is used for counting the total number of the accurate spoken language pronunciation evaluation units in the content units;
and the unlocking module is used for comparing whether the total number exceeds the specified number, and unlocking the object to be unlocked if the total number exceeds the specified number.
In combination with the second aspect of the embodiments of the present application, in some optional embodiments, the apparatus further includes:
the display module is used for transversely displaying the evaluation content at the bottom of the screen after the first acquisition module acquires the evaluation content and before the capture module captures the mouth of the user from the real-time portrait of the user displayed on the screen; and controlling the displayed evaluation unit to be read in the evaluation content to be highlighted according to the reading sequence.
A third aspect of the embodiments of the present application discloses an electronic device, which includes the spoken language assessment apparatus described in the second aspect or any optional embodiment of the second aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application discloses an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute all or part of the steps of the spoken language assessment method described in the first aspect of the embodiments of the present application or any optional embodiment of the first aspect.
In a fifth aspect of the embodiments of the present application, a computer-readable storage medium is provided, where the computer-readable storage medium has stored thereon computer instructions, and the computer instructions, when executed, cause a computer to perform all or part of the steps of the spoken language assessment method described in the first aspect of the embodiments of the present application or any optional embodiment of the first aspect.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
in the embodiment of the application, the mouth of a user can be captured from a real-time portrait of the user displayed on a screen, and an evaluation unit read currently by the user is presented at a specified position close to the mouth of the user, wherein the evaluation unit read currently by the user belongs to any one of a plurality of content units forming evaluation content; and according to the picked spoken pronunciation of the user when reading the evaluation unit, performing spoken evaluation on the user reading the evaluation unit, thereby obtaining the spoken evaluation result of the user reading the evaluation unit and controlling the presented evaluation unit to display the color corresponding to the spoken evaluation result. Therefore, the implementation of the embodiment of the application can improve the man-machine interaction in the oral assessment process, so that students can be better guided to conduct oral assessment on assessment contents (such as words), and the accuracy of pronunciations of the students on the assessment contents (such as words) can be improved.
Drawings
In order to illustrate the technical solutions in the embodiments of this application more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a first embodiment of a spoken language evaluation method disclosed in an embodiment of the present application;
fig. 2 is a schematic flow chart of a second embodiment of the spoken language assessment method disclosed in the embodiments of the present application;
fig. 3 is a schematic flow chart of a third embodiment of the spoken language evaluation method disclosed in the embodiment of the present application;
FIG. 4 is an interface schematic of a screen disclosed in an embodiment of the present application;
fig. 5 is a schematic structural view of a first embodiment of the spoken language evaluation device disclosed in the embodiment of the present application;
fig. 6 is a schematic structural view of a second embodiment of the spoken language evaluation device disclosed in the embodiment of the present application;
fig. 7 is a schematic structural view of a third embodiment of the spoken language evaluation device disclosed in the embodiment of the present application;
fig. 8 is a schematic structural diagram of a first embodiment of an electronic device disclosed in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a second embodiment of the electronic device disclosed in the embodiments of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the application discloses a spoken language evaluation method and device, electronic equipment and a storage medium, which can better guide students to conduct spoken language evaluation on evaluation contents (such as words) and are beneficial to improving the accuracy of pronunciations of the evaluation contents (such as words) by the students. The following detailed description is made with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic flow chart of a spoken language evaluation method according to a first embodiment of the disclosure. The spoken language evaluation method described in fig. 1 is applicable to various electronic devices such as education devices (e.g., family education devices, classroom electronic devices), computers (e.g., student tablets, personal PCs), mobile phones, smart home devices (e.g., smart televisions, smart speakers, and smart robots), and the like, and the embodiment of the present application is not limited thereto. In the spoken language evaluation method described in fig. 1, the spoken language evaluation method is described with an electronic device as an execution subject. As shown in fig. 1, the spoken language assessment method may include the steps of:
101. the electronic equipment acquires evaluation content, and the evaluation content is composed of a plurality of content units.
For example, the evaluation content acquired by the electronic device may be a foreign language sentence (e.g., an english sentence, a russian sentence, etc.), and the content units constituting the foreign language sentence may be words (e.g., english words, russian words, etc.) contained in the foreign language sentence. As another example, the content of the evaluation obtained by the electronic device may be a chinese sentence, and the content units constituting the chinese sentence may be the respective chinese characters contained in the chinese sentence. As another example, the content to be evaluated acquired by the electronic device may be a note string (e.g., a note string composed of music symbols 1-7), and the content units composing the note string may be the music symbols included in the note string.
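The examples above imply a per-language segmentation rule: foreign-language sentences split into words, while Chinese sentences split into individual characters. A minimal sketch, assuming whitespace-delimited words for English and one character per unit for Chinese (the function name and `language` parameter are illustrative):

```python
def split_into_units(evaluation_content, language="en"):
    """Split evaluation content into its content units.

    For Chinese ("zh"), each non-space character is a content unit;
    otherwise whitespace-separated words are the content units.
    """
    if language == "zh":
        return [ch for ch in evaluation_content if not ch.isspace()]
    return evaluation_content.split()
```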
For example, the electronic device may capture the evaluation content selected by the user (e.g., clicked by the user) in the learning module through a camera device (e.g., a camera). The learning module may be a certain learning page (e.g., a paper learning page or an electronic learning page) corresponding to the user, or may be a certain learning section included in the certain learning page corresponding to the user.
For example, the electronic device may locate the learning module selected by the user's finger, stylus, or voice, and treat it as the learning module corresponding to the user. For instance, the electronic device may use a camera device (e.g., a camera) to capture the learning module selected by the user's finger or stylus; alternatively, it may use a sound pick-up device (e.g., a microphone) to pick up the learning module selected by the user's voice. In some embodiments, the camera device (e.g., a camera) may be disposed on a ring worn on the user's finger; when the ring detects that the finger wearing it is straightened, the ring may start the camera device to shoot the learning module selected by the finger and transmit the captured result to the electronic device, so that the electronic device can determine the learning module corresponding to the user. Implementing this approach can reduce the power consumption the electronic device would otherwise incur in shooting the learning module selected by the user's finger, and thus can improve the electronic device's battery endurance.
In other examples, the electronic device may obtain a learning module selected for the user by another external device and treat it as the learning module corresponding to the user. For example, the electronic device may establish a communication connection in advance with a wrist-worn device worn by the user's supervisor (such as a classroom teacher or a parent). The supervisor holds a finger of the hand wearing the wrist-worn device against the root of the ear so that the ear forms a closed sound cavity, and then utters, at a volume below a certain threshold, a voice signal for selecting a learning module for the user; the voice signal is transmitted as a vibration signal through the bone of the palm into the wrist-worn device, which forwards it to the electronic device. Implementing this approach lets the supervisor (such as a classroom teacher or a parent) flexibly select a learning module for the user without causing sound interference to the people nearby.
In some examples, when the external device is a wrist-worn device worn by a classroom teacher, it may simultaneously establish communication connections with the electronic devices used by each of a plurality of users (i.e., students) in the classroom. Accordingly, the low-volume voice signal the supervisor utters for selecting a learning module may include an identifier of the selected module (e.g., a chapter number) and an identifier of the user (e.g., a name and/or seat number). The wrist-worn device may then transmit the voice signal to the electronic device used by that user according to the user identifier, so that the electronic device can determine the corresponding learning module from the module identifier contained in the signal. Implementing this approach lets a classroom teacher select different learning modules for different users in a classroom (such as a training classroom) according to their respective learning progress, which improves the flexibility and convenience of selecting different modules for multiple users in one classroom.
102. The electronic device captures the user's mouth from a real-time representation of the user presented on the screen.
For example, the electronic device may capture a real-time image of the user through a camera device (e.g., a camera), and output the captured real-time image of the user to a screen (e.g., a display screen provided in the electronic device or an external display screen communicatively connected to the electronic device) for presentation. Further, the electronic device may incorporate facial recognition technology to capture the user's mouth from a real-time representation of the user presented on the screen.
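A production system would locate the mouth with a facial-landmark detector, as the text suggests. As a stand-in, a crude geometric heuristic over a detected face bounding box illustrates the idea; the function name and the lower-third/middle-half proportions are assumptions for this sketch, not values from the patent.

```python
def estimate_mouth_region(face_box):
    """Estimate a mouth bounding box (x, y, w, h) from a face bounding box.

    Heuristic: the mouth sits roughly in the lower third of the face,
    horizontally centered over the middle half of its width. A real
    implementation would use facial landmark detection instead.
    """
    x, y, w, h = face_box
    return (x + w // 4, y + 2 * h // 3, w // 2, h // 3)
```

The returned region is where the patent would anchor the "designated position close to the mouth" at which the current evaluation unit is presented.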
103. The electronic equipment presents the evaluation unit read currently by the user at a designated position close to the mouth of the user; wherein, the evaluation unit belongs to any one of the content units.
For example, when the spoken pronunciation of the evaluation unit currently read by the user is picked up, the electronic device may determine the pick-up order of that pronunciation, and then determine the evaluation unit currently being read from the plurality of content units according to the pick-up order and the arrangement order of each content unit. For example, suppose the electronic device determines that the pick-up order of the current pronunciation is 3rd and the content units are "I like to walk to the office": the arrangement order of "I" is 1st, "like" is 2nd, the first "to" is 3rd, "walk" is 4th, the second "to" is 5th, "the" is 6th, and "office" is 7th. The electronic device may then determine, from the pick-up order (the 3rd) and the arrangement order of each content unit, that the evaluation unit currently being read is the first "to" (whose arrangement order is 3rd).
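The lookup in the example above reduces to indexing the content units by the 1-based pick-up order (the function name is illustrative):

```python
def current_evaluation_unit(content_units, pick_up_order):
    """Return the unit currently being read.

    content_units: the units in arrangement order.
    pick_up_order: 1-based order in which the pronunciation was picked up,
                   which matches the unit's arrangement order.
    """
    return content_units[pick_up_order - 1]
```

With `"I like to walk to the office".split()`, a pick-up order of 3 selects the first "to" rather than the second, exactly as in the example.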
104. And the electronic equipment carries out oral evaluation on the user reading the evaluation unit according to the picked oral pronunciation when the user reads the evaluation unit, so as to obtain the oral evaluation result of the user reading the evaluation unit.
The electronic device may compare the user's spoken pronunciation of the evaluation unit with the standard pronunciation of the evaluation unit to obtain the spoken evaluation result of the user's reading of the evaluation unit. For example, the spoken evaluation result may be classified as accurate or inaccurate.
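A real evaluator compares acoustic features of the picked-up audio against a reference pronunciation model; the patent does not specify the algorithm. As a toy stand-in, comparing two phoneme strings with standard-library string similarity illustrates the accurate/inaccurate decision only. The function name, phoneme-string representation, and 0.8 threshold are all assumptions of this sketch.

```python
from difflib import SequenceMatcher

def is_pronunciation_accurate(spoken, standard, threshold=0.8):
    """Toy accurate/inaccurate decision over phoneme strings.

    Returns True when the similarity between the spoken and standard
    phoneme strings meets the threshold. Not an acoustic model: a real
    system would score audio features, not text.
    """
    similarity = SequenceMatcher(None, spoken, standard).ratio()
    return similarity >= threshold
```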
105. The electronic equipment controls the presented evaluation unit to display the color corresponding to the spoken language evaluation result.
For example, if the spoken evaluation result is accurate, the electronic device may control the evaluation unit displayed on the screen to show green, the color corresponding to an accurate result; conversely, if the spoken evaluation result is inaccurate, the electronic device may control the evaluation unit displayed on the screen to show red, the color corresponding to an inaccurate result.
Therefore, by implementing the spoken language evaluation method described in fig. 1, the human-computer interaction in the spoken language evaluation process can be improved, so that students can be better guided to perform spoken language evaluation on the evaluation content (such as words), and the accuracy of pronunciation of the evaluation content (such as words) by the students can be improved.
In addition, by implementing the spoken language assessment method described in fig. 1, power consumption caused by the fact that the electronic device shoots the learning module selected by the user's finger can be reduced, and thus battery endurance of the electronic device can be improved.
In addition, by implementing the spoken language assessment method described in fig. 1, a supervisor (such as a classroom teacher or a parent) of the user can flexibly select a learning module for the user, and does not cause sound interference to surrounding people in the process of selecting the learning module for the user.
In addition, by implementing the spoken language assessment method described in fig. 1, different learning modules can be respectively selected for a plurality of users in a classroom (such as a training classroom) according to respective different learning progresses of the plurality of users in the classroom, so that flexibility and convenience in selecting different learning modules for the plurality of users in the classroom can be improved.
Referring to fig. 2, fig. 2 is a schematic flow chart of a spoken language evaluation method according to a second embodiment of the disclosure. In the spoken language evaluation method described in fig. 2, the spoken language evaluation method is described with an electronic device as an execution subject. As shown in fig. 2, the spoken language assessment method may include the steps of:
201. the electronic equipment acquires evaluation content, and the evaluation content is composed of a plurality of content units.
For example, the implementation manner of step 201 may refer to step 101, which is not described herein again in this embodiment of the present application.
202. The electronic device captures the user's mouth from a real-time representation of the user presented on the screen.
203. The electronic equipment presents the evaluation unit read currently by the user at a designated position close to the mouth of the user; wherein, the evaluation unit belongs to any one of the content units.
204. The electronic device performs spoken language evaluation on the user's reading of the evaluation unit according to the spoken pronunciation picked up when the user reads the evaluation unit, so as to obtain a spoken language evaluation result of the user reading the evaluation unit.
205. The electronic equipment controls the presented evaluation unit to display the color corresponding to the spoken language evaluation result.
206. The electronic equipment identifies whether the spoken pronunciation of the user reading the evaluation unit is accurate or not according to the spoken evaluation result of the user reading the evaluation unit; if so, go to step 207; if not, go to step 208.
207. The electronic device controls the evaluation unit displaying the color corresponding to the spoken language evaluation result to slide out of the screen from the designated position according to the preset slide-out direction of the screen, and ends the process.
In this way, the user can watch the animation of the evaluation unit sliding out of the screen from the designated position in the preset slide-out direction of the screen.
For example, the preset slide-out direction of the screen may be a direction from the designated position toward the bottom of the screen (e.g., vertically downward or obliquely downward); alternatively, it may be a direction from the designated position toward the top of the screen (e.g., vertically upward or obliquely upward); alternatively, it may be a direction from the designated position toward the left side of the screen (e.g., horizontally to the left or obliquely to the left); or it may be a direction from the designated position toward the right side of the screen (e.g., horizontally to the right or obliquely to the right); the embodiments of the present application are not limited thereto.
In some embodiments, the preset sliding direction of the screen can be flexibly adjusted by the electronic device. For example, the electronic device may determine four distance values from the center of the user's mouth to the bottom, the top, the left side, and the right side of the screen, determine a maximum distance value from the four distance values, and adjust a direction from the designated position toward a side (e.g., the bottom) corresponding to the maximum distance value to a preset sliding-out direction of the screen. Therefore, by implementing the embodiment, even if the center of the mouth of the user deviates, the user can watch the animation when the evaluation unit slides out of the screen from the specified position according to the preset sliding-out direction of the screen for as long as possible, and the timeliness of human-computer interaction can be improved.
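The distance-based adjustment described in this embodiment can be sketched as follows (a coordinate origin at the top-left corner of the screen is an assumption, as are the names):

```python
def slide_out_direction(mouth_center, screen_width, screen_height):
    """Determine the preset slide-out direction: compute the four
    distances from the center of the user's mouth to the left, right,
    top, and bottom edges of the screen, and pick the edge with the
    maximum distance value."""
    x, y = mouth_center  # pixel coordinates, origin at top-left
    distances = {
        "left": x,
        "right": screen_width - x,
        "top": y,
        "bottom": screen_height - y,
    }
    return max(distances, key=distances.get)
```

For a mouth near the top of a portrait screen the unit would slide toward the bottom, and near the bottom it would slide toward the top, so the animation stays visible as long as possible.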
208. And the electronic equipment slides the evaluation unit which displays the color corresponding to the spoken language evaluation result to the selected area of the screen for displaying, and ends the process.
In step 208, the selected area of the screen may be an area where the evaluation units with inaccurate spoken pronunciation are displayed for the user in a centralized manner, so that the user can repeatedly practice the spoken pronunciation of those evaluation units and improve the accuracy of the user's spoken pronunciation.
Therefore, by implementing the spoken language evaluation method described in fig. 2, the man-machine interaction in the spoken language evaluation process can be improved, so that students can be better guided to perform spoken language evaluation on the evaluation content (such as words), and the accuracy of pronunciations of the evaluation content (such as words) by the students can be improved.
In addition, by implementing the spoken language evaluation method described in fig. 2, power consumption caused by the fact that the electronic device shoots the learning module selected by the user's finger can be reduced, and thus battery endurance of the electronic device can be improved.
In addition, by implementing the spoken language assessment method described in fig. 2, a supervisor (such as a classroom teacher or a parent) of the user can flexibly select a learning module for the user, and does not cause sound interference to surrounding people in the process of selecting the learning module for the user.
In addition, by implementing the spoken language assessment method described in fig. 2, different learning modules can be respectively selected for a plurality of users in a classroom (such as a training classroom) according to respective different learning progresses of the plurality of users in the classroom, so that flexibility and convenience in selecting different learning modules for the plurality of users in the classroom can be improved.
In addition, by implementing the spoken language evaluation method described in fig. 2, the user can watch the animation of the evaluation unit sliding out of the screen from the designated position according to the preset sliding-out direction of the screen for as long as possible, so that the timeliness of human-computer interaction can be improved.
In addition, the implementation of the spoken language assessment method described in fig. 2 is beneficial to the user to repeatedly practice the spoken language pronunciation of the assessment unit with inaccurate spoken language pronunciation, and improves the accuracy of the spoken language pronunciation of the user.
Referring to fig. 3, fig. 3 is a schematic flow chart of a spoken language evaluation method according to a third embodiment of the disclosure. In the spoken language evaluation method described in fig. 3, the spoken language evaluation method is described with an electronic device as an execution subject. As shown in fig. 3, the spoken language assessment method may include the steps of:
301. the electronic equipment acquires evaluation content, and the evaluation content is composed of a plurality of content units.
For example, the implementation manner of step 301 may refer to step 101, and details of the embodiment of the present application are not described herein.
302. The electronic equipment transversely displays the evaluation content at the bottom of the screen.
303. And the electronic equipment controls the to-be-read evaluation unit in the displayed evaluation content to be highlighted according to the reading sequence.
Taking the interface schematic diagram of the screen shown in fig. 4 as an example, the evaluation content acquired by the electronic device is "I like to walk to the office", and the evaluation content includes 7 content units, "I", "like", "to", "walk", "to", "the", and "office", arranged sequentially from left to right. The electronic device may laterally display the evaluation content "I like to walk to the office" at the bottom of the screen, and may control the 5th content unit "to", which is the evaluation unit to be read according to the reading order, to be highlighted in bold in the displayed evaluation content. In some embodiments, the evaluation unit to be read may also be highlighted in a designated color or a designated font, which is not limited in the embodiments of the present application.
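The highlighting of the to-be-read unit can be illustrated as follows (the markdown-style `**` marker stands in for the screen's bold style; the names are illustrative, not from the patent):

```python
def render_content(units, next_index):
    """Render the evaluation content for display at the bottom of the
    screen, with the unit to be read next (0-based index) shown in
    bold; '**' stands in for the screen's bold rendering."""
    return " ".join(
        f"**{u}**" if i == next_index else u
        for i, u in enumerate(units)
    )

# The 5th unit (index 4), the second "to", is the next to be read:
line = render_content("I like to walk to the office".split(), 4)
```

Advancing `next_index` after each accurately read unit walks the bold highlight through the sentence in reading order.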
304. The electronic device captures the user's mouth from a real-time representation of the user presented on the screen.
305. The electronic equipment presents the evaluation unit read currently by the user at a designated position close to the mouth of the user; wherein, the evaluation unit belongs to any one of the content units.
Taking the interface diagram of the screen shown in fig. 4 as an example, the electronic device may present the evaluation unit "walk" currently read by the user at a designated position near the mouth of the user; among them, the evaluation unit "walk" belongs to the 4 th content unit among the above-mentioned "I", "like", "to", "walk", "to", "the", and "office" 7 content units.
306. The electronic device performs spoken language evaluation on the user's reading of the evaluation unit according to the spoken pronunciation picked up when the user reads the evaluation unit, so as to obtain a spoken language evaluation result of the user reading the evaluation unit.
Taking the interface schematic diagram of the screen shown in fig. 4 as an example, the electronic device may perform spoken language evaluation on the user reading the evaluation unit "walk" according to the spoken language pronunciation when the user reads the evaluation unit "walk" and obtain a spoken language evaluation result of the user reading the evaluation unit "walk".
307. The electronic equipment controls the presented evaluation unit to display the color corresponding to the spoken language evaluation result.
Taking the interface schematic diagram of the screen shown in fig. 4 as an example, if the spoken language evaluation result read by the user by the evaluation unit "walk" is accurate, the electronic device may control the evaluation unit "walk" displayed on the screen to display a green color (color is not shown in fig. 4) corresponding to the spoken language evaluation result; on the contrary, if the user reads the spoken language evaluation result of the evaluation unit "walk" inaccurately, the electronic device may control the evaluation unit "walk" displayed on the screen to display a red color (color not shown in fig. 4) corresponding to the spoken language evaluation result.
308. The electronic equipment identifies whether the spoken pronunciation of the user reading the evaluation unit is accurate or not according to the spoken evaluation result of the user reading the evaluation unit; if true, go to step 309; if not, go to step 310.
Taking the interface schematic diagram of the screen shown in fig. 4 as an example, the electronic device may identify whether the spoken utterance read by the user by the evaluation unit "walk" is accurate according to the spoken evaluation result read by the user by the evaluation unit "walk"; if true, go to step 309; if not, go to step 310.
309. The electronic device controls the evaluation unit displaying the color corresponding to the spoken language evaluation result to slide out of the screen from the designated position according to the preset slide-out direction of the screen, and step 311 is performed.
Taking the interface diagram of the screen shown in fig. 4 as an example, if the electronic device recognizes that the spoken utterance of the evaluation unit "walk" read by the user is accurate, the electronic device may control the evaluation unit "walk" displaying a color corresponding to the spoken evaluation result to slide out of the screen from the designated position according to a slide-out direction (the slide-out direction is indicated by an arc line with an arrow) preset by the screen.
310. The electronic device slides the evaluation unit displaying the color corresponding to the spoken language evaluation result to the selected area of the screen for displaying, and performs step 311.
Taking the interface diagram of the screen shown in fig. 4 as an example, if the electronic device recognizes that the spoken utterance of the evaluation unit "like" read by the user is inaccurate, the electronic device may slide the evaluation unit "like" displaying a color corresponding to the spoken evaluation result to a selected region in the upper right corner of the screen.
311. After the user finishes reading the content units in the evaluation content, the electronic equipment detects whether the evaluation content is associated with the object to be unlocked; if so, go to step 312-step 314; if not, the process is ended.
For example, the object to be unlocked may be an APP to be unlocked, an electronic screen to be unlocked, an intelligent door lock to be unlocked, and the like, which is not limited in the embodiment of the present application.
312. The electronic equipment acquires an unlocking permission threshold value configured for an object to be unlocked; wherein the unlock allowance threshold is a specified number of content units that are spoken accurately.
The unlocking permission threshold value can be configured by the electronic device for the object to be unlocked, or the unlocking permission threshold value can be configured by a wrist-worn device worn by a supervisor (such as a classroom teacher or a parent) of a user of the electronic device for the object to be unlocked.
313. The electronic device counts the total number of evaluation units with accurate spoken pronunciation among the content units.
314. The electronic device compares whether the total number exceeds the specified number, and if so, executes step 315; if not, the process is ended.
315. The electronic device unlocks the object to be unlocked.
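Steps 312 to 315 — counting the accurately pronounced evaluation units and comparing the total with the configured unlocking permission threshold — can be sketched as follows (the names are assumptions):

```python
def should_unlock(evaluation_results, unlock_allow_threshold):
    """Count evaluation units whose spoken pronunciation was accurate
    (step 313) and permit unlocking only if the total exceeds the
    configured unlocking permission threshold (step 314)."""
    total_accurate = sum(1 for r in evaluation_results if r == "accurate")
    return total_accurate > unlock_allow_threshold
```

Note the strict comparison: per step 314, the total must exceed (not merely equal) the specified number for the object to be unlocked.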
In some application scenarios, the electronic device may be located in a certain indoor environment, and an intelligent door lock to be unlocked set in the indoor environment may be used as the object to be unlocked. In this application scenario, the manner in which the electronic device unlocks the object to be unlocked in step 315 may be as follows:
the electronic equipment determines current spatial position information of a user using the electronic equipment based on an indoor image shot by an internal camera of the intelligent door lock to be unlocked;
the electronic device may check whether the current spatial position information of the user using the electronic device matches the three-dimensional position information, relative to the internal camera of the intelligent door lock to be unlocked, that is specially configured for the user (as a monitored object) by the user's supervisor (such as a parent); if they match, the intelligent door lock to be unlocked is controlled to unlock. When the user is located at the configured three-dimensional position relative to the internal camera of the intelligent door lock, the user can be directly observed by the supervisor in the indoor environment. Therefore, the user of the electronic device is permitted to control the intelligent door lock to unlock only at a certain spatial position that is specially configured by the supervisor and visible to the supervisor, so that the supervisor can intuitively know which monitored object is unlocking the intelligent door lock. This improves the visibility of the user of the electronic device when unlocking the intelligent door lock, and prevents accidents (such as a child being abducted) caused by the user secretly unlocking the intelligent door lock without the supervisor's knowledge.
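The position-matching check described above can be sketched as follows (a per-axis tolerance comparison is an assumption — the patent does not specify the matching criterion, and the tolerance value is illustrative):

```python
def position_matches(current_pos, configured_pos, tolerance=0.2):
    """Check whether the user's current spatial position (x, y, z,
    relative to the door lock's internal camera) lies within a
    per-axis tolerance of the supervisor-configured position.
    The tolerance value and criterion are assumptions."""
    return all(abs(c - g) <= tolerance
               for c, g in zip(current_pos, configured_pos))
```

Only when this check passes would the electronic device control the intelligent door lock to unlock.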
Compared with the previous embodiment, the oral evaluation method described in fig. 3 has the following advantages:
for children in indoor environment, if the intelligent door lock to be unlocked needs to be unlocked to go out, the intelligent door lock is required to be located at a certain spatial position visible to a supervisor, and the total number of the evaluation units required to make spoken language pronunciation accurate exceeds a specified number, so that the purpose of urging indoor children to practice spoken language pronunciation and improving the accuracy of spoken language pronunciation can be achieved.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a spoken language evaluation device according to a first embodiment of the disclosure. The spoken language evaluation device may include:
a first obtaining module 501, configured to obtain an evaluation content, where the evaluation content is composed of a plurality of content units;
a capture module 502 for capturing a user's mouth from a real-time representation of the user presented on a screen;
a presentation module 503, configured to present the evaluation unit currently read by the user at a specified position close to the mouth of the user; wherein, the evaluation unit belongs to any one of the content units;
the evaluation module 504 is configured to perform spoken evaluation on the user reading the evaluation unit according to the retrieved spoken pronunciation of the user reading the evaluation unit, so as to obtain a spoken evaluation result of the user reading the evaluation unit;
and the color control module 505 is used for controlling the presented evaluation unit to display the color corresponding to the oral evaluation result.
For example, the first obtaining module 501 may capture, by using a camera device (e.g., a camera), the evaluation content selected by the user (e.g., clicked by the user) in the learning module. The learning module may be a certain learning page (e.g., a paper learning page or an electronic learning page) corresponding to the user, or may be a certain learning section included in the certain learning page corresponding to the user.
In some embodiments, the spoken language evaluation device may be a part of the electronic device or an external device communicatively connected to the electronic device, and the electronic device may locate the learning module selected by the user's finger, pen, or voice, and use the learning module selected by the user's finger, pen, or voice as the learning module corresponding to the user. For example, the electronic device may use a camera (e.g., a camera) to capture a learning module selected by a finger or a writing pen of a user as a learning module corresponding to the user; alternatively, the electronic device may use a sound pickup device (e.g., a microphone) to pick up a learning module selected by the voice uttered by the user as the learning module corresponding to the user. In some embodiments, the camera device (e.g., a camera) may be disposed on a ring worn by a finger of a user, and when the ring detects that the finger of the user worn by the ring is straightened, the ring may start the camera device (e.g., the camera) to shoot a learning module selected by the finger of the user, and the ring transmits the shot learning module selected by the finger of the user to the electronic device, so that the electronic device may determine the learning module corresponding to the user. By the implementation of the implementation mode, power consumption caused by the fact that the electronic equipment shoots the learning module selected by the finger of the user can be reduced, and therefore battery endurance of the electronic equipment can be improved.
In other examples, the electronic device may obtain a learning module selected for the user by another external device, and use that learning module as the learning module corresponding to the user. For example, the electronic device may establish a communication connection in advance with a wrist-worn device worn by a supervisor (such as a classroom teacher or a parent) of the user; the supervisor presses a finger of the hand on whose wrist the device is worn against the base of an ear so that the ear forms a closed sound cavity, and then utters, at a volume below a certain threshold, a voice signal for selecting a learning module for the user; the voice signal is transmitted into the wrist-worn device as a vibration signal through the bone medium of the palm, and the wrist-worn device transmits the voice signal to the electronic device. By implementing this implementation, a supervisor (such as a classroom teacher or a parent) of the user can flexibly select the learning module for the user without causing sound interference to surrounding people in the process.
In some examples, when the external device may be a wrist-worn device worn by a classroom teacher, the wrist-worn device may simultaneously establish a communication connection with an electronic device used by each of a plurality of users (i.e., students) in the classroom, and accordingly, the voice signal emitted by the supervisor for selecting a learning module for the user with a volume below a certain threshold may include an identifier (e.g., a chapter number) of the selected learning module and an identifier (e.g., a name and/or a seat number) of the user; further, the wrist-worn device may transmit the voice signal to the electronic device used by the user according to an identification (such as a name and/or a seat number) of the user, so that the electronic device used by the user may determine the learning module corresponding to the user according to an identification (such as a chapter number) of the selected learning module included in the voice signal. By implementing the implementation mode, a classroom teacher can respectively select different learning modules for a plurality of users in a classroom according to different respective learning progresses of the users in the classroom (such as a training classroom), so that the flexibility and convenience of respectively selecting different learning modules for the users in the classroom can be improved.
Therefore, the implementation of the spoken language evaluation device described in fig. 5 can improve the man-machine interaction in the spoken language evaluation process, so that students can be better guided to perform spoken language evaluation on the evaluation content (such as words), and the accuracy of pronunciation of the evaluation content (such as words) by the students can be improved.
In addition, the implementation of the spoken language evaluation device described in fig. 5 can reduce power consumption caused by the learning module selected by the user's finger when the electronic device shoots, so that the battery life of the electronic device can be improved.
In addition, with the implementation of the spoken language assessment apparatus described in fig. 5, a supervisor (such as a classroom teacher or a parent) of a user can flexibly select a learning module for the user, and does not cause sound interference to surrounding people in the process of selecting the learning module for the user.
In addition, the spoken language assessment apparatus described in fig. 5 may select different learning modules for a plurality of users in a classroom according to their respective different learning progresses in the classroom (e.g., a training classroom), so as to improve flexibility and convenience when selecting different learning modules for a plurality of users in the classroom.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a spoken language evaluation device according to a second embodiment of the disclosure. The spoken language evaluation device shown in fig. 6 is optimized by the spoken language evaluation device shown in fig. 5. In the spoken language evaluation device shown in fig. 6, the spoken language evaluation device further includes:
and a sliding control module 506, configured to control the evaluation unit displaying the color corresponding to the spoken evaluation result to slide out of the screen from the designated position according to a sliding-out direction preset by the screen after the color control module 505 controls the presented evaluation unit to display the color corresponding to the spoken evaluation result.
Optionally, the spoken language assessment apparatus further includes:
a triggering module 507, configured to identify, according to the spoken evaluation result of the user reading the evaluation unit, whether the spoken pronunciation of the user reading the evaluation unit is accurate after the color control module 505 controls the presented evaluation unit to display the color corresponding to the spoken evaluation result; if accurate, trigger the sliding control module 506 to perform the operation of controlling the evaluation unit displaying the color corresponding to the spoken evaluation result to slide out of the screen from the designated position according to the preset slide-out direction of the screen;
the sliding control module 506 is further configured to slide the evaluation unit displaying the color corresponding to the spoken language evaluation result to the selected area of the screen for displaying when the triggering module 507 identifies that the spoken language pronunciation of the evaluation unit read by the user is inaccurate.
In some embodiments, the preset sliding direction of the screen can be flexibly adjusted by the electronic device. For example, the electronic device may determine four distance values from the center of the user's mouth to the bottom, the top, the left side, and the right side of the screen, determine a maximum distance value from the four distance values, and adjust a direction from the designated position toward a side (e.g., the bottom) corresponding to the maximum distance value to a preset sliding-out direction of the screen. Therefore, by implementing the embodiment, even if the center of the mouth of the user deviates, the user can watch the animation when the evaluation unit slides out of the screen from the specified position according to the preset sliding-out direction of the screen for as long as possible, and the timeliness of human-computer interaction can be improved.
It can be seen that, compared with the spoken language evaluation device shown in fig. 5, the spoken language evaluation device shown in fig. 6 may enable the user to view the animation of the evaluation unit sliding out of the screen from the designated position according to the sliding-out direction preset by the screen for as long as possible, so that the timeliness of human-computer interaction may be improved.
In addition, the spoken language evaluation device shown in fig. 6 helps the user repeatedly practice the spoken pronunciation of evaluation units whose spoken pronunciation is inaccurate, improving the accuracy of the user's spoken pronunciation.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a spoken language evaluation device according to a third embodiment of the disclosure. The spoken language evaluation device shown in fig. 7 is optimized by the spoken language evaluation device shown in fig. 6. The spoken language assessment apparatus shown in fig. 7 further includes:
the detection module 508 is configured to detect whether the content to be evaluated is associated with an object to be unlocked after the user finishes reading the content units in the content to be evaluated;
a second obtaining module 509, configured to obtain an unlocking permission threshold configured for the object to be unlocked when the detection module 508 detects that the evaluation content is associated with the object to be unlocked; wherein the unlocking permission threshold is a specified number of content units with accurate spoken pronunciation;
a counting module 510, configured to count the total number of the evaluation units with accurate spoken language pronunciation in the content units;
and the unlocking module 511 is used for comparing whether the total number exceeds the specified number, and if so, unlocking the object to be unlocked.
Optionally, the spoken language assessment apparatus further comprises:
a presentation module 512, configured to present the evaluation content laterally at the bottom of the screen after the first obtaining module 501 obtains the evaluation content and before the capture module 502 captures the user's mouth from the real-time representation of the user presented on the screen; and to control the evaluation unit to be read in the displayed evaluation content to be highlighted according to the reading order.
In some application scenarios, the electronic device including the spoken language evaluation device may be located in an indoor environment, and an intelligent door lock to be unlocked provided in the indoor environment may be used as the object to be unlocked. In this application scenario, the way for the unlocking module 511 to unlock the object to be unlocked may be as follows:
determining current spatial position information of a user using the electronic equipment based on an indoor image shot by an internal camera of the intelligent door lock to be unlocked;
it may be checked whether the current spatial position information of the user using the electronic device matches the three-dimensional position information, relative to the internal camera of the intelligent door lock to be unlocked, that is configured for the user (as a monitored object) by the user's supervisor (such as a parent); if they match, the intelligent door lock to be unlocked is controlled to unlock. When the user is located at the configured three-dimensional position relative to the internal camera of the intelligent door lock, the supervisor can directly observe the user in the indoor environment. Therefore, the user of the electronic device is permitted to control the intelligent door lock to unlock only at a certain spatial position that is configured by the supervisor and visible to the supervisor, so that the supervisor can intuitively know which monitored object is unlocking the intelligent door lock. This improves the visibility of the user of the electronic device when unlocking the intelligent door lock, and prevents accidents (such as a child being abducted) caused by the user secretly unlocking the intelligent door lock without the supervisor's knowledge.
Compared with the previous embodiment, the spoken language evaluation device depicted in fig. 7 has the following advantages:
for a child in the indoor environment, unlocking the intelligent door lock to go out requires both being located at a spatial position visible to the supervisor and having the total number of evaluation units with accurate spoken pronunciation exceed the specified number. This urges the indoor child to practice spoken pronunciation and improves the accuracy of that pronunciation.
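The gating rule just described, unlocking only when the count of accurately pronounced evaluation units exceeds a configured threshold, can be illustrated with a short sketch. The function names and the boolean result list are hypothetical, not the patent's actual interfaces.

```python
# Hypothetical sketch of the unlock-threshold rule described above.

def count_accurate(unit_results):
    """Count evaluation units whose spoken pronunciation was judged accurate.

    `unit_results` is a list of booleans, one per content unit,
    True meaning the pronunciation was evaluated as accurate.
    """
    return sum(1 for accurate in unit_results if accurate)


def may_unlock(unit_results, specified_number):
    # The door lock may be unlocked only when the total number of
    # accurately pronounced units exceeds the configured threshold.
    return count_accurate(unit_results) > specified_number
```

For example, with a threshold of 2, a session in which three of four units were pronounced accurately would permit unlocking, while one with two accurate units would not.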
Referring to fig. 8, fig. 8 is a schematic structural diagram of an electronic device according to a first embodiment of the disclosure. As shown in fig. 8, the electronic device may include any one of the spoken language evaluation devices in the above embodiments.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to a second embodiment of the disclosure. As shown in fig. 9, the electronic device may include:
a memory 901 storing executable program code;
a processor 902 coupled with the memory 901;
wherein the processor 902 calls the executable program code stored in the memory 901 to execute all or part of the steps of the spoken language assessment method.
It should be noted that, in this embodiment of the application, the electronic device shown in fig. 9 may further include components that are not shown, such as a speaker module, a display screen, a light projection module, a battery module, a wireless communication module (such as a mobile communication module, a WIFI module, a Bluetooth module, and the like), a sensor module (such as a proximity sensor and the like), an input module (such as a microphone and keys), and a user interface module (such as a charging interface, an external power supply interface, a card slot, a wired headset interface, and the like).
An embodiment of the invention further discloses a computer-readable storage medium storing computer instructions which, when executed, cause a computer to perform all or part of the steps of the spoken language assessment method.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, a magnetic disk memory, a magnetic tape memory, or any other computer-readable medium which can be used to carry or store data.
The spoken language assessment method and apparatus, the electronic device, and the storage medium disclosed in the embodiments of the present invention have been introduced in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (12)

1. A spoken language assessment method, comprising:
obtaining evaluation content, wherein the evaluation content consists of a plurality of content units;
capturing a user's mouth from a real-time representation of the user presented on a screen;
presenting the evaluation unit currently read by the user at a designated position close to the mouth of the user; the evaluation unit belongs to any one of the content units;
performing, according to the picked-up spoken pronunciation produced when the user reads the evaluation unit, spoken language evaluation on the user's reading of the evaluation unit, so as to obtain a spoken language evaluation result of the user reading the evaluation unit;
and controlling the presented evaluation unit to display a color corresponding to the spoken language evaluation result.
2. The spoken language evaluation method of claim 1, wherein after the controlling of the presented evaluation unit to display a color corresponding to the spoken language evaluation result, the method further comprises:
and controlling the evaluation unit which displays the color corresponding to the spoken language evaluation result to slide out of the screen from the specified position according to the preset slide-out direction of the screen.
3. The spoken language evaluation method of claim 2, wherein after the controlling of the presented evaluation unit to display a color corresponding to the spoken language evaluation result, the method further comprises:
identifying whether the spoken pronunciation of the user reading the evaluation unit is accurate or not according to the spoken evaluation result of the user reading the evaluation unit;
if accurate, controlling the evaluation unit which displays the color corresponding to the spoken language evaluation result to slide out of the screen from the specified position according to a slide-out direction preset for the screen;
and if not accurate, sliding the evaluation unit which displays the color corresponding to the spoken language evaluation result to a selected area of the screen for displaying.
4. The spoken language assessment method according to claim 3, further comprising:
after the user finishes reading the content units in the evaluation content, detecting whether the evaluation content is associated with an object to be unlocked;
if the evaluation content is associated with the object to be unlocked, acquiring an unlocking permission threshold configured for the object to be unlocked; wherein the unlocking permission threshold is a specified number of content units whose spoken pronunciation is accurate;
counting, among the content units, the total number of evaluation units whose spoken pronunciation is accurate;
and comparing whether the total number exceeds the specified number, and if so, unlocking the object to be unlocked.
5. The spoken language assessment method according to any one of claims 1 to 4, wherein after the obtaining of the assessment content and before capturing the user's mouth from the real-time representation of the user on the screen, the method further comprises:
transversely displaying the evaluation content at the bottom of the screen;
and controlling the evaluation units to be read in the displayed evaluation content to be highlighted in sequence according to the reading order.
6. A spoken language assessment device, comprising:
the first acquisition module is used for acquiring the evaluation content, and the evaluation content consists of a plurality of content units;
a capture module for capturing a user's mouth from a real-time representation of the user presented on a screen;
the presentation module is used for presenting the evaluation unit read by the user at the current time at a specified position close to the mouth of the user; the evaluation unit belongs to any one of the content units;
the evaluation module is used for carrying out oral evaluation on the evaluation unit read by the user according to the picked oral pronunciation when the user reads the evaluation unit to obtain an oral evaluation result of the evaluation unit read by the user;
and the color control module is used for controlling the presented evaluation unit to display the color corresponding to the spoken language evaluation result.
7. The spoken language assessment device according to claim 6, further comprising:
and the sliding control module is used for, after the color control module controls the presented evaluation unit to display the color corresponding to the spoken language evaluation result, controlling the evaluation unit which displays that color to slide out of the screen from the specified position according to a slide-out direction preset for the screen.
8. The spoken language assessment device according to claim 7, further comprising:
the triggering module is used for, after the color control module controls the presented evaluation unit to display the color corresponding to the spoken language evaluation result, identifying whether the spoken pronunciation of the user reading the evaluation unit is accurate according to the spoken language evaluation result; and if accurate, triggering the sliding control module to control the evaluation unit which displays the color corresponding to the spoken language evaluation result to slide out of the screen from the specified position according to the slide-out direction preset for the screen;
and the sliding control module is further used for sliding the evaluation unit which displays the color corresponding to the spoken language evaluation result to a selected area of the screen for displaying when the triggering module identifies that the spoken pronunciation of the user reading the evaluation unit is inaccurate.
9. The spoken language assessment device according to claim 8, further comprising:
the detection module is used for detecting whether the evaluation content is related to an object to be unlocked after the user finishes reading the content units in the evaluation content;
the second obtaining module is used for obtaining an unlocking permission threshold configured for the object to be unlocked when the detection module detects that the evaluation content is associated with the object to be unlocked; wherein the unlocking permission threshold is a specified number of content units whose spoken pronunciation is accurate;
the statistical module is used for counting, among the content units, the total number of evaluation units whose spoken pronunciation is accurate;
and the unlocking module is used for comparing whether the total number exceeds the specified number, and unlocking the object to be unlocked if the total number exceeds the specified number.
10. An electronic device comprising the spoken language assessment device according to any one of claims 6 to 9.
11. An electronic device, comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute all or part of the steps of the spoken language assessment method according to any one of claims 1 to 5.
12. A computer-readable storage medium having stored thereon computer instructions which, when executed, cause a computer to perform all or part of the steps of the spoken language assessment method according to any one of claims 1 to 5.
CN202010408128.7A 2020-05-14 2020-05-14 Spoken language evaluation method and device, electronic equipment and storage medium Pending CN111639220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010408128.7A CN111639220A (en) 2020-05-14 2020-05-14 Spoken language evaluation method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111639220A true CN111639220A (en) 2020-09-08

Family

ID=72329352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010408128.7A Pending CN111639220A (en) 2020-05-14 2020-05-14 Spoken language evaluation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111639220A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7051022B1 (en) * 2000-12-19 2006-05-23 Oracle International Corporation Automated extension for generation of cross references in a knowledge base
CN101114943A (en) * 2007-09-14 2008-01-30 中兴通讯股份有限公司 Method for performing optimized exhibition to network management data uploading comparative result
CN106409030A (en) * 2016-12-08 2017-02-15 河南牧业经济学院 Customized foreign spoken language learning system
CN108122561A (en) * 2017-12-19 2018-06-05 广东小天才科技有限公司 Spoken language voice evaluation method based on electronic equipment and electronic equipment
CN109272992A (en) * 2018-11-27 2019-01-25 北京粉笔未来科技有限公司 A kind of spoken language assessment method, device and a kind of device for generating spoken appraisal model
CN110379221A (en) * 2019-08-09 2019-10-25 陕西学前师范学院 A kind of pronunciation of English test and evaluation system


Similar Documents

Publication Publication Date Title
CN108537207B (en) Lip language identification method, device, storage medium and mobile terminal
CN106104569B (en) For establishing the method and apparatus of connection between electronic device
US11138422B2 (en) Posture detection method, apparatus and device, and storage medium
CN108363706A (en) The method and apparatus of human-computer dialogue interaction, the device interacted for human-computer dialogue
CN107360157A (en) A kind of user registering method, device and intelligent air conditioner
CN105224601B (en) A kind of method and apparatus of extracting time information
CN110992989B (en) Voice acquisition method and device and computer readable storage medium
US20170199543A1 (en) Glass-type terminal and method of controling the same
CN101393694A (en) Chinese character pronunciation studying device with pronunciation correcting function of Chinese characters, and method therefor
CN109410984B (en) Reading scoring method and electronic equipment
CN109558788A (en) Silent voice inputs discrimination method, computing device and computer-readable medium
CN104965589A (en) Human living body detection method and device based on human brain intelligence and man-machine interaction
CN112837687A (en) Answering method, answering device, computer equipment and storage medium
CN113327620A (en) Voiceprint recognition method and device
CN113033245A (en) Function adjusting method and device, storage medium and electronic equipment
CN111739534B (en) Processing method and device for assisting speech recognition, electronic equipment and storage medium
CN113822187A (en) Sign language translation, customer service, communication method, device and readable medium
KR101567154B1 (en) Method for processing dialogue based on multiple user and apparatus for performing the same
CN110491384B (en) Voice data processing method and device
CN113409770A (en) Pronunciation feature processing method, pronunciation feature processing device, pronunciation feature processing server and pronunciation feature processing medium
CN111639220A (en) Spoken language evaluation method and device, electronic equipment and storage medium
CN106778622A (en) Recognize method, device and the mobile terminal of color
CN111639567B (en) Interactive display method of three-dimensional model, electronic equipment and storage medium
CN111639227B (en) Spoken language control method of virtual character, electronic equipment and storage medium
CN111639635B (en) Processing method and device for shooting pictures, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200908