CN116403604B - Child reading ability evaluation method and system - Google Patents


Info

Publication number
CN116403604B
CN116403604B (granted publication of application CN202310670058.6A)
Authority
CN
China
Prior art keywords
characters
text
tone
misreading
text characters
Prior art date
Legal status
Active
Application number
CN202310670058.6A
Other languages
Chinese (zh)
Other versions
CN116403604A (en)
Inventor
张辰
张芳
Current Assignee
Beijing Qiqu Everything Technology Co ltd
Original Assignee
Beijing Qiqu Everything Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qiqu Everything Technology Co., Ltd.
Priority to CN202310670058.6A
Publication of CN116403604A
Application granted
Publication of CN116403604B
Legal status: Active

Classifications

    • G10L25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G06F16/3343: Query execution using phonetics
    • G06F16/3344: Query execution using natural language analysis
    • G06Q50/205: Education administration or guidance
    • G10L15/005: Language recognition
    • G10L15/02: Feature extraction for speech recognition; selection of recognition unit
    • G10L15/04: Segmentation; word boundary detection
    • G10L15/26: Speech-to-text systems
    • G10L25/45: Speech or voice analysis characterised by the type of analysis window
    • G10L2015/025: Phonemes, fenemes or fenones as the recognition units
    • Y02D10/00: Energy efficient computing

Abstract

The application provides a child reading ability evaluation method and system. The method comprises: receiving the user's input, displaying a preset reading material corresponding to the child's age, sex and language, and prompting the child to read the material aloud; acquiring an audio recording of the child reading aloud and extracting its characteristic information, which comprises text characters and the tones, speech rate and stutter points corresponding to those characters; comparing the text characters with the text characters of the preset reading material to obtain the text reading accuracy and the text misreading characters; comparing the tones corresponding to the text characters with the tones corresponding to the characters of the preset reading material to obtain the tone accuracy and the tone misreading characters; and synthesizing intelligent test questions from the text misreading characters and the tone misreading characters. The application can evaluate a child's reading ability more accurately, scientifically and objectively.

Description

Child reading ability evaluation method and system
Technical Field
The application relates to the technical field of language ability testing, and in particular to a child reading ability evaluation method and system.
Background
At present, children's reading ability is usually evaluated with paper or electronic test questions, with the accuracy of the child's answers scored and graded manually. Because the test questions carry a certain randomness and contingency, the test result may likewise be somewhat random and cannot fully and accurately reflect the child's true reading level and ability.
Disclosure of Invention
In view of the above, the present application aims to provide a child reading ability evaluation method and system that address these problems in a targeted manner.
Based on the above object, the present application provides a method for evaluating the reading ability of children, comprising:
displaying a user interface, requesting input of the child's age and sex and selection of the reading language;
receiving the user's input, displaying a preset reading material corresponding to the child's age, sex and language, and prompting the child to read the preset reading material aloud;
acquiring an audio recording of the child reading aloud, parsing the recording, and extracting its characteristic information, wherein the characteristic information comprises text characters and the tones, speech rate and stutter points corresponding to the text characters;
comparing the text characters with the text characters of the preset reading material to obtain the text reading accuracy and the text misreading characters;
comparing the tones corresponding to the text characters in the characteristic information with the tones corresponding to the characters of the preset reading material to obtain the tone accuracy and the tone misreading characters;
judging the fluency of the reading from the speech rate and the stutter points;
and synthesizing intelligent test questions from the text misreading characters and the tone misreading characters, prompting the child to answer them, judging from the answers whether the child has truly mastered the misread characters, and eliminating possible misjudgments.
Further, the parsing of the audio recording and the extraction of its characteristic information includes:
performing speech recognition on the audio recording to obtain the text characters;
framing and windowing the audio recording, solving the linear prediction parameters of each speech frame, calculating the gain parameter of each frame to obtain the gain trajectory curve of the recording, and comparing that curve with a standard speech tone curve to determine the tone corresponding to each text character;
identifying from the audio recording a phoneme sequence and the time point of each phoneme, deriving from these a word sequence and the time point of each word, and calculating the speech rate of the recording from the word sequence and word time points;
marking the time point of each character on the audio recording and calculating the time difference between every two adjacent characters; when the difference exceeds a preset threshold, marking the time point of the earlier character as a stutter point, thereby obtaining all stutter points;
and taking the text characters and their corresponding tones, the speech rate and the stutter points as the characteristic information of the audio recording.
Further, comparing the text characters in the characteristic information with the text characters of the preset reading material to obtain the text reading accuracy and the text misreading characters includes:
a first matching step: match the first character of the recognized text against the text characters of the preset reading material; if it appears there, continue matching the string formed by the first and second characters against the preset text, and so on, until no longer match is possible, obtaining the maximum matching length of the recognized text against the preset text and the starting match position within the preset text; if the first character does not appear in the preset text, the starting match position is less than 0, the match fails and the second matching step is executed; if the starting match position is not less than 0, the characters before it are marked as difference points;
a second matching step: match the second character of the recognized text against the preset text; if it appears there, continue matching the string formed by the second and third characters; if it does not, move on and match the third character, and so on, until no further match is possible, obtaining the maximum matching length and the starting match position; if no character of the recognized text appears in the preset text and every starting match position is less than 0, the match fails; if some starting match position is not less than 0, the characters before it are marked as difference points and the comparison ends;
and taking the characters corresponding to the difference points as the text misreading characters, and calculating the percentage of the matched characters (all characters except the text misreading characters) among all characters as the reading accuracy.
Further, comparing the tones corresponding to the text characters with the tones corresponding to the characters of the preset reading material to obtain the tone accuracy and the tone misreading characters includes:
comparing the tone of each text character, one by one, with the tone of the corresponding character of the preset reading material to separate correct tones from incorrect tones;
and marking the characters with incorrect tones as the tone misreading characters, and calculating the proportion of correct tones among all tones as the tone accuracy.
Further, judging the fluency of the reading from the speech rate and the stutter points includes:
training on a preset labeled data set of speech rates, stutter counts and fluency labels to obtain a fluency-detection neural network with a judging function, inputting the speech-rate and stutter-count data into the network, and outputting a fluency detection result.
Further, synthesizing the intelligent test questions from the text misreading characters and the tone misreading characters includes:
inquiring, from a pre-established question database, questions drawn from the same textbook chapters as the text misreading characters and tone misreading characters, and extracting a preset number of them as the intelligent test questions; or,
generating printed text and corresponding handwritten text from the text misreading characters and tone misreading characters, searching a preset question database for questions containing the printed text, and replacing the printed text in those questions with the corresponding handwritten text to generate the intelligent test questions; or,
preprocessing the text misreading characters and tone misreading characters to obtain wrong-question labels; associating the wrong-question labels with a pre-constructed wrong-question set, and obtaining from it the category identifiers corresponding to the misread characters; and, following an Ebbinghaus forgetting curve, periodically revisiting the wrong-question set associated with the labels at each forgetting interval and randomly extracting at least one question as an intelligent test question.
Further, judging from the answers whether the child has truly mastered the misread characters includes:
if the answer is correct, judging that the child has actually mastered the misread character, and recalculating the text reading accuracy and the tone accuracy accordingly;
if the answer is wrong, judging that the child has not mastered the misread character, and keeping the original text reading accuracy and tone accuracy.
Based on the above object, the present application further provides a child reading ability evaluation system, comprising:
an interface display module for displaying a user interface, requesting input of the child's age and sex and selection of the reading language;
an input module for receiving the user's input, displaying a preset reading material corresponding to the child's age, sex and language, and prompting the child to read it aloud;
an audio parsing module for acquiring an audio recording of the child reading aloud, parsing it, and extracting its characteristic information, wherein the characteristic information comprises text characters and the tones, speech rate and stutter points corresponding to the text characters;
a text comparison module for comparing the text characters with the text characters of the preset reading material to obtain the text reading accuracy and the text misreading characters;
a tone comparison module for comparing the tones corresponding to the text characters in the characteristic information with the tones corresponding to the characters of the preset reading material to obtain the tone accuracy and the tone misreading characters;
a fluency judging module for judging the fluency of the reading from the speech rate and the stutter points;
and a test question synthesis module for synthesizing intelligent test questions from the text misreading characters and the tone misreading characters, prompting the child to answer them, judging from the answers whether the child has truly mastered the misread characters, and eliminating possible misjudgments.
Overall, the advantages of the application and the experience it brings to the user are: personalized, targeted child reading ability evaluation can be conducted for different user needs, fostering the child's interest in and concentration on reading; characteristic speech analysis converts, decomposes and computes over the voice to obtain the fluency of the child's reading; and intelligent test questions are synthesized for the places the child is suspected of misreading, re-checking whether the child has truly mastered the misread vocabulary and eliminating possible misjudgments, so that the child's grasp and understanding of the characters is examined on both the read-aloud and silent-reading levels, allowing the child's reading ability to be evaluated more accurately, scientifically and objectively.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the disclosure and are not therefore to be considered limiting of its scope.
FIG. 1 shows a flow chart of a child reading ability evaluation method according to an embodiment of the present application.
FIG. 2 shows a schematic diagram of a user input interface according to an embodiment of the application.
FIG. 3 shows a schematic diagram of a display reading material and recording interface according to an embodiment of the application.
Fig. 4 is a diagram showing the constitution of a child reading ability evaluation system according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a storage medium according to an embodiment of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
FIG. 1 shows a flow chart of a child reading ability evaluation method according to an embodiment of the present application. As shown in fig. 1, the child reading ability evaluation method comprises the following steps:
S1, displaying a user interface, requesting input of the child's age and sex and selection of the reading language. As shown in fig. 2, the user interface may be displayed on an electronic device such as a mobile phone, tablet or computer, so that the user can select and/or input the corresponding options. Personalized, targeted child reading ability evaluation can thus be conducted for different user needs, fostering the child's interest in and concentration on reading.
S2, receiving the user's input, displaying a preset reading material corresponding to the child's age, sex and language, and prompting and guiding the child to read it aloud. As shown in fig. 3, the reading material is displayed, and the user may start recording by tapping the recording button, which activates the underlying recorder. The reading materials may cover different languages, such as English and Chinese, and should be of moderate difficulty for the child's age and sex; they can be set, selected and adjusted according to the national primary and secondary school curriculum, or designed as a preset question bank covering characters, words, sentences, paragraphs and articles at the reading difficulty appropriate to each age group.
S3, acquiring an audio recording of the child reading aloud, parsing the recording, and extracting its characteristic information, wherein the characteristic information comprises text characters and the tones, speech rate and stutter points corresponding to the text characters;
S4, comparing the text characters with the text characters of the preset reading material to obtain the text reading accuracy and the text misreading characters;
S5, comparing the tones corresponding to the text characters in the characteristic information with the tones corresponding to the characters of the preset reading material to obtain the tone accuracy and the tone misreading characters;
S6, judging the fluency of the reading from the speech rate and the stutter points;
S7, synthesizing intelligent test questions from the text misreading characters and the tone misreading characters, prompting the child to answer them, judging from the answers whether the child has truly mastered the misread characters, and eliminating possible misjudgments.
The implementation of the application is explained below through the concrete implementation of steps S3 to S7.
Further, in step S3, it includes:
performing speech recognition on the audio recording to obtain the text characters;
framing and windowing the audio recording, solving the linear prediction parameters of each speech frame, calculating the gain parameter of each frame to obtain the gain trajectory curve of the recording, and comparing that curve with a standard speech tone curve to determine the tone corresponding to each text character;
identifying from the audio recording a phoneme sequence and the time point of each phoneme, deriving from these a word sequence and the time point of each word, and calculating the speech rate of the recording from the word sequence and word time points;
marking the time point of each character on the audio recording and calculating the time difference between every two adjacent characters; when the difference exceeds a preset threshold, marking the time point of the earlier character as a stutter point, thereby obtaining all stutter points;
and taking the text characters and their corresponding tones, the speech rate and the stutter points as the characteristic information of the audio recording.
In step S3, through characteristic speech analysis, the features of the child's reading are obtained by converting, decomposing and computing over the voice, laying the technical groundwork for the subsequent evaluation of reading fluency.
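As a concrete illustration of the timing analysis in step S3, a minimal sketch follows; the function name, the data layout and the 1.0 s threshold are our assumptions, not values taken from the patent:

```python
# Sketch of step S3's timing analysis: given the time point (in seconds) at
# which each recognized character begins, compute the speech rate and mark
# "stutter points" where the gap between consecutive characters exceeds a
# preset threshold (1.0 s is an illustrative assumption).

def timing_features(char_times, threshold=1.0):
    """char_times: list of (char, start_time) pairs in reading order."""
    if len(char_times) < 2:
        return 0.0, []
    duration = char_times[-1][1] - char_times[0][1]
    speech_rate = (len(char_times) - 1) / duration if duration > 0 else 0.0
    stutter_points = []
    for (prev_c, prev_t), (_, cur_t) in zip(char_times, char_times[1:]):
        if cur_t - prev_t > threshold:
            # mark the time point of the earlier character as a stutter point
            stutter_points.append((prev_c, prev_t))
    return speech_rate, stutter_points
```

For example, characters at 0.0 s, 0.5 s and 2.0 s yield a rate of 1.0 character/s and one stutter point after the second character.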
Further, in step S4, it includes:
a first matching step: match the first character of the recognized text against the text characters of the preset reading material; if it appears there, continue matching the string formed by the first and second characters against the preset text, and so on, until no longer match is possible, obtaining the maximum matching length of the recognized text against the preset text and the starting match position within the preset text; if the first character does not appear in the preset text, the starting match position is less than 0, the match fails and the second matching step is executed; if the starting match position is not less than 0, the characters before it are marked as difference points;
a second matching step: match the second character of the recognized text against the preset text; if it appears there, continue matching the string formed by the second and third characters; if it does not, move on and match the third character, and so on, until no further match is possible, obtaining the maximum matching length and the starting match position; if no character of the recognized text appears in the preset text and every starting match position is less than 0, the match fails; if some starting match position is not less than 0, the characters before it are marked as difference points and the comparison ends;
and taking the characters corresponding to the difference points as the text misreading characters, and calculating the percentage of the matched characters (all characters except the text misreading characters) among all characters as the reading accuracy.
The character matching in step S4 quickly compares the characters the child read with the standard characters of the reading material and determines which characters were misread. However, Chinese contains a great many polyphonic characters, homophones and the four tones, so converting speech to text and matching it may carry a certain misjudgment rate: a character the child read correctly may be judged misread, or a misread character judged correct. The application therefore designs the tone comparison of step S5, and step S7 further confirms whether the child truly does not know a character by re-testing, in question form, the characters flagged as wrong in the tone comparison. The child's reading ability is thus evaluated more accurately.
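The two matching steps of S4 amount to a greedy longest-substring scan over the recognized text. The sketch below is our simplification of that procedure (the function name and the substring test are assumptions, not the patent's exact algorithm):

```python
# Greedy matching sketch for step S4: starting from each character of the
# recognized text, extend the longest substring that occurs in the reference
# text; characters that never participate in a match become the "difference
# points" (text misreading characters).

def match_misread(recognized, reference):
    matched = [False] * len(recognized)
    i = 0
    while i < len(recognized):
        # extend the match as far as possible from position i
        length = 0
        while (i + length < len(recognized)
               and recognized[i:i + length + 1] in reference):
            length += 1
        if length > 0:
            for k in range(i, i + length):
                matched[k] = True
            i += length
        else:
            i += 1  # character not found anywhere: a difference point
    misread = [c for c, m in zip(recognized, matched) if not m]
    accuracy = sum(matched) / len(recognized) if recognized else 1.0
    return accuracy, misread
```

So `match_misread("abXcd", "abcd")` flags `X` as misread and reports an accuracy of 0.8.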
Further, in step S5, it includes:
comparing the tone of each text character, one by one, with the tone of the corresponding character of the preset reading material to separate correct tones from incorrect tones;
and marking the characters with incorrect tones as the tone misreading characters, and calculating the proportion of correct tones among all tones as the tone accuracy.
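The tone comparison in step S5 reduces to a per-character equality check. A minimal sketch, assuming tones are encoded as the Mandarin tone numbers 1-4 and already aligned with the reference text:

```python
# Sketch of step S5: character-by-character tone comparison, yielding the
# tone misreading characters and the tone accuracy.

def tone_compare(read_tones, ref_tones, chars):
    """read_tones/ref_tones: aligned tone numbers; chars: the characters."""
    wrong = [c for c, r, e in zip(chars, read_tones, ref_tones) if r != e]
    correct = sum(1 for r, e in zip(read_tones, ref_tones) if r == e)
    accuracy = correct / len(ref_tones) if ref_tones else 1.0
    return accuracy, wrong
```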
Further, in step S6, it includes:
training on a preset labeled data set of speech rates, stutter counts and fluency labels to obtain a fluency-detection neural network with a judging function, inputting the speech-rate and stutter-count data into the network, and outputting a fluency detection result. Specifically, the manner of determining fluency is not limited: in one possible implementation, a neural network with a judging function, such as a fluency-detection neural network, is obtained by training, and feeding it the speech-rate and stutter-point data yields the fluency detection result. The concrete form of the network can be chosen flexibly according to the actual situation, and the embodiments of the disclosure are not limited in this respect.
Through step S6, the application judges the fluency of the child's reading by combining stutter points with speech rate. Reading fluency is an important measure in practical education and reading ability assessment; from the objective speech rate and stuttering behavior, the application derives a quantified reading fluency that matches reality, so reading ability can be evaluated more accurately and objectively.
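Since the patent leaves the network form open, the sketch below stands in for the fluency-detection network with the simplest possible learner: a hand-rolled logistic model over the two features (speech rate, stutter count), trained by gradient descent on a toy labeled set. All weights, data and names are illustrative assumptions:

```python
# Stand-in for step S6's fluency-detection network: a two-feature logistic
# model trained on a toy data set where label 1 means "fluent".

import math

def train_fluency(data, epochs=3000, lr=0.3):
    """data: list of ((speech_rate, stutter_count), label), label 0 or 1."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (rate, stutters), y in data:
            p = 1.0 / (1.0 + math.exp(-(w0 * rate + w1 * stutters + b)))
            g = p - y  # gradient of the log loss w.r.t. the logit
            w0 -= lr * g * rate
            w1 -= lr * g * stutters
            b -= lr * g
    return lambda rate, stutters: 1.0 / (
        1.0 + math.exp(-(w0 * rate + w1 * stutters + b)))

# toy training set: fast, uninterrupted reading is fluent (label 1)
data = [((3.0, 0), 1), ((2.5, 1), 1), ((0.8, 5), 0), ((1.0, 4), 0)]
predict = train_fluency(data)
```

A real system would replace this with the trained neural network the patent describes; the point is only the input/output shape: (speech rate, stutter count) in, a fluency score out.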
Further, in step S7, it includes:
inquiring, from a pre-established question database and according to the text misreading characters and the tone misreading characters, the questions whose source lesson chapters contain those characters, and extracting a preset number of questions from them as the intelligent test questions; or alternatively,
generating printed text and corresponding handwritten text according to the text misreading characters and the tone misreading characters, searching a preset question database for questions containing the printed text, and replacing the printed text in those questions with the corresponding handwritten text to generate the intelligent test questions; or alternatively,
preprocessing the text misreading characters and the tone misreading characters to obtain a wrong-question label; associating the wrong-question label with a pre-constructed wrong-question set, and acquiring from the wrong-question set the category identifiers corresponding to the text misreading characters and the tone misreading characters; and, according to an Ebbinghaus forgetting curve, periodically retrieving the wrong-question set associated with the wrong-question label at each forgetting period and randomly extracting at least one test question as the intelligent test question.
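The forgetting-curve scheduling can be sketched as follows; the review intervals are illustrative values loosely following the Ebbinghaus curve, not periods specified by the patent:

```python
from datetime import date, timedelta

# Review intervals (days) loosely following the Ebbinghaus forgetting
# curve; the exact periods are illustrative, not from the patent.
FORGETTING_PERIODS = [1, 2, 4, 7, 15, 30]

def review_dates(first_miss, periods=FORGETTING_PERIODS):
    """Dates on which questions tagged with a wrong-question label
    should be re-drawn from the wrong-question set."""
    return [first_miss + timedelta(days=d) for d in periods]

dates = review_dates(date(2023, 6, 7))
```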
Further, in step S7, it includes:
if the answer result is correct, judging that the child has actually mastered the misread characters, and updating the text reading accuracy and the tone accuracy accordingly;
if the answer result is wrong, judging that the child has not actually mastered the misread characters, and keeping the original text reading accuracy and tone accuracy.
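A small sketch of this re-test bookkeeping (the character sets and names are hypothetical): misread characters the child answers correctly are dropped before the accuracy is recomputed, while a wrong answer leaves the accuracy unchanged:

```python
def update_accuracy(total_chars, misread_chars, confirmed_known):
    """Recompute the reading accuracy after the step-S7 re-test:
    misread characters the child answered correctly are removed from
    the misread set; the others remain marked as misread."""
    remaining = [c for c in misread_chars if c not in confirmed_known]
    accuracy = (total_chars - len(remaining)) / total_chars
    return accuracy, remaining

# Of 100 characters, two were flagged as misread; the re-test shows
# the child actually knows one of them.
acc, remaining = update_accuracy(100, ["银", "行"], {"银"})
```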
In step S7, intelligent test questions are synthesized for the characters the child is suspected of misreading, re-checking whether the child has really mastered them and eliminating possible misjudgments. The child's grasp of the words is thus examined on two levels, reading aloud and silent recognition, so that reading ability can be evaluated more scientifically, objectively and accurately.
An embodiment of the present application provides a child reading ability evaluation system, which is configured to execute the child reading ability evaluation method described in the foregoing embodiment, as shown in fig. 4, and the system includes:
the interface display module 401 is configured to display a user interface, request input of the child's age and gender, and select the reading language;
the input module 402 is configured to receive the user's input, display the preset reading material corresponding to the child's age, gender and language, and prompt the child to read the preset reading material aloud;
the audio parsing module 403 is configured to acquire an audio recording of the child reading aloud, parse the recording, and extract characteristic information of the recording, where the characteristic information includes text characters and the tones, speech speed and stutter points corresponding to the text characters;
a text comparison module 404, configured to obtain the text reading accuracy and the text misreading characters by comparing the text characters with the characters of the preset reading material;
a tone comparison module 405, configured to obtain the tone accuracy and the tone misreading characters by comparing the tones corresponding to the text characters in the characteristic information with the tones corresponding to the characters of the preset reading material;
a fluency judging module 406, configured to judge the degree of reading fluency according to the speech speed and the stutter points;
the test question synthesis module 407 is configured to perform intelligent test question synthesis according to the text misreading characters and the tone misreading characters, prompt the child to answer again, and judge according to the answer result whether the child has really mastered the misread characters, so as to eliminate possible misjudgments.
Since the child reading ability evaluation system provided by the embodiments of the application shares the same inventive concept as the child reading ability evaluation method provided above, it has the same beneficial effects as the method adopted, run or implemented by the application program it stores.
The embodiments of the application also provide an electronic device corresponding to the child reading ability evaluation method provided in the foregoing embodiments, so as to execute that method; the embodiments of the application are not limited in this respect.
Referring to fig. 5, a schematic diagram of an electronic device according to some embodiments of the present application is shown. As shown in fig. 5, the electronic device 20 includes: a processor 200, a memory 201, a bus 202 and a communication interface 203, the processor 200, the communication interface 203 and the memory 201 being connected by the bus 202; the memory 201 stores a computer program that can be run on the processor 200, and the processor 200 executes the child reading ability evaluation method provided in any one of the foregoing embodiments of the present application when running the computer program.
The memory 201 may include a high-speed random access memory (RAM), and may further include non-volatile memory, such as at least one disk storage. The communication connection between the system network element and at least one other network element is implemented through at least one communication interface 203 (wired or wireless); the internet, a wide area network, a local area network, a metropolitan area network, etc. may be used.
The bus 202 may be an ISA bus, a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. The memory 201 is configured to store a program; the processor 200 executes the program after receiving an execution instruction, and the child reading ability evaluation method disclosed in any of the foregoing embodiments may be applied to, or implemented by, the processor 200.
The processor 200 may be an integrated circuit chip with signal-processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 200 or by instructions in the form of software. The processor 200 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, capable of implementing or executing the methods, steps and logic blocks disclosed in the embodiments of the application. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in the embodiments of the application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory 201; the processor 200 reads the information in the memory 201 and completes the steps of the above method in combination with its hardware.
Since the electronic device provided by the embodiments of the application shares the same inventive concept as the child reading ability evaluation method, it has the same beneficial effects as the method it adopts, runs or implements.
The embodiments of the present application further provide a computer readable storage medium corresponding to the child reading ability evaluation method provided in the foregoing embodiments. Referring to fig. 6, the computer readable storage medium is shown as an optical disc 30 on which a computer program (i.e. a program product) is stored; when executed by a processor, the computer program performs the child reading ability evaluation method provided in any of the foregoing embodiments.
It should be noted that examples of the computer readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical or magnetic storage medium, which will not be described in detail herein.
Since the computer readable storage medium provided by the above embodiments shares the same inventive concept as the child reading ability evaluation method, it has the same beneficial effects as the method adopted, run or implemented by the application program stored on it.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein, and the structure required to construct such a system is apparent from the description above. In addition, the present application is not directed to any particular programming language; it will be appreciated that the teachings of the application described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be construed as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in a virtual machine creation system according to embodiments of the application may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present application can also be implemented as an apparatus or system program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present application may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that various changes and substitutions are possible within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A method for evaluating the reading ability of a child, comprising:
displaying a user interface, requesting input of the child's age and gender, and selecting the reading language;
receiving the user's input, displaying the preset reading material corresponding to the child's age, gender and language, and prompting the child to read the preset reading material aloud;
acquiring an audio recording of the child reading aloud, parsing the recording, and extracting characteristic information of the recording, wherein the characteristic information comprises text characters and the tones, speech speed and stutter points corresponding to the text characters;
comparing the text characters with the characters of the preset reading material to obtain the text reading accuracy and the text misreading characters, comprising: a first matching step: matching the first character of the text characters in the characteristic information with the characters of the preset reading material; if the first character appears in the characters of the preset reading material, continuing to match the string formed by the first character and the second character with the characters of the preset reading material, and so on, until the text characters in the characteristic information can no longer be matched, thereby obtaining the maximum matching length of the text characters in the characteristic information against the characters of the preset reading material and the starting matching position within the characters of the preset reading material; if the first character does not appear in the characters of the preset reading material, the starting matching position is less than 0, the matching fails, and the second matching step is executed; if the starting matching position is not less than 0, the characters before the starting matching position are set as difference points; a second matching step: continuing to match the second character of the text characters in the characteristic information with the characters of the preset reading material; if the second character appears in the characters of the preset reading material, continuing to match the string formed by the second character and the third character with the characters of the preset reading material; if the second character does not appear in the characters of the preset reading material, continuing to match the third character with the characters of the preset reading material, and so on, until the text characters can no longer be matched, thereby obtaining the maximum matching length of the text characters in the characteristic information against the characters of the preset reading material and the starting matching position; if none of the text characters in the characteristic information appears in the characters of the preset reading material and the starting matching position is less than 0, the matching fails; if a starting matching position is not less than 0, the characters before that starting matching position are set as difference points, and the comparison ends; the characters corresponding to the difference points are taken as the text misreading characters, and the percentage of the matched characters, excluding the text misreading characters, among all the characters is calculated as the reading accuracy;
comparing the tone corresponding to the text characters in the characteristic information with the tone corresponding to the characters of the preset reading material to obtain tone accuracy and tone misreading characters;
judging the degree of reading fluency through the speech speed and the stutter points;
performing intelligent test question synthesis according to the text misreading characters and the tone misreading characters, prompting the child to answer again, judging according to the answer result whether the child has really mastered the misread characters, and eliminating possible misjudgments;
wherein the performing of intelligent test question synthesis according to the text misreading characters and the tone misreading characters comprises:
preprocessing the text misreading characters and the tone misreading characters to obtain a wrong-question label; associating the wrong-question label with a pre-constructed wrong-question set, and acquiring from the wrong-question set the category identifiers corresponding to the text misreading characters and the tone misreading characters; and, according to an Ebbinghaus forgetting curve, periodically retrieving the wrong-question set associated with the wrong-question label at each forgetting period and randomly extracting at least one test question as the intelligent test question.
2. The method of claim 1, wherein the parsing of the audio recording and extraction of its characteristic information comprises:
performing voice recognition on the audio record to obtain text characters;
framing and windowing the audio recording, solving the linear prediction parameters of each speech frame, calculating the gain parameter of each frame to obtain a gain trajectory curve of the recording, and comparing the gain trajectory curve with standard tone curves to determine the tones corresponding to the text characters;
identifying a phoneme sequence and a time division point corresponding to each phoneme from the audio record, identifying a word sequence and a time division point corresponding to each word according to the identified phoneme sequence and the time division point corresponding to each phoneme, and calculating the speech rate of the audio record according to the identified word sequence and the time division point corresponding to each word;
marking the time division point of each character on the audio recording, calculating the time difference between every two characters, and marking the time division point of the preceding character as a stutter point when the time difference exceeds a preset threshold, thereby obtaining all stutter points;
and taking the text characters and the tones, speech speed and stutter points corresponding to the text characters as the characteristic information of the audio recording.
3. The method of claim 2, wherein the obtaining of the tone accuracy and the tone misreading characters by comparing the tones corresponding to the text characters with the tones corresponding to the characters of the preset reading material comprises:
comparing the tone corresponding to the text word with the tone corresponding to the word of the preset reading material one by one to obtain a correct tone and an incorrect tone;
and marking the characters corresponding to the wrong tone as the tone misreading characters, calculating the proportion of the correct tone to all the tones, and marking the proportion as the tone accuracy.
4. The method of claim 2, wherein the judging of the degree of reading fluency through the speech speed and the stutter points comprises:
training on a preset data set labeled with speech speed, number of stutter points and fluency, to obtain a fluency detection neural network with a judging function, inputting the speech speed and stutter point data into the fluency detection neural network, and outputting a fluency detection result.
5. The method of claim 1 or 2, wherein the judging, according to the answer result, of whether the child has really mastered the misread characters comprises:
if the answer result is correct, judging that the child has actually mastered the misread characters, and updating the text reading accuracy and the tone accuracy accordingly;
if the answer result is wrong, judging that the child has not actually mastered the misread characters, and keeping the original text reading accuracy and tone accuracy.
6. A child reading ability evaluation system, comprising:
the interface display module is used for displaying a user interface, requesting input of the child's age and gender, and selecting the reading language;
the input module is used for receiving the user's input, displaying the preset reading material corresponding to the child's age, gender and language, and prompting the child to read the preset reading material aloud;
the audio analysis module is used for acquiring an audio recording of the child reading aloud, parsing the recording, and extracting characteristic information of the recording, wherein the characteristic information includes text characters and the tones, speech speed and stutter points corresponding to the text characters;
the text comparison module is used for obtaining the text reading accuracy and the text misreading characters by comparing the text characters with the characters of the preset reading material, including: a first matching step: matching the first character of the text characters in the characteristic information with the characters of the preset reading material; if the first character appears in the characters of the preset reading material, continuing to match the string formed by the first character and the second character with the characters of the preset reading material, and so on, until the text characters in the characteristic information can no longer be matched, thereby obtaining the maximum matching length of the text characters in the characteristic information against the characters of the preset reading material and the starting matching position within the characters of the preset reading material; if the first character does not appear in the characters of the preset reading material, the starting matching position is less than 0, the matching fails, and the second matching step is executed; if the starting matching position is not less than 0, the characters before the starting matching position are set as difference points; a second matching step: continuing to match the second character of the text characters in the characteristic information with the characters of the preset reading material; if the second character appears in the characters of the preset reading material, continuing to match the string formed by the second character and the third character with the characters of the preset reading material; if the second character does not appear in the characters of the preset reading material, continuing to match the third character with the characters of the preset reading material, and so on, until the text characters can no longer be matched, thereby obtaining the maximum matching length of the text characters in the characteristic information against the characters of the preset reading material and the starting matching position; if none of the text characters in the characteristic information appears in the characters of the preset reading material and the starting matching position is less than 0, the matching fails; if a starting matching position is not less than 0, the characters before that starting matching position are set as difference points, and the comparison ends; the characters corresponding to the difference points are taken as the text misreading characters, and the percentage of the matched characters, excluding the text misreading characters, among all the characters is calculated as the reading accuracy;
the tone comparison module is used for comparing the tone corresponding to the text characters in the characteristic information with the tone corresponding to the characters of the preset reading material to obtain tone accuracy and tone misreading characters;
the fluency judging module is used for judging the degree of reading fluency through the speech speed and the stutter points;
the test question synthesis module is used for performing intelligent test question synthesis according to the text misreading characters and the tone misreading characters, prompting the child to answer again, judging according to the answer result whether the child has really mastered the misread characters, and eliminating possible misjudgments; wherein the performing of intelligent test question synthesis according to the text misreading characters and the tone misreading characters comprises: preprocessing the text misreading characters and the tone misreading characters to obtain a wrong-question label; associating the wrong-question label with a pre-constructed wrong-question set, and acquiring from the wrong-question set the category identifiers corresponding to the text misreading characters and the tone misreading characters; and, according to an Ebbinghaus forgetting curve, periodically retrieving the wrong-question set associated with the wrong-question label at each forgetting period and randomly extracting at least one test question as the intelligent test question.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor runs the computer program to implement the method of any one of claims 1-5.
8. A computer readable storage medium having stored thereon a computer program, wherein the program is executed by a processor to implement the method of any of claims 1-5.
CN202310670058.6A 2023-06-07 2023-06-07 Child reading ability evaluation method and system Active CN116403604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310670058.6A CN116403604B (en) 2023-06-07 2023-06-07 Child reading ability evaluation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310670058.6A CN116403604B (en) 2023-06-07 2023-06-07 Child reading ability evaluation method and system

Publications (2)

Publication Number Publication Date
CN116403604A CN116403604A (en) 2023-07-07
CN116403604B true CN116403604B (en) 2023-11-03

Family

ID=87010878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310670058.6A Active CN116403604B (en) 2023-06-07 2023-06-07 Child reading ability evaluation method and system

Country Status (1)

Country Link
CN (1) CN116403604B (en)

Citations (19)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154735A (en) * 2016-12-06 2018-06-12 爱天教育科技(北京)有限公司 Spoken English assessment method and device
CN107808674B (en) * 2017-09-28 2020-11-03 上海流利说信息技术有限公司 Method, medium and device for evaluating voice and electronic equipment
CN112151014B (en) * 2020-11-04 2023-07-21 平安科技(深圳)有限公司 Speech recognition result evaluation method, device, equipment and storage medium

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020030483A (en) * 2000-10-18 2002-04-25 방정선 A communications network system for language study and test
CN101739868A (en) * 2008-11-19 2010-06-16 中国科学院自动化研究所 Automatic evaluation and diagnosis method of text reading level for oral test
CN102737012A (en) * 2011-04-06 2012-10-17 鸿富锦精密工业(深圳)有限公司 Text information comparison method and system
CN102354495A (en) * 2011-08-31 2012-02-15 中国科学院自动化研究所 Testing method and system for semi-open spoken language examination questions
CN104244812A (en) * 2012-02-07 2014-12-24 远端临场医疗公司 Medical kiosk and method of use
JP2016045467A (en) * 2014-08-26 2016-04-04 日本放送協会 Utterance evaluation device, utterance evaluation method and program
CN104859345A (en) * 2015-04-02 2015-08-26 秦健 Palette
CN106856095A (en) * 2015-12-09 2017-06-16 中国科学院声学研究所 Speech quality evaluation system for pinyin spelling
CN107808658A (en) * 2016-09-06 2018-03-16 深圳声联网科技有限公司 Real-time infant audio-stream behavior detection method for home environments
CN109545244A (en) * 2019-01-29 2019-03-29 北京猎户星空科技有限公司 Speech evaluating method, device, electronic equipment and storage medium
CN112184503A (en) * 2020-09-21 2021-01-05 书丸子(北京)科技有限公司 Method and system for scoring multiple abilities of children for preschool education quality evaluation
CN113409770A (en) * 2020-11-25 2021-09-17 腾讯科技(深圳)有限公司 Pronunciation feature processing method, device, server, and medium
CN112908360A (en) * 2021-02-02 2021-06-04 早道(大连)教育科技有限公司 Online spoken language pronunciation evaluation method and device and storage medium
CN113486970A (en) * 2021-07-15 2021-10-08 北京全未来教育科技有限公司 Reading capability evaluation method and device
CN113742453A (en) * 2021-09-08 2021-12-03 四川盘古牛教育咨询有限公司 Artificial intelligence wrong question correlation method and system
CN114943032A (en) * 2022-05-17 2022-08-26 咪咕数字传媒有限公司 Information processing method, information processing device, electronic equipment and computer readable storage medium
CN115081912A (en) * 2022-06-28 2022-09-20 北京奇趣万物科技有限公司 Hierarchical reading method and system for helping children enhance reading ability and increase reading interest
CN115440193A (en) * 2022-09-06 2022-12-06 苏州智言信息科技有限公司 Pronunciation evaluation scoring method based on deep learning
CN115295020A (en) * 2022-09-14 2022-11-04 科大讯飞股份有限公司 Voice evaluation method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Survey on Machine Reading Comprehension Based on Neural Networks; Gu Yingjie; Gui Xiaolin; Li Defu; Shen Yi; Liao Dong; Journal of Software (07); full text *

Also Published As

Publication number Publication date
CN116403604A (en) 2023-07-07

Similar Documents

Publication Publication Date Title
Jin et al. Distinguishing features in scoring L2 Chinese speaking performance: How do they work?
Yoon et al. Quantifying disciplinary voices: An automated approach to interactional metadiscourse in successful student writing
US10770062B2 (en) Adjusting a ranking of information content of a software application based on feedback from a user
US9818406B1 (en) Adjusting user experience based on paralinguistic information
Crowther et al. Methodological synthesis of cluster analysis in second language research
Schonlau et al. Automatic classification of open-ended questions: check-all-that-apply questions
US11182605B2 (en) Search device, search method, search program, and recording medium
CN110659352B (en) Test question examination point identification method and system
Sarandi Reexamining elicited imitation as a measure of implicit grammatical knowledge and beyond…?
Yan et al. Dimensionality of speech fluency: Examining the relationships among complexity, accuracy, and fluency (CAF) features of speaking performances on the Aptis test
CN117480543A (en) System and method for automatically generating paragraph-based items for testing or evaluation
Camilleri et al. Dynamic assessment of word learning skills of pre-school children with primary language impairment
Guerra et al. Representations of the English as a Lingua Franca framework: Identifying ELF-aware activities in Portuguese and Turkish coursebooks
Ellis Meta-analysis in second language acquisition research: A critical appraisal
Carbajal et al. A meta‐analysis of infants’ word‐form recognition
CN111753553A (en) Statement type identification method and device, electronic equipment and storage medium
CN116385230A (en) Child reading ability evaluation method and system
Murphy Odo A meta-analysis of the effect of phonological awareness and/or phonics instruction on word and pseudo word reading of English as an L2
Xu et al. Assessing L2 English speaking using automated scoring technology: examining automarker reliability
Stewart et al. Establishing meaning recall and meaning recognition vocabulary knowledge as distinct psychometric constructs in relation to reading proficiency
CN116401466B (en) Book classification recommendation method and system
CN116403604B (en) Child reading ability evaluation method and system
CN112700763A (en) Voice annotation quality evaluation method, device, equipment and storage medium
Alahmadi et al. Evaluation of image accessibility for visually impaired users
Freedman et al. Using whole-word production measures to determine the influence of phonotactic probability and neighborhood density on bilingual speech production

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant