CN113870635A - Voice answering method and device - Google Patents
- Publication number: CN113870635A
- Application number: CN202111165226.3A
- Authority
- CN
- China
- Prior art keywords
- answer
- question
- information
- answered
- answering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B7/00—Electrically-operated teaching apparatus or devices working with questions and answers
- G09B7/02—Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
- G09B7/04—Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student characterised by modifying the teaching programme in response to a wrong answer, e.g. repeating the question, supplying a further explanation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3343—Query execution using phonetics
Abstract
The application provides a voice answering method and apparatus. The method comprises: S10, receiving an answer instruction, extracting questions to be answered from a target voice question library based on the answer instruction, and generating a question set; S20, sequentially playing the questions to be answered in the question set; S30, continuously collecting user voice data; S40, sequentially recognizing and displaying the current answer information in the user voice data, returning to step S30 if no answer information is acquired, and proceeding to step S50 if answer information is acquired; S50, judging whether the current answer information is the correct answer to the question to be answered; if so, executing step S51, and if not, executing step S52; S51, generating a correct-answer prompt and returning to step S20; S52, generating a wrong-answer prompt and returning to step S40. The method and apparatus are simple to operate, convenient to use, and widely applicable.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for answering questions with voice, a computing device, and a computer-readable storage medium.
Background
With the rapid development of computer technology and its deepening penetration into education informatization, tool-based education products are increasingly accepted and used by parents and students. These products mainly provide technical support and help to students, parents, and teachers in education and tutoring.
Existing tool-based education products offer various voice question-reading functions but do not implement voice answering: the user must input answers manually. This is unfriendly to preschool users. First, preschool children must already recognize and be able to handwrite numbers and symbols in order to practice; second, watching a mobile phone screen for long periods is harmful to the development of children's eyesight.
Disclosure of Invention
In view of the above, embodiments of the present application provide a voice answering method and apparatus, a computing device, and a computer-readable storage medium, to address the technical defects in the prior art.
The embodiment of the application discloses a voice answering method, which comprises the following steps:
S10, receiving an answer instruction, extracting questions to be answered from a target voice question library based on the answer instruction, and generating a question set;
S20, sequentially playing the questions to be answered in the question set;
S30, continuously collecting user voice data;
S40, sequentially recognizing and displaying the current answer information in the user voice data, executing step S30 if no answer information is acquired, and executing step S50 if answer information is acquired;
S50, judging whether the current answer information is the correct answer to the question to be answered; if so, executing step S51, and if not, executing step S52;
S51, generating a correct-answer prompt, and continuing with step S20;
S52, generating a wrong-answer prompt, and continuing with step S40.
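The control flow of steps S10 through S52 can be sketched as follows. This is a minimal illustration only, not the disclosed implementation: the `play`, `listen`, and `show` callables stand in for the voice-playing, voice-collection/recognition, and display components, and the dictionary shapes are assumptions.

```python
import random

def run_answer_session(question_library, num_questions, play, listen, show):
    """Illustrative sketch of steps S10-S52 as a control loop."""
    # S10: extract questions to be answered and generate a question set
    question_set = random.sample(question_library, num_questions)
    for question in question_set:          # S20: play each question in turn
        play(question["audio"])
        answered = False
        while not answered:
            # S30/S40: collect user voice data and scan it for answer info
            # (a real implementation would also apply the timing thresholds
            # of steps S21-S22 so listening eventually stops)
            for candidate in listen():
                show(candidate)            # display every recognized answer
                if candidate == question["answer"]:  # S50: judge correctness
                    play("correct")        # S51: correct-answer prompt
                    answered = True
                    break
                play("wrong")              # S52: wrong-answer prompt
```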
Further, before the step S10, the method further includes:
S01, acquiring original voice data and at least one text question library carrying category information;
S02, synthesizing a corresponding voice question library carrying the category information based on the original voice data and the text question library.
Further, the step S10 includes:
S11, receiving an answer instruction carrying category information and question quantity information;
S12, matching the voice question library having the same category information as that carried by the answer instruction, as the target voice question library;
S13, extracting a target number of questions to be answered from the target voice question library based on the question quantity information carried by the answer instruction, and generating a question set.
Further, the step S40 includes:
S41, processing the user voice data to obtain at least one word unit;
S42, judging whether the current word unit is answer information;
if the current word unit is answer information, displaying the current answer information, and continuing with step S50;
if the word unit is not answer information, continuing with step S43;
S43, judging whether the word unit is the last word unit;
if yes, continuing with step S30;
if not, continuing with step S42.
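The loop of steps S41 through S43 can be sketched as follows (an illustration only; the `is_answer` and `on_answer` callables are assumptions standing in for the answer-shape test of step S42 and the display/judgment of step S50):

```python
def scan_word_units(word_units, is_answer, on_answer):
    """Illustrative sketch of steps S42-S43: walk the recognized word
    units in order, hand the first answer-shaped unit on to display and
    judgment (S50), and report whether one was found.  Returning False
    corresponds to reaching the last word unit (S43) and going back to
    collecting voice data (S30)."""
    for unit in word_units:
        if is_answer(unit):        # S42: is the current word unit answer info?
            on_answer(unit)        # display it and continue with step S50
            return True
        # not answer info and not the last unit: S43 loops back to S42
    return False                   # last unit reached without answer info
```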
Further, after the step S20, the method further includes:
S22, judging whether the answering time exceeds a second preset threshold;
if yes, continuing with step S20;
if not, continuing with step S41.
Further, before the step S22, the method further includes:
S21, judging whether the answering time exceeds a first preset threshold;
if yes, generating a countdown prompt, and continuing with step S22;
if not, continuing with step S22 directly.
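The two timing checks of steps S21 and S22 can be sketched as a single decision (illustrative only; the concrete threshold values and return labels are assumptions, since the application leaves the thresholds as configurable presets):

```python
def timing_action(elapsed_seconds, first_threshold, second_threshold):
    """Illustrative sketch of steps S21-S22: decide which branch applies
    for the elapsed answering time."""
    if elapsed_seconds > second_threshold:
        return "next_question"   # S22 exceeded: time is up, go back to S20
    if elapsed_seconds > first_threshold:
        return "countdown"       # S21 exceeded: warn the user, keep listening
    return "keep_listening"      # within time: continue recognition at S41
```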
Further, the step S52 includes:
generating a wrong-answer prompt, and continuing with step S43.
The application also discloses a voice answering apparatus, comprising:
a receiving module configured to receive an answer instruction, extract questions to be answered from a target voice question library based on the answer instruction, and generate a question set;
a playing module configured to sequentially play the questions to be answered in the question set;
a collection module configured to continuously collect user voice data;
a recognition module configured to sequentially recognize and display the current answer information in the user voice data, invoke the collection module if no answer information is acquired, and invoke the judging module if answer information is acquired;
a judging module configured to judge whether the current answer information is the correct answer to the question to be answered, invoking the correct module if so and the error module if not;
a correct module configured to generate a correct-answer prompt and then invoke the playing module again;
an error module configured to generate a wrong-answer prompt and then invoke the recognition module again.
The application also discloses a computing device, which comprises a memory, a processor and computer instructions stored on the memory and capable of running on the processor, wherein the processor executes the instructions to realize the steps of the voice answering method.
A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the voice question answering method.
The voice answering method and apparatus achieve voice question reading and voice answering by playing questions aloud, collecting the user's voice data, recognizing answer information in that data, and judging whether it is correct. They effectively reduce the visual impairment caused by watching a mobile phone for long periods, solve the problem that special groups such as preschool children find it inconvenient to input answers manually, and are simple to operate, convenient to use, and widely applicable.
Drawings
FIG. 1 is a schematic block diagram of a computing device according to an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating steps of a voice answering method according to an embodiment of the present application;
fig. 3 is a schematic flowchart illustrating steps of a voice answering method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a voice answering device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar extensions without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments herein to describe various information, such information should not be limited by these terms, which are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present description, a first may also be referred to as a second, and similarly a second as a first. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
First, the terms used in one or more embodiments of the present application are explained.
Speech synthesis (Text To Speech, TTS): TTS technology involves multiple disciplines such as acoustics, linguistics, digital signal processing, and multimedia, and is a leading-edge technology in the field of Chinese information processing. Speech synthesis converts text into speech output: the input text is decomposed into phonemes by character or word; numbers, currency units, word inflections, punctuation, and other symbols requiring special handling are analyzed; the phonemes are rendered into digital audio; and the audio is either played through a speaker or saved as a sound file and played later by multimedia software.
Speech denoising: that is, speech enhancement, the technique of extracting a useful speech signal from a noisy background when the signal is interfered with or even submerged by noise, thereby suppressing and reducing the noise interference.
Speech recognition: also known as Automatic Speech Recognition (ASR), it aims to convert the lexical content of human speech into computer-readable input, such as keystrokes, binary codes, or character sequences. It differs from speaker recognition and speaker verification, which attempt to identify or verify the speaker rather than the lexical content of the speech.
In the present application, a voice answering method and apparatus, a computing device, and a computer-readable storage medium are provided, and are described in detail one by one in the following embodiments.
Fig. 1 is a block diagram illustrating the configuration of a computing device 100 according to an embodiment of the present specification. The components of the computing device 100 include, but are not limited to, a memory 110 and a processor 120. The processor 120 is coupled to the memory 110 via a bus 130, and a database 150 is used to store data.
Computing device 100 also includes an access device 140, which enables computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. Access device 140 may include one or more network interfaces of any type, wired or wireless, e.g. a Network Interface Card (NIC), such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 100 and other components not shown in FIG. 1 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 1 is for purposes of example only and is not limiting as to the scope of the description. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
Wherein the processor 120 may perform the steps of the method shown in fig. 2. Fig. 2 is a schematic flowchart illustrating a voice question answering method according to an embodiment of the present application, including step S201 to step S207.
S201, receiving an answer instruction, extracting questions to be answered in a target voice question library based on the answer instruction, and generating a question set.
Specifically, the answer instruction is a computer instruction, and may be various types of instructions such as "answer start", "READY GO", and the like, which is not limited in the present application. The voice question bank is a database which stores a large number of voice questions, and the target voice question bank is a database which stores voice questions of types required by users.
The answer instruction carries question category information and question quantity information. The category information may be subject information such as "mathematics" or "English", difficulty information such as "addition and subtraction within ten" or "addition and subtraction within one hundred", grade information such as "first grade of primary school" or "second grade of primary school", or any combination thereof, as the situation requires; the application is not limited in this respect. The question quantity information can be selected by the user, for example "5 questions" or "10 questions". The corresponding voice question library is matched and selected as the target voice question library according to the category information carried in the answer instruction, and the corresponding number of questions is randomly extracted from the target library according to the quantity information to form the question set.
Extracting the questions to be answered from the target voice question library based on the answer instruction lets users flexibly select different types of question libraries and the number of questions answered in one session; the degree of freedom is high, and the needs of different users are met.
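The library matching and random extraction just described (steps S11-S13) can be sketched as follows. The parsed instruction shape `{"category": ..., "count": ...}` is an assumption for illustration; the application does not fix a concrete instruction format.

```python
import random

def build_question_set(instruction, voice_libraries):
    """Illustrative sketch of steps S11-S13."""
    # S12: match the voice question library with the same category info
    target_library = voice_libraries[instruction["category"]]
    # S13: randomly extract the target number of questions to be answered
    return random.sample(target_library, instruction["count"])
```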
S202, sequentially playing the questions to be answered in the question set.
Specifically, each question to be answered is played in the order in which it appears in the question set, with a certain answering time reserved between adjacent questions. The length of this answering time can vary with question type or difficulty; for example, a first-grade primary school mathematics question may allow 10 seconds and a second-grade question 20 seconds. The application is not limited in this respect.
Playing the questions to be answered by voice effectively prevents the user from watching the mobile phone for long periods, freeing the eyes and protecting eyesight.
And S203, continuously collecting the voice data of the user.
Specifically, after a voice of a question to be answered is played, the voice data of the user starts to be collected. It should be noted that, in this step, the collected user voice data is continuously collected in real time after the question to be answered is played.
During this process, the voice capture component of computing device 100 remains operational to enable continuous capture of the user's voice data. The voice capturing component may be disposed on the computing device 100, for example, a microphone disposed on the computing device 100, or may be disposed separately from the computing device 100 and connected to the computing device in a wired or wireless manner, for example, a microphone.
Collecting the user's voice data in real time achieves the purpose of voice answering and frees the user's hands; in particular, it solves the problem that special groups of users, such as preschool children, cannot manually input answers.
And S204, sequentially identifying and displaying the current answer information in the user voice data.
And S205, judging whether answer information is acquired.
If not, the step S203 is executed.
If yes, go to step S206.
The answer information is language information of the type corresponding to the question to be answered: for a calculation question the answer information is a number, for an English question it is an English word, and so on for other cases.
Specifically, in the recognition process, the collected user voice data can be processed by utilizing a voice noise reduction technology so as to eliminate noise and more accurately recognize answer information in the user voice.
After user voice data is collected, its content is recognized in sequence and checked for answer information. If no answer information is obtained within the answering time, collection continues; if answer information is obtained, each piece of it is displayed in turn and judged for correctness, until the answer information is the correct answer or the answering time runs out.
For example, suppose that while a calculation question is being answered, the collected user voice data contains: "the answer is 3, no, 4". The content is recognized word by word: "the" is not answer information and is discarded; "answer" is not answer information and is discarded; and so on, until "3" is recognized as answer information. "3" is displayed and judged; if "3" is the correct answer, recognition stops, and if not, recognition continues with "4".
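The word-by-word filtering described above can be sketched as follows. The answer shapes chosen here (numeric units for calculation questions, alphabetic units otherwise) are illustrative assumptions, not the disclosed recognition method:

```python
def extract_answer_candidates(word_units, question_type="calculation"):
    """Illustrative filter: keep only answer-shaped word units and
    discard content irrelevant to the question to be answered."""
    if question_type == "calculation":
        # for a calculation question, only numbers count as answer info
        return [w for w in word_units if w.lstrip("-").isdigit()]
    # for e.g. an English question, an answer is assumed to be a word
    return [w for w in word_units if w.isalpha()]
```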
Recognizing the answer information within the user voice data filters out content irrelevant to the question to be answered, improving the accuracy of answer recognition and judgment. Displaying the answer information whether or not it is correct gives the user better feedback and improves the user experience.
S206, judging whether the current answer information is a correct answer to the question to be answered, if so, executing a step S207, and if not, executing a step S208.
And S207, generating a prompt for correct answer, and continuing to execute the step S202.
And S208, generating an answer error prompt, and continuing to execute the step S204.
Specifically, when answer information is detected in the user voice data, it is judged whether that information is the correct answer. If so, a correct-answer prompt is generated and the process jumps to the next question to be played; if not, a wrong-answer prompt is generated and recognition of answer information in the user voice data continues.
The correct-answer prompt and the wrong-answer prompt may each be a single prompt mode, such as a voice prompt, a text prompt, or a vibration prompt, or any combination of these modes; the application is not limited in this respect.
The present embodiment will be further described with reference to specific examples.
For example, if the received answer instruction carries the question category information "addition and subtraction within ten" and the question quantity information "2 questions", two questions to be answered are randomly extracted from the "addition and subtraction within ten" voice question library to form the question set.
The 1st question to be answered, "What is one plus one?", is played; user voice data is collected and timing begins.
Suppose the collected user voice data contains "3" and "2". The answer information in the voice data is recognized and displayed in sequence: "3" is answer information, so the user's answer is displayed as "3" on the answering page and judged to be a wrong answer, and a wrong-answer prompt is generated. Recognition continues: "2" is answer information, so the user's answer is displayed as "2" and judged to be the correct answer, a correct-answer prompt is generated, and the process jumps to the 2nd question.
The 2nd question to be answered, "What is two minus one?", is played; user voice data is collected and timing begins.
Suppose the collected user voice data contains "1". The answer information is recognized and displayed in sequence: "1" is answer information, so the user's current answer is displayed as "1" on the answering page, "1" is judged to be the correct answer, and a correct-answer prompt is generated. When all questions in the question set have been played, the whole answering process ends and the answering results are counted and displayed.
The voice answering method provided by this embodiment achieves voice question reading and voice answering by playing questions aloud, collecting user voice data, recognizing the answer information in that data, and judging whether it is correct. It effectively reduces the visual impairment caused by watching a mobile phone for long periods, solves the problem that special groups such as preschool children find it inconvenient to input answers manually, and is simple to operate, convenient to use, and widely applicable.
As shown in fig. 3, fig. 3 shows a schematic flowchart of a voice question answering method according to an embodiment of the present application, including step S301 to step S315.
S301, acquiring original voice data and at least one text question library carrying category information.
The original voice data is a pre-recorded corpus. It may be a female-voice, male-voice, or child-voice corpus, or any of various styled corpora such as cartoon-character voices, so as to improve attractiveness to users of different ages and broaden the audience; the application is not limited in this respect.
The text question library is a database storing a large number of text questions. Its category information may be subject information such as "mathematics" or "English", difficulty information such as "addition and subtraction within ten" or "addition and subtraction within one hundred", grade information such as "first grade of primary school" or "second grade of primary school", or any combination thereof, as the situation requires; the application is not limited in this respect. The text question library can be updated regularly to enrich the question types and keep the questions fresh.
S302, synthesizing a corresponding voice question library carrying the category information based on the original voice data and the text question library.
Specifically, the original voice data and the text questions can be synthesized into voice questions by TTS speech synthesis. Synthesizing the voice questions this way allows voices of different styles to be flexibly chosen for questions of different difficulties and types and for audiences of different ages, improving the interest of voice answering and its attraction to the target audience.
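The pairing of text questions with synthesized audio (steps S301-S302) can be sketched as follows. The `synthesize` callable stands in for a real TTS engine, and the dictionary shapes are assumptions for illustration:

```python
def synthesize_voice_library(text_library, synthesize):
    """Illustrative sketch of steps S301-S302: render each text question
    to audio via TTS while carrying the category information over from
    the text question library to the voice question library."""
    return {
        "category": text_library["category"],
        "questions": [
            {"text": q["text"],
             "answer": q["answer"],
             "audio": synthesize(q["text"])}  # TTS: text -> digital audio
            for q in text_library["questions"]
        ],
    }
```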
It should be noted that steps S301 and S302 are preparatory work performed before voice answering starts; the text and voice question libraries can be updated periodically, and the two steps need not be repeated before every answering session.
S303, receiving an answer instruction carrying the category information and the number of questions.
Specifically, the answer instruction is a computer instruction, and may be various types of instructions such as "answer start", "READY GO", and the like, which is not limited in the present application. The voice question bank is a database which stores a large number of voice questions, and the target voice question bank is a database which stores voice questions of types required by users.
The answer instruction carries question category information and question quantity information. The category information is of the same kind as that of the text and voice question libraries: subject information, difficulty information, grade information, or any combination thereof, as the situation requires; the application is not limited in this respect. The question quantity information can be selected by the user, for example "5 questions" or "10 questions". The category information carried in the answer instruction is used to match and select the corresponding voice question library as the target voice question library, and the quantity information is used to randomly extract the corresponding number of questions from the target library to form the question set.
S304, matching a voice question library whose category information is the same as that carried by the answer instruction, as the target voice question library.
Specifically, taking the category information as a combination of grade category and subject category as an example: assuming the category information carried in the answer instruction is "primary school mathematics", the voice question library whose category information is also "primary school mathematics" is selected from all the voice question libraries as the target voice question library.
S305, extracting a target number of questions to be answered from the target voice question library based on the question quantity information carried by the answer instruction, and generating a question set.
Specifically, taking the question quantity information "20 questions" as an example: assuming the target voice question library contains one thousand questions in total, 20 questions are randomly extracted from those one thousand questions to generate the question set.
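Steps S304 and S305 together amount to a dictionary lookup followed by sampling without replacement; the following sketch assumes libraries keyed by category string, and the fixed seed is only to make the draw reproducible.

```python
import random

def select_question_set(libraries: dict, category: str, count: int, seed=None) -> list:
    """Pick the library whose category matches the instruction (S304),
    then randomly draw `count` distinct questions from it (S305)."""
    target = libraries[category]            # target voice question library
    rng = random.Random(seed)
    return rng.sample(list(target), count)  # question set, no repeats

libraries = {
    "primary school mathematics": [f"question {i}" for i in range(1000)],
}
question_set = select_question_set(libraries, "primary school mathematics", 20, seed=0)
```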
S306, sequentially playing the questions to be answered in the question set.
Specifically, the questions to be answered are played one by one in their order in the question set, with a certain answering interval reserved between adjacent questions; the length of this interval may vary with the type or difficulty of the question. Playing the questions aloud effectively prevents the user from staring at a mobile phone for long periods, freeing the eyes and protecting eyesight.
And S307, continuously collecting the voice data of the user.
Specifically, collection of user voice data begins after the voice of a question to be answered has finished playing. It should be noted that in this step the user voice data is collected continuously, in real time, after the question has been played.
Collecting user voice data in real time achieves the goal of answering by voice and frees the user's hands; in particular, for preschool users it solves the problem that they cannot enter answers manually.
And S308, judging whether the answering time exceeds a first preset threshold value.
If yes, go to step S3081, and then continue to step S309.
If not, the process continues to step S309.
S3081, generating a countdown prompt.
The first preset threshold of the answering time is slightly smaller than the preset maximum answering time; the difference between the two may be 5 seconds, 10 seconds, 15 seconds, and so on, as determined by the specific situation. Taking a preset maximum answering time of 30 seconds and a first preset threshold of 25 seconds as an example, a countdown prompt is generated once the answering time exceeds 25 seconds. The countdown prompt serves as an effective reminder to the user that the answering time is about to end.
It should be noted that the execution of steps S307 and S308 may overlap. Specifically, timing starts when voice data collection starts; whether the answering time exceeds the first preset threshold can be judged in real time during collection, and a prompt issued promptly.
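The first-threshold check can be sketched as a small classifier of the answering window; the default values (25-second threshold, 30-second maximum) are the example figures above, not values fixed by the application.

```python
def answer_phase(elapsed: float, first_threshold: float = 25.0,
                 max_time: float = 30.0) -> str:
    """Classify the current moment within the answering window:
    'answering' before the first threshold, 'countdown' once the first
    threshold is exceeded (S3081), 'timeout' at the maximum time."""
    if elapsed >= max_time:
        return "timeout"
    if elapsed > first_threshold:
        return "countdown"
    return "answering"
```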
S309, judging whether the answering time exceeds a second preset threshold value.
If yes, the step S306 is continued.
If not, the step S310 is continuously executed.
The second preset threshold of the answering time is the preset maximum answering time for each question; its specific value can be set flexibly according to the type and difficulty of the questions to be answered in the question set. For example, for easier primary school mathematics questions the second preset threshold may be 10 seconds, while for harder ones it may be 20 seconds; this is determined by the specific situation, and the present application is not limited in this respect.
Specifically, timing starts after the voice of the question to be answered finishes playing, and the user's answering time is counted. If the answering time exceeds the second preset threshold and the user still has not answered, or has answered incorrectly, the question is skipped and the next question is played.
Setting the second preset threshold, that is, the maximum answering time, effectively prevents the user from never answering or lingering too long on one question, and helps keep the answering session moving.
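A possible shape for the second-threshold check, with per-difficulty maximum answering times as in the example above; the difficulty labels and times are illustrative, not prescribed by the application.

```python
# Hypothetical per-difficulty maximum answering times (second preset threshold).
MAX_ANSWER_TIME = {"easy": 10.0, "hard": 20.0}

def should_skip(elapsed: float, difficulty: str, answered_correctly: bool) -> bool:
    """A question is skipped (S309 jumps back to S306) once the maximum
    answering time for its difficulty elapses without a correct answer."""
    return elapsed >= MAX_ANSWER_TIME[difficulty] and not answered_correctly
```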
S310, processing the user voice data to obtain at least one word unit.
Specifically, processing the user voice data includes converting it into text and performing sentence and word segmentation on that text to obtain at least one word unit. For example, if the user voice data contains "the answer is 1", processing it yields the four word units "the", "answer", "is", and "1".
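A minimal sketch of step S310, assuming the speech has already been converted into an English text transcript; a production system would use a proper speech recognizer and tokenizer rather than whitespace splitting.

```python
def to_word_units(transcript: str) -> list:
    """Split a recognized transcript into word units (S310), stripping
    trailing punctuation. Deliberately simple for illustration."""
    return [w.strip(".,!?") for w in transcript.split() if w.strip(".,!?")]

units = to_word_units("the answer is 1")
```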
S311, judging whether the current word unit is answer information.
If yes, go on to step S313.
If not, the process continues to step S312.
The answer information is language information of the type corresponding to the question to be answered: for a calculation question the answer information is a number, for an English question it is an English word, and so on for other cases.
Taking a calculation question as an example, assume the word units are "the", "answer", "is", and "1". Each word unit is identified in turn and judged as to whether it is answer information: "the", "answer", and "is" are not answer information, while "1" is.
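Step S311's per-unit check might look like the following; the question-type labels are hypothetical, and the English-word test is deliberately crude (any ASCII alphabetic token passes).

```python
def is_answer_information(word_unit: str, question_type: str) -> bool:
    """A word unit counts as answer information when its form matches the
    question type (S311): a number for calculation questions, an English
    word for English questions."""
    if question_type == "calculation":
        return word_unit.lstrip("-").isdigit()
    if question_type == "english":
        return word_unit.isalpha() and word_unit.isascii()
    return False
```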
And S312, judging whether the word unit is the last word unit.
If yes, the step S307 is continued.
If not, the step S311 is continuously executed.
Specifically, judging whether the current word unit is the last word unit amounts to judging whether all the user voice data collected within the maximum answering time has been recognized. If recognition is complete, collection of user voice data continues; if not, recognition and analysis continue.
Judging whether a word unit is the last one ensures the comprehensiveness and completeness of the recognition and analysis of the user voice data, and avoids omitting key information during recognition.
S313, judging whether the current answer information is the correct answer of the question to be answered, if so, executing a step S314, and if not, executing a step S315.
And S314, generating a prompt for correct answer, and continuing to execute the step S306.
And S315, generating an answer error prompt, and continuing to execute the step S312.
Specifically, if the current answer information is the correct answer, a correct-answer prompt is generated and playback jumps to the next question to be answered. If the current answer information is incorrect, an incorrect-answer prompt is generated, and it is then judged whether the current word unit is the last one in the user voice data: if so, playback jumps to the next question; if not, recognition continues to check whether the next piece of answer information is the correct answer.
The correct-answer and incorrect-answer prompts may each be a single prompt mode, such as a voice prompt, a text prompt, or a vibration prompt, or any combination of these; the present application is not limited in this respect.
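The recognition-and-judgment loop of steps S311 through S315 can be sketched as follows; it emits a wrong-answer prompt for each incorrect piece of answer information and stops at the first correct answer, mirroring the early-exit behavior described above. The function names are illustrative.

```python
def judge_units(word_units, correct_answer: str, is_answer_info):
    """Walk the word units in order (S311-S315): skip non-answer units,
    emit a wrong prompt for answer information that is not the correct
    answer, and stop early with a correct prompt when it is found."""
    prompts = []
    for unit in word_units:
        if not is_answer_info(unit):
            continue                     # not answer information (S312)
        if unit == correct_answer:
            prompts.append(("correct", unit))
            return prompts, True         # stop recognizing, next question
        prompts.append(("wrong", unit))  # answer information, but wrong
    return prompts, False                # no correct answer in this utterance

prompts, solved = judge_units(["5", "6", "8"], "6", str.isdigit)
```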
In practical applications, after all questions in the question set have been answered, the results can be tallied and an answering feedback table generated to report the answering results to the user.
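Tallying such a feedback table is a simple count over per-question outcomes; the outcome labels are illustrative.

```python
from collections import Counter

def feedback_table(results: list) -> dict:
    """Tally per-question outcomes ('correct', 'wrong', 'skipped') into
    the answering feedback table reported to the user."""
    return dict(Counter(results))

table = feedback_table(["skipped", "correct", "correct", "wrong", "correct"])
```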
The present embodiment will be further described with reference to specific examples.
For example, assume the received answer instruction carries the category information "addition and subtraction within ten" and the question quantity information "5 questions". The "addition and subtraction within ten" voice question library is matched among the pre-generated voice question libraries based on the category information, and five questions are randomly extracted from it to form a question set. The preset maximum answering time for each question is 20 seconds, and a countdown prompt is issued 15 seconds after timing starts; that is, the first preset threshold of the answering time is 15 seconds and the second preset threshold is 20 seconds.
Question 1 is played: "What does five equal?" After playback finishes, collection of user voice data and timing begin. No user voice data has been collected 15 seconds after timing starts, so a countdown prompt is issued; when the maximum answering time of 20 seconds is reached and still no user voice data has been collected, the first question is skipped.
Question 2 is played: "What does nine minus three equal?" After playback finishes, user voice data is collected and timed. The user voice data collected within the answering time contains "5, 6, 8"; processing it yields the three word units "5", "6", and "8", which are identified in turn. The word unit "5" is answer information but not the correct answer, so "5" is displayed and an incorrect-answer prompt is generated; "5" is not the last word unit, so recognition continues. The word unit "6" is answer information and is the correct answer, so "6" is displayed, a correct-answer prompt is generated, recognition stops, and playback jumps to the next question.
Question 3 is played: "What does two plus three equal?" After playback finishes, user voice data is collected and timed. The user voice data collected within the answering time contains "it should be 5"; processing it yields the word units "it", "should", "be", and "5", which are identified in turn. "It", "should", and "be" are not answer information and none of them is the last word unit, so recognition continues after each. The word unit "5" is answer information and is the correct answer, so recognition stops, "5" is displayed, a correct-answer prompt is generated, and playback jumps to the next question.
Question 4 is played: "What does seven plus one equal?" After playback finishes, user voice data is collected and timed. The user voice data collected within the answering time contains "I don't know"; processing it yields the word units "I", "don't", and "know", which are identified in turn. "I" and "don't" are not answer information and are not the last word unit, so recognition continues; "know" is not answer information and is the last word unit. No answer information was recognized in the user voice data within the answering time, so the answer is judged incorrect and playback jumps to the next question.
Question 5 is played: "What does eight minus three equal?" After playback finishes, user voice data is collected and timed. The user voice data collected within the answering time contains "5"; processing it yields the word unit "5", which is identified as answer information and as the correct answer, so it is displayed and a correct-answer prompt is generated. This is the last question in the question set, so the voice answering session is complete, and an answering feedback table is generated: one question skipped, three answered correctly, and one answered incorrectly.
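The five-question session above can be simulated end to end in a few lines; the recognizer here is deliberately minimal (digit extraction from a transcript) and the names are illustrative, but the tally reproduces the feedback table described: one skipped, three correct, one wrong.

```python
def run_session(questions):
    """Simulate the worked example. Each entry is (correct_answer,
    transcript), with transcript None meaning the answering window
    timed out with no voice data collected."""
    results = []
    for correct, transcript in questions:
        if transcript is None:
            results.append("skipped")    # timed out, question skipped
            continue
        units = [w.strip(".,!?") for w in transcript.split()]
        digits = [u for u in units if u.isdigit()]  # answer information
        results.append("correct" if correct in digits else "wrong")
    return results

session = [("5", None), ("6", "5 6 8"), ("5", "it should be 5"),
           ("8", "I don't know"), ("5", "5")]
results = run_session(session)
```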
The voice question answering method provided by this embodiment plays questions by voice, collects user voice data, identifies the answer information in that data, and judges whether it is correct, thereby achieving voice question reading and voice question answering. Identifying and judging each word unit in the user voice data in turn effectively guarantees the accuracy of the recognition and judgment results, and once a correct answer is recognized, the remaining word units are not processed, which reduces the amount of computation.
The voice answering method provided by this embodiment can effectively reduce the visual impairment caused by watching a mobile phone for long periods, and solves the problem that some special groups, such as preschool children, find it inconvenient to enter answers manually. It is simple to operate, convenient to use, and widely applicable.
A speech answering device comprising:
the receiving module 401 is configured to receive an answer instruction, extract a question to be answered in a target speech question library based on the answer instruction, and generate a question set;
a playing module 402 configured to sequentially play the questions to be answered in the question set;
an acquisition module 403 configured to continuously acquire user voice data;
an identifying module 404 configured to sequentially identify and display the current answer information in the user voice data, execute the acquisition module 403 when no answer information is acquired, and execute the judging module 405 when answer information is acquired;
a determining module 405 configured to determine whether the current answer information is a correct answer to the question to be answered, if so, execute a correct module 406, and if not, execute an error module 407;
a correct module 406 configured to generate a correct answer prompt and continue to execute the play module 402;
an error module 407 configured to generate an answer error prompt and to continue executing the recognition module 404.
Optionally, the voice answering device further includes:
an acquisition module configured to acquire original voice data and at least one text question library carrying category information;
and a synthesis module configured to synthesize a corresponding voice question library carrying category information based on the original voice data and the text question library.
Optionally, the receiving module 401 is further configured to:
receiving an answer instruction carrying category information and question quantity information;
matching a voice question library which is the same as the type information of the answer instruction as a target voice question library based on the type information carried by the answer instruction;
and extracting the questions to be answered in a target quantity from the target voice question library based on the question quantity information carried by the question answering instruction, and generating a question set.
Optionally, the identifying module 404 further includes:
a processing module configured to process the user voice data to obtain at least one word unit;
and an answer information judging module configured to judge whether the current word unit is answer information;
and under the condition that the current word unit is answer information, displaying the current answer information, and continuously judging whether the current answer information is the correct answer of the question to be answered.
Under the condition that the word unit is not answer information, continuously judging whether the word unit is the last word unit or not;
if yes, the acquisition module 403 continues to be executed;
if not, the answer information judgment module is continuously executed.
Optionally, the voice answering device further includes:
a second time judging module configured to judge whether the answering time exceeds a second preset threshold;
if yes, continue to execute the playing module 402;
if not, the processing module is continuously executed.
Optionally, the voice answering device further includes:
a first time judging module configured to judge whether the answering time exceeds a first preset threshold;
if yes, generating a countdown prompt, and continuously executing a second time judgment module;
if not, the second time judgment module is continuously executed.
The present application provides a voice answering device that plays questions by voice, collects user voice data, identifies the answer information in that data, and judges whether the answer information is correct, thereby achieving voice question reading and voice question answering. It can effectively reduce the visual impairment caused by watching a mobile phone for long periods, solves the problem that some special groups, such as preschool children, find it inconvenient to enter answers manually, and is simple to operate, convenient to use, and widely applicable.
An embodiment of the present application further provides a computing device, including a memory, a processor, and computer instructions stored on the memory and executable on the processor, where the processor executes the instructions to implement the following steps:
s10, receiving an answer instruction, extracting questions to be answered in the target voice question library based on the answer instruction, and generating a question set.
And S20, sequentially playing the questions to be answered in the question set.
And S30, continuously collecting the voice data of the user.
And S40, sequentially identifying and displaying the current answer information in the user voice data, executing the step S30 when the answer information is not acquired, and executing the step S50 when the answer information is acquired.
S50, judging whether the current answer information is the correct answer of the question to be answered, if so, executing a step S51, and if not, executing a step S52.
And S51, generating a prompt for correct answer, and continuing to execute the step S20.
S52, generating an answer error prompt, and continuing to execute the step S40.
An embodiment of the present application further provides a computer-readable storage medium, which stores computer instructions, and the instructions, when executed by a processor, implement the steps of the voice question answering method.
The above is an illustrative scheme of the computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the voice question answering method belong to the same concept; for details not described in the technical solution of the storage medium, reference may be made to the description of the voice question answering method.
The computer instructions comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
It should be noted that, for the sake of simplicity, the above method embodiments are described as a series of combinations of acts, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously in accordance with the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.
Claims (11)
1. A speech question answering method, comprising:
s10, receiving an answer instruction, and creating a question set based on the answer instruction;
s20, sequentially playing the questions to be answered in the question set;
s30, continuously collecting user voice data;
s40, sequentially identifying current answer information in the user voice data, detecting the current answer information based on the language information of the type corresponding to the question to be answered, executing the step S30 under the condition that the answer information is not acquired, and executing the step S50 under the condition that the answer information is acquired;
s50, judging whether the current answer information is the correct answer of the question to be answered, if so, executing a step S51, and if not, executing a step S52;
s51, generating a prompt for correct answer, and continuing to execute the step S20;
s52, generating an answer error prompt, and continuing to execute the step S40.
2. The speech answering method according to claim 1, wherein the step S10 includes:
and extracting questions to be answered from a target voice question library based on the question answering instruction, and creating the question set.
3. The speech answering method according to claim 1 or 2, wherein before the step S10, further comprising:
s01, acquiring original voice data and at least one text topic library carrying category information;
and S02, synthesizing a corresponding voice topic library carrying category information based on the original voice data and the text topic library.
4. The speech answering method according to claim 3, wherein the step S10 includes:
s11, receiving an answer instruction carrying category information and question number information;
s12, matching a voice question library which is the same as the type information of the answer instruction as a target voice question library based on the type information carried by the answer instruction;
s13, extracting the questions to be answered in a target number from the target voice question library based on the question number information carried by the question answering instruction, and generating a question set.
5. The speech answering method according to claim 1, wherein the step S40 includes:
s41, processing the user voice data to obtain at least one word unit;
s42, judging whether the current word unit is answer information or not based on the language information of the type corresponding to the question to be answered;
in the case that the current word unit is answer information, displaying the current answer information, and continuing to execute the step S50;
in a case where the word unit is not answer information, continuing to perform step S43;
s43, judging whether the word unit is the last word unit;
if yes, the step S30 is continued;
if not, the step S42 is continued.
6. The speech answering method according to claim 5, further comprising, after said step S20:
s22, judging whether the answering time exceeds a second preset threshold value;
if yes, the step S20 is continued;
if not, the step S41 is continued.
7. The speech answering method according to claim 6, wherein before the step S22, further comprising:
s21, judging whether the answering time exceeds a first preset threshold value;
if yes, generating a countdown prompt, and continuing to execute the step S22;
if not, the step S22 is continued.
8. The speech answering method according to claim 5, wherein the step S52 includes:
an answer error prompt is generated and the execution of the step S43 is continued.
9. A speech answering device, comprising:
the receiving module is configured to receive an answering instruction and create a question set based on the answering instruction;
the playing module is configured to sequentially play the questions to be answered in the question set;
a collection module configured to continuously collect user voice data;
the recognition module is configured to sequentially recognize current answer information in the user voice data, detect the current answer information based on language information of a type corresponding to the question to be answered, execute the acquisition module under the condition that the answer information is not acquired, and execute the judgment module under the condition that the answer information is acquired;
the judging module is configured to judge whether the current answer information is a correct answer of the question to be answered, if so, the correct module is executed, and if not, the wrong module is executed;
a correct module configured to generate a prompt for correct answer and continue to execute the play module;
an error module configured to generate an answer error prompt and continue execution of the identification module.
10. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-8 when executing the instructions.
11. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111165226.3A CN113870635A (en) | 2019-10-25 | 2019-10-25 | Voice answering method and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911025075.4A CN110706536B (en) | 2019-10-25 | 2019-10-25 | Voice answering method and device |
CN202111165226.3A CN113870635A (en) | 2019-10-25 | 2019-10-25 | Voice answering method and device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911025075.4A Division CN110706536B (en) | 2019-10-25 | 2019-10-25 | Voice answering method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113870635A true CN113870635A (en) | 2021-12-31 |
Family
ID=69202386
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111165226.3A Pending CN113870635A (en) | 2019-10-25 | 2019-10-25 | Voice answering method and device |
CN201911025075.4A Active CN110706536B (en) | 2019-10-25 | 2019-10-25 | Voice answering method and device |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911025075.4A Active CN110706536B (en) | 2019-10-25 | 2019-10-25 | Voice answering method and device |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113870635A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114566167A (en) * | 2022-02-28 | 2022-05-31 | 安徽淘云科技股份有限公司 | Voice answer method and device, electronic equipment and storage medium |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111353066B (en) * | 2020-02-20 | 2023-11-21 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN111369998B (en) * | 2020-02-28 | 2023-07-21 | 联想(北京)有限公司 | Data processing method and electronic equipment |
CN111524045A (en) * | 2020-04-13 | 2020-08-11 | 北京猿力教育科技有限公司 | Dictation method and device |
CN111405381A (en) * | 2020-04-17 | 2020-07-10 | 深圳市即构科技有限公司 | Online video playing method, electronic device and computer readable storage medium |
CN111698566A (en) * | 2020-06-04 | 2020-09-22 | 北京奇艺世纪科技有限公司 | Video playing method and device, electronic equipment and storage medium |
CN111785109B (en) * | 2020-07-07 | 2022-07-12 | 上海茂声智能科技有限公司 | Medical robot answering method, device, system, equipment and storage medium |
CN111985395A (en) * | 2020-08-19 | 2020-11-24 | 北京猿力未来科技有限公司 | Video generation method and device |
CN112289308A (en) * | 2020-10-23 | 2021-01-29 | 上海凯石信息技术有限公司 | Voice dictation scoring method and device and electronic equipment |
CN113179422A (en) * | 2021-04-20 | 2021-07-27 | 上海松鼠课堂人工智能科技有限公司 | Method and system for prompting students to answer questions through bullet screen |
CN113138828A (en) * | 2021-05-10 | 2021-07-20 | 上海松鼠课堂人工智能科技有限公司 | Method and system for prompting student to answer questions by displaying dynamic images |
CN113362666A (en) * | 2021-06-02 | 2021-09-07 | 母宏伟 | Child education method based on OID technology |
CN114035690A (en) * | 2021-11-26 | 2022-02-11 | 郑州捷安高科股份有限公司 | Electric power practical training method and device based on electric shock somatosensory system |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20090094577A (en) * | 2008-03-03 | 2009-09-08 | 옥종석 | Method for evaluating spoken ability through computer-led speech recognition |
CN106202165A (en) * | 2016-06-24 | 2016-12-07 | 北京小米移动软件有限公司 | Intelligent learning method and device for human-machine interaction |
CN107454436A (en) * | 2017-09-28 | 2017-12-08 | 广州酷狗计算机科技有限公司 | Interaction method, device, server and storage medium |
CN107564354A (en) * | 2017-09-26 | 2018-01-09 | 北京光年无限科技有限公司 | Interactive output method and system for a children's intelligent robot |
CN107591039A (en) * | 2017-09-28 | 2018-01-16 | 武汉海鲸教育科技有限公司 | Intelligent education learning platform |
CN107680019A (en) * | 2017-09-30 | 2018-02-09 | 百度在线网络技术(北京)有限公司 | Implementation method, apparatus, device and storage medium for an examination scheme |
CN109493665A (en) * | 2018-12-28 | 2019-03-19 | 南京红松信息技术有限公司 | Rapid answering method and system based on speech recognition |
CN109545015A (en) * | 2019-01-23 | 2019-03-29 | 广东小天才科技有限公司 | Subject type identification method and family education equipment |
CN109801527A (en) * | 2019-01-31 | 2019-05-24 | 百度在线网络技术(北京)有限公司 | Method and apparatus for outputting information |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2099264B (en) * | 1978-04-28 | 1983-06-29 | Texas Instruments Inc | Speech synthesis system with parameter look-up table |
US6224383B1 (en) * | 1999-03-25 | 2001-05-01 | Planetlingo, Inc. | Method and system for computer assisted natural language instruction with distracters |
EP1575031A3 (en) * | 2002-05-15 | 2010-08-11 | Pioneer Corporation | Voice recognition apparatus |
KR20100111349A (en) * | 2009-04-07 | 2010-10-15 | 장석연 | Event game machine |
CN103186658B (en) * | 2012-12-24 | 2016-05-25 | 中国科学院声学研究所 | Method and apparatus for generating reference grammars for automatic scoring of spoken English exams |
CN105161098A (en) * | 2015-07-31 | 2015-12-16 | 北京奇虎科技有限公司 | Speech recognition method and speech recognition device for interaction system |
CN105551486A (en) * | 2015-12-02 | 2016-05-04 | 珠海市杰理科技有限公司 | Voice recognition toy and voice interactive device |
CN105590626B (en) * | 2015-12-29 | 2020-03-03 | 百度在线网络技术(北京)有限公司 | Continuous voice man-machine interaction method and system |
CN105771234A (en) * | 2016-04-02 | 2016-07-20 | 深圳市熙龙玩具有限公司 | Riddle guessing toy and implementation method thereof |
WO2017199433A1 (en) * | 2016-05-20 | 2017-11-23 | 三菱電機株式会社 | Information provision control device, navigation device, equipment inspection operation assistance device, interactive robot control device, and information provision control method |
CN106205612B (en) * | 2016-07-08 | 2019-12-24 | 北京光年无限科技有限公司 | Information processing method and system for intelligent robot |
CN106128453A (en) * | 2016-08-30 | 2016-11-16 | 深圳市容大数字技术有限公司 | Intelligent speech-recognition automatic answering method for a robot, and robot |
CN106897950B (en) * | 2017-01-16 | 2020-07-28 | 北京师范大学 | Adaptive learning system and method based on word cognitive state model |
CN107221318B (en) * | 2017-05-12 | 2020-03-31 | 广东外语外贸大学 | English spoken language pronunciation scoring method and system |
CN107240394A (en) * | 2017-06-14 | 2017-10-10 | 北京策腾教育科技有限公司 | Dynamic adaptive speech-analysis method and system for man-machine SET |
CN107688608A (en) * | 2017-07-28 | 2018-02-13 | 合肥美的智能科技有限公司 | Intelligent voice answering method, device, computer equipment and readable storage medium |
CN108960650A (en) * | 2018-07-11 | 2018-12-07 | 太仓煜和网络科技有限公司 | Student learning evaluation method based on artificial intelligence |
CN109410937A (en) * | 2018-11-20 | 2019-03-01 | 深圳市神经科学研究院 | Chinese speech training method and system |
CN109686160A (en) * | 2019-01-31 | 2019-04-26 | 上海车轮互联网服务有限公司 | Client-based answer reminding method, system and computer readable storage medium |
CN110164447B (en) * | 2019-04-03 | 2021-07-27 | 苏州驰声信息科技有限公司 | Spoken language scoring method and device |
CN110211704A (en) * | 2019-05-05 | 2019-09-06 | 平安科技(深圳)有限公司 | Engine method and server for open-ended questions |
- 2019-10-25 CN CN202111165226.3A patent/CN113870635A/en active Pending
- 2019-10-25 CN CN201911025075.4A patent/CN110706536B/en active Active
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114566167A (en) * | 2022-02-28 | 2022-05-31 | 安徽淘云科技股份有限公司 | Voice answer method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110706536B (en) | 2021-10-01 |
CN110706536A (en) | 2020-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110706536B (en) | Voice answering method and device | |
CN110648690B (en) | Audio evaluation method and server | |
Kumar et al. | Improving literacy in developing countries using speech recognition-supported games on mobile devices | |
KR101904455B1 (en) | Learner customized foreign language learning method and apparatus performing the same | |
CN110111778B (en) | Voice processing method and device, storage medium and electronic equipment | |
US10089898B2 (en) | Information processing device, control method therefor, and computer program | |
CN111260761A (en) | Method and device for generating mouth shape of animation character | |
Ikeno et al. | The effect of listener accent background on accent perception and comprehension | |
JP2010282058A (en) | Method and device for supporting foreign language learning | |
CN111524045A (en) | Dictation method and device | |
KR102292477B1 (en) | Server and method for automatic assessment of oral language proficiency | |
KR20150126176A (en) | A word study system using infinity mnemotechniques and method of the same | |
CN113486970B (en) | Reading capability evaluation method and device | |
JP6656529B2 (en) | Foreign language conversation training system | |
CN108831503B (en) | Spoken language evaluation method and device | |
KR20220048958A (en) | Method of filtering subtitles of a foreign language video and system performing the same | |
JP2006208644A (en) | Server system and method for measuring linguistic speaking ability | |
JP6166831B1 (en) | Word learning support device, word learning support program, and word learning support method | |
CN116884282A (en) | Question answering method, device, electronic equipment and storage medium | |
JP2020038371A (en) | Computer program, pronunciation learning support method and pronunciation learning support device | |
JP2017021245A (en) | Language learning support device, language learning support method, and language learning support program | |
KR102011595B1 (en) | Device and method for communication for the deaf person | |
Shukla | Development of a human-AI teaming based mobile language learning solution for dual language learners in early and special educations | |
Jo et al. | Effective computer‐assisted pronunciation training based on phone‐sensitive word recommendation | |
CN111951826A (en) | Language testing device, method, medium and computing equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: 100102 unit F01, 5th floor, building 1, yard 8, Guangshun South Street, Chaoyang District, Beijing
Applicant after: Beijing Ape Power Technology Co.,Ltd.
Address before: 100102 unit F01, 5th floor, building 1, yard 8, Guangshun South Street, Chaoyang District, Beijing
Applicant before: Beijing ape force Education Technology Co.,Ltd.
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2021-12-31