WO2021170094A1 - Method and apparatus for information interaction - Google Patents

Method and apparatus for information interaction

Info

Publication number
WO2021170094A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
information
task
target
practice
Prior art date
Application number
PCT/CN2021/078186
Other languages
English (en)
French (fr)
Inventor
黄浩然
罗希
李福祥
李航
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Priority to EP21759798.8A (EP4113320A4/en)
Priority to KR1020227029762A (KR20220127935A/ko)
Priority to JP2022551245A (JP2023514863A/ja)
Publication of WO2021170094A1 (WO2021170094A1/zh)
Priority to US17/888,258 (US11854422B2/en)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3343 Query execution using phonetics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/335 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/02 Counting; Calculating
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/06 Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065 Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, and in particular to methods and devices for information interaction.
  • with the development of computer technology, oral practice based on computer technology has already become achievable.
  • the oral practice here can be oral practice for various languages, such as English and French.
  • at present, oral practice mainly takes the form of automatic scoring, in which the user reads aloud a template or sentence provided by the system and the system scores the reading.
  • the embodiments of the present disclosure propose methods and devices for information interaction.
  • in a first aspect, an embodiment of the present disclosure provides a method for information interaction, the method including: in response to receiving an oral practice request initiated by a user, outputting task information for indicating a target oral practice task, where the task information corresponds to task intention information and task keyword information; acquiring voice information input by the user for the task information; recognizing the voice information to determine user intention information and user keyword information corresponding to the user; generating a matching result for indicating whether the user has completed the target oral practice task, where the matching result is obtained by the following steps: respectively matching the user intention information against the task intention information and the user keyword information against the task keyword information to obtain the matching result; and presenting the matching result to the user.
  • the method further includes: generating a score for characterizing the spoken language ability of the user based on the voice information and the matching result; and presenting the generated score to the user.
  • the method further includes: in response to the matching result indicating that the user has not completed the target oral practice task, outputting auxiliary information for assisting the user in completing the target oral practice task; acquiring supplementary voice information input by the user for the auxiliary information; based on Supplement the voice information to generate a new matching result indicating whether the user has completed the target oral practice task.
  • in some embodiments, outputting the auxiliary information for assisting the user in completing the target oral practice task includes: in response to the matching result indicating that the user has not completed the target oral practice task, determining whether an oral practice end request input by the user has been received; and in response to not receiving the oral practice end request, outputting the auxiliary information for assisting the user in completing the target oral practice task.
  • in some embodiments, outputting the auxiliary information for assisting the user in completing the target oral practice task includes: in response to the matching result indicating that the user has not completed the target oral practice task, determining whether the number of times the auxiliary information for assisting the user in completing the target oral practice task has been output is less than or equal to a preset number; and in response to the number of times the auxiliary information has been output being less than or equal to the preset number, outputting the auxiliary information for assisting the user in completing the target oral practice task.
  • in some embodiments, outputting the task information for indicating the target oral practice task includes: acquiring the user's historical oral practice results; determining the target oral practice task based on the acquired historical oral practice results; and acquiring the task information for indicating the target oral practice task and outputting the acquired task information.
  • an embodiment of the present disclosure provides a device for information interaction.
  • the device includes: a first output unit configured to, in response to receiving an oral practice request initiated by a user, output task information for indicating a target oral practice task, where the task information corresponds to task intention information and task keyword information; a first acquisition unit configured to acquire voice information input by the user for the task information; a recognition unit configured to recognize the voice information to determine user intention information and user keyword information corresponding to the user; a first generating unit configured to generate a matching result for indicating whether the user has completed the target oral practice task, where the matching result is obtained by the following steps: respectively matching the user intention information against the task intention information and the user keyword information against the task keyword information to obtain the matching result; and a first presentation unit configured to present the matching result to the user.
  • in some embodiments, the device further includes: a second generating unit configured to generate, based on the voice information and the matching result, a score for characterizing the user's oral ability; and a second presentation unit configured to present the generated score to the user.
  • in some embodiments, the device further includes: a second output unit configured to, in response to the matching result indicating that the user has not completed the target oral practice task, output auxiliary information for assisting the user in completing the target oral practice task; a second acquisition unit configured to acquire supplementary voice information input by the user for the auxiliary information; and a third generating unit configured to generate, based on the supplementary voice information, a new matching result for indicating whether the user has completed the target oral practice task.
  • in some embodiments, the second output unit includes: a first determination module configured to, in response to the matching result indicating that the user has not completed the target oral practice task, determine whether an oral practice end request input by the user has been received; and a first output module configured to, in response to not receiving the oral practice end request, output the auxiliary information for assisting the user in completing the target oral practice task.
  • in some embodiments, the first output unit includes: an acquisition module configured to acquire the user's historical oral practice results; a third determination module configured to determine the target oral practice task based on the acquired historical oral practice results; and a third output module configured to acquire the task information for indicating the target oral practice task and output the acquired task information.
  • in a third aspect, embodiments of the present disclosure provide an electronic device, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the foregoing embodiments of the method for information interaction.
  • the method and device for information interaction provided by the embodiments of the present disclosure output, in response to receiving an oral practice request initiated by a user, task information for indicating a target oral practice task, where the task information corresponds to task intention information and task keyword information; then acquire voice information input by the user for the task information; then recognize the voice information to determine user intention information and user keyword information corresponding to the user; and finally respectively match the user intention information against the task intention information and the user keyword information against the task keyword information to generate a matching result indicating whether the user has completed the target oral practice task, and present the matching result to the user. Oral practice can thus be conducted in the form of a task-based dialogue; compared with the template-based oral practice approach in the existing technology, the solution of the present disclosure is more intelligent, and the user can organize the language by himself or herself to complete the task, which helps to achieve more flexible and efficient oral practice.
  • Fig. 2 is a flowchart of an embodiment of a method for information interaction according to the present disclosure;
  • Fig. 4 is a flowchart of another embodiment of a method for information interaction according to the present disclosure.
  • Fig. 5 is a schematic structural diagram of an embodiment of an apparatus for information interaction according to the present disclosure.
  • FIG. 1 shows an exemplary system architecture 100 to which an embodiment of the method for information interaction or the apparatus for information interaction of the present disclosure can be applied.
  • the user can use the terminal devices 101, 102, and 103 to interact with the server 105 through the network 104 to receive or send messages and so on.
  • Various client applications may be installed on the terminal devices 101, 102, 103, such as language teaching applications, voice interaction applications, web browser applications, search applications, instant messaging tools, social platform software, and so on.
  • the method for information interaction can be executed by the terminal devices 101, 102, and 103, or by the server 105; accordingly, the apparatus for information interaction can be provided in the terminal devices 101, 102, and 103, or in the server 105.
  • terminal devices, networks, and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks, and servers according to implementation needs.
  • the above system architecture may not include the network, but only include the terminal device or server.
  • the method for information interaction includes the following steps:
  • Step 201 In response to receiving a spoken language practice request initiated by a user, output task information for indicating a target spoken language practice task.
  • users can use various methods to initiate oral practice requests, for example, they can click a button displayed on the page for oral practice; or they can touch a preset switch for oral practice.
  • the execution subject may output task information for indicating the target oral practice task.
  • the target oral practice task is the oral practice task to be completed by the user who initiated the oral practice request.
  • Spoken language practice tasks are tasks that users can complete through voice conversations.
  • the oral practice task can be a meal ordering task, a self-introduction task, a flight booking task, and so on.
  • the target oral practice task may be a preset task, or may be a task selected by the user from a plurality of preset oral practice tasks.
  • the task information can be used to characterize the specific content of the target oral practice task. For example, if the target oral practice task is a meal ordering task, the corresponding task information may be "Order a Gongbao Chicken Rice Bowl and send it to Yingdu Building on Zhichun Road". Specifically, the task information may be information preset for the target oral practice task, or may be information generated after receiving the aforementioned oral practice request. For example, if the target oral practice task is a meal ordering task, after receiving the oral practice request, the above-mentioned execution subject may detect that the current location is "Xidan Joy City" and then generate the task information "Order a Gongbao Chicken Rice Bowl and send it to Xidan Joy City".
  • the task information corresponds to task intention information and task keyword information.
  • Task intention information can be used to characterize the goal of the target oral practice task.
  • the task keyword information can be used to characterize the key points of the above-mentioned goals.
  • task intention information and task keyword information can be extracted from the task information. For example, for the task information "Order a Gongbao Chicken Rice Bowl and send it to Xidan Joy City", the task intention information "order a meal" and the task keyword information "Gongbao Chicken Rice Bowl; Xidan Joy City" can be extracted.
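  • as a non-limiting illustration of the extraction just described, the following sketch pulls task intention information and task keyword information out of a preset task-information string with a hand-written pattern table; the INTENT_PATTERNS table, the extract_task_info helper, and the regular expressions are hypothetical and stand in for whatever preset rules or natural language processing method an implementation actually uses.

```python
import re
from typing import List, Tuple

# Hypothetical intent patterns: each intent maps to trigger words and to regular
# expressions whose named groups capture the keyword slots of that intent.
INTENT_PATTERNS = {
    "order_meal": {
        "triggers": ["order"],
        "slots": [r"[Oo]rder a (?P<dish>.+?) and send it to (?P<address>.+)"],
    },
}

def extract_task_info(task_information: str) -> Tuple[str, List[str]]:
    """Return (task intention information, task keyword information)."""
    for intent, spec in INTENT_PATTERNS.items():
        if any(trigger in task_information.lower() for trigger in spec["triggers"]):
            for pattern in spec["slots"]:
                match = re.search(pattern, task_information)
                if match:
                    # groupdict preserves the order in which the groups are defined.
                    return intent, list(match.groupdict().values())
            return intent, []
    return "unknown", []

# Example from the meal-ordering task above:
# extract_task_info("Order a Gongbao Chicken Rice Bowl and send it to Xidan Joy City")
# -> ("order_meal", ["Gongbao Chicken Rice Bowl", "Xidan Joy City"])
```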
  • it should be noted that, when the execution subject is a user terminal, the execution subject can directly detect the user's operation to receive the oral practice request initiated by the user and output the task information to the user; when the execution subject is a server communicatively connected to the user terminal, the execution subject may receive the oral practice request sent by the user terminal and output the task information to the user terminal, so that the user terminal can present the task information to the user.
  • the above-mentioned execution subject can output task information for indicating the target oral practice task through the following steps: First, the above-mentioned execution subject can obtain the user's historical oral practice results. Then, the execution subject can determine the target oral practice task based on the acquired historical oral practice results. Finally, the above-mentioned execution subject can obtain task information used to indicate the target spoken language practice task, and output the obtained task information.
  • this implementation method can determine a task for the user that is more in line with the user's oral ability, and is helpful for realizing more effective oral practice.
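  • a minimal sketch of this optional history-based task selection is given below, assuming (as one option described in the detailed description) that each preset task carries a difficulty coefficient and that a harder task is chosen when the last score exceeds a preset score; the PracticeTask class, the thresholds, and choose_target_task are illustrative names, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PracticeTask:
    task_id: str
    task_information: str
    difficulty: float  # preset difficulty coefficient

def choose_target_task(tasks: List[PracticeTask],
                       last_score: Optional[float],
                       preset_score: float = 80.0,
                       preset_difficulty: float = 0.5) -> PracticeTask:
    """Pick a target oral practice task that suits the user's historical result."""
    if last_score is not None and last_score > preset_score:
        # Good historical result: prefer a harder task.
        candidates = [t for t in tasks if t.difficulty > preset_difficulty]
    else:
        # Poor (or missing) historical result: prefer an easier task.
        candidates = [t for t in tasks if t.difficulty <= preset_difficulty]
    return (candidates or tasks)[0]
```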
  • Step 202 Acquire voice information input by the user for the task information.
  • the user can input voice information for the acquired task information, and then the execution subject can obtain the voice information input by the user for the task information.
  • the voice information is information used to complete the target oral practice task corresponding to the task information. It is understandable that the user can input the above voice information in the language requested for oral practice.
  • the above-mentioned executive subject can recognize the voice information to determine the user's corresponding user intention information and user keyword information.
  • the intention information is used to characterize the user's intention.
  • User keyword information is used to characterize the key points of user intent. For example, the voice information input by the user is "I want to check the weather in Xiamen today", and after recognizing the voice information, the user intention information "check weather” and the user keyword information "Today; Xiamen” can be obtained.
  • the above-mentioned execution subject may use an existing natural language processing method to recognize voice information, and obtain user intention information and user keyword information.
  • the above-mentioned executive body may first use a voice recognition method to convert voice information into text information, and then use a semantic recognition method to identify user intention information and user keyword information from the converted text information.
  • in particular, in the process of recognizing the voice information, the above-mentioned execution subject may also translate the voice information so that the obtained user intention information and user keyword information are in the same language as the task intention information and the task keyword information, respectively, which helps the execution of the subsequent matching steps.
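  • the recognition step could be wired together roughly as follows; transcribe, translate_to, and extract are injected placeholders for whatever speech recognition, machine translation, and semantic recognition components an implementation chooses, since the disclosure does not prescribe specific ones.

```python
from typing import Callable, List, Tuple

def recognize_user_info(
    voice_information: bytes,
    task_language: str,
    transcribe: Callable[[bytes], str],
    translate_to: Callable[[str, str], str],
    extract: Callable[[str], Tuple[str, List[str]]],
) -> Tuple[str, List[str]]:
    """Return (user intention information, user keyword information)."""
    text = transcribe(voice_information)       # speech recognition: audio -> text
    text = translate_to(text, task_language)   # align languages before matching
    return extract(text)                       # semantic recognition: intent + keywords
```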
  • Step 204 Generate a matching result indicating whether the user has completed the target oral practice task.
  • the above-mentioned execution subject can match the user intention information obtained in step 203 against the task intention information obtained in step 201, and the user keyword information obtained in step 203 against the task keyword information obtained in step 201, to generate a matching result indicating whether the user has completed the target oral practice task.
  • the above-mentioned execution subject may use various methods to generate the above-mentioned matching result based on the matching of intention information (including the user intention information and the task intention information) and the matching of keyword information (including the user keyword information and the task keyword information). For example, the execution subject may generate a matching result indicating that the user has completed the target oral practice task when both the intention information and the keyword information are matched successfully, and generate a matching result indicating that the user has not completed the target oral practice task when either the intention information or the keyword information is not matched successfully; alternatively, the execution subject may generate a matching result indicating that the user has not completed the target oral practice task when neither the intention information nor the keyword information is matched successfully, and generate a matching result indicating that the user has completed the target oral practice task when either of them is matched successfully.
  • the above-mentioned execution subject may perform information matching (including matching of intention information and matching of keyword information) in various ways. For example, the execution subject may perform information matching by comparing whether two pieces of information are the same, in which case the matching is successful when the two pieces of information are the same; or the execution subject may perform information matching by similarity calculation, in which case the matching is successful when the similarity between the two pieces of information is greater than or equal to a preset similarity threshold.
  • in particular, when performing information matching, if the two pieces of information correspond to different languages, the above-mentioned execution subject can first translate the information so that the two pieces of information correspond to the same language, and then match the two pieces of information.
  • it can be understood that the task intention information and the task keyword information can uniquely determine the target oral practice task; therefore, by determining whether the user intention information and the user keyword information corresponding to the user respectively match the task intention information and the task keyword information, it can be determined whether the user has completed the target oral practice task.
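  • a minimal sketch of the matching step under the stricter rule above (the task counts as completed only when both the intention information and the keyword information match) might look as follows; case-insensitive string equality and a difflib similarity ratio stand in for whatever comparison or similarity measure is actually used, and the 0.85 threshold is an illustrative assumption.

```python
from difflib import SequenceMatcher
from typing import List

def pieces_match(a: str, b: str, similarity_threshold: float = 0.85) -> bool:
    """Two pieces of information match if they are equal or sufficiently similar."""
    if a.strip().lower() == b.strip().lower():
        return True
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= similarity_threshold

def generate_matching_result(user_intent: str, task_intent: str,
                             user_keywords: List[str], task_keywords: List[str]) -> bool:
    """True means the target oral practice task is completed, False means it is not."""
    intent_ok = pieces_match(user_intent, task_intent)
    # Every task keyword must be covered by at least one user keyword.
    keywords_ok = all(any(pieces_match(uk, tk) for uk in user_keywords)
                      for tk in task_keywords)
    return intent_ok and keywords_ok
```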
  • Step 205 Present the matching result to the user.
  • the above-mentioned execution subject may present the matching result to the user.
  • the above-mentioned execution subject may present the above-mentioned matching result in various forms, for example, it may be presented in the form of audio, in the form of images, in the form of text, and so on.
  • the user can know whether the task is completed according to the matching result presented by the above-mentioned execution subject.
  • the above-mentioned execution subject may also generate a score for characterizing the user's oral ability based on the voice information and the matching result, and present the generated score to the user.
  • the above-mentioned execution subject may also score the user's oral ability based on the matching result and the user's voice information, and present the obtained score.
  • specifically, the above-mentioned execution subject can score the user's oral ability based on the matching result, the fluency of the voice information, the accuracy of the wording of the voice information, and so on, and the weights of these influencing factors in the scoring process can be preset by a technician.
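  • a minimal sketch of such a weighted score is shown below; the weights, the 0-100 scale, and the score_spoken_ability name are illustrative assumptions rather than values taken from the disclosure.

```python
def score_spoken_ability(task_completed: bool,
                         fluency: float,           # 0.0 - 1.0, derived from the voice information
                         wording_accuracy: float,  # 0.0 - 1.0, derived from the voice information
                         weights=(0.5, 0.3, 0.2)) -> float:
    """Combine the matching result and voice-quality factors with preset weights."""
    w_match, w_fluency, w_wording = weights
    raw = (w_match * (1.0 if task_completed else 0.0)
           + w_fluency * fluency
           + w_wording * wording_accuracy)
    return round(100.0 * raw, 1)

# e.g. score_spoken_ability(True, fluency=0.8, wording_accuracy=0.9) -> 92.0
```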
  • the terminal device 301 can obtain the voice information 307 input by the user 302 for the task information 304. Then, the terminal device 301 can recognize the voice information 307 to determine the user intent information 308 corresponding to the user 302 (for example, "order a meal") and user keyword information 309 (for example, "Gongbao Chicken Rice Bowl; Yingdu Building") . Then, the terminal device 301 can match the user intention information 308 with the task intention information 305, and the user keyword information 309 with the task keyword information 306, respectively, to generate a matching result 310 indicating whether the user has completed the target oral practice task. Finally, the terminal device 301 may present the matching result 310 to the user 302.
  • the method provided by the above-mentioned embodiments of the present disclosure can be used for oral practice in the form of task-based dialogue.
  • compared with the template-based oral practice approach in the existing technology, the solution of the present disclosure is more intelligent, and users can organize the language by themselves to complete the task, which helps to achieve more flexible and efficient oral practice.
  • FIG. 4 shows a flow 400 of another embodiment of a method for information interaction.
  • the process 400 of the method for information interaction includes the following steps:
  • Step 401 In response to receiving a spoken language practice request initiated by a user, output task information for indicating a target spoken language practice task.
  • in this embodiment, the execution subject of the method for information interaction may, in response to receiving, through a wired connection or a wireless connection, an oral practice request initiated by the user, output task information for indicating the target oral practice task.
  • the oral practice request is used to request oral practice.
  • the target oral practice task is the oral practice task to be completed by the user who initiated the oral practice request.
  • Spoken language practice tasks are tasks that users can complete through voice conversations.
  • the task information can be used to characterize the specific content of the target oral practice task.
  • the task information corresponds to task intention information and task keyword information.
  • Task intention information can be used to characterize the goal of the target oral practice task.
  • the task keyword information can be used to characterize the key points of the above-mentioned goals.
  • Task intention information and task keyword information can be extracted from the task information.
  • Step 402 Acquire the voice information input by the user for the task information.
  • Step 403 Recognizing the voice information to determine user intention information and user keyword information corresponding to the user.
  • the above-mentioned execution subject may recognize the voice information to determine user intention information and user keyword information corresponding to the user.
  • the intention information is used to characterize the user's intention.
  • User keyword information is used to characterize the key points of user intent.
  • Step 404 Generate a matching result indicating whether the user has completed the target oral practice task.
  • the above-mentioned execution subject can match the user intention information obtained in step 403 against the task intention information obtained in step 401, and the user keyword information obtained in step 403 against the task keyword information obtained in step 401, to generate a matching result indicating whether the user has completed the target oral practice task.
  • Step 405 Present the matching result to the user.
  • the above-mentioned execution subject may present the matching result to the above-mentioned user.
  • steps 401, 402, 403, 404, and 405 can be performed in a manner similar to steps 201, 202, 203, 204, and 205 in the foregoing embodiment, respectively; the above descriptions of steps 201, 202, 203, 204, and 205 are also applicable to steps 401, 402, 403, 404, and 405, and will not be repeated here.
  • Step 406 In response to the matching result indicating that the user has not completed the target oral practice task, output auxiliary information for assisting the user in completing the target oral practice task.
  • the above-mentioned execution subject may indicate that the user has not completed the target oral practice task in response to the matching result obtained in step 404, and output auxiliary information for assisting the user in completing the target oral practice task.
  • the auxiliary information may be information generated for the user information (user intention information and/or user keyword information) that does not match the information corresponding to the task (task intention information and/or task keyword information), and is used to guide the user to input information that can match the information corresponding to the task.
  • as an example, the target oral practice task may be a meal ordering task. After matching, based on the voice information, the user intention information against the task intention information and the user keyword information against the task keyword information, the above-mentioned execution subject may determine that the meal ordering address information in the user keyword information and the meal ordering address information in the task keyword information are not matched successfully; the execution subject can then generate the auxiliary information "please enter the correct ordering address".
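  • one hypothetical way to generate such auxiliary information is to keep a guidance prompt per slot and emit prompts only for the fields that failed to match, as sketched below; the PROMPTS table and the field names are illustrative, since the disclosure only requires that the auxiliary information guide the user toward input that can match the task information.

```python
from typing import Dict, List

# Hypothetical prompt templates, keyed by the slot that failed to match.
PROMPTS: Dict[str, str] = {
    "intent": "Please say again what you would like to do.",
    "dish": "Please say again which dish you would like to order.",
    "address": "Please enter the correct ordering address.",
}

def build_auxiliary_information(unmatched_fields: List[str]) -> List[str]:
    """One guidance prompt per field that was not matched successfully."""
    return [PROMPTS.get(field, f"Please provide the {field} again.")
            for field in unmatched_fields]

# build_auxiliary_information(["address"]) -> ["Please enter the correct ordering address."]
```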
  • in some optional implementations, the above-mentioned execution subject may, in response to the matching result indicating that the user has not completed the target oral practice task, determine whether an oral practice end request input by the user has been received, and in response to not receiving the oral practice end request, output auxiliary information for assisting the user in completing the target oral practice task.
  • the oral practice end request is used to request the end of this oral practice.
  • completing the oral practice task may not be the purpose of the user.
  • the user may only want to obtain the matching result; in that case, after the matching result has been output, continuing to output auxiliary information may annoy the user.
  • the operation of outputting auxiliary information can be controlled by the user.
  • when the user inputs an oral practice end request, the auxiliary information is not output, and when the user does not input an oral practice end request, the auxiliary information is output; this improves the flexibility of the oral practice while improving its completeness.
  • Step 407 Obtain supplementary voice information input by the user for the auxiliary information.
  • the above-mentioned execution subject may obtain the supplementary voice information input by the user for the auxiliary information.
  • the supplementary voice information may be voice information input by the user after obtaining the auxiliary information and used to supplement the voice information input in step 402.
  • as an example, after respectively matching the user intention information and the user keyword information corresponding to the voice information obtained in step 402, the above-mentioned execution subject determines that the meal ordering address information in the user keyword information and the meal ordering address information in the task keyword information ("Yingdu Building") are not matched successfully, and the execution subject can then output the auxiliary information "Please enter the correct ordering address". The user can then input the supplementary voice information "Yingdu Building" for the auxiliary information.
  • Step 408 Based on the supplementary voice information, generate a new matching result indicating whether the user has completed the target oral practice task.
  • since the auxiliary information is generated for the mismatched information (including intention information and keyword information), and the supplementary voice information is input for the auxiliary information, the above-mentioned execution subject here only needs to match the supplementary voice information against the task information that was not matched successfully (including task intention information and task keyword information); the information not involved in the supplementary voice information is the information that has already been matched successfully.
  • continuing the above example, the above-mentioned execution subject can recognize the supplementary voice information "Yingdu Building" to obtain the text information "Yingdu Building", then match the text information "Yingdu Building" against the meal ordering address information "Yingdu Building" in the task keyword information, and obtain a new matching result indicating that the user has completed the target oral practice task.
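  • a minimal sketch of this partial re-matching is given below: only the task fields that previously failed to match are compared against the recognized supplementary text, and all other fields keep their earlier successful result; the field names and the exact-comparison shortcut are illustrative assumptions.

```python
from typing import Dict

def rematch_with_supplement(previous_results: Dict[str, bool],
                            task_fields: Dict[str, str],
                            supplementary_text: str) -> bool:
    """Re-check only the fields that previously failed; return the new matching result."""
    new_results = dict(previous_results)
    for field, expected in task_fields.items():
        if not previous_results.get(field, False):
            # Same comparison as in the matching sketch; exact comparison shown for brevity.
            new_results[field] = (supplementary_text.strip().lower()
                                  == expected.strip().lower())
    return all(new_results.values())

# Example: only the ordering address failed earlier and the user then says "Yingdu Building".
# rematch_with_supplement({"intent": True, "dish": True, "address": False},
#                         {"address": "Yingdu Building"}, "Yingdu Building") -> True
```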
  • in some optional implementations, the above-mentioned execution subject may, in response to the matching result indicating that the user has not completed the target oral practice task, determine whether the number of times auxiliary information for assisting the user in completing the target oral practice task has been output is less than or equal to a preset number, and in response to the number of times the auxiliary information has been output being less than or equal to the preset number, output the auxiliary information for assisting the user in completing the target oral practice task.
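  • a minimal sketch of capping the number of auxiliary outputs per practice session follows; the PracticeSession class and the preset value of 3 are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PracticeSession:
    preset_max_hints: int = 3   # preset number of times auxiliary information may be output
    hints_given: int = 0

def maybe_output_auxiliary_info(session: PracticeSession,
                                auxiliary_information: str) -> Optional[str]:
    """Return the hint to output, or None once the preset number has been reached."""
    if session.hints_given < session.preset_max_hints:
        session.hints_given += 1
        return auxiliary_information
    # Stop hinting: avoids annoying the user and saves device resources.
    return None
```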
  • compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for information interaction in this embodiment highlights the step of outputting, when the matching result indicates that the user has not completed the target oral practice task, auxiliary information for assisting the user in completing the target oral practice task. Therefore, the solution described in this embodiment can guide the user to complete the target oral practice task by outputting the auxiliary information, thereby improving the completeness of the oral practice and helping to improve the teaching performance during the oral practice process.
  • the present disclosure provides an embodiment of a device for information interaction.
  • the device embodiment corresponds to the method embodiment shown in FIG. 2.
  • the device can be applied to various electronic devices.
  • the first output unit 501 of the apparatus 500 for information interaction may output task information for indicating the target oral practice task in response to receiving the oral practice request initiated by the user through a wired connection or a wireless connection.
  • the oral practice request is used to request oral practice.
  • the target oral practice task is the oral practice task to be completed by the user who initiated the oral practice request.
  • Spoken language practice tasks are tasks that users can complete through voice conversations.
  • the task information can be used to characterize the specific content of the target oral practice task.
  • the task information corresponds to task intention information and task keyword information.
  • Task intention information can be used to characterize the goal of the target oral practice task.
  • the task keyword information can be used to characterize the key points of the above-mentioned goals.
  • Task intention information and task keyword information can be extracted from the task information.
  • the user can input voice information for the acquired task information, and the first acquisition unit 502 can acquire the voice information input by the user for the task information.
  • the voice information is information used to complete the target oral practice task corresponding to the task information. It is understandable that the user can input the above voice information in the language requested for oral practice.
  • the recognition unit 503 may recognize the voice information to determine the user intention information and user keyword information corresponding to the user.
  • the intention information is used to characterize the user's intention.
  • User keyword information is used to characterize the key points of user intent.
  • the first generating unit 504 can match the user intention information obtained by the recognition unit 503 against the task intention information obtained by the first output unit 501, and the user keyword information obtained by the recognition unit 503 against the task keyword information obtained by the first output unit 501, to generate a matching result indicating whether the user has completed the target oral practice task.
  • the first presenting unit 505 may present the matching result to the user.
  • in some embodiments, the device 500 further includes: a second generating unit (not shown in the figure) configured to generate, based on the voice information and the matching result, a score for characterizing the user's oral ability; and a second presentation unit (not shown in the figure) configured to present the generated score to the user.
  • in some embodiments, the device 500 further includes: a second output unit (not shown in the figure) configured to, in response to the matching result indicating that the user has not completed the target oral practice task, output auxiliary information for assisting the user in completing the target oral practice task; a second acquisition unit (not shown in the figure) configured to acquire supplementary voice information input by the user for the auxiliary information; and a third generating unit (not shown in the figure) configured to generate, based on the supplementary voice information, a new matching result indicating whether the user has completed the target oral practice task.
  • in some embodiments, the second output unit includes: a first determination module (not shown in the figure) configured to, in response to the matching result indicating that the user has not completed the target oral practice task, determine whether an oral practice end request input by the user has been received; and a first output module (not shown in the figure) configured to, in response to not receiving the oral practice end request, output auxiliary information for assisting the user in completing the target oral practice task.
  • in some embodiments, the second output unit includes: a second determination module (not shown in the figure) configured to, in response to the matching result indicating that the user has not completed the target oral practice task, determine whether the number of times auxiliary information for assisting the user in completing the target oral practice task has been output is less than or equal to a preset number; and a second output module (not shown in the figure) configured to, in response to the number of times the auxiliary information has been output being less than or equal to the preset number, output the auxiliary information for assisting the user in completing the target oral practice task.
  • in some embodiments, the first output unit 501 includes: an acquisition module (not shown in the figure) configured to acquire the user's historical oral practice results; a third determination module (not shown in the figure) configured to determine the target oral practice task based on the acquired historical oral practice results; and a third output module (not shown in the figure) configured to acquire the task information for indicating the target oral practice task and output the acquired task information.
  • the device 500 provided by the above-mentioned embodiment of the present disclosure can be used for oral practice in the form of task-based dialogue.
  • compared with the template-based oral practice approach in the existing technology, the solution of the present disclosure is more intelligent, and users can organize the language by themselves to complete the task, which helps to achieve more flexible and efficient oral practice.
  • FIG. 6 shows a schematic structural diagram of an electronic device (such as the terminal device in FIG. 1) 600 suitable for implementing embodiments of the present disclosure.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), vehicle-mounted terminals (e.g. Mobile terminals such as car navigation terminals) and fixed terminals such as digital TVs, desktop computers, etc.
  • the electronic device shown in FIG. 6 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 600 may include a processing device (such as a central processing unit, a graphics processor, etc.) 601, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
  • the RAM 603 also stores various programs and data required for the operation of the electronic device 600.
  • the processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604.
  • the following devices can be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 609.
  • the communication device 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data.
  • although FIG. 6 shows an electronic device 600 having various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602.
  • when the computer program is executed by the processing device 601, the above-mentioned functions defined in the method of the embodiment of the present disclosure are executed.
  • the computer-readable medium described in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above. More specific examples of computer-readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: in response to receiving an oral practice request initiated by a user, output task information for indicating a target oral practice task, where the task information corresponds to task intention information and task keyword information; acquire voice information input by the user for the task information; recognize the voice information to determine user intention information and user keyword information corresponding to the user; generate a matching result for indicating whether the user has completed the target oral practice task, where the matching result is obtained by the following steps: respectively matching the user intention information against the task intention information and the user keyword information against the task keyword information to obtain the matching result; and present the matching result to the user.
  • the computer program code used to perform the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
  • such programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more for realizing the specified logical function Executable instructions.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and the combination of the blocks in the block diagram and/or flowchart can be implemented by a dedicated hardware-based system that performs the specified functions or operations Or it can be realized by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure can be implemented in software or hardware. Wherein, the name of the unit does not constitute a limitation on the unit itself under certain circumstances.
  • the first output unit can also be described as "a unit that outputs task information.”

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Liquid Crystal Substances (AREA)

Abstract

A method and apparatus for information interaction, the method comprising: in response to receiving an oral practice request initiated by a user, outputting task information for indicating a target oral practice task (201), where the task information corresponds to task intention information and task keyword information; acquiring voice information input by the user for the task information (202); recognizing the voice information to determine user intention information and user keyword information corresponding to the user (203); generating a matching result for indicating whether the user has completed the target oral practice task (204), where the matching result is obtained by the following steps: respectively matching the user intention information against the task intention information and the user keyword information against the task keyword information to obtain the matching result; and presenting the matching result to the user (205). The method enables more flexible and efficient oral practice.

Description

Method and apparatus for information interaction
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application No. 202010120450.X, filed on February 26, 2020, and entitled "Method and Apparatus for Information Interaction", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a method and apparatus for information interaction.
Background
With the development of computer technology, oral practice based on computer technology has already become achievable. Specifically, the oral practice here may be oral practice for various languages, such as English and French.
At present, oral practice mainly takes the form of automatic scoring, in which the user reads aloud a template or sentence provided by the system and the system scores the reading.
Summary
Embodiments of the present disclosure propose a method and apparatus for information interaction.
In a first aspect, an embodiment of the present disclosure provides a method for information interaction, the method including: in response to receiving an oral practice request initiated by a user, outputting task information for indicating a target oral practice task, where the task information corresponds to task intention information and task keyword information; acquiring voice information input by the user for the task information; recognizing the voice information to determine user intention information and user keyword information corresponding to the user; generating a matching result for indicating whether the user has completed the target oral practice task, where the matching result is obtained by the following steps: respectively matching the user intention information against the task intention information and the user keyword information against the task keyword information to obtain the matching result; and presenting the matching result to the user.
In some embodiments, the method further includes: generating, based on the voice information and the matching result, a score for characterizing the oral ability of the user; and presenting the generated score to the user.
In some embodiments, the method further includes: in response to the matching result indicating that the user has not completed the target oral practice task, outputting auxiliary information for assisting the user in completing the target oral practice task; acquiring supplementary voice information input by the user for the auxiliary information; and generating, based on the supplementary voice information, a new matching result for indicating whether the user has completed the target oral practice task.
In some embodiments, outputting, in response to the matching result indicating that the user has not completed the target oral practice task, the auxiliary information for assisting the user in completing the target oral practice task includes: in response to the matching result indicating that the user has not completed the target oral practice task, determining whether an oral practice end request input by the user has been received; and in response to not receiving the oral practice end request, outputting the auxiliary information for assisting the user in completing the target oral practice task.
In some embodiments, outputting, in response to the matching result indicating that the user has not completed the target oral practice task, the auxiliary information for assisting the user in completing the target oral practice task includes: in response to the matching result indicating that the user has not completed the target oral practice task, determining whether the number of times the auxiliary information for assisting the user in completing the target oral practice task has been output is less than or equal to a preset number; and in response to the number of times the auxiliary information has been output being less than or equal to the preset number, outputting the auxiliary information for assisting the user in completing the target oral practice task.
In some embodiments, outputting the task information for indicating the target oral practice task includes: acquiring historical oral practice results of the user; determining the target oral practice task based on the acquired historical oral practice results; and acquiring the task information for indicating the target oral practice task, and outputting the acquired task information.
In a second aspect, an embodiment of the present disclosure provides an apparatus for information interaction, the apparatus including: a first output unit configured to, in response to receiving an oral practice request initiated by a user, output task information for indicating a target oral practice task, where the task information corresponds to task intention information and task keyword information; a first acquisition unit configured to acquire voice information input by the user for the task information; a recognition unit configured to recognize the voice information to determine user intention information and user keyword information corresponding to the user; a first generating unit configured to generate a matching result for indicating whether the user has completed the target oral practice task, where the matching result is obtained by the following steps: respectively matching the user intention information against the task intention information and the user keyword information against the task keyword information to obtain the matching result; and a first presentation unit configured to present the matching result to the user.
In some embodiments, the apparatus further includes: a second generating unit configured to generate, based on the voice information and the matching result, a score for characterizing the oral ability of the user; and a second presentation unit configured to present the generated score to the user.
In some embodiments, the apparatus further includes: a second output unit configured to, in response to the matching result indicating that the user has not completed the target oral practice task, output auxiliary information for assisting the user in completing the target oral practice task; a second acquisition unit configured to acquire supplementary voice information input by the user for the auxiliary information; and a third generating unit configured to generate, based on the supplementary voice information, a new matching result for indicating whether the user has completed the target oral practice task.
In some embodiments, the second output unit includes: a first determination module configured to, in response to the matching result indicating that the user has not completed the target oral practice task, determine whether an oral practice end request input by the user has been received; and a first output module configured to, in response to not receiving the oral practice end request, output the auxiliary information for assisting the user in completing the target oral practice task.
In some embodiments, the second output unit includes: a second determination module configured to, in response to the matching result indicating that the user has not completed the target oral practice task, determine whether the number of times the auxiliary information for assisting the user in completing the target oral practice task has been output is less than or equal to a preset number; and a second output module configured to, in response to the number of times the auxiliary information has been output being less than or equal to the preset number, output the auxiliary information for assisting the user in completing the target oral practice task.
In some embodiments, the first output unit includes: an acquisition module configured to acquire historical oral practice results of the user; a third determination module configured to determine the target oral practice task based on the acquired historical oral practice results; and a third output module configured to acquire the task information for indicating the target oral practice task and output the acquired task information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the embodiments of the above method for information interaction.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium storing a computer program which, when executed by a processor, implements the method of any one of the embodiments of the above method for information interaction.
The method and apparatus for information interaction provided by the embodiments of the present disclosure output, in response to receiving an oral practice request initiated by a user, task information for indicating a target oral practice task, where the task information corresponds to task intention information and task keyword information; then acquire voice information input by the user for the task information; then recognize the voice information to determine user intention information and user keyword information corresponding to the user; and finally respectively match the user intention information against the task intention information and the user keyword information against the task keyword information to generate a matching result for indicating whether the user has completed the target oral practice task, and present the matching result to the user. Oral practice can thus be conducted in the form of a task-based dialogue. Compared with the template-based oral practice approach in the existing technology, the solution of the present disclosure is more intelligent, and the user can organize the language by himself or herself to complete the task, which helps to achieve more flexible and efficient oral practice.
Brief Description of the Drawings
Other features, objects, and advantages of the present disclosure will become more apparent by reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure can be applied;
Fig. 2 is a flowchart of an embodiment of a method for information interaction according to the present disclosure;
Fig. 3 is a schematic diagram of an application scenario of the method for information interaction according to an embodiment of the present disclosure;
Fig. 4 is a flowchart of another embodiment of the method for information interaction according to the present disclosure;
Fig. 5 is a schematic structural diagram of an embodiment of an apparatus for information interaction according to the present disclosure;
Fig. 6 is a schematic structural diagram of a computer system of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the relevant invention, rather than to limit the invention. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments of the present disclosure and the features in the embodiments can be combined with each other. The present disclosure will be described in detail below with reference to the drawings and in combination with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which an embodiment of the method for information interaction or the apparatus for information interaction of the present disclosure can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 is a medium used to provide communication links between the terminal devices 101, 102, and 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or optical fiber cables.
A user may use the terminal devices 101, 102, and 103 to interact with the server 105 through the network 104 to receive or send messages and so on. Various client applications may be installed on the terminal devices 101, 102, and 103, such as language teaching applications, voice interaction applications, web browser applications, search applications, instant messaging tools, and social platform software.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices with a voice acquisition function, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and so on. When the terminal devices 101, 102, and 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example, a backend server of the language teaching applications installed on the terminal devices 101, 102, and 103. In response to receiving an oral practice request initiated by a user through the terminal devices 101, 102, and 103, the backend server may output task information of a target oral practice task, acquire voice information input by the user using the terminal devices 101, 102, and 103, analyze and otherwise process data such as the received voice information, and obtain and output a processing result (for example, a matching result for indicating whether the user has completed the target oral practice task).
It should be noted that the method for information interaction provided by the embodiments of the present disclosure may be executed by the terminal devices 101, 102, and 103, or by the server 105; accordingly, the apparatus for information interaction may be provided in the terminal devices 101, 102, and 103, or in the server 105.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs. In particular, in the case where the data used in the process of generating the matching result for indicating whether the user has completed the target oral practice task does not need to be acquired from other electronic devices, the above system architecture may not include the network, but only include the terminal device or the server.
Continuing to refer to Fig. 2, a flow 200 of an embodiment of a method for information interaction according to the present disclosure is shown. The method for information interaction includes the following steps:
Step 201: in response to receiving an oral practice request initiated by a user, output task information for indicating a target oral practice task.
In this embodiment, the execution subject of the method for information interaction (for example, the terminal device shown in Fig. 1) may, in response to receiving, through a wired connection or a wireless connection, an oral practice request initiated by a user, output task information for indicating a target oral practice task. The oral practice request is used to request oral practice. Specifically, the oral practice requested by the user may be oral practice for various languages, for example, spoken English practice, spoken French practice, or spoken Chinese practice.
In practice, the user may initiate the oral practice request in various ways, for example, by clicking a button displayed on a page for oral practice, or by touching a preset switch for oral practice.
In this embodiment, the execution subject may output the task information for indicating the target oral practice task in response to receiving the oral practice request. The target oral practice task is the oral practice task to be completed by the user who initiated the oral practice request. An oral practice task is a task that the user can complete through a voice dialogue. For example, an oral practice task may be a meal ordering task, a self-introduction task, a flight booking task, and so on. Specifically, the target oral practice task may be a preset task, or a task selected by the user from multiple preset oral practice tasks.
The task information may be used to characterize the specific content of the target oral practice task. For example, if the target oral practice task is a meal ordering task, the corresponding task information may be "Order a Gongbao Chicken Rice Bowl and send it to Yingdu Building on Zhichun Road". Specifically, the task information may be information preset for the target oral practice task, or information generated after receiving the oral practice request. For example, if the target oral practice task is a meal ordering task, after receiving the oral practice request, the execution subject may detect that the current location is "Xidan Joy City" and then generate the task information "Order a Gongbao Chicken Rice Bowl and send it to Xidan Joy City".
In this embodiment, the task information corresponds to task intention information and task keyword information. The task intention information may be used to characterize the goal of the target oral practice task. The task keyword information may be used to characterize the key points of the goal. The task intention information and the task keyword information may be extracted from the task information. For example, for the task information "Order a Gongbao Chicken Rice Bowl and send it to Xidan Joy City", the task intention information "order a meal" and the task keyword information "Gongbao Chicken Rice Bowl; Xidan Joy City" can be extracted.
Specifically, various methods may be used to extract the task intention information and the task keyword information from the task information. For example, when the task information is preset information, the task intention information and the task keyword information may be extracted from the task information in advance by a technician; or an existing natural language processing method may be used to extract the task intention information and the task keyword information from the task information.
It should be noted that, when the execution subject is a user terminal, the execution subject may directly detect the user's operation to receive the oral practice request initiated by the user and output the task information to the user; when the execution subject is a server communicatively connected to the user terminal, the execution subject may receive the oral practice request sent by the user terminal and output the task information to the user terminal, so that the user terminal presents the task information to the user.
In some optional implementations of this embodiment, the execution subject may output the task information for indicating the target oral practice task through the following steps: first, the execution subject may acquire historical oral practice results of the user; then, the execution subject may determine the target oral practice task based on the acquired historical oral practice results; finally, the execution subject may acquire the task information for indicating the target oral practice task and output the acquired task information.
The historical oral practice results may be results obtained when the user performed historical oral practice tasks, for example, a score for characterizing the oral ability of the user obtained when the user last performed an oral practice task.
In this implementation, the preset oral practice tasks may differ in difficulty (for example, the difficulty may be characterized by a preset difficulty coefficient). The execution subject may then determine a more difficult target oral practice task for the user (for example, an oral practice task whose difficulty coefficient is greater than a preset difficulty coefficient) when the historical oral practice result is relatively good (for example, the last obtained score is greater than a preset score), and determine an easier target oral practice task for the user (for example, an oral practice task whose difficulty coefficient is less than or equal to the preset difficulty coefficient) when the historical oral practice result is relatively poor (for example, the last obtained score is less than or equal to the preset score).
Through the user's historical oral practice results, this implementation can determine a task that better matches the user's oral ability, which helps to achieve more effective oral practice.
Step 202: acquire voice information input by the user for the task information.
In this embodiment, after the task information is output, the user may input voice information for the acquired task information, and the execution subject may then acquire the voice information input by the user for the task information. The voice information is information used to complete the target oral practice task corresponding to the task information. It can be understood that the user may input the voice information in the language for which oral practice is requested.
Step 203: recognize the voice information to determine user intention information and user keyword information corresponding to the user.
In this embodiment, based on the voice information obtained in step 202, the execution subject may recognize the voice information to determine the user intention information and the user keyword information corresponding to the user. The user intention information is used to characterize the user's intention. The user keyword information is used to characterize the key points of the user's intention. For example, if the voice information input by the user is "I want to check the weather in Xiamen today", after the voice information is recognized, the user intention information "check the weather" and the user keyword information "today; Xiamen" can be obtained.
Specifically, the execution subject may use an existing natural language processing method to recognize the voice information and obtain the user intention information and the user keyword information. As an example, the execution subject may first use a speech recognition method to convert the voice information into text information, and then use a semantic recognition method to identify the user intention information and the user keyword information from the converted text information.
In particular, in the process of recognizing the voice information, the execution subject may also translate the voice information so that the obtained user intention information and user keyword information are in the same language as the task intention information and the task keyword information, respectively, which helps the execution of the subsequent matching steps.
Step 204: generate a matching result for indicating whether the user has completed the target oral practice task.
In this embodiment, the execution subject may match the user intention information obtained in step 203 against the task intention information obtained in step 201, and the user keyword information obtained in step 203 against the task keyword information obtained in step 201, to generate a matching result for indicating whether the user has completed the target oral practice task.
Specifically, the execution subject may use various methods to generate the matching result based on the matching of the intention information (including the user intention information and the task intention information) and the matching of the keyword information (including the user keyword information and the task keyword information). For example, the execution subject may generate a matching result indicating that the user has completed the target oral practice task when both the intention information and the keyword information are matched successfully, and generate a matching result indicating that the user has not completed the target oral practice task when either the intention information or the keyword information is not matched successfully; alternatively, the execution subject may generate a matching result indicating that the user has not completed the target oral practice task when neither the intention information nor the keyword information is matched successfully, and generate a matching result indicating that the user has completed the target oral practice task when either the intention information or the keyword information is matched successfully.
Specifically, the execution subject may perform information matching (including matching of intention information and matching of keyword information) in various ways. For example, the execution subject may perform information matching by comparing whether two pieces of information are the same, in which case the matching is successful when the two pieces of information are the same; or the execution subject may perform information matching by similarity calculation, in which case the matching is successful when the similarity between the two pieces of information is greater than or equal to a preset similarity threshold.
In particular, it should be noted that, when performing information matching, if the two pieces of information correspond to different languages, the execution subject may first translate the information so that the two pieces of information correspond to the same language, and then match the two pieces of information.
It can be understood that the task intention information and the task keyword information can uniquely determine the target oral practice task; therefore, by determining whether the user intention information and the user keyword information corresponding to the user respectively match the task intention information and the task keyword information, it can be determined whether the user has completed the target oral practice task.
Step 205: present the matching result to the user.
In this embodiment, based on the matching result obtained in step 204, the execution subject may present the matching result to the user.
Specifically, the execution subject may present the matching result in various forms, for example, in the form of audio, in the form of images, or in the form of text. The user can learn whether the task has been completed according to the matching result presented by the execution subject.
In some optional implementations of this embodiment, the execution subject may further generate, based on the voice information and the matching result, a score for characterizing the oral ability of the user, and present the generated score to the user. Here, while presenting the matching result, the execution subject may also score the user's oral ability based on the matching result and the user's voice information, and present the obtained score. Specifically, the execution subject may score the user's oral ability based on the matching result, the fluency of the voice information, the accuracy of the wording of the voice information, and so on, and the weights of these influencing factors in the scoring process may be preset by a technician.
Continuing to refer to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for information interaction according to this embodiment. In the application scenario of Fig. 3, the terminal device 301 may first, in response to receiving an oral practice request 303 initiated by a user 302, output task information 304 (for example, "Order a Gongbao Chicken Rice Bowl and send it to Yingdu Building on Zhichun Road") for indicating a target oral practice task (for example, a meal ordering task), where the task information 304 corresponds to task intention information 305 (for example, "order a meal") and task keyword information 306 (for example, "Gongbao Chicken Rice Bowl; Yingdu Building on Zhichun Road"). Then, the terminal device 301 may acquire voice information 307 input by the user 302 for the task information 304. Next, the terminal device 301 may recognize the voice information 307 to determine user intention information 308 (for example, "order a meal") and user keyword information 309 (for example, "Gongbao Chicken Rice Bowl; Yingdu Building") corresponding to the user 302. Then, the terminal device 301 may respectively match the user intention information 308 against the task intention information 305 and the user keyword information 309 against the task keyword information 306 to generate a matching result 310 for indicating whether the user has completed the target oral practice task. Finally, the terminal device 301 may present the matching result 310 to the user 302.
The method provided by the above embodiments of the present disclosure can conduct oral practice in the form of a task-based dialogue. Compared with the template-based oral practice approach in the existing technology, the solution of the present disclosure is more intelligent, and the user can organize the language by himself or herself to complete the task, which helps to achieve more flexible and efficient oral practice.
Referring further to Fig. 4, a flow 400 of another embodiment of the method for information interaction is shown. The flow 400 of the method for information interaction includes the following steps:
Step 401: in response to receiving an oral practice request initiated by a user, output task information for indicating a target oral practice task.
In this embodiment, the execution subject of the method for information interaction (for example, the terminal device shown in Fig. 1) may, in response to receiving, through a wired connection or a wireless connection, an oral practice request initiated by a user, output task information for indicating a target oral practice task. The oral practice request is used to request oral practice. The target oral practice task is the oral practice task to be completed by the user who initiated the oral practice request. An oral practice task is a task that the user can complete through a voice dialogue. The task information may be used to characterize the specific content of the target oral practice task.
In this embodiment, the task information corresponds to task intention information and task keyword information. The task intention information may be used to characterize the goal of the target oral practice task. The task keyword information may be used to characterize the key points of the goal. The task intention information and the task keyword information may be extracted from the task information.
Step 402: acquire voice information input by the user for the task information.
In this embodiment, after the task information is output, the user may input voice information for the acquired task information, and the execution subject may then acquire the voice information input by the user for the task information. The voice information is information used to complete the target oral practice task corresponding to the task information. It can be understood that the user may input the voice information in the language for which oral practice is requested.
Step 403: recognize the voice information to determine user intention information and user keyword information corresponding to the user.
In this embodiment, based on the voice information obtained in step 402, the execution subject may recognize the voice information to determine the user intention information and the user keyword information corresponding to the user. The user intention information is used to characterize the user's intention. The user keyword information is used to characterize the key points of the user's intention.
Step 404: generate a matching result for indicating whether the user has completed the target oral practice task.
In this embodiment, the execution subject may match the user intention information obtained in step 403 against the task intention information obtained in step 401, and the user keyword information obtained in step 403 against the task keyword information obtained in step 401, to generate a matching result for indicating whether the user has completed the target oral practice task.
Step 405: present the matching result to the user.
In this embodiment, based on the matching result obtained in step 404, the execution subject may present the matching result to the user.
Steps 401, 402, 403, 404, and 405 may be performed in a manner similar to steps 201, 202, 203, 204, and 205 in the foregoing embodiment, respectively. The above descriptions of steps 201, 202, 203, 204, and 205 are also applicable to steps 401, 402, 403, 404, and 405, and will not be repeated here.
步骤406,响应于匹配结果指示用户未完成目标口语练习任务,输出用于辅助用户完成目标口语练习任务的辅助信息。
在本实施例中,上述执行主体可以响应于步骤404中得到的匹配结果指示用户未完成目标口语练习任务,输出用于辅助用户完成目标口语练习任务的辅助信息。其中,辅助信息可以是针对与任务对应的信息(任务意图信息和/或任务关键词信息)不匹配的用户信息(用户意图信息和/或用户关键词信息)生成的信息,用于引导用户输入能够与任务对应的信息匹配的信息。
具体的,作为示例,目标口语练习任务可以为订餐任务,上述执行主体可以在基于语音信息,进行用户意图信息和任务意图信息,以及用户关键词信息和任务关键词信息的匹配后,确定用户关键词信息中的订餐地址信息和任务关键词信息中的订餐地址信息未匹配成功,则上述执行主体可以生成辅助信息“请输入正确的订餐地址”。
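Continuing the illustrative sketch, auxiliary prompts could be generated from the unmatched items recorded in the matching result; the prompt templates are assumptions.

    def build_auxiliary_info(matching_result):
        """Turn unmatched items into guidance prompts for the user."""
        prompts = []
        if not matching_result["intent_matched"]:
            prompts.append("Please restate what you would like to do.")
        for kw in matching_result["unmatched_keywords"]:
            # A real system would name the slot (e.g. the delivery address)
            # rather than echo the expected keyword itself.
            prompts.append(f"Please provide the correct information for: {kw}")
        return prompts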
In some optional implementations of this embodiment, the above execution body may, in response to the matching result indicating that the user has not completed the target spoken language practice task, determine whether a spoken-language-practice end request input by the user has been received, and, in response to not receiving the spoken-language-practice end request, output the auxiliary information for assisting the user in completing the target spoken language practice task. The spoken-language-practice end request is used to request ending the current spoken language practice.
In practice, completing the spoken language practice task may not be the user's purpose; the user may only want to obtain the matching result, and continuing to output auxiliary information after the matching result has been output may annoy the user. This implementation places the operation of outputting auxiliary information under the user's control: when the user inputs the spoken-language-practice end request, the auxiliary information is not output, and when the user does not input the spoken-language-practice end request, the auxiliary information is output. In this way, the flexibility of the spoken language practice can be improved while improving its completeness.
Step 407: acquire the supplementary voice information input by the user with respect to the auxiliary information.
In this embodiment, after the auxiliary information is output, the above execution body may acquire the supplementary voice information input by the user with respect to the auxiliary information. The supplementary voice information may be voice information input by the user after obtaining the auxiliary information, and is used to supplement the voice information input in step 402.
As an example, after matching the user intent information and the user keyword information corresponding to the voice information obtained in step 402, the above execution body determines that the delivery-address information in the user keyword information does not match the delivery-address information ("Yingdu Building") in the task keyword information, and the above execution body may then output the auxiliary information "Please provide the correct delivery address". The user may then input the supplementary voice information "Yingdu Building" with respect to the auxiliary information.
Step 408: generate, based on the supplementary voice information, a new matching result indicating whether the user has completed the target spoken language practice task.
In this embodiment, based on the supplementary voice information obtained in step 407, the above execution body may generate a new matching result indicating whether the user has completed the target spoken language practice task.
Specifically, since the auxiliary information is generated for the unmatched information (including intent information and keyword information), and the supplementary voice information is input with respect to the auxiliary information, the above execution body here only needs to match the supplementary voice information against the task-related information that was not matched successfully (including the task intent information and the task keyword information); the information not involved in the supplementary voice information is information that has already been matched successfully.
Specifically, if the unmatched information is intent information, the above execution body may first recognize the supplementary voice information to obtain supplementary intent information, and then match the supplementary intent information against the task intent information to obtain the new matching result. If the unmatched information is keyword information, the above execution body may first recognize the supplementary voice information to obtain supplementary keyword information, and then match the supplementary keyword information against the task keyword information to obtain the new matching result.
Continuing the above example, after the user inputs the supplementary voice information "Yingdu Building" with respect to the auxiliary information "Please provide the correct delivery address", the above execution body may recognize the voice information "Yingdu Building" to obtain the text information "Yingdu Building", and then match the text information "Yingdu Building" against the delivery-address information "Yingdu Building" in the task keyword information, to obtain a new matching result indicating that the user has completed the target spoken language practice task.
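As a rough sketch, the re-matching step only needs to revisit the items that failed previously; this reuses the hypothetical helpers introduced above.

    def rematch_with_supplement(matching_result, task, supplement_text):
        """Match supplementary input against only the items that failed before."""
        still_unmatched = [kw for kw in matching_result["unmatched_keywords"]
                           if not info_match(supplement_text, kw)]
        intent_ok = (matching_result["intent_matched"]
                     or info_match(supplement_text, task.intent))
        return {
            "completed": intent_ok and not still_unmatched,
            "intent_matched": intent_ok,
            "unmatched_keywords": still_unmatched,
        }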
Optionally, in the case where the user still has not completed the target spoken language practice task after inputting the supplementary voice information (that is, the new matching result indicates that the user has not completed the target spoken language practice task), the above execution body may further output auxiliary information based on the information that has not been matched successfully, so as to further guide the user to complete the target spoken language practice task.
In some optional implementations of this embodiment, the above execution body may, in response to the matching result indicating that the user has not completed the target spoken language practice task, determine whether the number of times the auxiliary information for assisting the user in completing the target spoken language practice task has been output is less than or equal to a preset number, and, in response to the number of times the auxiliary information has been output being less than or equal to the preset number, output the auxiliary information for assisting the user in completing the target spoken language practice task.
This implementation can limit the number of times the auxiliary information is output, which helps reduce the possibility of outputting auxiliary information without restraint and thereby annoying the user; moreover, outputting auxiliary information consumes device resources, so limiting the number of times the auxiliary information is output also helps reduce the consumption of device resources.
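A minimal sketch of that cap, assuming the output count is tracked per practice session; MAX_AUX_OUTPUTS and the session dictionary are assumed details, and this also folds in the optional end-request check described above.

    MAX_AUX_OUTPUTS = 3          # assumed preset number of auxiliary outputs

    def maybe_output_auxiliary_info(matching_result, session, end_requested=False):
        """Output auxiliary prompts only while the user has not asked to stop
        and the per-session output count stays within the preset limit."""
        if matching_result["completed"] or end_requested:
            return []
        if session.get("aux_count", 0) > MAX_AUX_OUTPUTS:
            return []
        session["aux_count"] = session.get("aux_count", 0) + 1
        return build_auxiliary_info(matching_result)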
As can be seen from FIG. 4, compared with the embodiment corresponding to FIG. 2, the flow 400 of the method for information interaction in this embodiment highlights that, when the matching result indicates that the user has not completed the target spoken language practice task, auxiliary information for assisting the user in completing the target spoken language practice task is output. Therefore, the solution described in this embodiment can guide the user to complete the target spoken language practice task by outputting auxiliary information, which can improve the completeness of the spoken language practice and helps enhance the teaching performance of the spoken language practice process.
Further referring to FIG. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a device for information interaction. This device embodiment corresponds to the method embodiment shown in FIG. 2, and the device may be specifically applied to various electronic devices.
As shown in FIG. 5, the device 500 for information interaction of this embodiment includes: a first output unit 501, a first acquisition unit 502, a recognition unit 503, a first generation unit 504 and a first presentation unit 505. The first output unit 501 is configured to, in response to receiving a spoken language practice request initiated by a user, output task information indicating a target spoken language practice task, where the task information corresponds to task intent information and task keyword information; the first acquisition unit 502 is configured to acquire the voice information input by the user with respect to the task information; the recognition unit 503 is configured to recognize the voice information to determine the user intent information and the user keyword information corresponding to the user; the first generation unit 504 is configured to generate a matching result indicating whether the user has completed the target spoken language practice task, where the matching result is obtained through the following step: matching the user intent information against the task intent information and the user keyword information against the task keyword information, respectively, to obtain the matching result; the first presentation unit 505 is configured to present the matching result to the user.
In this embodiment, the first output unit 501 of the device 500 for information interaction may, in response to receiving a spoken language practice request initiated by a user through a wired connection or a wireless connection, output task information indicating a target spoken language practice task. The spoken language practice request is used to request spoken language practice. The target spoken language practice task is the spoken language practice task to be completed by the user who initiated the spoken language practice request. A spoken language practice task is a task that the user can complete through a voice dialogue. The task information may be used to characterize the specific content of the target spoken language practice task.
In this embodiment, the task information corresponds to task intent information and task keyword information. The task intent information may be used to characterize the goal of the target spoken language practice task. The task keyword information may be used to characterize the key points of that goal. The task intent information and the task keyword information may be extracted from the task information.
In this embodiment, after the first output unit 501 outputs the task information, the user may input voice information with respect to the acquired task information, and the first acquisition unit 502 may then acquire the voice information input by the user with respect to the task information. The voice information is information used to complete the target spoken language practice task corresponding to the task information. It can be understood that the user may input the above voice information in the language in which the user has requested to practice speaking.
In this embodiment, based on the voice information obtained by the first acquisition unit 502, the recognition unit 503 may recognize the voice information to determine the user intent information and the user keyword information corresponding to the user. The user intent information is used to characterize the user's intent. The user keyword information is used to characterize the key points of the user's intent.
In this embodiment, the first generation unit 504 may match the user intent information obtained by the recognition unit 503 against the task intent information obtained by the first output unit 501, and match the user keyword information obtained by the recognition unit 503 against the task keyword information obtained by the first output unit 501, to generate a matching result indicating whether the user has completed the target spoken language practice task.
In this embodiment, based on the matching result obtained by the first generation unit 504, the first presentation unit 505 may present the matching result to the user.
In some optional implementations of this embodiment, the device 500 further includes: a second generation unit (not shown in the figure) configured to generate, based on the voice information and the matching result, a score characterizing the user's spoken language ability; and a second presentation unit (not shown in the figure) configured to present the generated score to the user.
In some optional implementations of this embodiment, the device 500 further includes: a second output unit (not shown in the figure) configured to, in response to the matching result indicating that the user has not completed the target spoken language practice task, output auxiliary information for assisting the user in completing the target spoken language practice task; a second acquisition unit (not shown in the figure) configured to acquire the supplementary voice information input by the user with respect to the auxiliary information; and a third generation unit (not shown in the figure) configured to generate, based on the supplementary voice information, a new matching result indicating whether the user has completed the target spoken language practice task.
In some optional implementations of this embodiment, the second output unit includes: a first determination module (not shown in the figure) configured to, in response to the matching result indicating that the user has not completed the target spoken language practice task, determine whether a spoken-language-practice end request input by the user has been received; and a first output module (not shown in the figure) configured to, in response to not receiving the spoken-language-practice end request, output the auxiliary information for assisting the user in completing the target spoken language practice task.
In some optional implementations of this embodiment, the second output unit includes: a second determination module (not shown in the figure) configured to, in response to the matching result indicating that the user has not completed the target spoken language practice task, determine whether the number of times the auxiliary information for assisting the user in completing the target spoken language practice task has been output is less than or equal to a preset number; and a second output module (not shown in the figure) configured to, in response to the number of times the auxiliary information has been output being less than or equal to the preset number, output the auxiliary information for assisting the user in completing the target spoken language practice task.
In some optional implementations of this embodiment, the first output unit 501 includes: an acquisition module (not shown in the figure) configured to acquire the user's historical spoken language practice results; a third determination module (not shown in the figure) configured to determine the target spoken language practice task based on the acquired historical spoken language practice results; and a third output module (not shown in the figure) configured to acquire the task information indicating the target spoken language practice task and output the acquired task information.
It can be understood that the units described in the device 500 correspond to the respective steps in the method described with reference to FIG. 2. Therefore, the operations, features and beneficial effects described above for the method also apply to the device 500 and the units contained therein, and will not be repeated here.
The device 500 provided by the above embodiment of the present disclosure can conduct spoken language practice in the form of a task-oriented dialogue. Compared with the template-based spoken language practice in the prior art, the solution of the present disclosure is more intelligent: the user can organize the language by himself or herself to complete the task, which helps achieve more flexible and efficient spoken language practice.
Referring now to FIG. 6, it shows a schematic structural diagram of an electronic device (for example, the terminal device in FIG. 1) 600 suitable for implementing the embodiments of the present disclosure. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing apparatus (for example, a central processing unit, a graphics processing unit, etc.) 601, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602 and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following apparatuses may be connected to the I/O interface 605: an input apparatus 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows the electronic device 600 with various apparatuses, it should be understood that it is not required to implement or have all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be contained in the above electronic device, or may exist separately without being assembled into the electronic device. The above computer-readable medium carries one or more programs, and when the above one or more programs are executed by the electronic device, the electronic device is caused to: in response to receiving a spoken language practice request initiated by a user, output task information indicating a target spoken language practice task, where the task information corresponds to task intent information and task keyword information; acquire the voice information input by the user with respect to the task information; recognize the voice information to determine the user intent information and the user keyword information corresponding to the user; generate a matching result indicating whether the user has completed the target spoken language practice task, where the matching result is obtained through the following step: matching the user intent information against the task intent information and the user keyword information against the task keyword information, respectively, to obtain the matching result; and present the matching result to the user.
The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of code, and the module, program segment or portion of code contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the first output unit may also be described as "a unit that outputs task information".
The above description is only a description of the preferred embodiments of the present disclosure and the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.

Claims (14)

  1. A method for information interaction, comprising:
    in response to receiving a spoken language practice request initiated by a user, outputting task information indicating a target spoken language practice task, wherein the task information corresponds to task intent information and task keyword information;
    acquiring voice information input by the user with respect to the task information;
    recognizing the voice information to determine user intent information and user keyword information corresponding to the user;
    generating a matching result indicating whether the user has completed the target spoken language practice task, wherein the matching result is obtained through the following step: matching the user intent information against the task intent information and the user keyword information against the task keyword information, respectively, to obtain the matching result; and
    presenting the matching result to the user.
  2. The method according to claim 1, wherein the method further comprises:
    generating, based on the voice information and the matching result, a score characterizing the spoken language ability of the user; and
    presenting the generated score to the user.
  3. The method according to claim 1, wherein the method further comprises:
    in response to the matching result indicating that the user has not completed the target spoken language practice task, outputting auxiliary information for assisting the user in completing the target spoken language practice task;
    acquiring supplementary voice information input by the user with respect to the auxiliary information; and
    generating, based on the supplementary voice information, a new matching result indicating whether the user has completed the target spoken language practice task.
  4. The method according to claim 3, wherein the outputting, in response to the matching result indicating that the user has not completed the target spoken language practice task, auxiliary information for assisting the user in completing the target spoken language practice task comprises:
    in response to the matching result indicating that the user has not completed the target spoken language practice task, determining whether a spoken-language-practice end request input by the user has been received; and
    in response to not receiving the spoken-language-practice end request, outputting the auxiliary information for assisting the user in completing the target spoken language practice task.
  5. The method according to claim 3, wherein the outputting, in response to the matching result indicating that the user has not completed the target spoken language practice task, auxiliary information for assisting the user in completing the target spoken language practice task comprises:
    in response to the matching result indicating that the user has not completed the target spoken language practice task, determining whether the number of times the auxiliary information for assisting the user in completing the target spoken language practice task has been output is less than or equal to a preset number; and
    in response to the number of times the auxiliary information has been output being less than or equal to the preset number, outputting the auxiliary information for assisting the user in completing the target spoken language practice task.
  6. The method according to any one of claims 1-5, wherein the outputting task information indicating a target spoken language practice task comprises:
    acquiring historical spoken language practice results of the user;
    determining the target spoken language practice task based on the acquired historical spoken language practice results; and
    acquiring the task information indicating the target spoken language practice task, and outputting the acquired task information.
  7. A device for information interaction, comprising:
    a first output unit configured to, in response to receiving a spoken language practice request initiated by a user, output task information indicating a target spoken language practice task, wherein the task information corresponds to task intent information and task keyword information;
    a first acquisition unit configured to acquire voice information input by the user with respect to the task information;
    a recognition unit configured to recognize the voice information to determine user intent information and user keyword information corresponding to the user;
    a first generation unit configured to generate a matching result indicating whether the user has completed the target spoken language practice task, wherein the matching result is obtained through the following step: matching the user intent information against the task intent information and the user keyword information against the task keyword information, respectively, to obtain the matching result; and
    a first presentation unit configured to present the matching result to the user.
  8. The device according to claim 7, wherein the device further comprises:
    a second generation unit configured to generate, based on the voice information and the matching result, a score characterizing the spoken language ability of the user; and
    a second presentation unit configured to present the generated score to the user.
  9. The device according to claim 7, wherein the device further comprises:
    a second output unit configured to, in response to the matching result indicating that the user has not completed the target spoken language practice task, output auxiliary information for assisting the user in completing the target spoken language practice task;
    a second acquisition unit configured to acquire supplementary voice information input by the user with respect to the auxiliary information; and
    a third generation unit configured to generate, based on the supplementary voice information, a new matching result indicating whether the user has completed the target spoken language practice task.
  10. The device according to claim 9, wherein the second output unit comprises:
    a first determination module configured to, in response to the matching result indicating that the user has not completed the target spoken language practice task, determine whether a spoken-language-practice end request input by the user has been received; and
    a first output module configured to, in response to not receiving the spoken-language-practice end request, output the auxiliary information for assisting the user in completing the target spoken language practice task.
  11. The device according to claim 9, wherein the second output unit comprises:
    a second determination module configured to, in response to the matching result indicating that the user has not completed the target spoken language practice task, determine whether the number of times the auxiliary information for assisting the user in completing the target spoken language practice task has been output is less than or equal to a preset number; and
    a second output module configured to, in response to the number of times the auxiliary information has been output being less than or equal to the preset number, output the auxiliary information for assisting the user in completing the target spoken language practice task.
  12. The device according to any one of claims 7-11, wherein the first output unit comprises:
    an acquisition module configured to acquire historical spoken language practice results of the user;
    a third determination module configured to determine the target spoken language practice task based on the acquired historical spoken language practice results; and
    a third output module configured to acquire the task information indicating the target spoken language practice task, and output the acquired task information.
  13. An electronic device, comprising:
    one or more processors; and
    a storage apparatus having one or more programs stored thereon,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
  14. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6.
PCT/CN2021/078186 2020-02-26 2021-02-26 用于信息交互的方法和装置 WO2021170094A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP21759798.8A EP4113320A4 (en) 2020-02-26 2021-02-26 INFORMATION INTERACTION METHOD AND DEVICE
KR1020227029762A KR20220127935A (ko) 2020-02-26 2021-02-26 정보 상호작용을 위한 방법 및 장치
JP2022551245A JP2023514863A (ja) 2020-02-26 2021-02-26 情報を交換するための方法及び装置
US17/888,258 US11854422B2 (en) 2020-02-26 2022-08-15 Method and device for information interaction

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010120450.XA CN112307162A (zh) 2020-02-26 2020-02-26 用于信息交互的方法和装置
CN202010120450.X 2020-02-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/888,258 Continuation US11854422B2 (en) 2020-02-26 2022-08-15 Method and device for information interaction

Publications (1)

Publication Number Publication Date
WO2021170094A1 true WO2021170094A1 (zh) 2021-09-02

Family

ID=74336596

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/078186 WO2021170094A1 (zh) 2020-02-26 2021-02-26 用于信息交互的方法和装置

Country Status (6)

Country Link
US (1) US11854422B2 (zh)
EP (1) EP4113320A4 (zh)
JP (1) JP2023514863A (zh)
KR (1) KR20220127935A (zh)
CN (1) CN112307162A (zh)
WO (1) WO2021170094A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112307162A (zh) 2020-02-26 2021-02-02 北京字节跳动网络技术有限公司 用于信息交互的方法和装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105741831A (zh) * 2016-01-27 2016-07-06 广东外语外贸大学 一种基于语法分析的口语评测方法和系统
CN106407333A (zh) * 2016-09-05 2017-02-15 北京百度网讯科技有限公司 基于人工智能的口语查询识别方法及装置
CN108831503A (zh) * 2018-06-07 2018-11-16 深圳习习网络科技有限公司 一种口语评测方法及装置
CN109039647A (zh) * 2018-07-19 2018-12-18 深圳乐几科技有限公司 终端及其口语学习方法
US20200005767A1 (en) * 2018-11-01 2020-01-02 Baidu Online Network Technology (Beijing) Co., Ltd. Information processing method, apparatus and storage medium
CN112307162A (zh) * 2020-02-26 2021-02-02 北京字节跳动网络技术有限公司 用于信息交互的方法和装置

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002297185A (ja) * 2001-03-29 2002-10-11 Pioneer Electronic Corp 情報処理装置および情報処理方法
JP2005106876A (ja) * 2003-09-26 2005-04-21 Cai Media Kyodo Kaihatsu:Kk 語学学習用ロボット及び語学学習システム
JP2005274830A (ja) * 2004-03-24 2005-10-06 Central Information System Corp 音読評価プログラム、音読評価装置及び音読評価方法
JP4797597B2 (ja) * 2005-11-24 2011-10-19 ヤマハ株式会社 語学学習装置
US8260809B2 (en) * 2007-06-28 2012-09-04 Microsoft Corporation Voice-based search processing
US10241752B2 (en) * 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
KR101211796B1 (ko) 2009-12-16 2012-12-13 포항공과대학교 산학협력단 외국어 학습 장치 및 그 제공 방법
JP2012255866A (ja) * 2011-06-08 2012-12-27 Konica Minolta Business Technologies Inc プレゼンテーションコーチシステム
NL2008809C2 (en) * 2012-05-14 2013-11-18 Stichting Katholieke Universtiteit Automated system for training oral language proficiency.
US9002835B2 (en) * 2013-08-15 2015-04-07 Google Inc. Query response using media consumption history
TWI566107B (zh) * 2014-05-30 2017-01-11 蘋果公司 用於處理多部分語音命令之方法、非暫時性電腦可讀儲存媒體及電子裝置
US10102844B1 (en) * 2016-03-29 2018-10-16 Amazon Technologies, Inc. Systems and methods for providing natural responses to commands
US10489393B1 (en) * 2016-03-30 2019-11-26 Amazon Technologies, Inc. Quasi-semantic question answering
US10536579B2 (en) * 2016-10-24 2020-01-14 Sriram Venkataramanan Iyer System, method and marketplace for real-time interactive video/voice services using artificial intelligence
US10755051B2 (en) * 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105741831A (zh) * 2016-01-27 2016-07-06 广东外语外贸大学 一种基于语法分析的口语评测方法和系统
CN106407333A (zh) * 2016-09-05 2017-02-15 北京百度网讯科技有限公司 基于人工智能的口语查询识别方法及装置
CN108831503A (zh) * 2018-06-07 2018-11-16 深圳习习网络科技有限公司 一种口语评测方法及装置
CN109039647A (zh) * 2018-07-19 2018-12-18 深圳乐几科技有限公司 终端及其口语学习方法
US20200005767A1 (en) * 2018-11-01 2020-01-02 Baidu Online Network Technology (Beijing) Co., Ltd. Information processing method, apparatus and storage medium
CN112307162A (zh) * 2020-02-26 2021-02-02 北京字节跳动网络技术有限公司 用于信息交互的方法和装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4113320A4 *

Also Published As

Publication number Publication date
JP2023514863A (ja) 2023-04-11
US11854422B2 (en) 2023-12-26
EP4113320A1 (en) 2023-01-04
KR20220127935A (ko) 2022-09-20
US20230081000A1 (en) 2023-03-16
CN112307162A (zh) 2021-02-02
EP4113320A4 (en) 2023-07-26

Similar Documents

Publication Publication Date Title
WO2020238320A1 (zh) 用于生成表情包的方法和装置
JP6681450B2 (ja) 情報処理方法および装置
US20240021202A1 (en) Method and apparatus for recognizing voice, electronic device and medium
WO2017186050A1 (zh) 人机智能问答系统的断句识别方法和装置
WO2022037419A1 (zh) 音频内容识别方法、装置、设备和计算机可读介质
JP6595912B2 (ja) 既存の単一言語プロセスからマルチ言語プロセスを構築すること
CN112509562B (zh) 用于文本后处理的方法、装置、电子设备和介质
WO2022228041A1 (zh) 翻译模型的训练方法、装置、设备和存储介质
JP2020174339A (ja) 段落と映像を整列させるための方法、装置、サーバー、コンピュータ可読記憶媒体およびコンピュータプログラム
WO2023082931A1 (zh) 用于语音识别标点恢复的方法、设备和存储介质
JP2023550211A (ja) テキストを生成するための方法および装置
WO2020224294A1 (zh) 用于处理信息的系统、方法和装置
US20240079002A1 (en) Minutes of meeting processing method and apparatus, device, and medium
RU2654789C2 (ru) Способ (варианты) и электронное устройство (варианты) обработки речевого запроса пользователя
CN112182255A (zh) 用于存储媒体文件和用于检索媒体文件的方法和装置
WO2021170094A1 (zh) 用于信息交互的方法和装置
CN114064943A (zh) 会议管理方法、装置、存储介质及电子设备
WO2024032413A1 (zh) 书籍信息显示方法、装置、设备和存储介质
CN111324626B (zh) 基于语音识别的搜索方法、装置、计算机设备及存储介质
CN112309389A (zh) 信息交互方法和装置
US20240096347A1 (en) Method and apparatus for determining speech similarity, and program product
CN111859971A (zh) 用于处理信息的方法、装置、设备和介质
CN112309387A (zh) 用于处理信息的方法和装置
CN111461095A (zh) 一种语音点读方法、装置、设备和可读介质
CN111681660B (zh) 语音识别方法、装置、电子设备和计算机可读介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21759798

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022551245

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021759798

Country of ref document: EP

Effective date: 20220926