CN108491246B - Voice processing method and electronic equipment - Google Patents
- Publication number
- CN108491246B (application CN201810296940.8A)
- Authority
- CN
- China
- Prior art keywords
- progress
- control
- task
- module
- processing result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure provides a speech processing method comprising: obtaining a voice input; obtaining a processing result for the voice input, where the processing result characterizes a control task comprising a plurality of control steps; and, in response to the processing result, executing the control task and displaying progress information, where the progress information shows the completion progress of the plurality of control steps, and executing the control task means automatically executing the plurality of control steps. The present disclosure also provides an electronic device.
Description
Technical Field
The disclosure relates to a voice processing method and an electronic device.
Background
As user demands continue to grow, more and more applications (APPs) are installed on terminal devices, and APP functions become increasingly complex. Taking a mobile phone as an example, a phone commonly has a dozen or even dozens of APPs installed, each APP generally providing one or more functions. When a user wants to use a particular function, the user often has to spend considerable time performing multiple operations according to a complex, fixed flow before reaching the final operation interface that realizes that function.
However, in implementing the embodiments of the present disclosure, the inventors found that the related art has at least the following drawback: while a control task is being executed, the constantly changing interface results in a poor user experience.
Disclosure of Invention
One aspect of the present disclosure provides a speech processing method comprising: obtaining a voice input; obtaining a processing result for the voice input, where the processing result characterizes a control task comprising a plurality of control steps; and, in response to the processing result, executing the control task and displaying progress information, where the progress information shows the completion progress of the control steps, and executing the control task means automatically executing the control steps.
Optionally, the voice processing method further includes waking up a voice engine and displaying a voice interaction interface.
Optionally, displaying the progress information includes displaying the progress information on the voice interaction interface, where the progress information is represented by marks at different positions on a progress bar.
Optionally, responding to the processing result by executing the control task and displaying the progress information includes: obtaining the number of control steps of the control task module corresponding to the processing result; dividing the progress bar into corresponding progress units based on the number of control steps; and obtaining an execution result of each control step and marking the corresponding progress unit on the progress bar based on that execution result, where the control steps have a sequential relationship.
Optionally, responding to the processing result by executing the control task and displaying the progress information includes: obtaining an execution time of the control task module corresponding to the processing result; dividing the progress bar into corresponding progress units according to the execution time; and, while the control task module executes the task, timing the execution and marking the corresponding progress units according to the timing information.
Optionally, responding to the processing result by executing the control task and displaying the progress information includes: obtaining an execution time of the control task module corresponding to the processing result; dividing the progress bar into corresponding progress units according to the execution time; and, while the control task module executes the task, dynamically marking the corresponding progress units according to the execution time of each control step.
Another aspect of the disclosure provides an electronic device including a first acquisition module, a second acquisition module, and a response module. The first acquisition module is configured to obtain a voice input; the second acquisition module is configured to obtain a processing result for the voice input, where the processing result characterizes a control task comprising a plurality of control steps; and the response module is configured to respond to the processing result by executing the control task and displaying progress information, where the progress information shows the completion progress of the control steps, and executing the control task means automatically executing the control steps.
Optionally, the electronic device further includes a wake-up module configured to wake up the speech engine and display the voice interaction interface.
Optionally, the response module includes a display unit configured to display the progress information on the voice interaction interface, where the progress information is represented by marks at different positions on a progress bar.
Optionally, the response module includes a first obtaining unit, a first dividing unit, and a first marking unit. The first obtaining unit is configured to obtain the number of control steps of the control task module corresponding to the processing result; the first dividing unit is configured to divide the progress bar into corresponding progress units based on the number of control steps; and the first marking unit is configured to obtain an execution result of each control step and mark the corresponding progress unit on the progress bar based on that execution result, where the control steps have a sequential relationship.
Optionally, the response module includes a second obtaining unit, a second dividing unit, and a second marking unit. The second obtaining unit is configured to obtain the execution time of the control task module corresponding to the processing result; the second dividing unit is configured to divide the progress bar into corresponding progress units according to the execution time; and the second marking unit is configured to time the execution and mark the corresponding progress units according to the timing information while the control task module executes the task.
Optionally, the response module includes a third obtaining unit, a third dividing unit, and a third marking unit. The third obtaining unit is configured to obtain the execution time of the control task module corresponding to the processing result; the third dividing unit is configured to divide the progress bar into corresponding progress units according to the execution time; and the third marking unit is configured to dynamically mark the corresponding progress units according to the execution time of each control step while the control task module executes the task.
Another aspect of the disclosure provides a computer system comprising a processor and a computer-readable storage medium. The storage medium stores computer-executable instructions that, when executed by the processor, implement the speech processing method described above.
Another aspect of the present disclosure provides a computer-readable medium storing computer-executable instructions that, when executed, implement the speech processing method described above.
Another aspect of the present disclosure provides a computer program comprising computer-executable instructions that, when executed, implement the speech processing method described above.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of a speech processing method and an electronic device according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow chart of a speech processing method according to an embodiment of the present disclosure;
FIG. 3 schematically shows a flow chart of response processing results according to an embodiment of the disclosure;
FIG. 4 schematically shows a flow diagram of response processing results according to another embodiment of the present disclosure;
FIG. 5 schematically shows a flow diagram of response processing results according to another embodiment of the present disclosure;
FIG. 6 schematically shows a block diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 7 schematically shows a block diagram of an electronic device according to another embodiment of the present disclosure;
FIG. 8 schematically shows a block diagram of an electronic device according to another embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of a response module according to an embodiment of the disclosure;
FIG. 10 schematically illustrates a block diagram of a response module according to another embodiment of the present disclosure;
FIG. 11 schematically illustrates a block diagram of a response module according to another embodiment of the present disclosure; and
FIG. 12 schematically illustrates a block diagram of a computer system suitable for implementing the speech processing methods of the present disclosure, in accordance with an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibility of "A", "B", or "A and B".
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable medium having instructions stored thereon for use by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, the computer readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer readable medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
The embodiment of the disclosure provides a voice processing method and an electronic device, wherein the voice processing method comprises the steps of obtaining voice input; obtaining a processing result aiming at the voice input, wherein the processing result is used for representing a control task, and the control task comprises a plurality of control steps; and responding to the processing result, executing the control task and displaying progress information, wherein the progress information is used for displaying the completion progress of the plurality of control steps, and the executing of the control task is the automatic execution of the plurality of control steps.
Fig. 1 schematically shows an application scenario of a speech processing method and an electronic device according to an embodiment of the present disclosure.
As shown in fig. 1, the application scenario may include an electronic device 100 on which voice software (not shown) for voice interaction and other application software (such as WeChat, a media player, etc.) may be installed; a user interacts with the voice software by inputting voice. For example, according to an embodiment of the present disclosure, a user may input voice information 103 through the voice software, and the voice software then processes the voice information 103 locally, such as converting it into text and/or performing semantic understanding on it. Alternatively, the voice information 103 may be transmitted to the cloud, where it is converted into text and/or semantically understood, to obtain a corresponding processing result.
According to the embodiment of the disclosure, a processing result obtained after processing the voice input may represent a control task, where the control task includes a plurality of control steps, and after each control step is completed, a display interface of the electronic device 100 may be changed correspondingly, or after a plurality of control steps are completed, a display interface of the electronic device 100 may be changed correspondingly.
According to the embodiment of the disclosure, while responding to the processing result, the control task comprising the plurality of control steps is executed, the control steps are executed one by one, and progress information showing their completion progress is displayed during execution; for example, the progress bar 102 in fig. 1 may be used to display this progress information.
According to the embodiment of the disclosure, for example, a user may wake up the speech engine by inputting the voice information "watch movie a", after which the voice interaction interface 101 of the voice software is displayed, and the text corresponding to the voice information "watch movie a" may be shown on the voice interaction interface 101. The task of "watching movie a" may then be carried out as a sequence of control steps, e.g.: step 1, call the AA player in the background; step 2, find the video file named "a" in the background; and step 3, open the video file "a" with the AA player. While these steps execute, progress information showing the completion progress can be displayed on the voice interaction interface 101; as indicated by the progress bar 102 in fig. 1, the current progress is that the AA player is being opened (i.e., step 1 in the above example).
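The "watch movie a" flow above can be sketched as ordered control steps executed automatically, with a progress callback fired after each one. This is a minimal illustration, not the patent's implementation; the step names, the `run_control_task` helper, and the logging callback are all hypothetical.

```python
# Hypothetical sketch: execute ordered control steps automatically and
# report progress after each step (e.g. to update a progress bar).

def run_control_task(steps, on_progress):
    """Run (name, action) pairs in order; call on_progress after each."""
    total = len(steps)
    for i, (name, action) in enumerate(steps, start=1):
        action()                     # perform this control step
        on_progress(name, i, total)  # report completion progress

progress_log = []
steps = [
    ("call the AA player", lambda: None),          # step 1 (placeholder action)
    ("find the video file 'a'", lambda: None),     # step 2
    ("open 'a' with the AA player", lambda: None), # step 3
]
run_control_task(
    steps,
    lambda name, done, total: progress_log.append(f"{name}: {done}/{total}"),
)
```

The callback decouples step execution from progress display, so the same task runner could drive a progress bar, an hourglass icon, or a numeric readout.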
Through the embodiments of the disclosure, the processing result obtained from the voice input comprises a control task with a plurality of control steps that can be executed automatically, and while they execute automatically, the completion progress of the control task is shown, i.e., progress information representing the completion progress of the control steps is displayed. This at least alleviates the problem that pages change continuously while the control steps are executed, which results in a poor user experience.
It should be noted that fig. 1 is only an example of a scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
FIG. 2 schematically shows a flow chart of a speech processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the voice processing method includes operations S210 to S230.
In operation S210, a voice input is obtained.
In operation S220, a processing result for the voice input is obtained, the processing result being used to characterize a manipulation task, the manipulation task including a plurality of manipulation steps.
According to the embodiment of the disclosure, after the voice input is acquired, it can be processed locally, or transmitted to the cloud and processed there, e.g., by converting the voice input into text and/or semantically understanding it. The control steps to be completed for the requested task can be determined from the voice information, and the processing result obtained from the voice input comprises a control task containing these control steps.
For example, user A inputs the voice information "send a red packet to user B" through the voice software. This voice information can be semantically understood locally, and the control steps needed to perform the task are determined, for example the following three steps: start the application program, search for user B, and click the red packet button on the interface displaying user B. It should be noted that the determined control steps are illustrative; the actual execution may not require exactly these steps, or may require other steps.
The number of control steps to be performed for a task may be determined according to the current operating state of the electronic device and/or the design of the application. For example, suppose "send a red packet to user B" is determined to use the WeChat program. If WeChat is already running on the electronic device, it does not need to be started again; only searching for user B in WeChat and clicking the red packet button on the interface displaying user B remain, so the number of determined control steps is two.
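The state-dependent step planning described above can be sketched as a small function. The function name and step strings are illustrative assumptions; the point is only that the plan shrinks when WeChat is already running.

```python
# Hypothetical sketch: the control-step plan for "send a red packet to
# user B" depends on whether the target app is already running.

def plan_red_packet_steps(wechat_running: bool) -> list[str]:
    """Return the ordered control steps for the red-packet task."""
    steps = []
    if not wechat_running:
        steps.append("start the WeChat program")  # skipped if already running
    steps.append("search for user B")
    steps.append("click the red packet button on user B's interface")
    return steps
```

A progress bar built from this plan would then have three units in the cold-start case and two otherwise, matching the example in the text.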
According to the embodiment of the disclosure, the text information input by the user can also be directly obtained, and the processing result is obtained after the text information is semantically understood and processed.
In operation S230, in response to the processing result, the control task is executed and progress information is displayed, where the progress information shows the completion progress of the plurality of control steps, and executing the control task means automatically executing the plurality of control steps.
According to the embodiment of the present disclosure, while responding to the processing result, the plurality of control steps of the control task are performed automatically, one by one, and progress information showing their completion progress is displayed during execution.
Through the embodiments of the disclosure, the processing result obtained from the voice input comprises a control task with a plurality of control steps that can be executed automatically, and while they execute automatically, the completion progress of the control task is shown, i.e., progress information representing the completion progress of the control steps is displayed. This at least alleviates the problem that pages change continuously while the control steps are executed, which results in a poor user experience.
According to the embodiment of the disclosure, the voice processing method further comprises the step of waking up the voice engine and displaying the voice interaction interface.
According to the embodiment of the present disclosure, there are various ways of waking up the voice engine: for example, it can be woken up through voice input or through a manual operation. After the voice engine is woken up, the voice interaction interface can be displayed on the display interface of the electronic device, where the user can input further voice information, which improves the user experience.
According to the embodiment of the disclosure, displaying the progress information includes displaying it on the voice interaction interface, where the progress information is represented by marks at different positions on a progress bar.
According to the embodiment of the disclosure, after the voice interaction interface is displayed on the display interface of the electronic device, a progress bar can be shown on the voice interaction interface, and marks at different positions on the progress bar represent the progress of the currently executed task.
According to the embodiment of the disclosure, the manner of displaying the progress information is not limited to a progress bar; for example, the progress may be shown with a dynamically changing hourglass icon, or with dynamically changing numbers.
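The point that the same completion progress can back different visual forms can be sketched with one renderer. The styles and ASCII rendering are illustrative assumptions standing in for the bar, hourglass, and numeric displays mentioned above.

```python
# Hypothetical sketch: render the same completion progress either as a
# text progress bar or as a percentage, from the same (done, total) state.

def render_progress(done: int, total: int, style: str = "bar", width: int = 10) -> str:
    """Render completion progress in the requested visual style."""
    frac = done / total
    if style == "bar":
        filled = int(frac * width)  # one '#' per filled portion of the bar
        return "[" + "#" * filled + "-" * (width - filled) + "]"
    if style == "number":
        return f"{int(frac * 100)}%"
    raise ValueError(f"unknown style: {style}")
```

Keeping the progress state separate from its rendering is what lets an implementation swap a bar for an hourglass or a number without touching the task-execution logic.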
According to the embodiment of the disclosure, while the plurality of control steps are executed, the same voice interaction interface can be displayed on the display interface of the electronic device throughout, with the progress information shown on it; only after all control steps have been executed does the display jump to the final result interface of the voice input, which is then shown on the display interface of the electronic device.
Alternatively, according to the embodiment of the disclosure, while the plurality of control steps are executed, different voice interaction interfaces can be displayed as different control steps execute, each showing the progress information; only after all control steps have been executed does the display jump to the final result interface of the voice input, which is then shown on the display interface of the electronic device.
Through these embodiments, displaying the progress information on the voice interaction interface while the control steps are executed prompts the user about how far the electronic device has progressed. Meanwhile, although the display interface of the electronic device may change as the steps execute, the user sees the voice interaction interface rather than the changing interfaces, which prevents the user from mistakenly believing that the device is under a hacker attack or executing incorrectly, and avoids a poor user experience.
The method shown in fig. 2 is further described with reference to fig. 3-5 in conjunction with specific embodiments.
Fig. 3 schematically shows a flow chart of response processing results according to an embodiment of the present disclosure.
As shown in fig. 3, responding to the processing result by executing the control task and presenting the progress information includes operations S231 to S233.
In operation S231, the number of control steps of the control task module corresponding to the processing result is obtained.
According to the embodiment of the disclosure, after the voice input is processed, the number of control steps required by the processing result can be determined, and the corresponding control task module is called when the control steps are executed. For example, suppose the voice input is "send the meeting agenda to user C through the QQ program". After the voice input is processed, it can be determined that there are 4 control steps: start the QQ program, search for user C, input the meeting agenda on the corresponding display interface, and click send.
In operation S232, the progress bar is divided into corresponding progress units based on the number of control steps.
According to an embodiment of the present disclosure, for example, if the determined number of control steps is 4, the progress bar may be divided into 4 progress units. The length of the progress bar may be preset; for example, the length corresponding to each control step is defined as a preset value, and the total length of the progress bar is then determined from the number of control steps and the per-step length.
In operation S233, an execution result of each control step is obtained, and the corresponding progress unit is marked on the progress bar based on that execution result, where the plurality of control steps have a sequential relationship.
According to the embodiment of the disclosure, one more progress unit is marked in the progress bar after each control step is executed, and after all control steps have been executed, the full number of progress units is marked in the progress bar.
Through this embodiment, the progress information is displayed on the voice interaction interface while the control steps are executed, and the progress bar is divided according to the number of control steps, so the execution status of the control steps is shown dynamically, the user is prompted about the completion degree of the steps the electronic device is executing, and the user experience is improved.
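Operations S231 to S233 can be sketched as follows: derive the bar from the step count, then mark one unit per completed step. This is a minimal sketch, assuming a preset per-step length (`UNIT_LEN`) as the description suggests; all names are hypothetical.

```python
# Hypothetical sketch of S231-S233: step-count-based progress bar.

UNIT_LEN = 5  # assumed preset length of one progress unit

def make_progress_bar(num_steps: int) -> dict:
    """S231/S232: divide the bar into one unit per control step."""
    return {"units": num_steps, "length": num_steps * UNIT_LEN, "done": 0}

def mark_step_done(bar: dict) -> dict:
    """S233: mark the next progress unit after a step's execution result."""
    bar["done"] = min(bar["done"] + 1, bar["units"])  # never exceed the bar
    return bar

bar = make_progress_bar(4)  # e.g. the 4-step QQ example above
for _ in range(2):          # two steps have completed so far
    mark_step_done(bar)
```

Because the steps have a sequential relationship, marking the "next" unit is sufficient; no per-step index needs to be tracked.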
Fig. 4 schematically shows a flow chart of a response processing result according to another embodiment of the present disclosure.
As shown in fig. 4, responding to the processing result by executing the control task and presenting the progress information includes operations S234 to S236.
In operation S234, an execution time of the manipulation task module corresponding to the processing result is obtained.
According to the embodiment of the disclosure, the execution time of the control task module to execute the plurality of control steps can be preset. For example, if the execution time is set to 10 seconds in advance, a plurality of manipulation steps are executed within 10 seconds.
In operation S235, the progress bar is divided into corresponding progress units according to the execution time.
According to the embodiment of the present disclosure, for example, if the execution time is preset to be 10 seconds, the progress bar is divided into a certain number of progress units, such as 10 or 5 progress units.
In operation S236, when the control task module executes the task, timing is performed and the corresponding progress unit is marked according to the timing information.
According to the embodiment of the disclosure, in the process of controlling the task module to execute the task, timing is carried out, and the corresponding progress unit is marked according to the timing information. For example, the progress bar is divided into 10 progress units. When the counted time is 1 second, 1 progress unit may be marked on the progress bar. When the counted time is 10 seconds, 10 progress units may be marked on the progress bar.
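A minimal sketch of the timing-based marking in operations S234 to S236, assuming the bar is divided evenly over the preset execution time; the function name is hypothetical.

```python
def units_marked(elapsed_seconds, total_seconds, total_units):
    """Operation S236: number of progress units to mark after
    `elapsed_seconds` of timing, for a progress bar divided evenly
    over a preset `total_seconds` execution time."""
    if elapsed_seconds <= 0:
        return 0
    # Proportional marking, capped at the total number of units.
    return min(total_units, int(elapsed_seconds * total_units / total_seconds))
```

With a preset execution time of 10 seconds and 10 progress units, one unit is marked per elapsed second, matching the example above.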
Through the embodiment of the disclosure, in the process of executing the plurality of control steps, the progress information is displayed on the voice interaction interface and the progress bar is divided according to the preset execution time of the control task module, so that the execution condition of the control steps can be dynamically displayed, the user can be prompted about the completion degree of the steps executed by the electronic device, and the user experience is improved.
Fig. 5 schematically shows a flow chart of a response processing result according to another embodiment of the present disclosure.
As shown in fig. 5, in response to the processing result, performing the manipulation task and presenting the progress information includes operations S237 to S239.
In operation S237, an execution time of the manipulation task module corresponding to the processing result is obtained.
According to the embodiment of the disclosure, the execution time of the control task module may be estimated in advance according to the number of control steps to be executed and/or their degree of difficulty. The estimated execution time may or may not count the time required to start the corresponding application program. For example, before the execution time of the control task module corresponding to the processing result is obtained, it may be determined whether the corresponding application program has already been started. If it has been started, the execution time of the control task module may be estimated from the number and/or difficulty of the control steps alone. If it has not been started, the execution time may be estimated from the time required to start the corresponding application program together with the number and/or difficulty of the control steps.
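The estimation above can be sketched with a simple additive model; this model (a fixed per-step cost plus an optional application start-up cost) is an assumption for illustration, not the patent's formula.

```python
def estimate_execution_time(num_steps, per_step_seconds,
                            app_started, app_start_seconds):
    """Estimate the control task module's execution time (hypothetical
    model): a per-step cost times the number of control steps, plus the
    application start-up time only when the app is not yet running."""
    estimate = num_steps * per_step_seconds
    if not app_started:
        estimate += app_start_seconds   # include app start-up time
    return estimate
```

A real estimator would also weight each step by its difficulty and by recorded historical timings, as the surrounding text suggests.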
In operation S238, the progress bar is divided into corresponding progress units according to the execution time.
In operation S239, when the control task module executes the task, the corresponding progress unit is dynamically marked according to the execution time of each control step.
According to an embodiment of the present disclosure, for example, when there are 5 control steps and the obtained estimated execution time is 10 seconds, the progress bar may be divided into 5 progress units, each representing 2 seconds and/or one control step. According to the embodiment of the disclosure, in the process of executing the control steps, the execution time of each control step can be measured. For example, if the first control step takes 1.9 seconds, a progress unit can be dynamically marked in the progress bar as soon as the step completes, slightly ahead of its 2-second boundary. If the second control step takes 2.2 seconds, its cumulative completion time is 4.1 seconds, so the second progress unit can be dynamically pre-marked in the progress bar when the elapsed time reaches the 4-second boundary.
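The pre-marking behaviour described above can be sketched as follows, assuming each progress unit has a scheduled time boundary (estimated execution time divided by the number of units); the function name is hypothetical.

```python
def mark_times(step_seconds, unit_seconds):
    """Operation S239 (sketch): wall-clock time at which each progress unit
    is marked. A unit is marked when its control step completes, or
    pre-marked when the unit's scheduled boundary arrives first."""
    marks, elapsed = [], 0.0
    for index, seconds in enumerate(step_seconds, start=1):
        elapsed += seconds
        # Whichever comes first: step completion or the unit's boundary.
        marks.append(min(elapsed, index * unit_seconds))
    return marks
```

For the example in the text (steps of 1.9 s and 2.2 s, 2-second units), the first unit is marked at 1.9 s and the second is pre-marked at the 4-second boundary.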
According to the embodiment of the disclosure, the execution time of each control step can be recorded and used as a reference the next time the execution time of the control task module is estimated.
Through the embodiment of the disclosure, the execution condition of the control steps can be dynamically displayed, the user can be prompted about the completion degree of the steps executed by the electronic device, and the user experience is improved. By recording the execution time of each control step as a reference for the next estimate, the accuracy of the estimated execution time of the control task module can be improved, the progress bar can be divided into progress units more accurately, and the precision with which the progress bar is dynamically adjusted is improved.
Fig. 6 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure.
As shown in fig. 6, the electronic device 300 includes a first acquisition module 310, a second acquisition module 320, and a response module 330.
The first obtaining module 310 is used for obtaining a voice input.
The second obtaining module 320 is configured to obtain a processing result for the voice input, where the processing result is used to characterize a control task, and the control task includes a plurality of control steps.
The response module 330 is configured to execute the control task and display progress information in response to the processing result, where the progress information is used to display completion progress of a plurality of control steps, and executing the control task is to automatically execute the plurality of control steps.
Through the embodiment of the disclosure, the processing result obtained after the voice input is processed comprises a plurality of control steps that can be executed automatically. In the process of automatically executing the control steps, the completion progress of the control task can be shown, that is, progress information representing the completion progress of the control steps is displayed. This at least alleviates the problem that the page changes continuously while the control steps are executed, which leads to poor user experience.
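The three-module composition of Fig. 6 can be sketched as plain Python classes; the class layout and method names are hypothetical, shown only to illustrate how modules 310, 320 and 330 cooperate.

```python
class ResponseModule:
    """Module 330 (sketch): executes the control task and records progress,
    one progress unit per completed control step."""
    def __init__(self):
        self.progress = []            # (completed, total) after each step
    def respond(self, control_steps):
        total = len(control_steps)
        for done, step in enumerate(control_steps, start=1):
            step()                    # automatically execute the control step
            self.progress.append((done, total))

class ElectronicDevice:
    """Sketch of electronic device 300; modules 310/320 passed as callables."""
    def __init__(self, first_acquisition, second_acquisition):
        self.first_acquisition = first_acquisition    # 310: obtain voice input
        self.second_acquisition = second_acquisition  # 320: input -> control steps
        self.response = ResponseModule()              # 330: execute + progress
    def handle(self):
        voice = self.first_acquisition()
        steps = self.second_acquisition(voice)
        self.response.respond(steps)
        return self.response.progress
```

In this sketch the "processing result" is simply the list of step callables returned by the second acquisition module.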
Fig. 7 schematically shows a block diagram of an electronic device according to another embodiment of the present disclosure.
As shown in fig. 7, the electronic device 300 further includes a wake-up module 340 for waking up the speech engine to display the speech interactive interface, in addition to the first obtaining module 310, the second obtaining module 320 and the response module 330.
According to the embodiment of the present disclosure, the voice engine can be woken up in various manners, for example through voice input or through manual operation. After the voice engine is woken up, the voice interaction interface can be displayed on the display interface of the electronic device, and the user can then input voice information on the voice interaction interface, which improves the user experience.
Fig. 8 schematically shows a block diagram of an electronic device according to another embodiment of the present disclosure.
As shown in fig. 8, according to the embodiment of the present disclosure, the response module 330 includes a presentation unit 331, configured to present progress information on the voice interaction interface, where the progress information is a marking condition of different positions on the progress bar.
Through the embodiment of the disclosure, displaying the progress information on the voice interaction interface while the plurality of control steps are executed prompts the user about the completion degree of the steps executed by the electronic device. Meanwhile, the display interface of the electronic device may change correspondingly during execution; because the voice interaction interface is displayed instead of that changing interface, the user is prevented from mistakenly believing that the electronic device is under a hacker attack or is executing incorrectly, which would harm the user experience.
FIG. 9 schematically shows a block diagram of a response module according to an embodiment of the disclosure.
As shown in fig. 9, the response module 330 includes a first obtaining unit 332, a first dividing unit 333, and a first marking unit 334 in addition to the presentation unit 331.
The first obtaining unit 332 is configured to obtain the number of the manipulation steps of the manipulation task module corresponding to the processing result.
The first dividing unit 333 serves to divide the progress bar into the respective progress units based on the number of manipulation steps.
The first marking unit 334 is configured to obtain an execution result of each manipulation step, and mark a corresponding progress unit on the progress bar based on the execution result of each manipulation step, where a plurality of manipulation steps have a sequential relationship therebetween.
Through the embodiment of the disclosure, in the process of executing the plurality of control steps, the progress information is displayed on the voice interaction interface and the progress bar is divided according to the number of control steps, so that the execution condition of the control steps can be dynamically displayed, the user can be prompted about the completion degree of the steps executed by the electronic device, and the user experience is improved.
FIG. 10 schematically shows a block diagram of a response module according to another embodiment of the disclosure.
As shown in fig. 10, the response module 330 includes a second obtaining unit 335, a second dividing unit 336 and a second marking unit 337 in addition to the presenting unit 331.
The second obtaining unit 335 is configured to obtain an execution time of the control task module corresponding to the processing result.
The second dividing unit 336 is configured to divide the progress bar into corresponding progress units according to the execution time.
The second marking unit 337 is configured to perform timing and mark the corresponding progress unit according to the timing information when the control task module executes the task.
Through the embodiment of the disclosure, in the process of executing the plurality of control steps, the progress information is displayed on the voice interaction interface and the progress bar is divided according to the preset execution time of the control task module, so that the execution condition of the control steps can be dynamically displayed, the user can be prompted about the completion degree of the steps executed by the electronic device, and the user experience is improved.
FIG. 11 schematically shows a block diagram of a response module according to another embodiment of the disclosure.
As shown in fig. 11, the response module 330 includes a third obtaining unit 338, a third dividing unit 339 and a third marking unit 3310 in addition to the presenting unit 331.
The third obtaining unit 338 is configured to obtain the execution time of the control task module corresponding to the processing result.
The third dividing unit 339 is configured to divide the progress bar into corresponding progress units according to the execution time.
The third marking unit 3310 is configured to dynamically mark the corresponding progress unit according to the execution time of each control step when the control task module executes the task.
Through the embodiment of the disclosure, the execution condition of the control steps can be dynamically displayed, the user can be prompted about the completion degree of the steps executed by the electronic device, and the user experience is improved. By recording the execution time of each control step as a reference for the next estimate, the accuracy of the estimated execution time of the control task module can be improved, the progress bar can be divided into progress units more accurately, and the precision with which the progress bar is dynamically adjusted is improved.
Any of the modules and units according to embodiments of the present disclosure, or at least part of the functionality of any of them, may be implemented in one module. Any one or more of the modules and units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules and units according to the embodiments of the present disclosure may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging the circuits, or in any one of three implementations of software, hardware and firmware, or in any suitable combination of any of them. Alternatively, one or more of the modules and units according to embodiments of the disclosure may be implemented at least partly as computer program modules, which, when executed, may perform corresponding functions.
For example, any plurality of the first acquisition module 310, the second acquisition module 320, the response module 330, and the wake-up module 340 may be combined and implemented in one module, or any one of the modules may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the first obtaining module 310, the second obtaining module 320, the responding module 330, and the waking module 340 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented in any one of three implementations of software, hardware, and firmware, or in a suitable combination of any of them. Alternatively, at least one of the first acquisition module 310, the second acquisition module 320, the response module 330 and the wake-up module 340 may be at least partially implemented as a computer program module, which when executed may perform a corresponding function.
FIG. 12 schematically illustrates a block diagram of a computer system suitable for implementing the speech processing methods of the present disclosure, in accordance with an embodiment of the present disclosure. The computer system illustrated in FIG. 12 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 12, computer system 400 includes a processor 410 and a computer-readable storage medium 420. The computer system 400 may perform a speech processing method according to an embodiment of the present disclosure.
In particular, processor 410 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 410 may also include onboard memory for caching purposes. Processor 410 may be a single processing unit or a plurality of processing units for performing different actions of a method flow according to embodiments of the disclosure.
Computer-readable storage medium 420 may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
The computer-readable storage medium 420 may comprise a computer program 421, which computer program 421 may comprise code/computer-executable instructions that, when executed by the processor 410, cause the processor 410 to perform a method according to an embodiment of the disclosure, or any variant thereof.
The computer program 421 may be configured with, for example, computer program code comprising computer program modules. For example, in an example embodiment, code in the computer program 421 may include one or more program modules, for example a module 421A, a module 421B, and so on. It should be noted that the division and number of the modules are not fixed; those skilled in the art may use suitable program modules or program module combinations according to the actual situation, so that when these program modules are executed by the processor 410, the processor 410 may perform the method according to the embodiment of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the first obtaining module 310, the second obtaining module 320, the response module 330, the wake-up module 340, the presentation unit 331, the first obtaining unit 332, the first dividing unit 333, the first marking unit 334, the second obtaining unit 335, the second dividing unit 336, the second marking unit 337, the third obtaining unit 338, the third dividing unit 339, and the third marking unit 3310 may be implemented as a computer program module described with reference to fig. 12, which, when executed by the processor 410, may implement the corresponding operations described above.
The present disclosure also provides a computer-readable medium, which may be embodied in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer readable medium carries one or more programs which, when executed, implement: obtaining a voice input; obtaining a processing result aiming at the voice input, wherein the processing result is used for representing a control task, and the control task comprises a plurality of control steps; and responding to the processing result, executing the control task and displaying progress information, wherein the progress information is used for displaying the completion progress of the plurality of control steps, and executing the control task is to automatically execute the plurality of control steps. Optionally, the voice processing method further includes waking up the voice engine to display the voice interaction interface. Optionally, the displaying the progress information includes displaying the progress information on the voice interaction interface, where the progress information is a marking condition of different positions on the progress bar. Optionally, in response to the processing result, executing the control task and displaying the progress information includes obtaining the number of control steps of the control task module corresponding to the processing result; dividing the progress bar into corresponding progress units based on the number of the manipulation steps; and obtaining an execution result of each manipulation step, and marking a corresponding progress unit on the progress bar based on the execution result of each manipulation step, wherein a plurality of manipulation steps have a sequential relationship. 
Optionally, in response to the processing result, executing the control task and displaying the progress information includes obtaining an execution time of a control task module corresponding to the processing result; dividing the progress bar into corresponding progress units according to the execution time; and if the task module is controlled to execute the task, timing and marking the corresponding progress unit according to the timing information. Optionally, in response to the processing result, executing the control task and displaying the progress information includes obtaining an execution time of a control task module corresponding to the processing result; dividing the progress bar into corresponding progress units according to the execution time; and if the task control module executes the task, dynamically marking the corresponding progress unit according to the execution time of each control step.
In the voice software provided by the embodiment of the present disclosure, if a control task triggered by an input voice command is automatically performed in the background, the control task comprises a plurality of operation steps. A progress bar is displayed on the interactive interface of the voice software to show the progress of the plurality of operation steps corresponding to the control task.
According to embodiments of the present disclosure, a computer readable medium may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, optical fiber cable, radio frequency signals, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.
Claims (7)
1. A method of speech processing comprising:
obtaining a voice input;
based on the voice input, a voice engine is awakened, and a voice interaction interface is displayed;
obtaining a processing result aiming at the voice input, wherein the processing result is used for representing a control task corresponding to the voice input, the control task comprises a plurality of control steps, a corresponding control task module is called when the control step is executed, and the control steps comprise at least one execution operation related to an application program of a non-voice engine; and
responding to the processing result, executing the control task and displaying progress information, wherein the progress information is used for displaying the completion progress of the control steps, and the executing the control task is automatically executing the control steps;
and after the plurality of control steps are executed, jumping to and displaying a result interface that the voice input finally requires to be displayed.
2. The method of claim 1, wherein the progress information is a marking of different positions on a progress bar.
3. The method of claim 2, wherein the executing the manipulation task and presenting progress information in response to the processing result comprises:
acquiring the number of control steps of a control task module corresponding to the processing result;
dividing the progress bar into corresponding progress units based on the number of the manipulation steps; and
obtaining an execution result of each manipulation step, and marking a corresponding progress unit on the progress bar based on the execution result of each manipulation step, wherein the plurality of manipulation steps have a sequential relationship.
4. The method of claim 2, wherein the executing the manipulation task and presenting progress information in response to the processing result comprises:
obtaining the execution time of the control task module corresponding to the processing result;
dividing the progress bar into corresponding progress units according to the execution time; and
and if the task control module executes the task, timing and marking the corresponding progress unit according to the timing information.
5. The method of claim 2, wherein the executing the manipulation task and presenting progress information in response to the processing result comprises:
obtaining the execution time of the control task module corresponding to the processing result;
dividing the progress bar into corresponding progress units according to the execution time; and
and if the task control module executes the task, dynamically marking the corresponding progress unit according to the execution time of each control step.
6. An electronic device, comprising:
the first acquisition module is used for acquiring voice input;
the awakening module is used for awakening a voice engine based on the voice input and displaying a voice interaction interface;
the second obtaining module is used for obtaining a processing result aiming at the voice input, the processing result is used for representing a control task corresponding to the voice input, the control task comprises a plurality of control steps, the control step is executed by calling a corresponding control task module, and the control steps comprise at least one execution operation related to an application program of a non-voice engine; and
a response module, configured to respond to the processing result, execute the control task and display progress information, where the progress information is used to display completion progress of the plurality of control steps, and the executing the control task is to automatically execute the plurality of control steps;
and after the plurality of control steps are executed, jumping to and displaying a result interface that the voice input finally requires to be displayed.
7. The electronic device of claim 6, wherein the response module comprises:
the display unit is used for displaying the progress information on the voice interaction interface, wherein the progress information is the marking conditions of different positions on the progress bar;
wherein the response module further comprises:
the first acquisition unit is used for acquiring the number of control steps of the control task module corresponding to the processing result;
a first dividing unit for dividing the progress bar into corresponding progress units based on the number of the manipulation steps; and
the first marking unit is used for obtaining an execution result of each control step and marking a corresponding progress unit on the progress bar based on the execution result of each control step, wherein the plurality of control steps have a sequential relationship;
or, wherein the response module comprises:
the second acquisition unit is used for acquiring the execution time of the control task module corresponding to the processing result;
the second dividing unit is used for dividing the progress bar into corresponding progress units according to the execution time; and
the second marking unit is used for timing and marking the corresponding progress unit according to the timing information under the condition that the task control module executes the task;
or, wherein the response module comprises:
the third acquisition unit is used for acquiring the execution time of the control task module corresponding to the processing result;
the third dividing unit is used for dividing the progress bar into corresponding progress units according to the execution time; and
and the third marking unit is used for dynamically marking the corresponding progress unit according to the execution time of each control step under the condition that the control task module executes the task.
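The first of the three marking schemes above — divide the progress bar into one progress unit per control step and mark each unit as that step's execution result arrives — can be sketched as below. This is an illustrative sketch only (the `ProgressBar` class and step names are hypothetical); the two time-based variants in the claim would instead divide the bar by the task's expected execution time and mark units from a timer or from each step's measured duration.

```python
# Hypothetical sketch of step-count-based progress marking.

class ProgressBar:
    def __init__(self, num_steps):
        # Divide the bar into one progress unit per control step.
        self.units = [False] * num_steps

    def mark(self, step_index):
        # Mark the unit corresponding to a finished control step.
        self.units[step_index] = True

    def completion(self):
        # Fraction of the bar currently marked.
        return sum(self.units) / len(self.units)

# The number of control steps determines the division of the bar.
steps = ["launch app", "open settings", "toggle Wi-Fi", "confirm"]
bar = ProgressBar(len(steps))

fractions = []
for i, step in enumerate(steps):    # the steps have a sequential relation
    # ... the control step would be executed here and its result obtained ...
    bar.mark(i)
    fractions.append(bar.completion())
```

After each step the marked fraction advances by one unit (0.25, 0.5, 0.75, 1.0 for four steps), which is exactly the per-step granularity the first marking unit describes.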
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810296940.8A (CN108491246B) | 2018-03-30 | 2018-03-30 | Voice processing method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810296940.8A (CN108491246B) | 2018-03-30 | 2018-03-30 | Voice processing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108491246A CN108491246A (en) | 2018-09-04 |
CN108491246B (en) | 2021-06-15 |
Family
ID=63314531
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810296940.8A (granted as CN108491246B, Active) | Voice processing method and electronic equipment | 2018-03-30 | 2018-03-30 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108491246B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103838487A (en) * | 2014-03-28 | 2014-06-04 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic device |
CN104898821A (en) * | 2014-03-03 | 2015-09-09 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107346228B (en) * | 2017-07-04 | 2021-07-16 | Lenovo (Beijing) Co., Ltd. | Voice processing method and system of electronic equipment |
- 2018-03-30: CN application CN201810296940.8A filed; granted as patent CN108491246B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN108491246A (en) | 2018-09-04 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
KR20220091500A (en) | Methods and devices, electronic devices and media for displaying music points | |
CN107728783B (en) | Artificial intelligence processing method and system | |
JP2019185011A (en) | Processing method for waking up application program, apparatus, and storage medium | |
CN110267113B (en) | Video file processing method, system, medium, and electronic device | |
KR20230016049A (en) | Video processing method and device, electronic device, and computer readable storage medium | |
CN108897575B (en) | Configuration method and configuration system of electronic equipment | |
EP3902280A1 (en) | Short video generation method and platform, electronic device, and storage medium | |
US11468881B2 (en) | Method and system for semantic intelligent task learning and adaptive execution | |
US11270690B2 (en) | Method and apparatus for waking up device | |
WO2019128829A1 (en) | Action execution method and apparatus, storage medium and electronic apparatus | |
US9607617B2 (en) | Concept cloud in smart phone applications | |
CN107045498A | Synchronous translation device, method and apparatus with double-sided display, and electronic equipment | |
US20170206059A1 (en) | Apparatus and method for voice recognition device in vehicle | |
CN112306447A (en) | Interface navigation method, device, terminal and storage medium | |
CN110727869A (en) | Page construction method and device | |
US9137483B2 (en) | Video playback device, video playback method, non-transitory storage medium having stored thereon video playback program, video playback control device, video playback control method and non-transitory storage medium having stored thereon video playback control program | |
CN113672748A (en) | Multimedia information playing method and device | |
US20240079002A1 (en) | Minutes of meeting processing method and apparatus, device, and medium | |
CN111246245B (en) | Method and device for pushing video aggregation page, server and terminal equipment | |
US11960535B2 (en) | Method for recommending podcast in music application and device | |
EP3174312A1 (en) | Playback method and playback device for a multiroom sound system | |
CN108491246B (en) | Voice processing method and electronic equipment | |
CN111105781B (en) | Voice processing method, device, electronic equipment and medium | |
US9996148B1 (en) | Rule-based presentation of media items | |
WO2023056850A1 (en) | Page display method and apparatus, and device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||