CN109658926B - Voice instruction updating method and mobile terminal - Google Patents


Info

Publication number
CN109658926B
CN109658926B (application CN201811448953.9A)
Authority
CN
China
Prior art keywords
voice
input
task
target
subtask
Prior art date
Legal status
Active
Application number
CN201811448953.9A
Other languages
Chinese (zh)
Other versions
CN109658926A (en)
Inventor
韩桂敏
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811448953.9A priority Critical patent/CN109658926B/en
Publication of CN109658926A publication Critical patent/CN109658926A/en
Application granted granted Critical
Publication of CN109658926B publication Critical patent/CN109658926B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] using icons
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command

Abstract

The invention provides a voice instruction updating method and a mobile terminal. The method comprises: acquiring a target voice task sequence, the target voice task sequence being associated with a voice control instruction; extracting N voice subtasks from the target voice task sequence; displaying M application program icons corresponding to the N voice subtasks; receiving a first input from a user; and, in response to the first input, updating the target voice task sequence. Because different voice subtasks can be distinguished through a specified segmentation operation, individual voice subtasks can be updated on their own, reducing the time spent re-entering instructions for error correction.

Description

Voice instruction updating method and mobile terminal
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method for updating a voice command and a mobile terminal.
Background
In order to reduce the number of operation steps a user performs on a mobile terminal, mobile terminal systems have gradually shifted from key-press control to voice control. A single voice command can replace a multi-step key operation.
In the prior art, voice commands can be predefined by the system so that specific applications can be controlled by voice. For example, the voice command "call XXX" can make the system automatically place a call to XXX. However, such predefined voice commands are quite restrictive, and to address this limitation, voice command update (customization) functions have appeared: the user can enable the mobile terminal's custom voice instruction function and enter a voice instruction. However, this scheme only supports one-shot entry; if the user makes a mistake during entry, the whole instruction must be re-entered, which makes error correction time-consuming.
Disclosure of Invention
The embodiment of the invention provides a voice instruction updating method and a mobile terminal, and aims to solve the problem that time is consumed for error correction entry because the voice instruction needs to be re-entered when being updated.
In a first aspect, an embodiment of the present invention discloses a method for updating a voice command, which is applied to a mobile terminal, and includes:
acquiring a target voice task sequence, wherein the target task sequence is associated with a voice control instruction;
extracting N voice subtasks in the target voice task sequence;
displaying M application program icons corresponding to the N voice subtasks;
receiving a first input of a user;
in response to the first input, updating the target speech task sequence.
In a second aspect, an embodiment of the present invention further discloses a mobile terminal, including:
the task sequence acquisition module is used for acquiring a target voice task sequence, and the target task sequence is associated with a voice control instruction;
the subtask extraction submodule is used for extracting N voice subtasks in the target voice task sequence;
the application icon display module is used for displaying M application program icons corresponding to the N voice subtasks;
the first input receiving module is used for receiving a first input of a user;
and the task updating module is used for responding to the first input and updating the target voice task sequence.
In a third aspect, an embodiment of the present invention further discloses a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the voice instruction updating method according to any one of the above.
In a fourth aspect, an embodiment of the present invention further discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the voice instruction updating method according to any one of the above.
In the embodiment of the invention, a target voice task sequence is obtained, and the target task sequence is associated with a voice control instruction; extracting N voice subtasks in the target voice task sequence; displaying M application program icons corresponding to the N voice subtasks; receiving a first input of a user; in response to the first input, updating the target speech task sequence. Different voice subtasks can be identified through the specified segmentation operation, so that partial voice subtasks can be updated, and the error correction entry time is reduced.
Drawings
FIG. 1 is a flowchart illustrating steps of a method for updating voice commands according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a first interface for customizing a voice command according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a second interface for customizing voice commands according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an outer ring dynamic effect of a screen recording button according to an embodiment of the present invention;
FIG. 5 illustrates a first update interface for custom voice commands in a first embodiment of the present invention;
FIG. 6 illustrates a second update interface for custom voice commands in accordance with a first embodiment of the present invention;
fig. 7 is a block diagram illustrating a mobile terminal according to a second embodiment of the present invention;
fig. 8 is a diagram illustrating a hardware structure of a mobile terminal implementing various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The following describes a method for updating a voice command and a mobile terminal according to the present invention in detail by taking several specific embodiments.
Referring to fig. 1, a flowchart illustrating steps of a method for updating a voice instruction according to a first embodiment of the present invention is shown, which may specifically include the following steps:
step 101, a target voice task sequence is obtained, and the target task sequence is associated with a voice control instruction.
The embodiment of the invention is applied to the mobile terminal with the screen recording function, including a tablet personal computer, a mobile phone and the like.
In practical application, a user can start a screen recording function of the mobile terminal, then perform teaching operation on the mobile terminal, and set a voice awakening word for the operation, so that the user can speak the voice awakening word to the mobile terminal, and the mobile terminal can perform the teaching operation.
As shown in fig. 2, the user may click the plus sign next to "my voice instruction teaching" to start the teaching flow and enter the interface shown in fig. 3, where the user may operate the applications. In the teaching flow, the user may click the screen recording button BTN1 once each time a voice subtask is completed, to indicate that the subtask has been entered. After completing the whole teaching operation, the user can long-press the screen recording button BTN1. As shown in fig. 4, when the animated outer ring of the screen recording button BTN1 completes a full circle, the teaching flow is finished; at this point the user can enter a wake-up word, and after the mobile terminal recognizes it, the wake-up word and the taught operation are stored together as a new custom voice instruction. The user can then speak the wake-up word to the mobile terminal, and the mobile terminal will perform the taught operation. For example, if the wake-up word is "good morning" and the taught operation is setting a seven o'clock alarm, then the mobile terminal automatically sets a seven o'clock alarm after the user says "good morning".
It will be appreciated that the user tutorial action may include one or more voice subtasks, which may correspond to the same or different applications, and which are organized in sequence into a sequence of voice tasks. For example, the user teaching operation includes 2 voice subtasks, the 1 st voice subtask is to set an alarm clock, the 2 nd voice subtask is to run music playing software, and the 1 st voice subtask and the 2 nd voice subtask correspond to different applications. It can be understood that after the alarm clock is set, the user can inform the mobile terminal that the 1 st voice subtask entry is completed through the specified segmentation operation.
The voice subtask is one or more operation steps with a purpose, and one voice subtask corresponds to one application. For example, for the voice subtask of setting the alarm clock, 3 steps of turning on the alarm clock software, setting the alarm clock and turning on the alarm clock are included.
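The organization just described, one application per subtask and subtasks ordered into a sequence keyed by a wake-up word, can be sketched as a minimal data model. The class and field names below are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VoiceSubtask:
    app: str             # the single application this subtask targets
    steps: List[str]     # ordered, purposeful operation steps

@dataclass
class VoiceTaskSequence:
    wake_word: str                                     # e.g. "good morning"
    subtasks: List[VoiceSubtask] = field(default_factory=list)

# The alarm-clock example: one subtask made of three steps in one application.
seq = VoiceTaskSequence("good morning", [
    VoiceSubtask("AlarmClock",
                 ["open alarm software", "set 7:00 alarm", "enable alarm"]),
])
```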
In the embodiment of the invention, the screen recording result is a video recording the teaching operation of the user, and the video is composed of a series of screenshots aiming at the interface of the mobile terminal. The video thus records the start, end, operational steps, path and application targeted for each voice subtask.
And 102, extracting N voice subtasks in the target voice task sequence.
In particular, the voice subtasks may be divided according to the specified division operation as described above. The specified dividing operation may be other types of operations besides a single click on the screen recording button, for example, shaking the mobile phone, pressing a physical button, and the like, which is not limited in the embodiments of the present invention.
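The division step can be sketched as splitting the recorded event stream wherever a segmentation marker occurs. The event format and the marker token here are assumptions for illustration; the patent does not fix a representation:

```python
def extract_subtasks(events, marker="SEGMENT"):
    """Split a recorded event stream into voice subtasks.

    `marker` stands in for the specified segmentation operation
    (a tap on the screen recording button, a shake, a key press, ...).
    """
    subtasks, current = [], []
    for ev in events:
        if ev == marker:
            if current:              # close the subtask recorded so far
                subtasks.append(current)
                current = []
        else:
            current.append(ev)
    if current:                      # trailing events form the last subtask
        subtasks.append(current)
    return subtasks

recorded = ["open alarm", "set 7:00", "SEGMENT", "open music player", "play"]
print(extract_subtasks(recorded))
# → [['open alarm', 'set 7:00'], ['open music player', 'play']]
```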
And 103, displaying M application program icons corresponding to the N voice subtasks.
Specifically, the application icons may be displayed in the order in which they appeared during screen recording. When there are many application icons, only part of them is shown on the current page, and the user can reveal the others above, below, or to the left or right by sliding or by clicking a designated button. As shown in fig. 5, the icons above or below may be displayed by clicking the designated up and down buttons.
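Because several subtasks may correspond to the same application, the M displayed icons are the distinct applications of the N subtasks in first-appearance order, so M ≤ N. A sketch, using an assumed dictionary representation for subtasks:

```python
def icons_for(subtasks):
    """Return application icons for the subtasks, deduplicated, order preserved."""
    seen, icons = set(), []
    for task in subtasks:
        if task["app"] not in seen:
            seen.add(task["app"])
            icons.append(task["app"])
    return icons

tasks = [{"app": "AlarmClock"}, {"app": "Music"}, {"app": "Music"}]
print(icons_for(tasks))   # → ['AlarmClock', 'Music']  (N = 3 subtasks, M = 2 icons)
```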
In another embodiment of the present invention, before step 104, the following steps a1 to a2 may be further included:
step a1, receiving a second input from the user to a target application icon of the M application icons.
Wherein the second input is for requesting all voice subtasks for the target application icon.
The second input may be a click input, a long-press input, or the like, and the second input may also be a second operation, and in practical applications, the second input may be a click operation on the target application program icon.
And step A2, responding to the second input, and displaying task identifiers of all voice subtasks corresponding to the target application program icon.
As shown in fig. 5, the user may click the target application icon (for example, the application APP1) twice in succession, and enter the interface shown in fig. 6, and the mobile terminal may display all the voice subtasks of the application APP1, so that the user may update the voice subtasks (including TASK1, TASK2, and TASK3) therein.
It can be seen that when there are more voice subtasks, the user can slide left and right, as shown in FIG. 6.
The embodiment of the invention can display all the voice subtasks corresponding to the specified application, thereby facilitating the user to delete and change one or more voice subtasks in a certain application.
Step 104, receiving a first input of a user.
The first input is used for initiating the update of the voice subtask, and may include one or more, and different inputs may initiate different types of updates. The first input may be a click input, a long press input, or the like, and the first input may also be a first operation.
Step 105, in response to the first input, updating the target speech task sequence.
In the embodiment of the present invention, since the different voice subtasks corresponding to the same voice command were separated when the sequence was entered and extracted, each voice subtask can be updated individually; and since each subtask's corresponding application is known, the voice subtasks under a given application can also be updated.
Wherein the updates include, but are not limited to: adding a new voice subtask, deleting a voice subtask, and changing a voice subtask.
Optionally, in another embodiment of the present invention, the first input is used to delete at least one voice subtask, and the step 105 includes the sub-steps 1051 to 1052 of:
sub-step 1051, in case said first input is an input to a first voice sub-task of a first application icon of the M application icons, deleting said first voice sub-task from said target voice task sequence.
Specifically, the first voice subtask of the first application icon may be dragged to a specified location (e.g., the garbage collector to the right of the instruction details in FIG. 5).
It can be understood that the deleting mode of the voice subtask may be set according to an actual application scenario, and the embodiment of the present invention does not limit the deleting mode.
In addition, the deletion of the voice subtask can also be performed in the interface as shown in fig. 6, and the steps are the same as those in fig. 5.
The embodiment of the invention can delete the voice subtasks one by one, thereby realizing accurate and flexible deletion of the voice subtasks.
Substep 1052, in case the first input is an input to a second application icon of the M application icons, deleting all voice subtasks corresponding to the second application icon from the target voice task sequence.
Specifically, when the user drags the second application icon into a specified location (e.g., the garbage collector to the right of the instruction details in FIG. 5), all voice subtasks in the second application icon are deleted.
It can be understood that the deleting modes of all the voice subtasks in the second application icon can also be flexibly set, and the embodiment of the present invention does not limit the deleting modes.
In the embodiment of the invention, a user can delete one or all voice subtasks of one application icon at a time, thereby realizing flexible voice subtask deletion.
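Under a simplified list-of-dicts model (an assumption for illustration; real deletion would be driven by the drag gesture), the two deletion granularities of sub-steps 1051 and 1052 look like:

```python
def delete_subtask(sequence, index):
    """Sub-step 1051: delete the single voice subtask at `index`."""
    return [t for i, t in enumerate(sequence) if i != index]

def delete_app(sequence, app):
    """Sub-step 1052: delete every voice subtask belonging to `app`."""
    return [t for t in sequence if t["app"] != app]

seq = [{"app": "APP1", "name": "TASK1"},
       {"app": "APP1", "name": "TASK2"},
       {"app": "APP2", "name": "TASK3"}]
print(delete_subtask(seq, 0))   # TASK1 removed; TASK2 and TASK3 remain
print(delete_app(seq, "APP1"))  # only TASK3 remains
```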
Optionally, in another embodiment of the present invention, the first input is used to modify at least one voice subtask, and the step 105 includes the sub-steps 1053 to 1054:
sub-step 1053, in the case where said first input is an input to a second voice subtask of a third application icon of the M application icons, acquiring a re-entered third voice subtask and replacing said second voice subtask.
In the embodiment of the invention, when the user selects one of the voice subtasks and then selects "re-teach", that voice subtask is re-recorded to replace the old one. As shown in fig. 6, the user may first select the voice subtask "TASK1" and then select "re-teach", so that "TASK1" is replaced by the re-entered voice subtask.
Substep 1054, in case that the first input is for a fourth application icon of the M application icons, obtaining a re-entered first voice task subsequence and replacing a second voice task subsequence corresponding to the fourth application icon; each application icon corresponds to one voice task subsequence, and each voice task subsequence comprises at least one voice subtask.
Specifically, the user first selects the fourth application icon (e.g., application APP1 in fig. 5) and then selects "re-teach", so that all voice subtasks under that application icon will be re-entered.
It will be appreciated that the modification of the voice subtasks may also be performed in an interface as shown in FIG. 6, with the same steps as in FIG. 5.
The embodiment of the invention can re-input one or all voice subtasks in one application program icon, thereby realizing flexible change of the voice subtasks.
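Under the same simplified list-of-dicts model, sub-steps 1053 and 1054 reduce to replacing either one subtask or an application's whole subsequence with re-entered content. Contiguity of an application's subtasks is assumed here for illustration:

```python
def replace_subtask(sequence, index, new_task):
    """Sub-step 1053: replace the subtask at `index` with a re-entered one."""
    return sequence[:index] + [new_task] + sequence[index + 1:]

def replace_app_subsequence(sequence, app, new_tasks):
    """Sub-step 1054: replace all of `app`'s subtasks with a re-entered
    subsequence, inserted where the old subsequence began."""
    start = next(i for i, t in enumerate(sequence) if t["app"] == app)
    kept = [t for t in sequence if t["app"] != app]
    return kept[:start] + new_tasks + kept[start:]

seq = [{"app": "APP1", "name": "TASK1"},
       {"app": "APP1", "name": "TASK2"},
       {"app": "APP2", "name": "TASK3"}]
print(replace_subtask(seq, 0, {"app": "APP1", "name": "NEW"}))
print(replace_app_subsequence(seq, "APP1", [{"app": "APP1", "name": "NEW"}]))
```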
Optionally, in another embodiment of the present invention, the first input is used to add at least one voice subtask; the above step 105 comprises sub-steps 1055 to 1056:
and a substep 1055 of obtaining a target position corresponding to the first input.
The target position may be after or before a certain object in the target voice instruction, and may be a default foremost position or a default rearmost position. For example, as shown in FIG. 5, a user may click APP1, and the target location may be before or after APP 1; as shown in FIG. 6, the user can click on the voice subtask 1, and the target position is after or before the TASK 1.
Sub-step 1056, obtaining at least one voice sub-task newly entered.
It is understood that the method for entering the voice subtask can refer to the detailed description of step 101, and is not described herein again.
Specifically, as shown in fig. 5, the custom voice instruction VOICE1 includes multiple application program icons; the user can select one of them (e.g., application APP1) and select "try me" to add a new voice subtask before or after that application icon. As shown in fig. 6, the application APP1 includes multiple voice subtasks; the user can select one of them (e.g., TASK1) and select "try me" to add a new voice subtask before or after TASK1.
Substep 1057 of adding said at least one voice subtask to said target location; each application program icon corresponds to one voice task subsequence, and each voice task subsequence comprises at least one voice subtask; the target position comprises any position in the voice task subsequence corresponding to any application program icon.
In the embodiment of the invention, the voice subtask can be added into the voice instruction, and the flexible addition of the voice instruction is realized.
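The addition flow of sub-steps 1055 to 1057 reduces to a positional insert of the newly entered subtasks. A sketch, where the position convention (0 is foremost, `len(sequence)` is rearmost) is an assumption:

```python
def add_subtasks(sequence, position, new_tasks):
    """Sub-step 1057: insert re-entered subtasks at the target position.

    `position` may be any index into the sequence, including 0 (the
    default foremost position) or len(sequence) (the default rearmost).
    """
    return sequence[:position] + new_tasks + sequence[position:]

seq = ["TASK1", "TASK2"]
print(add_subtasks(seq, 1, ["NEW"]))   # → ['TASK1', 'NEW', 'TASK2']
```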
In the embodiment of the invention, a target voice task sequence is obtained, and the target task sequence is associated with a voice control instruction; extracting N voice subtasks in the target voice task sequence; displaying M application program icons corresponding to the N voice subtasks; receiving a first input of a user; in response to the first input, updating the target speech task sequence. Different voice subtasks can be identified through the specified segmentation operation, so that partial voice subtasks can be updated, and the error correction entry time is reduced.
Referring to fig. 7, a block diagram of a mobile terminal according to a second embodiment of the present invention is shown.
The mobile terminal 200 includes: the task sequence acquisition module 201, the subtask extraction sub-module 202, the application icon display module 203, the first input receiving module 204, and the task update module 205.
The functions of the modules and the interaction relationship between the modules are described in detail below.
A task sequence obtaining module 201, configured to obtain a target voice task sequence, where the target task sequence is associated with a voice control instruction.
And the subtask extraction submodule 202 is configured to extract N voice subtasks in the target voice task sequence.
And the application icon display module 203 is configured to display M application icons corresponding to the N voice subtasks.
The first input receiving module 204 is configured to receive a first input of a user.
A task update module 205, configured to update the target speech task sequence in response to the first input.
Optionally, in another embodiment of the present invention, the mobile terminal 200 further includes:
and the second input receiving module is used for receiving second input of a user to a target application program icon in the M application program icons.
And the second task identifier display module is used for responding to the second input and displaying the task identifiers of all the voice subtasks corresponding to the target application program icon.
Optionally, in another embodiment of the present invention, the first input is used to delete at least one voice subtask, and the task update module 205 includes:
a first deletion submodule, configured to delete a first voice subtask from the target voice task sequence if the first input is an input to the first voice subtask in a first application icon of the M application icons.
And the second deletion sub-module is used for deleting all voice subtasks corresponding to the second application program icon from the target voice task sequence under the condition that the first input is input to the second application program icon in the M application program icons.
Optionally, in another embodiment of the present invention, the first input is used to modify at least one voice subtask, and the task update module 205 includes:
and the first re-recording sub-module is used for acquiring the re-recorded third voice subtask and replacing the second voice subtask when the first input is input to a second voice subtask in a third application icon in the M application icons.
The second re-recording sub-module is used for acquiring a re-recorded first voice task sub-sequence and replacing a second voice task sub-sequence corresponding to a fourth application icon in the M application icons when the first input is the fourth application icon; each application icon corresponds to one voice task subsequence, and each voice task subsequence comprises at least one voice subtask.
Optionally, in another embodiment of the present invention, the first input is used to add at least one voice subtask; the task update module 205 includes:
and the target position obtaining submodule is used for obtaining a target position corresponding to the first input.
And the re-recording task obtaining sub-module is used for obtaining at least one newly recorded voice sub-task.
A task adding sub-module, configured to add the at least one voice sub-task to the target location; each application program icon corresponds to one voice task subsequence, and each voice task subsequence comprises at least one voice subtask; the target position comprises any position in the voice task subsequence corresponding to any application program icon.
In the embodiment of the invention, a target voice task sequence is obtained, and the target task sequence is associated with a voice control instruction; extracting N voice subtasks in the target voice task sequence; displaying M application program icons corresponding to the N voice subtasks; receiving a first input of a user; in response to the first input, updating the target speech task sequence. Different voice subtasks can be identified through the specified segmentation operation, so that partial voice subtasks can be updated, and the error correction entry time is reduced.
The second embodiment is a corresponding device embodiment to the first embodiment, and the detailed description may refer to the first embodiment, which is not repeated herein.
Fig. 8 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, where the mobile terminal 300 includes but is not limited to: radio frequency unit 301, network module 302, audio output unit 303, input unit 304, sensor 305, display unit 306, user input unit 307, interface unit 308, memory 309, processor 310, and power supply 311. Those skilled in the art will appreciate that the mobile terminal architecture illustrated in fig. 8 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 310, configured to obtain a target voice task sequence, where the target task sequence is associated with a voice control instruction; extracting N voice subtasks in the target voice task sequence; displaying M application program icons corresponding to the N voice subtasks; receiving a first input of a user; in response to the first input, updating the target speech task sequence.
In the embodiment of the invention, a target voice task sequence is obtained, and the target task sequence is associated with a voice control instruction; extracting N voice subtasks in the target voice task sequence; displaying M application program icons corresponding to the N voice subtasks; receiving a first input of a user; in response to the first input, updating the target speech task sequence. Different voice subtasks can be identified through the specified segmentation operation, so that partial voice subtasks can be updated, and the error correction entry time is reduced.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 301 may be used for receiving and sending signals during messaging or a call; specifically, it receives downlink data from a base station and forwards it to the processor 310 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 301 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 302, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 303 may convert audio data received by the radio frequency unit 301 or the network module 302 or stored in the memory 309 into an audio signal and output as sound. Also, the audio output unit 303 may also provide audio output related to a specific function performed by the mobile terminal 300 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 304 is used to receive audio or video signals. The input unit 304 may include a Graphics Processing Unit (GPU) 3041 and a microphone 3042; the graphics processor 3041 processes image data of still pictures or video obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 306. The image frames processed by the graphics processor 3041 may be stored in the memory 309 (or other storage medium) or transmitted via the radio frequency unit 301 or the network module 302. The microphone 3042 may receive sounds and process them into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 301 and output.
The mobile terminal 300 also includes at least one sensor 305, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 3061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 3061 and/or a backlight when the mobile terminal 300 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 305 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 306 is used to display information input by the user or information provided to the user. The display unit 306 may include a display panel 3061, and the display panel 3061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 307 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 307 includes a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 3071 using a finger, a stylus, or any suitable object or attachment). The touch panel 3071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 310, and receives and executes commands sent by the processor 310. In addition, the touch panel 3071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. The user input unit 307 may include other input devices 3072 in addition to the touch panel 3071. Specifically, the other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein.
Further, the touch panel 3071 may be overlaid on the display panel 3061, and when the touch panel 3071 detects a touch operation on or near the touch panel, the touch operation is transmitted to the processor 310 to determine the type of the touch event, and then the processor 310 provides a corresponding visual output on the display panel 3061 according to the type of the touch event. Although the touch panel 3071 and the display panel 3061 are shown in fig. 8 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 3071 and the display panel 3061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 308 is an interface through which an external device is connected to the mobile terminal 300. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 308 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 300 or may be used to transmit data between the mobile terminal 300 and external devices.
The memory 309 may be used to store software programs as well as various data. The memory 309 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like. Further, the memory 309 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 310 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 309 and calling data stored in the memory 309, thereby performing overall monitoring of the mobile terminal. Processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 310.
The mobile terminal 300 may further include a power supply 311 (such as a battery) for supplying power to the various components. Preferably, the power supply 311 may be logically connected to the processor 310 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
In addition, the mobile terminal 300 includes some functional modules that are not shown, and thus, the detailed description thereof is omitted.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 310, a memory 309, and a computer program stored in the memory 309 and capable of running on the processor 310, where the computer program is executed by the processor 310 to implement each process of the foregoing voice instruction updating method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the method for updating a voice instruction, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A voice task processing method is applied to a mobile terminal, and is characterized by comprising the following steps:
acquiring a target voice task sequence, wherein the target voice task sequence is associated with a voice control instruction;
extracting N voice subtasks in the target voice task sequence;
displaying M application program icons corresponding to the N voice subtasks;
receiving a first input of a user;
updating the target speech task sequence in response to the first input;
wherein each of the voice subtasks is configured as one or more operation steps serving a purpose, and each voice subtask corresponds to one application program.
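The patent discloses no source code, so the mapping recited in claim 1 — a target voice task sequence holding N voice subtasks, each belonging to exactly one application, displayed as M application icons — can only be sketched. In the following illustrative Python model, the function name, the `"app"`/`"steps"` fields, and the example applications are all assumptions, not the patent's implementation:

```python
from collections import OrderedDict

def group_subtasks_by_app(task_sequence):
    """Group the N voice subtasks of a sequence by application.

    Returns an ordered mapping: application name -> its voice task
    subsequence, i.e. the M icons shown for the N subtasks (M <= N).
    """
    groups = OrderedDict()
    for subtask in task_sequence:
        groups.setdefault(subtask["app"], []).append(subtask)
    return groups

# Hypothetical sequence: 4 subtasks spanning 3 applications.
sequence = [
    {"app": "Camera", "steps": ["open camera", "take photo"]},
    {"app": "Gallery", "steps": ["open latest photo"]},
    {"app": "Messages", "steps": ["attach photo", "send to Alice"]},
    {"app": "Camera", "steps": ["close camera"]},
]
icons = group_subtasks_by_app(sequence)
# N = 4 subtasks collapse onto M = 3 application icons.
```

This also illustrates claim 2: tapping a target application icon would display the task identifiers of exactly the subsequence stored under that icon's key.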
2. The method of claim 1, wherein prior to receiving the first input from the user, further comprising:
receiving a second input of a user to a target application icon in the M application icons;
and responding to the second input, and displaying the task identifications of all the voice subtasks corresponding to the target application program icon.
3. The method of claim 1, wherein the first input is used to delete at least one voice subtask;
said updating the target speech task sequence in response to the first input, comprising:
deleting a first voice subtask from the target voice task sequence if the first input is an input to the first voice subtask in a first application icon of the M application icons;
and under the condition that the first input is input to a second application program icon in the M application program icons, deleting all voice subtasks corresponding to the second application program icon from the target voice task sequence.
4. The method of claim 1, wherein the first input is used to alter at least one voice subtask;
said updating the target speech task sequence in response to the first input, comprising:
under the condition that the first input is an input to a second voice subtask in a third application icon of the M application icons, acquiring a re-entered third voice subtask and replacing the second voice subtask with the third voice subtask;
under the condition that the first input is an input to a fourth application icon of the M application icons, acquiring a re-entered first voice task subsequence and replacing a second voice task subsequence corresponding to the fourth application icon with the first voice task subsequence;
each application icon corresponds to one voice task subsequence, and each voice task subsequence comprises at least one voice subtask.
5. The method of claim 1, wherein the first input is used to add at least one voice subtask;
said updating the target speech task sequence in response to the first input, comprising:
acquiring a target position corresponding to the first input;
acquiring at least one newly-recorded voice subtask;
adding the at least one voice subtask to the target location;
each application program icon corresponds to one voice task subsequence, and each voice task subsequence comprises at least one voice subtask; the target position comprises any position in the voice task subsequence corresponding to any application program icon.
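Claims 3 to 5 recite three edit operations on the target voice task sequence: deleting a subtask (or an application's whole subsequence), replacing a subtask with a re-entered one, and inserting newly recorded subtasks at a target position. The patent gives no implementation; the sketch below is an assumption-laden illustration that models the sequence as a flat list of hypothetical `(application, task)` pairs:

```python
def delete_subtask(seq, app, task):
    """Claim 3, first branch: remove one subtask of one application."""
    return [t for t in seq if t != (app, task)]

def delete_app(seq, app):
    """Claim 3, second branch: remove every subtask of one application."""
    return [t for t in seq if t[0] != app]

def replace_subtask(seq, old, new):
    """Claim 4: substitute a re-entered subtask for an existing one."""
    return [new if t == old else t for t in seq]

def insert_subtasks(seq, position, new_tasks):
    """Claim 5: splice newly recorded subtasks in at a target position."""
    return seq[:position] + new_tasks + seq[position:]

# Hypothetical walk-through of one update session.
seq = [("Camera", "take photo"), ("Gallery", "open photo"), ("Camera", "close")]
seq = delete_subtask(seq, "Camera", "close")
seq = replace_subtask(seq, ("Gallery", "open photo"), ("Gallery", "share photo"))
seq = insert_subtasks(seq, 1, [("Messages", "send to Alice")])
```

Because each operation returns a new list, the updated sequence can simply be re-associated with the voice control instruction afterwards, which matches the "updating the target voice task sequence in response to the first input" step of claim 1.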
6. A mobile terminal, comprising:
the task sequence acquisition module is used for acquiring a target voice task sequence, wherein the target voice task sequence is associated with a voice control instruction;
the subtask extraction submodule is used for extracting N voice subtasks in the target voice task sequence;
the application icon display module is used for displaying M application program icons corresponding to the N voice subtasks;
the first input receiving module is used for receiving a first input of a user;
a task update module for updating the target speech task sequence in response to the first input;
wherein each of the voice subtasks is configured as one or more operation steps serving a purpose, and each voice subtask corresponds to one application program.
7. The mobile terminal of claim 6, wherein the mobile terminal further comprises:
the second input receiving module is used for receiving second input of a user to a target application program icon in the M application program icons;
and the second task identifier display module is used for responding to the second input and displaying the task identifiers of all the voice subtasks corresponding to the target application program icon.
8. The mobile terminal of claim 6, wherein the first input is used to delete at least one voice subtask;
the task update module comprises:
a first deletion submodule, configured to delete a first voice subtask from the target voice task sequence if the first input is an input to the first voice subtask in a first application icon of the M application icons;
and the second deletion sub-module is used for deleting all voice subtasks corresponding to the second application program icon from the target voice task sequence under the condition that the first input is input to the second application program icon in the M application program icons.
9. The mobile terminal of claim 6, wherein the first input is used to alter at least one voice subtask;
the task update module comprises:
a first re-recording sub-module, used for, under the condition that the first input is an input to a second voice subtask in a third application icon of the M application icons, acquiring a re-recorded third voice subtask and replacing the second voice subtask with the third voice subtask;
a second re-recording sub-module, used for, under the condition that the first input is an input to a fourth application icon of the M application icons, acquiring a re-recorded first voice task subsequence and replacing a second voice task subsequence corresponding to the fourth application icon with the first voice task subsequence;
each application icon corresponds to one voice task subsequence, and each voice task subsequence comprises at least one voice subtask.
10. The mobile terminal of claim 6, wherein the first input is used to add at least one voice subtask;
the task update module comprises:
the target position obtaining submodule is used for obtaining a target position corresponding to the first input;
the re-recording task obtaining sub-module is used for obtaining at least one newly recorded voice sub-task;
a task adding sub-module, configured to add the at least one voice sub-task to the target location;
each application program icon corresponds to one voice task subsequence, and each voice task subsequence comprises at least one voice subtask; the target position comprises any position in the voice task subsequence corresponding to any application program icon.
CN201811448953.9A 2018-11-28 2018-11-28 Voice instruction updating method and mobile terminal Active CN109658926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811448953.9A CN109658926B (en) 2018-11-28 2018-11-28 Voice instruction updating method and mobile terminal


Publications (2)

Publication Number Publication Date
CN109658926A CN109658926A (en) 2019-04-19
CN109658926B true CN109658926B (en) 2021-03-23

Family

ID=66112079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811448953.9A Active CN109658926B (en) 2018-11-28 2018-11-28 Voice instruction updating method and mobile terminal

Country Status (1)

Country Link
CN (1) CN109658926B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113192490A (en) * 2021-04-14 2021-07-30 Vivo Mobile Communication Co., Ltd. Voice processing method and device and electronic equipment
CN113823284B (en) * 2021-09-24 2023-10-24 Inspur Financial Information Technology Co., Ltd. System, method and medium for setting voice assistant instruction based on cloud computing

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101957679A (en) * 2009-07-14 2011-01-26 Pantech Co., Ltd. Mobile terminal for displaying menu information according to trace of touch signal
CN102945074A (en) * 2011-10-12 2013-02-27 Microsoft Corporation Population of lists and tasks from captured voice and audio content
CN108897517A (en) * 2018-06-27 2018-11-27 Lenovo (Beijing) Co., Ltd. Information processing method and electronic equipment

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
WO2007013521A1 (en) * 2005-07-26 2007-02-01 Honda Motor Co., Ltd. Device, method, and program for performing interaction between user and machine
KR20090107364A (en) * 2008-04-08 2009-10-13 LG Electronics Inc. Mobile terminal and its menu control method
EP2237140B1 (en) * 2009-03-31 2018-12-26 LG Electronics Inc. Mobile terminal and controlling method thereof
CN103838461A (en) * 2014-02-14 2014-06-04 Guangzhou Jiubang Digital Technology Co., Ltd. Icon menu popping achieving method and system
CN105677152A (en) * 2015-12-31 2016-06-15 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Voice touch screen operation processing method and device and terminal




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant