CN108766427B - Voice control method and device - Google Patents

Voice control method and device

Info

Publication number
CN108766427B
Authority
CN
China
Prior art keywords
voice
application
executing
voice command
command
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810551797.2A
Other languages
Chinese (zh)
Other versions
CN108766427A (en)
Inventor
李仁涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201810551797.2A
Publication of CN108766427A
Application granted
Publication of CN108766427B
Legal status: Active

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26: Speech to text systems
    • G10L2015/223: Execution procedure of a spoken command

Abstract

The disclosure relates to a voice control method and device. The method comprises the following steps: activating a voice interaction function of a voice interaction application running in the background when a preset condition is met; receiving voice information; determining, through the voice interaction application, a corresponding voice command according to the voice information; and executing the voice command. With this technical scheme, each piece of input voice information can be converted into a voice command, so that commands can be executed continuously through the voice control device, the intelligent terminal can be controlled by voice, and voice can replace manual input to the greatest extent.

Description

Voice control method and device
Technical Field
The present disclosure relates to the field of communications, and in particular, to a voice control method and apparatus.
Background
With the development of science and technology, intelligent terminals have become more and more popular, and almost everyone uses one. An intelligent terminal requires the user to touch the screen with a finger or press a physical key to trigger the execution of an operation.
Disclosure of Invention
The embodiment of the disclosure provides a voice control method and a voice control device. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a voice control method, including:
activating a voice interaction function of the voice interaction application running in the background under the condition that a preset condition is met;
receiving voice information;
determining a corresponding voice command according to the voice information through the voice interactive application;
and executing the voice command.
The technical scheme provided by the embodiments of the present disclosure can have the following beneficial effects: each piece of input voice information can be converted into a voice command, so that commands can be executed continuously through the voice control device, the intelligent terminal can be controlled by voice, and voice can replace manual input to the greatest extent.
In one embodiment, the determining, by the voice interaction application and according to the voice information, a corresponding voice command includes:
extracting keywords in the voice information through the voice interaction application;
determining, according to the keyword, the voice command and a first application for executing the voice command;
the executing the voice command comprises:
executing the voice command by the first application.
In one embodiment, said executing said voice command by said first application comprises:
when the current interface displays the first application, executing the voice command through the first application;
and when the current interface displays a second application, switching the current interface to the interface of the first application, and executing the voice command through the first application.
In one embodiment, before the voice interaction function of the voice interaction application running in the background is activated, the method further comprises:
detecting a second operation input by the user;
and responding to the second operation, and starting the voice interaction application in the background.
According to a second aspect of the embodiments of the present disclosure, there is provided a voice control apparatus including:
the activation module is used for activating the voice interaction function of the voice interaction application running in the background when the preset condition is met;
the receiving module is used for receiving voice information;
the determining module is used for determining a corresponding voice command according to the voice information through the voice interaction application;
and the execution module is used for executing the voice command.
In one embodiment, the determining module comprises:
the extraction submodule is used for extracting the keywords in the voice information through the voice interaction application;
the determining submodule is used for determining, according to the keyword, the voice command and a first application for executing the voice command;
the execution module comprises:
and the execution sub-module is used for executing the voice command through the first application.
In one embodiment, the execution submodule includes:
the first execution unit is used for executing the voice command through the first application when the first application is displayed on the current interface;
and the second execution unit is used for switching the current interface to the interface of the first application when the current interface displays a second application, and executing the voice command through the first application.
In one embodiment, the apparatus comprises:
the detection module is used for detecting a second operation input by the user;
and the response module is used for responding to the second operation and starting the voice interaction application in a background.
According to a third aspect of the embodiments of the present disclosure, there is provided a voice control apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
activating a voice interaction function of the voice interaction application running in the background under the condition that a preset condition is met;
receiving voice information;
determining a corresponding voice command according to the voice information through the voice interactive application;
and executing the voice command.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a voice control method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a voice control method according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a voice control method according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating a voice control method according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a voice control apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating a voice control apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating a voice control apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating a voice control apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating a voice control apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In the related art, to further facilitate use, intelligent terminals are provided with a voice control function. Specifically, the user opens the interface of a voice control application and speaks a sentence to the intelligent terminal, for example, "open application A". The intelligent terminal recognizes the sentence through the voice control application, generates an opening instruction corresponding to the sentence, and opens application A according to the instruction. After the display interface jumps from the interface of the voice control application to the interface of application A, the voice control application can no longer receive voice, and the user can only operate application A manually. Such voice control cannot be performed continuously; it is available only within the interface of the voice control application.
However, this approach is of limited use in practice. For example, when it is inconvenient for a user to operate the intelligent terminal with a finger, or the user cannot see the displayed content clearly or at all, such voice control provides only partial control of the intelligent terminal by voice, and voice cannot replace manual input to the greatest extent.
Example one
Fig. 1 is a flowchart illustrating a voice control method according to an exemplary embodiment. As shown in fig. 1, the voice control method is used in a voice control device, and the method includes the following steps 101 to 104:
in step 101, a voice interaction function of a voice interaction application is activated when a preset condition is met.
In this embodiment, there are many methods for activating the voice interaction function, and the preset condition may be as follows:
alternatively, the activation may be by a combination of physical keys, for example, by a power key and a volume key combination.
Alternatively, activation may be via virtual keys on the display interface.
Alternatively, a password may be entered by the user and activated when the password matches a preset password.
Here, the entry and setting of the password can be implemented in various ways, such as a character password, a voice password, or a fingerprint password, which is not limited in this embodiment.
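For illustration only, the activation check of step 101 could be reduced to logic like the following Kotlin sketch. The trigger names, the event type, and the example password are assumptions of this sketch, not definitions taken from the disclosure.

```kotlin
// Minimal sketch of the step-101 activation check; Trigger, ActivationEvent,
// and the example password are illustrative assumptions.
enum class Trigger { KEY_COMBINATION, VIRTUAL_KEY, PASSWORD }

data class ActivationEvent(val trigger: Trigger, val password: String? = null)

class ActivationChecker(private val presetPassword: String) {
    // True when one of the preset conditions listed above is met.
    fun isConditionMet(event: ActivationEvent): Boolean = when (event.trigger) {
        Trigger.KEY_COMBINATION -> true  // e.g. power key + volume key pressed together
        Trigger.VIRTUAL_KEY     -> true  // virtual key tapped on the display interface
        Trigger.PASSWORD        -> event.password == presetPassword
    }
}

fun main() {
    val checker = ActivationChecker(presetPassword = "1234")
    println(checker.isConditionMet(ActivationEvent(Trigger.PASSWORD, "1234"))) // true
    println(checker.isConditionMet(ActivationEvent(Trigger.PASSWORD, "0000"))) // false
}
```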
In step 102, voice information is received.
The voice interaction application waits for voice input in the background. If the user makes a sound whose loudness is greater than a certain threshold, the voice information is received; if the duration of the sound is also greater than a certain length, the method proceeds to step 103. If the duration is insufficient, a prompt message can be generated and displayed, indicating that the user's input was not heard clearly and asking the user to speak again. A sketch of this gating logic is given below.
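The following Kotlin sketch illustrates that gating; the two threshold constants are hypothetical values, since the embodiment does not fix concrete numbers.

```kotlin
// Sketch of the step-102 input gating; both threshold values are assumed.
const val LOUDNESS_THRESHOLD_DB = 40.0
const val MIN_DURATION_MS = 300L

sealed class VoiceInputResult
object Ignored : VoiceInputResult()      // sound below the loudness threshold
object PromptRetry : VoiceInputResult()  // too short: prompt the user to re-input
data class Accepted(val audio: ByteArray) : VoiceInputResult()  // proceed to step 103

fun gateVoiceInput(loudnessDb: Double, durationMs: Long, audio: ByteArray): VoiceInputResult =
    when {
        loudnessDb <= LOUDNESS_THRESHOLD_DB -> Ignored
        durationMs < MIN_DURATION_MS        -> PromptRetry
        else                                -> Accepted(audio)
    }
```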
In step 103, a corresponding voice command is determined according to the voice information through the voice interactive application.
Here, the voice command indicates which application is to execute it; that application may be the currently displayed application, an application running in the background, or an application that has not yet been started, which is not limited in this embodiment.
In step 104, a voice command is executed.
Here, the voice command is executed by the device, and the application through which the device executes it is determined according to the voice command.
After step 104, if the user inputs voice again, voice control can be performed in the same way as in the above steps.
In this embodiment, each piece of input voice information can be converted into a voice command, so that commands are executed continuously through the voice control device, the intelligent terminal is controlled by voice, and voice replaces manual input to the greatest extent.
In one embodiment, fig. 2 is a flowchart illustrating a voice control method according to an exemplary embodiment, and as shown in fig. 2, the step 103 of determining a corresponding voice command according to voice information through a voice interaction application may include:
in step 1031, keywords in the voice information are extracted through the voice interaction application.
In step 1032, the voice command and a first application for executing the voice command are determined based on the keywords.
For example, when the user inputs the voice information "please open application B", the extracted keywords fall into two parts: an action keyword and a noun keyword, where "open" is the action keyword and "B" is the noun keyword. When the user inputs the voice information "next page", the extracted keyword is only the noun keyword "next page", and the action keyword is omitted. It should therefore be noted that the keywords include at least a noun keyword.
Here, determining, according to the keywords, the voice command and the first application for executing it includes:
when the noun keyword is the name of the first application or a specific function of the first application, determining the voice command to be executed according to the action keyword and/or the noun keyword, and determining the corresponding first application according to the noun keyword.
For example, when the voice information input by the user is "please open application C", the extracted action keyword is "open" and the noun keyword is "C"; from these it can be determined that the command is an application-opening command, and the noun keyword identifies the specific application to open as C. When the user inputs the voice information "next page", the extracted keywords contain only the noun keyword "next page"; the omitted action keyword is completed as "turn", so a page-turning command can be determined from the action keyword and the noun keyword. Since the noun keyword contains no application name and page turning is not a specific function of one particular application, the corresponding application is taken to be the currently displayed application. A parser sketch is given below.
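The following parser sketch makes the keyword handling concrete. The action vocabulary and the set of application names are assumed purely for illustration; the disclosure does not enumerate them.

```kotlin
// Sketch of steps 1031-1032: split an utterance into an action keyword and a
// noun keyword, then resolve the target application. Vocabularies are assumed.
data class ParsedCommand(val action: String, val noun: String, val targetApp: String?)

val actionKeywords = listOf("open", "turn")  // assumed action vocabulary
val appNames = listOf("B", "C")              // assumed installed applications

fun parse(utterance: String): ParsedCommand {
    val text = utterance.lowercase().removePrefix("please ").trim()
    // Complete an omitted action keyword, as done for "next page" above.
    val action = actionKeywords.firstOrNull { text.startsWith(it) } ?: "turn"
    val noun = text.removePrefix(action).trim()
    // An application name in the noun keyword selects the first application;
    // otherwise (e.g. "next page") the currently displayed application is used.
    val targetApp = appNames.firstOrNull {
        noun == "application ${it.lowercase()}" || noun.equals(it, ignoreCase = true)
    }
    return ParsedCommand(action, noun, targetApp)
}

fun main() {
    println(parse("Please open application C"))  // action=open, targetApp=C
    println(parse("Next page"))                  // action=turn, targetApp=null (current app)
}
```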
Accordingly, step 104, executing the voice command, may include:
in step 1041, a voice command is executed by the first application.
In one embodiment, the step 1041 of executing the voice command by the first application may include:
when the current interface displays a first application, executing a voice command through the first application; and when the current interface displays the second application, switching the current interface to the interface of the first application, and executing the voice command through the first application.
It should be noted that when the current interface displays the second application, if the first application has already been started and is running in the background, the interface can be switched directly; in that case, if the voice command is only to open the first application, the interface switch itself completes the command, and no further voice command needs to be executed. If the first application has not been opened, switching the interface requires opening and displaying the first application; likewise, if the voice command is only to open the first application, the command is completed by the interface switch, and no further execution is needed. This decision is sketched below.
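Assuming a simple three-state model of the first application, the switching decision just described can be sketched as follows; the stub functions stand in for platform-specific calls and are not part of the disclosure.

```kotlin
// Sketch of the step-1041 switching logic; AppState and the stubs are
// illustrative assumptions rather than platform APIs.
enum class AppState { FOREGROUND, BACKGROUND, NOT_STARTED }

fun executeThroughFirstApp(state: AppState, commandIsJustOpen: Boolean) {
    when (state) {
        AppState.FOREGROUND  -> {}                        // already displayed, nothing to switch
        AppState.BACKGROUND  -> switchToFirstApp()        // direct switch, app already running
        AppState.NOT_STARTED -> startAndDisplayFirstApp() // switching opens and displays the app
    }
    // If the command was only "open the first application", the interface
    // switch itself completes it; otherwise run the command in the app.
    if (!commandIsJustOpen) runCommandInFirstApp()
}

fun switchToFirstApp() = println("switched to the first application")
fun startAndDisplayFirstApp() = println("first application started and displayed")
fun runCommandInFirstApp() = println("voice command executed by the first application")
```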
In one embodiment, fig. 3 is a flowchart illustrating a voice control method according to an exemplary embodiment. As shown in fig. 3, before the voice interaction function of the voice interaction application running in the background is activated, the method includes:
in step 105, a second operation input by the user is detected.
In step 106, the speech interaction application is started in the background in response to the second operation.
The second operation may be a specific pattern drawn on the screen, for example a circle; drawing it starts the voice interaction application, and if the screen is off, the specific pattern may also light up the screen while starting the application. For a device with a physical home key, the user can start the voice interaction application in the background by long-pressing the home key.
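A sketch of steps 105 and 106 might look like the following; gesture recognition is reduced to an enum, since the disclosure leaves pattern detection open.

```kotlin
// Sketch of steps 105-106: detect the second operation and start the voice
// interaction application in the background. Operation and the stubs are
// illustrative assumptions.
enum class Operation { CIRCLE_ON_SCREEN, LONG_PRESS_HOME, OTHER }

fun onUserOperation(op: Operation, screenIsOff: Boolean) {
    when (op) {
        Operation.CIRCLE_ON_SCREEN, Operation.LONG_PRESS_HOME -> {
            if (screenIsOff) lightUpScreen()  // the pattern may also wake the screen
            startVoiceAppInBackground()
        }
        Operation.OTHER -> {}  // not the second operation; ignore
    }
}

fun lightUpScreen() = println("screen lit up")
fun startVoiceAppInBackground() = println("voice interaction application started in background")
```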
Fig. 4 is a flowchart illustrating a voice control method according to an exemplary embodiment. As shown in fig. 4, the voice control method is used in a voice control device applied to an intelligent terminal, and includes the following steps 201 to 208:
in step 201, a second operation input by the user is detected.
In step 202, in response to the second operation, the screen is lit up and the voice interaction application is started in the background.
In step 203, password information input by a user is received.
In step 204, when the password information is the same as the preset information, a voice interaction function of the voice interaction application is activated.
In step 205, voice information is received.
In step 206, the keywords in the voice information are extracted through the voice interaction application running in the background.
In step 207, the voice command and a first application for executing the voice command are determined based on the keywords.
In step 208, the voice command is executed by the first application.
This embodiment adds the starting of the voice interaction application and the activation of the voice interaction function, so that the user can enable or disable the function and the application as needed, which improves the user experience. The whole flow is summarized in the sketch below.
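Putting the pieces together, steps 201 to 208 can be summarized in one orchestration sketch. Every helper below is a stub standing in for the corresponding step, with names and return values chosen only for illustration; the loop makes explicit that each new utterance is handled the same way, which is what distinguishes this scheme from the single-shot voice control of the related art.

```kotlin
// Orchestration sketch of steps 201-208; all helpers are illustrative stubs.
fun detectSecondOperation() = println("step 201: second operation detected")
fun lightScreenAndStartApp() = println("step 202: screen lit, app started in background")
fun receivePassword(): String = "1234"  // step 203 (assumed user input)
fun activateVoiceFunction() = println("step 204: voice interaction function activated")
fun extractKeywords(voice: String): List<String> = voice.lowercase().split(" ")  // step 206
fun determineCommandAndApp(keywords: List<String>): Pair<String, String> =       // step 207
    Pair(keywords.firstOrNull() ?: "turn", keywords.lastOrNull() ?: "current app")
fun execute(command: String, app: String) =                                      // step 208
    println("step 208: '$command' executed by application '$app'")

fun voiceControlFlow(presetPassword: String, utterances: List<String>) {
    detectSecondOperation()
    lightScreenAndStartApp()
    if (receivePassword() != presetPassword) return  // preset condition not met
    activateVoiceFunction()
    for (voice in utterances) {                      // step 205: voice received repeatedly,
        val keywords = extractKeywords(voice)        // so commands execute continuously
        val (command, app) = determineCommandAndApp(keywords)
        execute(command, app)
    }
}

fun main() = voiceControlFlow("1234", listOf("open C", "next page"))
```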
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Example four
Fig. 5 is a block diagram illustrating a voice control apparatus, which may be implemented as part or all of an electronic device through software, hardware, or a combination of both, according to an exemplary embodiment. As shown in fig. 5, the voice control apparatus includes:
the activation module 301 is configured to activate a voice interaction function of a voice interaction application running in a background under a preset condition;
a receiving module 302, configured to receive voice information;
a determining module 303, configured to determine, according to the voice information, a corresponding voice command through the voice interaction application;
an execution module 304, configured to execute the voice command.
In this embodiment, each piece of input voice information can be converted into a voice command, so that commands are executed continuously through the voice control device, the intelligent terminal is controlled by voice, and voice replaces manual input to the greatest extent.
In one embodiment, as shown in fig. 6, the determining module 303 includes:
an extraction submodule 3031, configured to extract a keyword in the voice information through the voice interaction application;
a determining submodule 3032, configured to determine, according to the keyword, the voice command and a first application for executing the voice command;
the execution module 304 includes:
an execution submodule 3041 for executing the voice command by the first application.
In one embodiment, as shown in FIG. 7, the execution submodule 3041 includes:
a first execution unit 30411, configured to execute the voice command through the first application when the current interface displays the first application;
a second executing unit 30412, configured to switch the current interface to an interface of the first application when the current interface displays a second application, and execute the voice command through the first application.
In one embodiment, as shown in fig. 8, the apparatus comprises:
a detection module 305, configured to detect a second operation input by the user;
a response module 306, configured to, in response to the second operation, start the voice interaction application in the background.
According to a third aspect of the embodiments of the present disclosure, there is provided a voice control apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
activating a voice interaction function of the voice interaction application running in the background under the condition that a preset condition is met;
receiving voice information;
determining a corresponding voice command according to the voice information through the voice interactive application;
and executing the voice command.
The processor may be further configured to:
the determining, by the voice interactive application and according to the voice information, a corresponding voice command includes:
extracting keywords in the voice information through the voice interaction application;
determining, according to the keyword, the voice command and a first application for executing the voice command;
the executing the voice command comprises:
executing the voice command by the first application.
The executing the voice command by the first application includes:
when the current interface displays the first application, executing the voice command through the first application;
and when the current interface displays a second application, switching the current interface to the interface of the first application, and executing the voice command through the first application.
Before the voice interaction function of the voice interaction application running in the background is activated, the method comprises the following steps:
detecting a second operation input by the user;
and responding to the second operation, and starting the voice interaction application in the background.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 9 is a block diagram illustrating an apparatus for voice control, which is suitable for a terminal device, according to an exemplary embodiment. For example, the apparatus 1700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Apparatus 1700 may include one or more of the following components: processing component 1702, memory 1704, power component 1706, multimedia component 1708, audio component 1710, input/output (I/O) interface 1712, sensor component 1714, and communications component 1716.
The processing component 1702 generally controls the overall operation of the apparatus 1700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 1702 may include one or more processors 1720 to execute instructions to perform all or a portion of the steps of the above-described method. Further, processing component 1702 may include one or more modules that facilitate interaction between processing component 1702 and other components. For example, processing component 1702 may include a multimedia module to facilitate interaction between multimedia component 1708 and processing component 1702.
The memory 1704 is configured to store various types of data to support operations at the apparatus 1700. Examples of such data include instructions for any application or method operating on the apparatus 1700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1706 provides power to the various components of the device 1700. The power components 1706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 1700.
The multimedia component 1708 includes a screen providing an output interface between the device 1700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 1700 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
Audio component 1710 is configured to output and/or input audio signals. For example, audio component 1710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 1700 is in an operating mode, such as a call mode, a record mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1704 or transmitted via the communication component 1716. In some embodiments, audio component 1710 also includes a speaker for outputting audio signals.
The I/O interface 1712 provides an interface between the processing component 1702 and peripheral interface modules, such as a keyboard, click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1714 includes one or more sensors for providing various aspects of state assessment for the apparatus 1700. For example, sensor assembly 1714 may detect an open/closed state of apparatus 1700, the relative positioning of components, such as a display and keypad of apparatus 1700, the change in position of apparatus 1700 or a component of apparatus 1700, the presence or absence of user contact with apparatus 1700, the orientation or acceleration/deceleration of apparatus 1700, and the change in temperature of apparatus 1700. The sensor assembly 1714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 1714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1716 is configured to facilitate communications between the apparatus 1700 and other devices in a wired or wireless manner. The apparatus 1700 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1716 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1704 comprising instructions, executable by the processor 1720 of the apparatus 1700 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of an apparatus 1700, enable the apparatus 1700 to perform the above-described voice control method, the method comprising:
activating a voice interaction function of the voice interaction application running in the background under the condition that a preset condition is met;
receiving voice information;
determining a corresponding voice command according to the voice information through the voice interactive application;
and executing the voice command.
The determining, by the voice interactive application and according to the voice information, a corresponding voice command includes:
extracting keywords in the voice information through the voice interaction application;
determining, according to the keyword, the voice command and a first application for executing the voice command;
the executing the voice command comprises:
executing the voice command by the first application.
The executing the voice command by the first application includes:
when the current interface displays the first application, executing the voice command through the first application;
and when the current interface displays a second application, switching the current interface to the interface of the first application, and executing the voice command through the first application.
Before the voice interaction function of the voice interaction application running in the background is activated, the method comprises the following steps:
detecting a second operation input by the user;
and responding to the second operation, and starting the voice interaction application in the background.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A voice control method, comprising:
activating a voice interaction function of the voice interaction application running in the background under the condition that a preset condition is met;
receiving voice information;
determining a corresponding voice command according to the voice information through the voice interactive application;
executing the voice command;
before the voice interaction function of the voice interaction application running in the background is activated, the method comprises the following steps:
detecting a second operation input by the user;
and responding to the second operation, and starting the voice interaction application in the background.
2. The method of claim 1, wherein determining, by the voice interaction application, a corresponding voice command based on the voice information comprises:
extracting keywords in the voice information through the voice interaction application;
determining, according to the keyword, the voice command and a first application for executing the voice command;
the executing the voice command comprises:
executing the voice command by the first application.
3. The method of claim 2, wherein the executing the voice command by the first application comprises:
when the current interface displays the first application, executing the voice command through the first application;
and when the current interface displays a second application, switching the current interface to the interface of the first application, and executing the voice command through the first application.
4. A voice control apparatus, comprising:
the activation module is used for activating the voice interaction function of the voice interaction application running in the background under the condition that a preset condition is met;
the receiving module is used for receiving voice information;
the determining module is used for determining a corresponding voice command according to the voice information through the voice interaction application;
the execution module is used for executing the voice command;
the device comprises:
the detection module is used for detecting a second operation input by the user;
and the response module is used for responding to the second operation and starting the voice interaction application in a background.
5. The apparatus of claim 4, wherein the determining module comprises:
the extraction submodule is used for extracting the keywords in the voice information through the voice interaction application;
the determining submodule is used for determining, according to the keyword, the voice command and a first application for executing the voice command;
the execution module comprises:
and the execution sub-module is used for executing the voice command through the first application.
6. The apparatus of claim 5, wherein the execution submodule comprises:
the first execution unit is used for executing the voice command through the first application when the first application is displayed on the current interface;
and the second execution unit is used for switching the current interface to the interface of the first application when the current interface displays a second application, and executing the voice command through the first application.
7. A voice control apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
activating a voice interaction function of the voice interaction application running in the background under the condition that a preset condition is met;
receiving voice information;
determining a corresponding voice command according to the voice information through the voice interactive application;
executing the voice command;
before the voice interaction function of the voice interaction application running in the background is activated, the method comprises the following steps:
detecting a second operation input by the user;
and responding to the second operation, and starting the voice interaction application in the background.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 3.
CN201810551797.2A 2018-05-31 2018-05-31 Voice control method and device Active CN108766427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810551797.2A CN108766427B (en) 2018-05-31 2018-05-31 Voice control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810551797.2A CN108766427B (en) 2018-05-31 2018-05-31 Voice control method and device

Publications (2)

Publication Number Publication Date
CN108766427A CN108766427A (en) 2018-11-06
CN108766427B 2020-10-16

Family

ID=64001383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810551797.2A Active CN108766427B (en) 2018-05-31 2018-05-31 Voice control method and device

Country Status (1)

Country Link
CN (1) CN108766427B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112181350B (en) * 2020-09-25 2023-08-15 北京博睿维讯科技有限公司 Active terminal control method and device
CN112927687A (en) * 2021-01-25 2021-06-08 珠海格力电器股份有限公司 Method, device and system for controlling functions of equipment and storage medium
CN113488042B (en) * 2021-06-29 2022-12-13 荣耀终端有限公司 Voice control method and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102668391A (en) * 2009-12-18 2012-09-12 三星电子株式会社 Method and system for controlling external output of a mobile device
CN102821214A (en) * 2012-06-29 2012-12-12 苏州思必驰信息科技有限公司 System and method for realizing voice interaction between terminal user and third-party application in communication process
CN103377028A (en) * 2012-04-20 2013-10-30 纽安斯通讯公司 Methods and systems for speech-enabling a human-to-machine interface
CN103802761A (en) * 2012-11-06 2014-05-21 罗伯特·博世有限公司 Method for activating a voice interaction with a passenger of a motor vehicle and voice interaction system for a vehicle
CN104468941A (en) * 2013-09-16 2015-03-25 联想(北京)有限公司 Information display method and device
CN105827877A (en) * 2015-01-06 2016-08-03 中国移动通信集团上海有限公司 IVR (Interactive Voice Response) platform based service processing method and IVR platform
CN106226905A (en) * 2016-08-23 2016-12-14 北京乐驾科技有限公司 A kind of head-up display device
CN106251863A (en) * 2016-07-26 2016-12-21 傲爱软件科技(上海)有限公司 A kind of instruction type speech control system based on smart machine and control method
CN106297780A (en) * 2015-06-03 2017-01-04 深圳市轻生活科技有限公司 A kind of voice interactive method and system and Intelligent voice broadcasting terminal
CN108038748A (en) * 2017-11-30 2018-05-15 苏宁云商集团股份有限公司 For aiding in response interactive interface display method and equipment


Also Published As

Publication number Publication date
CN108766427A (en) 2018-11-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant