CN112201230A - Voice response method, device, equipment and storage medium - Google Patents

Voice response method, device, equipment and storage medium

Info

Publication number
CN112201230A
CN112201230A
Authority
CN
China
Prior art keywords
response
voice
application
responded
wearable device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910609650.9A
Other languages
Chinese (zh)
Inventor
付浩翔
张鸣雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Huami Information Technology Co Ltd
Original Assignee
Anhui Huami Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Huami Information Technology Co Ltd
Priority to CN201910609650.9A
Publication of CN112201230A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/28: Constructional details of speech recognition systems
    • G10L15/30: Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G10L2015/223: Execution procedure of a spoken command

Abstract

The present disclosure provides a voice response method, apparatus, device, and computer-readable storage medium. The method includes: based on a collected voice signal, acquiring voice information obtained by recognizing the voice signal; acquiring one or more response parameters corresponding to the voice information, where the response parameters include an application to be responded and/or an application page to be responded and a corresponding response instruction; and if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, executing the response instruction. Embodiments of the present disclosure control specific functions of an application through voice, improving the user experience.

Description

Voice response method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of voice interaction, and in particular, to a voice response method, apparatus, device, and computer-readable storage medium.
Background
With the development of technology, people increasingly use various smart wearable devices to improve their quality of life. Existing smart wearable devices support voice interaction, through which an information query function, a smart-home control function, or other personalized operations can be performed. In a typical voice interaction scene, a user can perform voice input on the smart wearable device in any scene and expect a response from the device. However, the voice response scheme of existing smart wearable devices has certain limitations: the response accuracy is not high, and sometimes the voice signal input by the user does not allow the smart wearable device to determine the user's specific intent, so the response result the user desires cannot be given.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a voice response method, apparatus, device, and computer-readable storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a voice response method, including:
based on the collected voice signal, acquiring voice information obtained by recognizing the voice signal;
acquiring one or more response parameters corresponding to the voice information; the response parameters comprise applications to be responded and/or application pages to be responded and corresponding response instructions;
and if the currently operated application is the application to be responded and/or the currently displayed application page is the application page to be responded, executing the response instruction.
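The three steps of the first aspect can be sketched as follows. This is an illustrative Python sketch only, not the patent's implementation; all names (`ResponseParam`, `respond`, the example package names) are hypothetical, and a real device would dispatch the instruction to the application instead of returning it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResponseParam:
    # Application and/or application page to be responded, plus the instruction.
    app_to_respond: Optional[str]    # e.g. package name of the target application
    page_to_respond: Optional[str]   # e.g. identifier of the target application page
    instruction: str                 # the response instruction to execute

def respond(params: list, current_app: str, current_page: str) -> Optional[str]:
    """Execute a response instruction only when the device context matches."""
    for p in params:
        # A missing constraint (None) is treated as "any"; the claims require
        # at least one of app/page to be present in practice.
        app_ok = p.app_to_respond is None or p.app_to_respond == current_app
        page_ok = p.page_to_respond is None or p.page_to_respond == current_page
        if app_ok and page_ok:
            return p.instruction  # a real device would trigger the action here
    return None  # no match: the voice intent does not apply to this context
```

The point of the sketch is that the same recognized utterance yields a response only when the currently running application or page matches a response parameter.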
Optionally, the response parameter further includes a response triggering manner;
the executing the response instruction comprises:
and triggering the currently running application to execute the response instruction based on the response triggering mode.
Optionally, the voice response method is applied to the intelligent wearable device;
the acquiring, based on the collected voice signal, voice information obtained by recognizing the voice signal includes:
sending the collected voice signal to a cloud end; the voice signal is used for triggering the cloud to recognize the voice signal to obtain voice information, and matching preset intention keywords based on the voice information to obtain one or more response parameters corresponding to the intention keywords and return the response parameters to the intelligent wearable equipment; the intention keywords represent possible operations to be performed by the voice information;
the acquiring one or more response parameters corresponding to the voice information includes:
and receiving one or more response parameters which are sent by the cloud and correspond to the intention keywords.
Optionally, the voice response method is applied to the intelligent wearable device;
the acquiring, based on the collected voice signal, voice information obtained by recognizing the voice signal includes:
sending the collected voice signal to a cloud end so as to acquire voice information obtained by identifying the voice signal from the cloud end;
the obtaining one or more response parameters corresponding to the voice information, and if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, executing the response instruction includes:
acquiring a response parameter corresponding to a currently running application and/or a currently displayed application page; the response parameters also comprise one or more pieces of preset text information;
and matching the text information with the voice information, and executing a response instruction corresponding to the preset text information if the voice information matches the preset text information.
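The on-device variant above (cloud recognizes the speech; the wearable matches the text against presets for the current context) can be sketched as follows. The preset table, package names, and instruction names are all hypothetical, chosen only to illustrate the matching step.

```python
# Hypothetical preset table: (application, page) -> [(preset text, instruction)].
# On a real device this would be the response parameters stored per app/page.
PRESETS = {
    ("com.huami.wear.sport", "summary"): [
        ("save", "SAVE_SPORT_DATA"),
        ("discard", "DISCARD_SPORT_DATA"),
    ],
}

def match_on_device(current_app: str, current_page: str, recognized_text: str):
    """Match cloud-recognized text against preset text of the current context."""
    for preset_text, instruction in PRESETS.get((current_app, current_page), []):
        # A simple substring match stands in for whatever matching rule
        # the implementation actually uses.
        if preset_text in recognized_text.lower():
            return instruction
    return None
```

Note that only the presets of the *current* application/page are consulted, which is what lets a short utterance like "save" be unambiguous.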
Optionally, the voice response method is applied to a cloud;
the acquiring, based on the collected voice signal, voice information obtained by recognizing the voice signal includes:
receiving a voice signal sent by the smart wearable device, and recognizing the voice signal to obtain voice information;
the acquiring one or more response parameters corresponding to the voice information includes:
matching preset intention keywords based on the voice information to obtain one or more response parameters corresponding to the intention keywords; the intention keywords represent possible operations to be performed by the voice information;
if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, executing the response instruction, including:
receiving a currently running application and/or a currently displayed application page sent by the intelligent wearable device;
if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, the response instruction is sent to the intelligent wearable device, so that the intelligent wearable device executes the response instruction.
Optionally, the voice response method is applied to a mobile terminal; the mobile terminal is associated with the intelligent wearable device;
the acquiring, based on the collected voice signal, voice information obtained by recognizing the voice signal includes:
receiving voice information sent by a cloud; the voice information is obtained by identifying the voice signal by the cloud after the associated intelligent wearable device sends the acquired voice signal to the cloud;
the acquiring one or more response parameters corresponding to the voice information includes:
receiving one or more response parameters which are sent by the cloud and correspond to the voice information; the response parameters are obtained and returned by the cloud end based on the preset intention keywords matched by the voice information, and one or more response parameters corresponding to the intention keywords are obtained;
if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, executing the response instruction comprises:
receiving a currently running application and/or a currently displayed application page sent by the intelligent wearable device;
if the currently running application is the application to be responded, and/or the currently displayed application page is the application page to be responded, the response instruction is sent to the intelligent wearable device, so that the intelligent wearable device executes the response instruction.
Optionally, the smart wearable device comprises a sound collection unit;
the voice signal is collected by the smart wearable device by starting the sound collection unit in response to a wake-up operation of the user.
Optionally, the smart wearable device further comprises an inertial sensor;
the wake-up operation comprises a triggering operation of a designated control or a designated action of a user determined based on data collected by the inertial sensor.
According to a second aspect of the embodiments of the present disclosure, there is provided a voice response apparatus including:
the voice information acquisition module is configured to acquire voice information obtained by identifying the voice signal based on the acquired voice signal;
a response parameter acquisition module configured to acquire a response parameter corresponding to the voice information; the response parameters comprise applications to be responded and/or application pages to be responded and corresponding response instructions;
and the response instruction execution module is configured to execute the response instruction if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded.
Optionally, the response parameter further includes a response triggering manner;
the response instruction execution module is configured to:
and if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, triggering the currently running application to execute the response instruction based on the response triggering mode.
Optionally, the voice response method is applied to the intelligent wearable device;
the voice information acquisition module is configured to:
sending the collected voice signal to a cloud end; the voice signal is used for triggering the cloud to recognize the voice signal to obtain voice information, and matching preset intention keywords based on the voice information to obtain one or more response parameters corresponding to the intention keywords and return the response parameters to the intelligent wearable equipment; the intention keywords represent possible performed operations of the voice information;
the response parameter acquisition module is configured to:
and receiving one or more response parameters which are sent by the cloud and correspond to the intention keywords.
Optionally, the voice response method is applied to the intelligent wearable device;
the voice information acquisition module is configured to:
sending the collected voice signal to a cloud end so as to acquire voice information obtained by identifying the voice signal from the cloud end;
the response parameter obtaining module and the response instruction executing module are configured to:
acquiring a response parameter corresponding to a currently running application and/or a currently displayed application page; the response parameters also comprise one or more pieces of preset text information;
and matching the text information with the voice information, and executing a response instruction corresponding to the preset text information if the voice information matches the preset text information.
Optionally, the voice response method is applied to a cloud;
the voice information acquisition module is configured to:
receiving a voice signal sent by the smart wearable device, and recognizing the voice signal to obtain voice information;
the response parameter acquisition module is configured to:
matching preset intention keywords based on the voice information to obtain one or more response parameters corresponding to the intention keywords; the intention keywords represent possible performed operations of the voice information;
the response instruction execution module includes:
the application and/or application page receiving sub-module is configured to receive a currently running application and/or a currently displayed application page sent by the intelligent wearable device;
and the response instruction sending sub-module is configured to send the response instruction to the intelligent wearable device if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, so that the intelligent wearable device executes the response instruction.
Optionally, the voice response method is applied to a mobile terminal; the mobile terminal is associated with the intelligent wearable device;
the voice information acquisition module is configured to:
receiving voice information sent by a cloud; the voice information is obtained by identifying the voice signal by the cloud after the associated intelligent wearable device sends the acquired voice signal to the cloud;
the response parameter acquisition module is configured to:
receiving one or more response parameters which are sent by the cloud and correspond to the voice information; the response parameters are obtained and returned by the cloud end based on the preset intention keywords matched by the voice information, and one or more response parameters corresponding to the intention keywords are obtained;
the response instruction execution module includes:
the application and/or application page receiving sub-module is configured to receive a currently running application and/or a currently displayed application page sent by the intelligent wearable device;
the response instruction sending sub-module is configured to send the response instruction to the intelligent wearable device if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, so that the intelligent wearable device executes the response instruction.
Optionally, the smart wearable device comprises a sound collection unit;
the voice signal is collected by the smart wearable device by starting the sound collection unit in response to a wake-up operation of the user.
Optionally, the smart wearable device comprises an inertial sensor;
the wake-up operation comprises a triggering operation of a designated control or a designated action of a user determined based on data collected by the inertial sensor.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the operations of the method described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program, which, when executed by one or more processors, causes the processors to perform the operations in the method as described above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the method and the device, after the voice signal of the user is collected, the voice information for identifying the voice signal is firstly obtained, then the response parameter corresponding to the voice information is obtained, the response parameter comprises the application to be responded, the application page to be responded and the corresponding response instruction, and therefore if the fact that the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded is detected, the response instruction is executed.
In the present disclosure, the voice response method can be applied to a smart wearable device. The smart wearable device sends the collected voice signal to a cloud and obtains the response parameters from the cloud. Through the response parameters, the smart wearable device can determine the user's voice intent according to the application and execute the response instruction in that application, so that a specific function of the application is controlled through voice and the user experience is improved. Meanwhile, since the cloud only stores the relevant response parameters and sends the matched response parameters to the smart wearable device, the cloud itself does not need to be modified; the method is therefore applicable to the clouds of various vendors and has good compatibility.
In the present disclosure, the smart wearable device includes a sound collection unit and an inertial sensor. The device can respond to a wake-up operation of the user by starting the sound collection unit to collect the voice signal; the wake-up operation includes a triggering operation of a designated control, or a designated action of the user determined based on data collected by the inertial sensor. The sound collection unit is thus started only when the user needs it, which saves the battery of the smart wearable device and avoids the sharp reduction in battery life that would result from keeping the sound collection unit running continuously.
The voice response method can also be applied to a smart wearable device that sends the collected voice signal to a cloud and obtains from the cloud the voice information recognized from the voice signal. The smart wearable device matches the recognized voice information against the preset text information corresponding to the currently running application and/or the currently displayed application page; if the match indicates that the user's voice intent is hit, the smart wearable device executes the response instruction corresponding to the preset text information, so that a specific function of the application is controlled through voice and the user experience is improved.
In the present disclosure, the voice response method may be applied to a cloud. The cloud receives the voice signal sent by the smart wearable device and recognizes it to obtain voice information, then matches preset intent keywords based on the voice information to obtain the response parameters corresponding to the intent keywords. After receiving the currently running application and/or currently displayed application page sent by the smart wearable device, the cloud detects whether the currently running application matches the application to be responded and/or whether the currently displayed application page matches the application page to be responded; if the match indicates that the user's voice intent is hit, the cloud sends the corresponding response instruction to the smart wearable device so that the device executes it. By modifying only the cloud, the cloud can determine the user's voice intent based on the application, a specific function of the application is controlled through voice, and the user experience is improved.
In the present disclosure, the voice response method may be applied to a mobile terminal associated with a smart wearable device. The mobile terminal receives the voice information returned by the cloud and one or more response parameters corresponding to the voice information; after receiving the currently running application and/or currently displayed application page sent by the smart wearable device, it detects whether the currently running application matches the application to be responded and/or whether the currently displayed application page matches the application page to be responded. If the match indicates that the user's voice intent is hit, the mobile terminal sends the corresponding response instruction to the smart wearable device so that the device executes it. By modifying only the mobile terminal, the mobile terminal can determine the user's voice intent according to the application, a specific function of the application is controlled through voice, and the user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart illustrating a voice response method according to an exemplary embodiment of the present disclosure.
FIG. 2 is a flow chart illustrating a second voice response method according to an exemplary embodiment of the present disclosure.
FIG. 3 is a flow chart illustrating a third voice response method according to an exemplary embodiment of the present disclosure.
FIG. 4 is a flow chart illustrating a fourth voice response method according to an exemplary embodiment of the present disclosure.
Fig. 5 is a block diagram illustrating a voice response apparatus according to an exemplary embodiment of the present disclosure.
FIG. 6 is an architecture diagram illustrating an electronic device according to an example embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
The smart wearable device or mobile terminal in the related art supports voice interaction, through which an information query function, a smart-home control function, or other personalized operations can be performed. In a typical voice interaction scenario, a user can perform voice input on the smart wearable device or mobile terminal in any scene and expect a response. Since the smart wearable device and the mobile terminal implement voice interaction in the same way, the following description takes the smart wearable device as an example. The implementation in the related art is as follows: the smart wearable device collects the user's voice signal and sends it to the cloud; the cloud recognizes the voice signal to obtain a voice recognition result (i.e., voice information), determines the user's intent from that result to determine the corresponding function, executes the function, and returns the response result to the smart wearable device. For example, if the user inputs the voice signal "what is the weather today", the cloud recognizes it, determines that the user wants to know the weather condition, executes the weather query, and returns the weather result to the device. However, sometimes the voice signal input by the user does not allow the device to determine the user's specific intent. For example, if the user inputs a "save" voice signal, neither the smart wearable device nor the cloud can determine the user's specific intent from the voice information alone, so the desired response result cannot be given and the user experience is poor.
Accordingly, to solve the problems in the related art, embodiments of the present disclosure provide a voice response method; the voice response method can clarify the intention of the voice signal input by the user according to the currently running application, thereby obtaining the response result desired by the user and realizing the specific function in the application controlled by the voice.
Referring to Fig. 1, Fig. 1 is a flowchart illustrating a voice response method according to an exemplary embodiment. The voice response method may be executed by a smart wearable device or a mobile terminal; the following description takes execution by the smart wearable device as an example. The smart wearable device may be a bracelet, watch, wristband, ring, armband, anklet, or similar device. The method includes the following steps:
in step S101, based on the collected voice signal, voice information obtained by recognizing the voice signal is acquired.
In step S102, one or more response parameters corresponding to the voice information are acquired; the response parameters comprise the application to be responded and/or the application page to be responded and the corresponding response instruction.
In step S103, if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, the response instruction is executed.
It should be noted that the smart wearable device includes a sound collection unit for collecting the user's voice signal; the sound collection unit may be a device such as a microphone or a sound pickup.
In one possible implementation, when detecting a wake-up operation of the user, the smart wearable device starts the sound collection unit in response to that operation to collect the voice signal. For example, the wake-up operation may be the triggering of a designated control (a virtual control or a physical button). Alternatively, in a smart wearable device that includes an inertial sensor (an acceleration sensor, a gyroscope, or the like), a designated action of the user can be determined from the inertial sensor data; for example, when the smart wearable device is a watch or bracelet, the designated action may be a wrist-raise, and the device starts the sound collection unit upon detecting the wrist-raise. This intelligent detection frees the user's hands and improves the user experience. Meanwhile, the sound collection unit is started only when the user needs it, which saves the battery of the smart wearable device and avoids the sharp reduction in battery life caused by keeping the sound collection unit running continuously.
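The wrist-raise wake-up described above can be sketched as below. The patent does not specify a detection algorithm, so this is a minimal hypothetical sketch: it assumes the inertial data has been reduced to a short window of (timestamp, pitch-angle) samples, and the threshold value is illustrative.

```python
def is_wrist_raise(samples, pitch_threshold=40.0):
    """Detect a wrist-raise from a window of (timestamp, pitch_deg) samples.

    A raise is assumed when the watch face tilts up by more than
    `pitch_threshold` degrees across the window. All numbers here are
    illustrative, not taken from the patent.
    """
    if len(samples) < 2:
        return False
    _, start_pitch = samples[0]
    _, end_pitch = samples[-1]
    return (end_pitch - start_pitch) > pitch_threshold

def maybe_start_collection(samples, start_collector):
    # Start the sound collection unit only when a wake gesture is seen,
    # saving battery compared to keeping the microphone always on.
    if is_wrist_raise(samples):
        start_collector()
        return True
    return False
```

In practice such gesture detection usually runs in a low-power sensor hub so the main processor sleeps until the gesture fires, which is what yields the battery saving the text describes.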
It should be noted that the present disclosure does not limit when the sound collection unit is started; the smart wearable device can respond to the user's wake-up operation and start the sound collection unit while any application is open. Starting the sound collection unit and the currently opened application are two mutually independent functions.
In an embodiment, intent keywords and one or more response parameters corresponding to each intent keyword are configured in advance at the cloud, where an intent keyword represents an operation that the voice information may intend to perform. For example, for a voice signal such as "I want to reply to a short message" or "send a short message", the corresponding intent keyword may be "short message reply", representing the operation of replying to a short message. Additionally, the response parameters may take the following forms:
In a first possible implementation manner, the response parameter may include an application to be responded, a response triggering manner, and a corresponding response instruction, where the response triggering manner represents the triggering operation that executes the response instruction. In one example, the response triggering manner comprises a control (a virtual control or a physical control) and an operation triggering the control. For example, if the intention keyword is "short message reply", the corresponding response parameters include the short message application, a "reply" control, an operation triggering the control, and a reply instruction. If the intention keyword is "save", it may correspond to multiple response parameters: one may include the sports application, a "save" control, an operation triggering the control, and a save instruction for saving sports data, while another may include the telephone application, a "save" control, an operation triggering the control, and a save instruction for saving a telephone number.
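As a hypothetical illustration only (the keyword and instruction names below are assumptions, not part of the patent), the configuration described above can be sketched as a mapping from intention keywords to lists of response parameters, where one keyword such as "save" may map to several parameter sets:

```python
# Illustrative sketch of the cloud-side configuration described above.
# All names ("sms", "save_sports_data", ...) are assumed for illustration.
INTENT_CONFIG = {
    "short_message_reply": [
        {
            "app": "sms",                # application to be responded
            "control": "reply",          # control in the response triggering manner
            "trigger": "tap",            # operation that triggers the control
            "instruction": "reply_sms",  # response instruction
        },
    ],
    # One intention keyword may correspond to multiple response parameters.
    "save": [
        {"app": "sports", "control": "save", "trigger": "tap",
         "instruction": "save_sports_data"},
        {"app": "phone", "control": "save", "trigger": "tap",
         "instruction": "save_phone_number"},
    ],
}

def lookup(intent_keyword):
    """Return the response parameters configured for an intention keyword."""
    return INTENT_CONFIG.get(intent_keyword, [])
```

The cloud would return the whole list for a matched keyword, leaving the device to pick the entry whose application matches the current context.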
In a second possible implementation manner, the response parameter may include an application page to be responded, a response triggering manner, and a corresponding response instruction; the application page represents a UI interface that the application displays to the user, such as a short message reply page.
In a third possible implementation manner, the response parameter may include an application to be responded, an application page to be responded, a response triggering manner, and a corresponding response instruction; taking the intention keyword as "voice reply" as an example, the following is an exemplary illustration of the form of two response parameters:
{"packageName":"com.huami.wear.notification",
"appAction":"com.huami.wear.notification.ACTION_MESSAGE",
"action":"voice_reply"}
or
{"packageName":"com.huami.wear.message",
"appAction":"com.huami.wear.message.ACTION_DETAILS",
"action":"voice_reply"}
Here, packageName represents the application to be responded, appAction represents the application page to be responded, and action represents the response triggering manner and the corresponding response instruction, that is, how the response instruction is triggered for execution.
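A minimal device-side sketch of parsing the two JSON forms shown above (the field names match the example; everything else, including how the fields would then be used, is an assumption):

```python
import json

def parse_response_parameter(raw):
    """Split a response parameter into the three parts named in the text:
    the application to be responded (packageName), the application page to
    be responded (appAction), and the response triggering manner /
    response instruction (action)."""
    param = json.loads(raw)
    return param["packageName"], param["appAction"], param["action"]

raw = ('{"packageName":"com.huami.wear.notification",'
       '"appAction":"com.huami.wear.notification.ACTION_MESSAGE",'
       '"action":"voice_reply"}')
app, page, action = parse_response_parameter(raw)
```

The device can then compare `app` and `page` against the currently running application and currently displayed page before dispatching `action`.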
It can be seen that in the embodiment of the present disclosure, no modification of the cloud is required; the cloud only needs to store the configured intention keywords and the data of the response parameters corresponding to those intention keywords.
In this disclosure, after collecting a voice signal through the sound collection unit, the smart wearable device sends the collected voice signal to the cloud. The cloud performs voice recognition on the received voice signal to obtain voice information, matches the voice information against the preset intention keywords to obtain one or more response parameters corresponding to the matched intention keyword, and returns those response parameters to the smart wearable device, which receives them.
After receiving the response parameters, in order to match their content, the smart wearable device acquires the currently running application, the currently displayed application page, or both. In a first possible manner, if the smart wearable device acquires both the currently running application and the currently displayed application page, it detects whether the currently running application matches the application to be responded and whether the currently displayed application page matches the application page to be responded. In a second possible manner, if the smart wearable device only acquires the currently running application, it detects whether the currently running application matches the application to be responded. In a third possible manner, if the smart wearable device only acquires the currently displayed application page, it detects whether the currently displayed application page matches the application page to be responded. In all three cases, if the match succeeds, the smart wearable device triggers the currently running application to execute the response instruction based on the response triggering manner; otherwise, the smart wearable device does not respond to the voice information.
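The three matching cases described above can be condensed into one check. In this illustrative sketch (all names assumed), `None` marks a piece of context the device did not acquire, which is then simply not checked:

```python
def should_respond(current_app, current_page, param):
    """Return True when the voice message should be responded to.

    param is a response parameter dict that may constrain the application
    to be responded ("app"), the application page to be responded ("page"),
    or both.  A context field the device did not acquire is passed as None
    and skipped, mirroring the three matching cases in the text.
    """
    if current_app is not None and "app" in param and current_app != param["app"]:
        return False
    if current_page is not None and "page" in param and current_page != param["page"]:
        return False
    return True
```

If `should_respond` returns False for every received response parameter, the device leaves the voice message unanswered.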
It can be seen that the cloud only needs to respond to the voice signal from the smart wearable device and return the corresponding parameters; it does not need to know the specific meaning of the voice signal, which is instead executed by the smart wearable device. In other words, the cloud only runs a fixed flow, and no change to the cloud's flow is needed. The voice response method of the embodiment of the present disclosure can therefore connect to the voice-recognition cloud of any manufacturer, has good compatibility, can be realized by improving only the smart wearable device, and is also applicable to smart wearable devices that can only borrow the voice-recognition cloud of other manufacturers, giving it wide applicability. Furthermore, the relevant parameters (intention keywords and corresponding response parameters) are configured at the cloud; when they need to be updated, only the cloud needs to be modified or replaced, and all smart wearable devices connected to the cloud then perform voice response according to the updated rules. This is simple and efficient, and avoids the problem that, if the relevant parameters were configured on the smart wearable devices themselves, some users would not update their firmware or system and would therefore fail to respond according to the updated rules.
In an implementation manner, taking as an example that the response parameters include an application to be responded, an application page to be responded, a response triggering manner, and a corresponding response instruction, and that the response triggering manner includes a control and an operation simulating the user triggering the control: the smart wearable device includes a voice assistant application. The voice assistant sends the voice signal collected by the sound collection unit to the cloud and receives from the cloud the voice information corresponding to the voice signal and one or more corresponding response parameters, where the response parameters include the application to be responded, the application page to be responded, the response triggering manner, and the response instruction triggered by that manner, the response triggering manner representing the triggering operation that executes the response instruction. After detecting that the currently running application is the application to be responded and the currently displayed application page is the application page to be responded, the voice assistant simulates the user's operation of triggering the control, so that the currently running application executes the response instruction.
As an example, assume the application to be responded is a music playing application, the currently displayed application page is the playing page of a certain song, and three function controls are arranged at three different positions in the playing page: a "previous" control, a "pause" control, and a "next" control. In the related art, if the user wants to trigger any one of these controls, the user must click on it. With the scheme of the embodiment of the present disclosure, when the user says "pause", after recognizing that the user expects the application to execute the pause function, the user's operation can be simulated to trigger the "pause" control; the music playing application then detects that the control is triggered and executes the pause function, thereby realizing voice control.
In a possible implementation manner, in order to further improve the user experience and reduce the user's operation steps, the response instruction in the response parameter may be configured as a composite response instruction, which represents a plurality of instructions that need to be responded to in order to complete an operation. For example, suppose the application currently displayed by the smart wearable device is an exercise application and a voice signal of the user saying "i want to run" is received. Since the running mode works only when both the running mode and the positioning function are turned on, in order to spare the user the tedious steps of opening the positioning application and turning on positioning, the response parameter may be preset to include the exercise application, a page of the exercise application, a triggering manner for opening the running mode, and a corresponding composite instruction that both opens the running mode and turns on the positioning function (or the response parameters may include only one of the exercise application and its page, together with the other parameters), thereby optimizing the use experience of the user.
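A composite response instruction as described here is, in effect, an ordered sequence of instructions executed from a single voice command. A hypothetical sketch (the instruction names "enable_positioning" and "open_running_mode" are assumptions for illustration):

```python
def execute(instruction, log):
    """Stand-in for the device actually performing one instruction;
    here it only records the instruction so the order can be checked."""
    log.append(instruction)

def respond(response_instruction, log):
    """Execute a response instruction.  A composite response instruction is
    a list of instructions, so a single voice command ("i want to run")
    can also trigger its prerequisites, such as turning on positioning."""
    instructions = (response_instruction
                    if isinstance(response_instruction, list)
                    else [response_instruction])
    for instruction in instructions:
        execute(instruction, log)

log = []
respond(["enable_positioning", "open_running_mode"], log)
```

Ordering matters: the prerequisite (positioning) is listed before the function that depends on it (running mode).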
It should be noted that, in addition to responding to a voice signal within a specific application to execute that application's response instruction based on the above voice response method, the smart wearable device also supports the voice response method commonly used in the related art, responding to a voice signal input by the user to execute a general response instruction. For example, if the voice signal "i want to listen to a song" is collected on an application display interface of the smart wearable device, the smart wearable device may execute a general response instruction of opening a music application based on that voice signal.
It can be seen that the voice intention of the user can be determined according to the application, the application's response instruction can be executed based on the user's voice, and the response result the user wants can be given. Compared with the unfriendly experience of touch interaction, controlling the specific functions of an application through voice interaction is more convenient for devices with limited display screens, such as smart wearable devices; it frees the user's hands and significantly improves the use experience. Moreover, according to the embodiment of the present disclosure, the sound collection unit can be started while any application is open, and the sound collection unit and the application are independent of each other. That is, starting the sound collection unit does not depend on whether the application has voice collection or voice control functions: the application does not need to be configured with such functions or make any voice-control-related modification, yet its specific functions can still be controlled by voice. This avoids the tedious work developers would otherwise spend modifying the application, and users of the application do not need to update it from an application store to give it a voice collection function, which improves the use experience.
Referring to fig. 2, fig. 2 is a flowchart illustrating a second voice response method according to an exemplary embodiment of the present disclosure, where the voice response method may be executed by a smart wearable device, and the method includes:
in step S201, the collected voice signal is sent to the cloud, so as to obtain from the cloud the voice information obtained by recognizing the voice signal.
In step S202, a response parameter corresponding to the currently running application and/or the currently displayed application page is obtained, where the response parameter includes one or more pieces of preset text information and a corresponding response instruction.
In step S203, the text information and the voice information are matched, and if the voice information matches the preset text information, a response instruction corresponding to the preset text information is executed.
In an embodiment, text information corresponding to an application and an application page, or to one of them, is configured on the smart wearable device in advance; for example, text information such as "i want to reply", "reply with a short message", or "send a short message" is configured for the short message application and the short message reply page. The smart wearable device includes a sound collection unit used to collect the user's voice signal; the sound collection unit may be a device such as a microphone or a sound pickup.
In the embodiment of the present disclosure, the smart wearable device collects a voice signal through the sound collection unit and sends it to the cloud; the cloud receives and recognizes the voice signal to obtain voice information and returns the recognized voice information to the smart wearable device. Meanwhile, in order to match the preconfigured information, the smart wearable device obtains the response parameter corresponding to the currently running application, the currently displayed application page, or both, where the response parameter includes one or more pieces of preset text information and the corresponding response instructions. The smart wearable device then matches the preset text information corresponding to the current application and/or application page against the voice information. If the voice information matches a piece of preset text information, the voice intention of the user is hit and the response instruction corresponding to that preset text information is executed; otherwise, the voice signal is not responded to. The embodiment of the present disclosure thereby realizes voice control of the specific functions of an application, frees the user's hands, and improves the use experience.
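The device-side matching of steps S202–S203 amounts to comparing the recognized voice information against the preset texts configured for the current application and page. A minimal sketch (the preset texts follow the short-message example above; all identifiers are assumed):

```python
# Hypothetical per-context configuration of preset text information,
# following the short-message example texts given above.
PRESET_TEXTS = {
    ("sms", "reply_page"): {
        "i want to reply": "reply_sms",
        "reply with a short message": "reply_sms",
        "send a short message": "send_sms",
    },
}

def match_and_respond(current_app, current_page, voice_info):
    """Return the response instruction if the voice information matches a
    preset text configured for the current application/page context,
    otherwise None (the voice signal is not responded to)."""
    texts = PRESET_TEXTS.get((current_app, current_page), {})
    return texts.get(voice_info)
```

A real device would likely normalize the recognized text (case, punctuation) before the lookup; exact string matching is used here only to keep the sketch short.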
Referring to fig. 3, fig. 3 is a flowchart illustrating a third voice response method according to an exemplary embodiment of the present disclosure, where the voice response method may be executed by a cloud, and the cloud may be a cloud server with a voice recognition function, and the method includes:
in step S301, a voice signal sent by the smart wearable device is received, and voice information obtained by the voice signal is recognized.
In step S302, matching a preset intention keyword based on the voice information to obtain one or more response parameters corresponding to the intention keyword; the intention keyword represents an operation that the voice information may be intended to perform; the response parameters comprise the application to be responded and/or the application page to be responded and the corresponding response instruction.
In step S303, a currently running application and/or a currently displayed application page sent by the smart wearable device is received, and if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, the response instruction is sent to the smart wearable device, so that the smart wearable device executes the response instruction.
In one embodiment, an intention keyword and one or more response parameters corresponding to the intention keyword are configured in the cloud in advance, where the intention keyword represents an operation that the voice information may be intended to perform. The response parameters may take the following forms: in a first possible implementation manner, the response parameter may include an application to be responded, a response triggering manner, and a corresponding response instruction; in a second possible implementation manner, an application page to be responded, a response triggering manner, and a corresponding response instruction; in a third possible implementation manner, an application to be responded, an application page to be responded, a response triggering manner, and a corresponding response instruction.
In the embodiment of the present disclosure, the cloud receives the voice signal sent by the smart wearable device. If the response parameters include an application to be responded and an application page to be responded, the smart wearable device is configured to send the currently running application and the currently displayed application page; if the response parameters include only the application to be responded, the smart wearable device is configured to send the currently running application; if the response parameters include only the application page to be responded, the smart wearable device is configured to send the currently displayed application page. The cloud recognizes the voice signal to obtain voice information, matches the voice information against the preset intention keywords, and obtains one or more response parameters corresponding to the matched intention keyword. If the response parameters include an application to be responded and an application page to be responded, the cloud detects whether the currently running application is the application to be responded and whether the currently displayed application page is the application page to be responded; if the response parameters include only the application to be responded, the cloud detects whether the currently running application is the application to be responded; if the response parameters include only the application page to be responded, the cloud detects whether the currently displayed application page is the application page to be responded. In all three cases, if the detection succeeds, the cloud sends the response triggering manner and the corresponding response instruction to the smart wearable device, so that the smart wearable device triggers the currently running application to execute the response instruction based on the response triggering manner; otherwise, the voice information is not responded to.
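The cloud-side decision of steps S301–S303 can be sketched end to end as follows. This is illustrative only: `recognize()` stands in for the actual speech-recognition engine, and the intent table and instruction names are assumptions.

```python
# Hypothetical intent configuration held at the cloud.
INTENTS = {
    "short message reply": [
        {"app": "sms", "page": "reply_page", "instruction": "reply_sms"},
    ],
}

def recognize(voice_signal):
    """Stand-in for the cloud's speech-recognition engine: here the
    'signal' is already treated as its transcript."""
    return voice_signal

def cloud_respond(voice_signal, current_app, current_page):
    """Return the response instruction to send back to the device, or
    None when no configured intent matches the device's context."""
    voice_info = recognize(voice_signal)
    for params in INTENTS.get(voice_info, []):
        app_ok = "app" not in params or current_app == params["app"]
        page_ok = "page" not in params or current_page == params["page"]
        if app_ok and page_ok:
            return params["instruction"]
    return None
```

In this variant the context check happens at the cloud, so only the final instruction travels back to the device, mirroring step S303.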
It is to be understood that the above embodiments are merely exemplary; the currently running application and/or the currently displayed application page may or may not be transmitted simultaneously with the voice signal, which is not limited in the present application. In this embodiment of the present disclosure, the cloud is modified, and the powerful computing resources of the cloud are used to control the specific functions of an application by voice, replacing touch interaction on the smart wearable device with voice interaction and improving the use experience of the user. Furthermore, the relevant parameters (intention keywords and corresponding response parameters) are configured at the cloud; when they need to be updated or the corresponding voice functions are improved, only the cloud needs to be modified or replaced, and all smart wearable devices connected to the cloud then perform voice response according to the updated rules. This is simple and efficient, and avoids the problem that, if the relevant parameters were configured on the smart wearable devices themselves, some users would not update their firmware or system and would therefore fail to respond according to the updated rules.
Referring to fig. 4, fig. 4 is a flowchart illustrating a fourth voice response method according to an exemplary embodiment of the present disclosure, where the voice response method may be executed by a mobile terminal associated with the smart wearable device; the mobile terminal may be a mobile phone, a tablet, a computer, or the like, and the method includes:
in step S401, receiving voice information sent by the cloud; the voice information is obtained by identifying the voice signal by the cloud after the collected voice signal is sent to the cloud by the associated intelligent wearable device.
In step S402, receiving one or more response parameters, corresponding to the voice information, sent by the cloud; the response parameters are obtained and returned by the cloud by matching the voice information against the preset intention keywords, as the one or more response parameters corresponding to the matched intention keyword; the response parameters comprise the application to be responded and/or the application page to be responded and the corresponding response instruction.
In step S403, receiving the currently running application and/or the currently displayed application page sent by the smart wearable device, and if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, sending the response instruction to the smart wearable device, so that the smart wearable device executes the response instruction.
In one embodiment, an intention keyword and one or more response parameters corresponding to the intention keyword are configured in the cloud in advance, where the intention keyword represents an operation that the voice information may be intended to perform. The response parameters may take the following forms: in a first possible implementation manner, the response parameter may include an application to be responded, a response triggering manner, and a corresponding response instruction; in a second possible implementation manner, an application page to be responded, a response triggering manner, and a corresponding response instruction; in a third possible implementation manner, an application to be responded, an application page to be responded, a response triggering manner, and a corresponding response instruction.
In this embodiment, the mobile terminal may be connected to the smart wearable device through Bluetooth and receive the voice signal sent by the smart wearable device. Meanwhile, the smart wearable device also sends the voice signal to the cloud, so that the cloud receives and recognizes the voice signal to obtain voice information. The mobile terminal may, based on the voice signal, issue a voice callback instruction to the cloud so as to receive the voice information corresponding to the voice signal from the cloud; at the same time, the cloud matches the preset intention keywords based on the recognized voice information, obtains one or more response parameters corresponding to the matched intention keyword, and returns them to the mobile terminal.
Corresponding to the response parameters, the mobile terminal also receives information about the currently running application, the currently displayed application page, or both, sent by the smart wearable device. If the response parameters received from the cloud include an application to be responded and an application page to be responded, the mobile terminal detects whether the currently running application is the application to be responded and whether the currently displayed application page is the application page to be responded; if the response parameters include only the application to be responded, the mobile terminal detects whether the currently running application is the application to be responded; if the response parameters include only the application page to be responded, the mobile terminal detects whether the currently displayed application page is the application page to be responded. In all three cases, if the detection succeeds, the mobile terminal sends the response triggering manner and the corresponding response instruction to the smart wearable device, so that the smart wearable device triggers the currently running application to execute the response instruction based on the response triggering manner; otherwise, the mobile terminal does not respond to the voice information.
It is to be understood that the above embodiments are merely exemplary, and one or more of the currently running application and the currently displayed application page may be transmitted simultaneously with the voice signal or may not be transmitted simultaneously, which is not limited in the present application.
According to the embodiment of the present disclosure, the mobile terminal is modified so that it can determine the user's voice intention according to the application, control the specific functions of the application by voice, and improve the use experience of the user. Furthermore, the relevant parameters (intention keywords and corresponding response parameters) are configured at the cloud; when they need to be updated, only the cloud needs to be modified or replaced, and all mobile terminals connected to the cloud then perform voice response according to the updated rules. This is simple and efficient, and avoids the problem that, if the relevant parameters were configured on the smart wearable devices and mobile terminals themselves, some users would not update their firmware, system, or application software and would therefore fail to respond according to the updated rules.
In another embodiment, the intention keywords and the one or more response parameters corresponding to the intention keywords may also be configured on the mobile terminal, for example with response parameters including an application to be responded, an application page to be responded, a response triggering manner, and a corresponding response instruction. Based on a collected voice signal, the mobile terminal acquires from the cloud the voice information obtained by recognizing the voice signal, matches the voice information against the preset intention keywords, and acquires one or more response parameters corresponding to the matched intention keyword. It also receives information about the currently running application and the currently displayed application page sent by the smart wearable device; if it detects that the currently running application is the application to be responded and the currently displayed application page is the application page to be responded, it sends the response triggering manner and the corresponding response instruction to the smart wearable device, so that the smart wearable device triggers the currently running application to execute the response instruction based on the response triggering manner; otherwise, the voice information is not responded to.
As shown in fig. 5, fig. 5 is a block diagram of a voice response apparatus shown in accordance with an exemplary embodiment of the present disclosure, including:
the voice information obtaining module 501 is configured to obtain, based on the collected voice signal, voice information obtained by recognizing the voice signal.
A response parameter obtaining module 502 configured to obtain a response parameter corresponding to the voice information; the response parameters comprise the application to be responded and/or the application page to be responded and the corresponding response instruction.
The response instruction executing module 503 is configured to execute the response instruction if the currently running application is the application to be responded, and/or the currently displayed application page is the application page to be responded.
Optionally, the response parameter further includes a response triggering manner.
The response instruction execution module 503 is configured to:
and if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, triggering the currently running application to execute the response instruction based on the response triggering mode.
Optionally, the voice response method is applied to the intelligent wearable device.
The voice information acquisition module 501 is configured to:
sending the collected voice signal to a cloud; the voice signal is used for triggering the cloud to recognize the voice signal to obtain voice information, and to match preset intention keywords based on the voice information so as to obtain one or more response parameters corresponding to the matched intention keyword and return them to the smart wearable device; the intention keywords represent operations that the voice information may be intended to perform.
The response parameter acquisition module 502 is configured to:
and receiving one or more response parameters which are sent by the cloud and correspond to the intention keywords.
Optionally, the voice response method is applied to the intelligent wearable device.
The voice information obtaining module 501 is configured to:
and sending the collected voice signal to a cloud end so as to acquire voice information obtained by identifying the voice signal from the cloud end.
The response parameter obtaining module 502 and the response instruction executing module 503 are configured to:
acquiring a response parameter corresponding to a currently running application and/or a currently displayed application page; the response parameters also comprise one or more pieces of preset text information; and matching the text information with the voice information, and executing a response instruction corresponding to the preset text information if the voice information matches the preset text information.
Optionally, the voice response method is applied to the cloud.
The voice information acquisition module 501 is configured to:
and receiving a voice signal sent by the intelligent wearable equipment, and identifying voice information obtained by the voice signal.
The response parameter acquisition module 502 is configured to:
matching preset intention keywords based on the voice information to obtain one or more response parameters corresponding to the matched intention keyword; the intention keywords represent operations that the voice information may be intended to perform.
The response instruction execution module 503 is configured to:
and the application and/or application page receiving sub-module is configured to receive the currently running application and/or the currently displayed application page sent by the intelligent wearable device.
And the response instruction sending sub-module is configured to send the response instruction to the intelligent wearable device if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, so that the intelligent wearable device executes the response instruction.
Optionally, the voice response method is applied to a mobile terminal; the mobile terminal is associated with the intelligent wearable device.
The voice information acquisition module 501 is configured to:
receiving voice information sent by the cloud; the voice information is obtained by the cloud by recognizing the voice signal after the associated smart wearable device sends the collected voice signal to the cloud.
The response parameter acquisition module 502 is configured to:
receiving one or more response parameters, sent by the cloud, that correspond to the voice information; the response parameters are obtained and returned by the cloud by matching preset intention keywords based on the voice information and determining the one or more response parameters corresponding to the matched intention keywords.
The response instruction execution module 503 comprises:
an application and/or application page receiving sub-module configured to receive the currently running application and/or the currently displayed application page sent by the smart wearable device; and
a response instruction sending sub-module configured to send the response instruction to the smart wearable device if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, so that the smart wearable device executes the response instruction.
Optionally, the smart wearable device comprises a sound collection unit.
The voice signal is collected by the smart wearable device by starting the sound collection unit in response to a wake-up operation of the user.
Optionally, the smart wearable device comprises an inertial sensor.
The wake-up operation comprises a triggering operation on a designated control, or a designated action of the user determined based on data collected by the inertial sensor. The implementation of the functions and effects of each module in the above apparatus is described in detail in the implementation of the corresponding steps of the above method, and is not repeated here.
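One way the "designated action" wake-up could be detected from inertial-sensor data is a wrist-raise inferred from the spread of accelerometer readings; the sketch below is purely illustrative — the patent does not specify any detection algorithm, and the threshold and axis convention are assumptions.

```python
# Hedged illustration: detect a wrist-raise wake-up gesture as a large swing
# on the z-axis (gravity-aligned) accelerometer reading within a window of
# samples. Threshold, axis choice, and units are illustrative assumptions.

from typing import List, Tuple


def is_wrist_raise(accel_samples: List[Tuple[float, float, float]],
                   threshold: float = 6.0) -> bool:
    """Return True if the z-axis swing across the window exceeds the threshold."""
    if len(accel_samples) < 2:
        return False                      # not enough data to detect a gesture
    z_values = [sample[2] for sample in accel_samples]
    return max(z_values) - min(z_values) > threshold
```

A real implementation would run continuously at low power and likely combine several axes and timing constraints; this sketch only shows the shape of the decision that would start the sound collection unit.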
For the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: the modules described as separate parts may or may not be physically separate, and the parts shown as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement them without inventive effort.
Correspondingly, the present disclosure also provides an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein
the processor is configured to perform the operations of the voice response method as described above.
The electronic device may be a smart wearable device, a cloud server, or a mobile terminal.
Fig. 6 is a schematic structural diagram of an electronic device to which a voice response apparatus is applied according to an exemplary embodiment.
As shown in Fig. 6, the electronic device 600 may be a smart wearable device, a cloud server, or a mobile terminal.
Referring to fig. 6, electronic device 600 may include one or more of the following components: a processing component 601, a memory 602, a power component 603, a multimedia component 604, an audio component 605, an interface for input/output (I/O) 606, a sensor component 607, and a communication component 608.
The processing component 601 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 601 may include one or more processors 609 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 601 may include one or more modules that facilitate interaction between the processing component 601 and other components. For example, the processing component 601 may include a multimedia module to facilitate interaction between the multimedia component 604 and the processing component 601.
The memory 602 is configured to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 602 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 603 provides power to the various components of the electronic device 600. The power components 603 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 604 comprises a screen providing an output interface between the electronic device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 604 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 600 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
Audio component 605 is configured to output and/or input audio signals. For example, the audio component 605 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 602 or transmitted via the communication component 608. In some embodiments, audio component 605 also includes a speaker for outputting audio signals.
The I/O interface 606 provides an interface between the processing component 601 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 607 includes one or more sensors for providing status assessments of various aspects of the electronic device 600. For example, the sensor component 607 may detect an open/closed state of the electronic device 600 and the relative positioning of components, such as the display and keypad of the electronic device 600; it may also detect a change in the position of the electronic device 600 or of one of its components, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and a change in its temperature. The sensor component 607 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 607 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 607 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a heart rate sensor, an electrocardiogram sensor, a fingerprint sensor, or a temperature sensor.
The communication component 608 is configured to facilitate wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 608 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 608 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 602 including instructions executable by the processor 609 of the electronic device 600 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Wherein the instructions in the storage medium, when executed by the processor 609, enable the electronic device 600 to perform the aforementioned voice response method.
A computer-readable storage medium, on which a computer program is stored which, when executed by one or more processors, causes the processors to perform the above-described voice response method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (18)

1. A voice response method, comprising:
acquiring, based on a collected voice signal, voice information obtained by recognizing the voice signal;
acquiring one or more response parameters corresponding to the voice information; the response parameters comprise applications to be responded and/or application pages to be responded and corresponding response instructions;
and if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, executing the response instruction.
2. The voice response method according to claim 1, wherein the response parameters further include a response trigger mode;
the executing the response instruction comprises:
triggering the currently running application, based on the response trigger mode, to execute the response instruction.
3. The voice response method according to claim 1, wherein the voice response method is applied to a smart wearable device;
the acquiring and recognizing the voice information obtained by the voice signal based on the collected voice signal comprises the following steps:
sending the collected voice signal to a cloud; the voice signal is used for triggering the cloud to recognize the voice signal to obtain voice information, match preset intention keywords based on the voice information to obtain one or more response parameters corresponding to the matched intention keywords, and return the response parameters to the smart wearable device; the intention keywords represent possible operations to be performed by the voice information;
the acquiring one or more response parameters corresponding to the voice information includes:
receiving the one or more response parameters, sent by the cloud, that correspond to the intention keywords.
4. The voice response method according to claim 1, wherein the voice response method is applied to a smart wearable device;
the acquiring and recognizing the voice information obtained by the voice signal based on the collected voice signal comprises the following steps:
sending the collected voice signal to a cloud, so as to acquire, from the cloud, voice information obtained by recognizing the voice signal;
the acquiring one or more response parameters corresponding to the voice information, and the executing the response instruction if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, comprise:
acquiring response parameters corresponding to the currently running application and/or the currently displayed application page; the response parameters further comprise one or more pieces of preset text information;
matching the preset text information with the voice information, and, if the voice information matches a piece of preset text information, executing the response instruction corresponding to that piece of preset text information.
5. The voice response method according to claim 1, wherein the voice response method is applied to a cloud;
the acquiring and recognizing the voice information obtained by the voice signal based on the collected voice signal comprises the following steps:
receiving a voice signal sent by a smart wearable device, and recognizing the voice signal to obtain voice information;
the acquiring one or more response parameters corresponding to the voice information includes:
matching preset intention keywords based on the voice information to obtain one or more response parameters corresponding to the intention keywords; the intention keywords represent possible operations to be performed by the voice information;
the executing the response instruction if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded comprises:
receiving the currently running application and/or the currently displayed application page sent by the smart wearable device;
sending the response instruction to the smart wearable device if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, so that the smart wearable device executes the response instruction.
6. The voice response method according to claim 1, wherein the voice response method is applied to a mobile terminal; the mobile terminal is associated with the intelligent wearable device;
the acquiring and recognizing the voice information obtained by the voice signal based on the collected voice signal comprises the following steps:
receiving voice information sent by a cloud; the voice information is obtained by the cloud by recognizing a voice signal after the associated smart wearable device sends the collected voice signal to the cloud;
the acquiring one or more response parameters corresponding to the voice information includes:
receiving one or more response parameters, sent by the cloud, that correspond to the voice information; the response parameters are obtained and returned by the cloud by matching preset intention keywords based on the voice information and determining the one or more response parameters corresponding to the matched intention keywords;
the executing the response instruction if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded comprises:
receiving the currently running application and/or the currently displayed application page sent by the smart wearable device;
sending the response instruction to the smart wearable device if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, so that the smart wearable device executes the response instruction.
7. The voice response method according to any one of claims 1 to 6, wherein the smart wearable device includes a sound collection unit;
the voice signal is collected by the smart wearable device by starting the sound collection unit in response to a wake-up operation of the user.
8. The voice response method according to claim 7, wherein the smart wearable device further comprises an inertial sensor;
the wake-up operation comprises a triggering operation on a designated control, or a designated action of the user determined based on data collected by the inertial sensor.
9. A voice response apparatus, comprising:
the voice information acquisition module is configured to acquire voice information obtained by identifying the voice signal based on the acquired voice signal;
a response parameter acquisition module configured to acquire one or more response parameters corresponding to the voice information; the response parameters comprise applications to be responded and/or application pages to be responded and corresponding response instructions;
and the response instruction execution module is configured to execute the response instruction if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded.
10. The voice response apparatus of claim 9, wherein the response parameters further comprise a response trigger mode;
the response instruction execution module is configured to:
if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, triggering the currently running application, based on the response trigger mode, to execute the response instruction.
11. The voice response apparatus according to claim 9, wherein the voice response apparatus is applied to a smart wearable device;
the voice information acquisition module is configured to:
sending the collected voice signal to a cloud; the voice signal is used for triggering the cloud to recognize the voice signal to obtain voice information, match preset intention keywords based on the voice information to obtain one or more response parameters corresponding to the matched intention keywords, and return the response parameters to the smart wearable device; the intention keywords represent possible operations to be performed by the voice information;
the response parameter acquisition module is configured to:
and receiving one or more response parameters which are sent by the cloud and correspond to the intention keywords.
12. The voice response apparatus according to claim 9, wherein the voice response apparatus is applied to a smart wearable device;
the voice information acquisition module is configured to:
sending the collected voice signal to a cloud, so as to acquire, from the cloud, voice information obtained by recognizing the voice signal;
the response parameter acquisition module and the response instruction execution module are configured to:
acquiring response parameters corresponding to the currently running application and/or the currently displayed application page; the response parameters further comprise one or more pieces of preset text information;
matching the preset text information with the voice information, and, if the voice information matches a piece of preset text information, executing the response instruction corresponding to that piece of preset text information.
13. The voice response apparatus according to claim 9, wherein the voice response apparatus is applied to a cloud;
the voice information acquisition module is configured to:
receiving a voice signal sent by a smart wearable device, and recognizing the voice signal to obtain voice information;
the response parameter acquisition module is configured to:
matching preset intention keywords based on the voice information to obtain one or more response parameters corresponding to the intention keywords; the intention keywords represent possible operations to be performed by the voice information;
the response instruction execution module is configured to:
the application and/or application page receiving sub-module is configured to receive a currently running application and/or a currently displayed application page sent by the intelligent wearable device;
and the response instruction sending sub-module is configured to send the response instruction to the intelligent wearable device if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, so that the intelligent wearable device executes the response instruction.
14. The voice response apparatus according to claim 9, wherein the voice response apparatus is applied to a mobile terminal; the mobile terminal is associated with a smart wearable device;
the voice information acquisition module is configured to:
receiving voice information sent by a cloud; the voice information is obtained by the cloud by recognizing a voice signal after the associated smart wearable device sends the collected voice signal to the cloud;
the response parameter acquisition module is configured to:
receiving one or more response parameters, sent by the cloud, that correspond to the voice information; the response parameters are obtained and returned by the cloud by matching preset intention keywords based on the voice information and determining the one or more response parameters corresponding to the matched intention keywords;
the response instruction execution module is configured to:
the application and/or application page receiving sub-module is configured to receive a currently running application and/or a currently displayed application page sent by the intelligent wearable device;
the response instruction sending sub-module is configured to send the response instruction to the intelligent wearable device if the currently running application is the application to be responded and/or the currently displayed application page is the application page to be responded, so that the intelligent wearable device executes the response instruction.
15. The voice response apparatus according to any one of claims 9 to 14, wherein the smart wearable device includes a sound collection unit;
the voice signal is collected by the smart wearable device by starting the sound collection unit in response to a wake-up operation of the user.
16. The voice response apparatus of claim 15, wherein the smart wearable device further comprises an inertial sensor;
the wake-up operation comprises a triggering operation on a designated control, or a designated action of the user determined based on data collected by the inertial sensor.
17. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein
the processor is configured to perform the voice response method of any one of claims 1 to 8.
18. A computer-readable storage medium, having stored thereon a computer program which, when executed by one or more processors, causes the processors to perform the voice response method of any of claims 1 to 8.
CN201910609650.9A 2019-07-08 2019-07-08 Voice response method, device, equipment and storage medium Pending CN112201230A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910609650.9A CN112201230A (en) 2019-07-08 2019-07-08 Voice response method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112201230A true CN112201230A (en) 2021-01-08

Family

ID=74004589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910609650.9A Pending CN112201230A (en) 2019-07-08 2019-07-08 Voice response method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112201230A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104219388A (en) * 2014-08-28 2014-12-17 小米科技有限责任公司 Voice control method and device
CN106373570A (en) * 2016-09-12 2017-02-01 深圳市金立通信设备有限公司 Voice control method and terminal
CN107112014A (en) * 2014-12-19 2017-08-29 亚马逊技术股份有限公司 Application foci in voice-based system
CN107608652A (en) * 2017-08-28 2018-01-19 三星电子(中国)研发中心 A kind of method and apparatus of Voice command graphical interfaces
CN108108142A (en) * 2017-12-14 2018-06-01 广东欧珀移动通信有限公司 Voice information processing method, device, terminal device and storage medium
EP3346400A1 (en) * 2017-01-09 2018-07-11 Apple Inc. Application integration with a digital assistant
CN108519871A (en) * 2018-03-30 2018-09-11 广东欧珀移动通信有限公司 Acoustic signal processing method and Related product
CN108683937A (en) * 2018-03-09 2018-10-19 百度在线网络技术(北京)有限公司 Interactive voice feedback method, system and the computer-readable medium of smart television
CN109389974A (en) * 2017-08-09 2019-02-26 阿里巴巴集团控股有限公司 A kind of method and device of voice operating



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination