CN107316641B - Voice control method and electronic equipment - Google Patents

Voice control method and electronic equipment

Info

Publication number
CN107316641B
CN107316641B (application CN201710525132.XA)
Authority
CN
China
Prior art keywords
control
information
application scene
type
electronic equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710525132.XA
Other languages
Chinese (zh)
Other versions
CN107316641A (en)
Inventor
张晓平
李辉
王哲鹏
武亚强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201710525132.XA priority Critical patent/CN107316641B/en
Publication of CN107316641A publication Critical patent/CN107316641A/en
Application granted granted Critical
Publication of CN107316641B publication Critical patent/CN107316641B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • G10L2015/226 Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics

Abstract

The application discloses a voice control method and an electronic device. When a voice control instruction is obtained, application scene information of the electronic device at the moment the instruction is obtained is acquired as well; the control type corresponding to the voice control instruction is then parsed; on that basis, the electronic device is controlled using both the control type and the application scene information. The scheme thus controls the electronic device using the voice control instruction in combination with the application scene information of the device at the moment the instruction is obtained. Because the application scene information is taken into account during voice control, control of the device is more intelligent, which overcomes the problems of the prior-art voice control mode being single in control and insufficiently intelligent.

Description

Voice control method and electronic equipment
Technical Field
The invention belongs to the technical field of voice control of electronic equipment, and particularly relates to a voice control method and electronic equipment.
Background
Currently, when voice control is performed on an electronic device, the device generally controls only the control object it currently provides, based solely on the control type parsed from the voice command.
For example, in a music playing scenario, a user may issue a music playing command to the electronic device, and the device, after parsing the control type of the command, plays the music currently presented on its interface (e.g., music the user selected on the interface).
Because this voice control mode merely parses the control type indicated by the voice command and controls the control object currently provided by the device, it suffers from single control and insufficient intelligence.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a voice control method and an electronic device that overcome the problems of single control and insufficient intelligence in prior-art voice control.
Therefore, the invention discloses the following technical scheme:
a voice control method is applied to electronic equipment and comprises the following steps:
acquiring a voice control instruction, and acquiring application scene information of the electronic equipment when the voice control instruction is acquired;
analyzing a control type corresponding to the voice control instruction;
and controlling the electronic equipment by using the control type and the application scene information.
In the above method, preferably, the acquiring application context information of the electronic device when the voice control instruction is acquired includes:
and acquiring the application scene information by using a sensing unit and/or a background system of the electronic equipment.
Preferably, the application scenario information includes one or more of a motion time, a motion frequency, and a heartbeat frequency of a user of the electronic device.
Preferably, the controlling the electronic device by using the control type and the application context information includes:
determining a control object in the electronic equipment according to the control type;
determining control parameter information correspondingly required when the control object is controlled according to the application scene information;
performing control corresponding to the control type on the control object based on the control parameter information.
Preferably, the determining, according to the application context information, control parameter information correspondingly required when the control object is controlled includes:
performing weighted operation on the data of the application scene information to determine the type of the application scene;
and determining control parameter information corresponding to the application scene type according to the application scene type.
An electronic device, comprising:
the acquisition unit is used for acquiring a voice control instruction and acquiring application scene information of the electronic equipment when the voice control instruction is acquired;
the analysis unit is used for analyzing the control type corresponding to the voice control instruction;
and the control unit is used for controlling the electronic equipment by utilizing the control type and the application scene information.
Preferably, the above electronic device, where the obtaining unit obtains application context information of the electronic device when obtaining the voice control instruction, specifically includes:
and acquiring the application scene information by using a sensing unit and/or a background system of the electronic equipment.
Preferably, in the electronic device, the application context information acquired by the acquiring unit includes one or more of a motion time, a motion frequency, and a heartbeat frequency of a user of the electronic device.
Preferably, the control unit of the electronic device is specifically configured to:
determining a control object in the electronic equipment according to the control type; determining control parameter information correspondingly required when the control object is controlled according to the application scene information; performing control corresponding to the control type on the control object based on the control parameter information.
Preferably, in the electronic device, the control unit determines, according to the application scene information, control parameter information correspondingly required when the control object is controlled, and specifically includes:
performing weighted operation on the data of the application scene information to determine the type of the application scene; and determining control parameter information corresponding to the application scene type according to the application scene type.
According to the above scheme, when a voice control instruction is obtained, application scene information of the electronic device at that moment is obtained as well; the control type corresponding to the instruction is then parsed; on that basis, the electronic device is controlled using both the control type and the application scene information. Because the application scene information of the electronic device is taken into account during voice control, control of the device is more intelligent, which overcomes the problems of single control and insufficient intelligence in prior-art voice control.
Drawings
To describe the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings used in the description are briefly introduced below. The drawings described below show only embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a first embodiment of a voice control method provided in the present application;
FIG. 2 is a flowchart of a second embodiment of the voice control method provided in the present application;
FIG. 3 is a flowchart of a third embodiment of the voice control method provided in the present application;
FIG. 4 is a schematic structural diagram of a fourth embodiment of an electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only a part, not all, of the embodiments of the invention. All other embodiments derived by those skilled in the art from these embodiments without creative effort fall within the protection scope of the invention.
Example one
The embodiment of the application provides a voice control method, which can be applied to electronic equipment, wherein the electronic equipment can be but is not limited to various intelligent terminals such as a smart phone, a tablet computer and a palm computer, or can also be various intelligent wearable devices such as a smart bracelet and a smart watch.
Referring to fig. 1, a flowchart of a first embodiment of a voice control method according to the present application is shown, where the method may include the following steps:
step 101, obtaining a voice control instruction, and obtaining application scene information of the electronic device when obtaining the voice control instruction.
In a practical scenario, when a user voice-controls an electronic device (for example, instructing it by voice to play music, play a video, or turn a page), the voice can be input through a sound collection device of the device itself, such as the microphone of a smartphone, or through an external sound collection device, such as the microphone on a headset connected to the phone.
In this step, the electronic device may obtain the voice control instruction of the user through a sound collecting device of the device itself, or may also obtain the voice control instruction of the user through a sound collecting device externally connected to the device, which is not limited in this application.
The voice control instruction is at least used for indicating a corresponding control type when the electronic equipment is subjected to voice control, such as controlling the electronic equipment to perform music playing and video playing, or controlling the electronic equipment to perform photo switching and document page turning.
Unlike the prior art, which controls the electronic device only by parsing the control type indicated by the voice control instruction, the present application uses that control type as one basis for voice control and, at the same time, refers to the application scene information of the electronic device at the moment the instruction is obtained. In other words, the voice control instruction and the corresponding application scene information together serve as the control basis for voice control of the electronic device.
In view of this, besides obtaining the voice control instruction, it is also necessary to obtain application scenario information corresponding to the electronic device when obtaining the voice control instruction. The application scenario information may include, but is not limited to, motion information of a user of the electronic device, such as one or more of a motion time, a motion frequency, and a heartbeat frequency of the user of the electronic device, when the electronic device obtains the voice control instruction.
And 102, analyzing the control type corresponding to the voice control instruction.
After the voice control instruction of the user is obtained, the voice content of the voice control instruction can be analyzed, so that the control type corresponding to the voice control instruction can be obtained by analyzing the voice content of the voice control instruction.
For example, by analyzing the voice content of the voice control instruction, it is known whether the control type corresponding to the voice control instruction is to play music or video, or to switch photos or to turn pages of documents, so as to provide a basis for subsequent voice control.
And 103, controlling the electronic equipment by using the control type and the application scene information.
After the control type corresponding to the voice control instruction is analyzed, the electronic equipment can be controlled according to the control type and by combining the application scene information.
For example, when the control type is "play music" and the application scenario information is running information of the user, when responding to a voice control instruction of the user, the electronic device may be controlled to play music in combination with the control type of "play music" and motion information of the device user when obtaining the voice control instruction of "play music".
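As an illustrative sketch only (the patent prescribes no API; all names below, such as `SceneInfo` and `parse_control_type`, are assumptions), the three steps of this embodiment might be composed as follows:

```python
from dataclasses import dataclass

@dataclass
class SceneInfo:
    """Application scene information captured when the command arrives."""
    motion_time_min: float      # minutes the user has been moving
    motion_frequency: float     # e.g. steps per minute
    heartbeat_frequency: float  # beats per minute

def parse_control_type(voice_command: str) -> str:
    """Step 102: parse the control type from the recognized speech.

    A real system would use a speech/intent recognizer; a keyword
    match stands in for it here.
    """
    text = voice_command.lower()
    if "music" in text:
        return "play_music"
    if "video" in text:
        return "play_video"
    return "unknown"

def control_device(voice_command: str, scene: SceneInfo) -> dict:
    """Steps 101-103: combine the control type with the scene info."""
    control_type = parse_control_type(voice_command)
    # Downstream logic (see Embodiment 3) would derive concrete control
    # parameters from `scene`; here we just bundle both control bases.
    return {"control_type": control_type, "scene": scene}

decision = control_device("please play music", SceneInfo(30.0, 160.0, 150.0))
```

The key point the sketch illustrates is that the decision carries both bases: the parsed control type and the scene information captured at the same moment.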
As can be seen from the above, the voice control method provided by the application additionally acquires the application scene information of the electronic device when the voice control instruction is obtained, then parses the control type corresponding to the instruction, and on that basis controls the electronic device using both the control type and the application scene information. Because the application scene information is taken into account during voice control, control of the device is more intelligent, which overcomes the single, insufficiently intelligent control of the prior-art voice control mode.
Example two
In the following second embodiment of the present application, referring to a flowchart of the second embodiment of the voice control method of the present application shown in fig. 2, the obtaining, in step 101, the application scenario information of the electronic device when obtaining the voice control instruction may be specifically implemented by the following processing procedures:
step 1011, acquiring the application scene information by using a sensing unit and/or a background system of the electronic device.
The application scenario information may be, but is not limited to, motion information of a user of the electronic device when the electronic device obtains the voice control instruction, and specifically, the motion information may include, but is not limited to, one or more of a motion time, a motion intensity, a motion rhythm, a motion frequency, and a heartbeat frequency of the user.
The sensing unit of the electronic device includes one or a group of sensing devices capable of collecting the application scenario information or basic data of the application scenario information, and taking the application scenario information as the motion information of the user as an example, the sensing unit of the electronic device may include one or more of a timer, a pedometer, an accelerometer, an optical heart rate sensor, and the like.
The background system may be a local processing system of the electronic device, or may also be a cloud processing system. The background system provides a processing process based on a corresponding software algorithm, and is used for performing required processing on data acquired by the sensing unit to obtain corresponding application scene information by processing the data acquired by the sensing unit, for example, obtaining exercise frequency information of a user by processing data of a pedometer and a timer, obtaining exercise intensity information of the user by processing data of the pedometer, the timer and an optical heart rate sensor, and the like.
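A minimal sketch of such background-system processing, under the assumption that the pedometer and timer expose raw step counts and elapsed time (the formulas and function names are illustrative, not taken from the patent):

```python
def motion_frequency(step_count: int, elapsed_minutes: float) -> float:
    """Motion frequency from pedometer + timer data, in steps per minute."""
    if elapsed_minutes <= 0:
        return 0.0
    return step_count / elapsed_minutes

def motion_intensity(step_count: int, elapsed_minutes: float,
                     heart_rate_bpm: float, resting_bpm: float = 60.0) -> float:
    """Crude intensity estimate combining pedometer, timer, and optical
    heart-rate sensor data: step rate scaled by heart-rate elevation."""
    freq = motion_frequency(step_count, elapsed_minutes)
    elevation = max(heart_rate_bpm - resting_bpm, 0.0) / resting_bpm
    return freq * (1.0 + elevation)
```

In a real system these computations could run locally or in a cloud processing system, as the paragraph above notes; the sketch only shows the shape of the derivation from raw sensor data.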
Based on the above explanation, it is easily understood that the sensing unit and/or the background system of the electronic device may be utilized to acquire application scenario information corresponding to the electronic device when obtaining the voice control instruction of the user.
Specifically, in practical application, some application scene information, such as the user's motion time and motion frequency, can be read directly from the sensing devices of the sensing unit, whereas other information, such as historical motion data (for example, the total number of steps over a recent period), cannot be read directly from the sensing devices and must be obtained from the background system of the electronic device. In another implementation, the data collected by the sensing unit serves only as source data for determining the application scene type; in that case, the collected data is sent to the background system, which processes the data from one or more sensing devices with a corresponding algorithm to obtain the required application scene information.
EXAMPLE III
In the third embodiment of the present application, referring to the flowchart of the third embodiment of the voice control method in the present application shown in fig. 3, in the step 103, the electronic device is controlled by using the control type and the application context information, which may specifically be implemented by the following processing procedures:
step 1031, determining a control object in the electronic equipment according to the control type.
The control type represents a type of control to be performed on the electronic device, for example, the control type may specifically be a type of playing music, playing video, or the like, or may also be another type, such as turning a page of a document, switching photos, or the like.
The control object may be various device objects which are provided in the electronic device and can be controlled by the user through voice, such as a music playing unit (e.g. music player), a video playing unit (e.g. video player), an album unit (album), and the like.
The control type of the voice control instruction can represent the type of control to be executed on the electronic equipment, such as music playing or video playing, and can indicate a corresponding control object of the instruction in the electronic equipment. Taking the control type of the voice control command as "play music" as an example, obviously, it can be determined that the control object corresponding to the command in the electronic device is a music playing unit, such as a music player, and the like, rather than other control objects, such as a video playing unit, an album unit, and the like, through the control type of "play music".
In view of this, after the control type of the voice control command is analyzed, the corresponding control object of the voice control command in the electronic device may be determined according to the control type.
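Determining the control object from the control type can be as simple as a table lookup; the mapping below is an illustrative assumption (the unit names are not defined by the patent):

```python
# Assumed mapping from parsed control type to device control object.
CONTROL_OBJECTS = {
    "play_music": "music_player",     # music playing unit
    "play_video": "video_player",     # video playing unit
    "switch_photo": "album",          # album unit
    "turn_page": "document_viewer",   # document page-turning unit
}

def resolve_control_object(control_type: str) -> str:
    """Step 1031: map the parsed control type to a control object."""
    try:
        return CONTROL_OBJECTS[control_type]
    except KeyError:
        raise ValueError(f"no control object for type {control_type!r}")
```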
Step 1032, determining control parameter information correspondingly required when the control object is controlled according to the application scene information.
As described above, the application context information may be, but is not limited to, motion information of a user of the electronic device when the electronic device obtains the voice control instruction, and specifically, may be, for example, one or more of a motion time, a motion intensity, a motion rhythm, a motion frequency, and a heartbeat frequency of the user.
In this application, the control parameter information is various parameter information associated with the application context information and based on which the voice control of the electronic device is implemented.
In this embodiment, the implementation process of determining the control parameter information according to the application scene information is specifically described by taking the control type as playing music and the application scene information as the motion information of the user as an example.
The control parameters and their values for the music to be played, such as the style/genre of the music (e.g., hip-hop, classical, pop, jazz, rock) and its tempo range, may be determined from one or more of the motion time, motion intensity, motion rhythm, motion frequency, and heartbeat frequency of the device user at the moment the electronic device obtains the music playing instruction.
In specific implementation, a plurality of different application scene types can be predefined, and control parameters and sizes thereof needed when voice control is performed are respectively matched for the different application scene types.
Still taking the application scene information as the user's motion information as an example, a number of different motion types may be predefined, such as fast running, jogging, long-distance running, sprinting, and walking, or more coarsely, strenuous motion and gentle motion. Each motion type can then be matched with suitable control parameters and their values: for example, a first motion type is matched with a first music style/genre, while a second motion type is matched with a second music style/genre and with tempos in a corresponding value range.
In practical application, the matching relationship between different application scene types and different control parameters and the sizes thereof can be provided in the form of, but not limited to, a look-up table.
And the application scene type and the value of the application scene information also have a certain corresponding relation.
Taking motion types as an example, a particular motion type may correspond to specific value ranges of the user's various motion parameters: a strenuous motion type may correspond to relatively high value ranges of the user's motion intensity, motion rhythm, motion frequency, and heartbeat frequency, while a gentle motion type corresponds to relatively low ranges of those parameters. Alternatively, a motion type may correspond to a value range of a weighted operation over the motion parameters: strenuous motion may correspond to a higher range of the weighted sum of the intensity, rhythm, frequency, and heartbeat values, and gentle motion to a lower range.
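The weighted-operation variant can be sketched as follows. The weights, normalization maxima, and thresholds below are illustrative assumptions; the patent only states that a weighted operation over the motion parameters maps to an application scene type:

```python
def classify_motion(intensity: float, rhythm: float,
                    frequency: float, heart_rate: float,
                    weights=(0.3, 0.2, 0.2, 0.3)) -> str:
    """Weighted operation over motion parameters -> application scene type."""
    # Assumed per-parameter maxima, used to normalize each value to [0, 1].
    maxima = (10.0, 200.0, 220.0, 200.0)
    values = (intensity, rhythm, frequency, heart_rate)
    score = sum(w * min(v / m, 1.0)
                for w, v, m in zip(weights, values, maxima))
    # Assumed thresholds partitioning the weighted sum into scene types.
    if score >= 0.6:
        return "strenuous"
    if score >= 0.3:
        return "moderate"
    return "gentle"
```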
The application scene type and the application scene information may be in various corresponding modes, and in practical application, the two corresponding modes provided in this embodiment are not limited, and specifically, technical personnel can perform personalized design of the corresponding modes according to actual requirements.
Therefore, after the application scene information of the electronic device at the moment the voice control instruction is obtained has been acquired, the application scene type of the device user can be determined from that information, and the matching control parameters and their values (which may be specific numbers or value ranges) can then be determined from the scene type.
For example, after the user's motion type is determined, either from the value ranges to which the user's motion time, motion intensity, motion frequency, and heartbeat frequency individually belong, or from the range to which the weighted sum of those values belongs, the lookup table of motion types and control parameter information can be consulted with the user's motion type to determine the required control parameters and their values, for example the style/genre of the music to be played and/or its tempo and rhythm ranges.
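Such a lookup table might look like the following; the scene types, music styles, and tempo ranges are illustrative assumptions, not values given by the patent:

```python
# Assumed lookup table: application scene type -> control parameter info.
PARAMETER_TABLE = {
    "strenuous": {"styles": ("rock", "hip-hop"),   "bpm_range": (140, 180)},
    "moderate":  {"styles": ("pop",),              "bpm_range": (110, 140)},
    "gentle":    {"styles": ("classical", "jazz"), "bpm_range": (60, 110)},
}

def control_parameters(scene_type: str) -> dict:
    """Step 1032: look up the control parameters matched to a scene type."""
    return PARAMETER_TABLE[scene_type]
```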
Step 1033, based on the control parameter information, performing control corresponding to the control type on the control object.
After the control parameter information is determined, for example, the control parameter and the size thereof are determined, corresponding control may be performed on the control object in the electronic device based on the determined control parameter and the size thereof.
Still taking the control type "play music" and the control object being the music playing unit as an example: after the style/genre and/or the tempo and rhythm ranges of the music to be played are determined from the user's motion type, music matching those criteria can be searched for at the local end or the cloud end of the electronic device, and the results provided to the music playing unit for playback, for example played back in the order retrieved. Alternatively, after matching music is found, it may be recommended to the user in the music playing unit, and playback starts after the user selects the music of interest. This embodiment imposes no limitation in this regard.
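Searching matching music from a local library, given the determined parameters, reduces to filtering by style and tempo; the track records below are made-up sample data used only to exercise the filter:

```python
def match_tracks(library, params):
    """Step 1033 (for 'play music'): select tracks whose style and tempo
    match the control parameters determined from the scene type."""
    lo, hi = params["bpm_range"]
    return [t for t in library
            if t["style"] in params["styles"] and lo <= t["bpm"] <= hi]

sample_library = [
    {"title": "Morning Jog", "style": "pop",       "bpm": 125},
    {"title": "Sprint",      "style": "rock",      "bpm": 160},
    {"title": "Nocturne",    "style": "classical", "bpm": 70},
]
strenuous_params = {"styles": ("rock", "hip-hop"), "bpm_range": (140, 180)}
playlist = match_tracks(sample_library, strenuous_params)
```

A cloud-side search would apply the same criteria as query filters rather than an in-memory scan.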
The voice control method of the embodiment controls the electronic equipment by utilizing the voice control instruction and combining with the application scene information of the electronic equipment when the voice control instruction is obtained, and controls the equipment more intelligently by combining with the application scene information of the electronic equipment when the voice control is carried out, thereby overcoming the problems of single control and insufficient intelligence existing in the voice control mode in the prior art.
Example four
The fourth embodiment of the application provides an electronic device, the electronic device can be but not limited to various intelligent terminals such as smart phones, tablet computers, palm computers, and the like, or can also be various intelligent wearable devices such as smart bracelets, smart watches, and the like.
Referring to fig. 4, a schematic structural diagram of a fourth embodiment of an electronic device according to the present application is shown, where the electronic device may include:
the obtaining unit 401 is configured to obtain a voice control instruction, and obtain application scenario information of the electronic device when the voice control instruction is obtained.
In a practical scenario, when a user voice-controls an electronic device (for example, instructing it by voice to play music, play a video, or turn a page), the voice can be input through a sound collection device of the device itself, such as the microphone of a smartphone, or through an external sound collection device, such as the microphone on a headset connected to the phone.
In view of this, the electronic device may obtain the voice control instruction of the user through the sound collecting device of the device itself, or may also obtain the voice control instruction of the user through the sound collecting device externally connected to the device, which is not limited in this application.
The voice control instruction is at least used for indicating a corresponding control type when the electronic equipment is subjected to voice control, such as controlling the electronic equipment to perform music playing and video playing, or controlling the electronic equipment to perform photo switching and document page turning.
Unlike the prior art, which controls the electronic device only by parsing the control type indicated by the voice control instruction, the present application uses that control type as one basis for voice control and, at the same time, refers to the application scene information of the electronic device at the moment the instruction is obtained. In other words, the voice control instruction and the corresponding application scene information together serve as the control basis for voice control of the electronic device.
In view of this, besides obtaining the voice control instruction, it is also necessary to obtain application scenario information corresponding to the electronic device when obtaining the voice control instruction. The application scenario information may include, but is not limited to, motion information of a user of the electronic device, such as one or more of a motion time, a motion frequency, and a heartbeat frequency of the user of the electronic device, when the electronic device obtains the voice control instruction.
An analyzing unit 402, configured to analyze a control type corresponding to the voice control instruction.
After the user's voice control instruction is obtained, its voice content can be analyzed to determine the control type corresponding to the instruction.
For example, by analyzing the voice content of the voice control instruction, it can be determined whether the instruction is intended to play music, play video, switch photos, or turn document pages, which provides a basis for the subsequent voice control.
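The patent does not specify how the voice content is analyzed, so the following is only a hypothetical sketch of mapping recognized voice text to a control type by keyword matching; the keyword lists and function name are assumptions, not part of the disclosure.

```python
from typing import Optional

# Hypothetical keyword lists; a real system might use a trained
# intent classifier instead of simple substring matching.
CONTROL_TYPE_KEYWORDS = {
    "play music": ("music", "song"),
    "play video": ("video", "movie"),
    "switch photos": ("photo", "picture"),
    "turn pages": ("page", "document"),
}

def analyze_control_type(voice_text: str) -> Optional[str]:
    """Return the first control type whose keywords appear in the text."""
    text = voice_text.lower()
    for control_type, keywords in CONTROL_TYPE_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return control_type
    return None  # no recognizable control type
```

For instance, `analyze_control_type("please play a song")` would yield `"play music"`, which then serves as one of the two control bases described above.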
A control unit 403, configured to control the electronic device by using the control type and the application scene information.
After the control type corresponding to the voice control instruction is analyzed, the electronic device can be controlled according to the control type in combination with the application scene information.
For example, when the control type is "play music" and the application scene information is the user's running information, the electronic device may, in responding to the user's voice control instruction, be controlled to play music by combining the "play music" control type with the motion information of the device user at the time the "play music" instruction was obtained.
According to this scheme, when the voice control instruction is obtained, the electronic device also obtains its application scene information at that time; it then analyzes the control type corresponding to the instruction and, on that basis, controls the device using both the control type and the application scene information. Because the application scene information is referenced during voice control, the control of the device is more intelligent, which overcomes the problem in the prior art that voice control is relatively rigid and not intelligent enough.
EXAMPLE five
In a fifth embodiment of the present application, the obtaining unit obtains the application scene information of the electronic device at the time the voice control instruction is obtained, which may specifically be implemented through the following process:
acquiring the application scene information by using a sensing unit and/or a background system of the electronic device.
The application scene information may be, but is not limited to, the motion information of the user of the electronic device at the time the voice control instruction is obtained; the motion information may include, but is not limited to, one or more of the user's motion time, motion intensity, motion rhythm, motion frequency, and heartbeat frequency.
The sensing unit of the electronic device includes one sensing device, or a group of sensing devices, capable of collecting the application scene information or its basic data. Taking the user's motion information as the application scene information, the sensing unit may include one or more of a timer, a pedometer, an accelerometer, an optical heart-rate sensor, and the like.
The background system may be a local processing system of the electronic device or a cloud processing system. It provides processing based on corresponding software algorithms and performs the required processing on the data collected by the sensing unit to obtain the corresponding application scene information; for example, the user's motion frequency can be obtained by processing pedometer and timer data, and the user's motion intensity by processing pedometer, timer, and optical heart-rate sensor data.
Based on the above explanation, it is easily understood that the sensing unit and/or the background system of the electronic device may be utilized to acquire application scenario information corresponding to the electronic device when obtaining the voice control instruction of the user.
Specifically, in practical application, some of the application scene information, such as the user's motion time and motion frequency, may be read directly from the sensing devices of the sensing unit. Other information, such as historical motion data (for example, the total number of steps over some period), cannot be read directly from the sensing devices and must be obtained from the background system of the electronic device. In another embodiment, the data collected by the sensing unit serves only as basic/source data for determining the application scene type; in this case, the collected data of one or more sensing devices is sent to the background system, which processes it with a corresponding algorithm to obtain the required application scene information.
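As an illustration only, a background system might derive motion information from raw sensing-unit data along the following lines; the function names and the intensity formula are assumptions for the sketch, not values from the patent.

```python
def motion_frequency(step_count: int, elapsed_seconds: float) -> float:
    """Steps per minute, derived from pedometer and timer data."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed_seconds must be positive")
    return step_count * 60.0 / elapsed_seconds

def motion_intensity(steps_per_minute: float, heart_rate_bpm: float) -> float:
    """A simple combined intensity score from step rate and heart rate.

    The equal weighting here is an arbitrary illustrative choice.
    """
    return 0.5 * steps_per_minute + 0.5 * heart_rate_bpm
```

For example, 300 steps over 120 seconds gives a motion frequency of 150 steps per minute, which the background system could then combine with heart-rate data into an intensity score.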
EXAMPLE six
In a sixth embodiment of the present application, the control unit controls the electronic device by using the control type and the application scene information, which may specifically be implemented through the following process:
determining a control object in the electronic device according to the control type; determining, according to the application scene information, the control parameter information required for controlling the control object; and performing, on the control object, the control corresponding to the control type based on the control parameter information.
The control type represents the type of control to be performed on the electronic device; for example, it may be playing music or playing video, or another type such as turning document pages or switching photos.
The control object may be any of various device objects provided in the electronic device that the user can control by voice, such as a music playing unit (e.g., a music player), a video playing unit (e.g., a video player), or an album unit.
The control type of the voice control instruction represents the type of control to be executed on the electronic device, such as playing music or playing video, and can therefore indicate the corresponding control object of the instruction in the device. Taking a voice control instruction whose control type is "play music" as an example, it can clearly be determined from that control type that the corresponding control object in the electronic device is the music playing unit, such as a music player, rather than another control object such as the video playing unit or the album unit.
In view of this, after the control type of the voice control instruction is analyzed, the corresponding control object of the instruction in the electronic device may be determined according to that control type.
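A minimal sketch of this control-type-to-control-object determination follows; the class names and the mapping are illustrative assumptions, and in a real device each entry would reference an actual playback or album module.

```python
class MusicPlayingUnit:
    name = "music playing unit"

class VideoPlayingUnit:
    name = "video playing unit"

class AlbumUnit:
    name = "album unit"

# Assumed mapping: each control type indicates exactly one control object.
CONTROL_OBJECTS = {
    "play music": MusicPlayingUnit(),
    "play video": VideoPlayingUnit(),
    "switch photos": AlbumUnit(),
}

def determine_control_object(control_type: str):
    """Return the device object that the analyzed control type indicates."""
    return CONTROL_OBJECTS[control_type]
```

Under this sketch, a "play music" instruction resolves to the music playing unit rather than to the video playing unit or the album unit.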
As described above, the application context information may be, but is not limited to, motion information of a user of the electronic device when the electronic device obtains the voice control instruction, and specifically, may be, for example, one or more of a motion time, a motion intensity, a motion rhythm, a motion frequency, and a heartbeat frequency of the user.
In this application, the control parameter information is the various parameter information that is associated with the application scene information and on which the voice control of the electronic device is based.
In this embodiment, the process of determining the control parameter information according to the application scene information is described by taking playing music as the control type and the user's motion information as the application scene information.
The control parameters and their values for the music to be played, such as the style/type of music (e.g., hip-hop, classical, pop, jazz, or rock) and the tempo range, may be determined according to one or more of the motion time, motion intensity, motion rhythm, motion frequency, and heartbeat frequency of the device user at the time the electronic device obtains the music playing instruction.
In a specific implementation, a plurality of different application scene types can be predefined, and each scene type can be matched with the control parameters and parameter values required for voice control.
Still taking the user's motion information as the application scene information, a plurality of motion types may be predefined, such as fast running, slow running, long-distance running, short-distance running, and walking, or simply strenuous motion and gentle motion. Each motion type can be matched with suitable control parameters and values; for example, a first motion type may be matched only with a first music style/type, while a second motion type is matched with a second music style/type together with a corresponding tempo range, beat range, and so on.
In practical application, the matching relationship between the application scene types and the control parameters and their values can be provided in the form of, but not limited to, a look-up table.
There is also a certain correspondence between the application scene type and the values of the application scene information.
Taking motion type as an example, a particular motion type may correspond to particular value ranges of the various items of the user's motion information. For example, strenuous motion may correspond to relatively high value ranges of parameters such as the user's motion intensity, motion rhythm, motion frequency, and heartbeat frequency, while gentle motion corresponds to relatively low value ranges of the same parameters. Alternatively, a motion type may correspond to a value range of a weighted operation result over the user's motion information. For example, strenuous motion may correspond to a high range of the weighted sum of the values of the user's motion intensity, motion rhythm, motion frequency, and heartbeat frequency, while gentle motion corresponds to a low range of that weighted sum.
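The weighted-operation correspondence just described can be sketched as follows; the weights and the threshold are illustrative assumptions chosen for the example, not values from the patent.

```python
# Assumed weights over the user's motion parameters.
WEIGHTS = {
    "motion_intensity": 0.4,
    "motion_rhythm": 0.2,
    "motion_frequency": 0.2,
    "heartbeat_frequency": 0.2,
}
# Assumed boundary between the two motion-type value ranges.
STRENUOUS_THRESHOLD = 100.0

def classify_motion_type(params: dict) -> str:
    """Classify the motion type from the weighted sum of parameter values."""
    score = sum(weight * params[key] for key, weight in WEIGHTS.items())
    return "strenuous" if score >= STRENUOUS_THRESHOLD else "gentle"
```

With these assumed weights, a high-intensity, high-heart-rate reading falls in the "strenuous" range and a low one in the "gentle" range.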
The application scene type and the application scene information may correspond in various ways; practical applications are not limited to the two modes provided in this embodiment, and technical personnel can design the correspondence according to actual requirements.
Therefore, after the application scene information of the electronic device at the time the voice control instruction is obtained has been acquired, the application scene type of the device user can be determined from that information, and the matching control parameters and their values (which may be specific values or value ranges) can then be determined from the scene type.
For example, after the user's motion type is determined from the value ranges to which one or more of the user's motion time, motion intensity, motion frequency, and heartbeat frequency belong, or from the range to which the weighted sum of those values belongs, the look-up table of motion types and control parameter information may be searched by the user's motion type to determine the required control parameters and their values, for example, the style/type of music to be played and/or its beat range and rhythm range.
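One possible form of the look-up table mentioned above is sketched below, mapping motion type to music control parameters and their values; the styles and tempo ranges are illustrative assumptions.

```python
# Assumed look-up table: motion type -> control parameters and values.
MOTION_TYPE_TO_PARAMETERS = {
    "strenuous": {"styles": ("rock", "hip-hop"), "tempo_bpm": (140, 180)},
    "gentle": {"styles": ("classical", "jazz"), "tempo_bpm": (60, 100)},
}

def look_up_control_parameters(motion_type: str) -> dict:
    """Return the control parameters matched to the given motion type."""
    return MOTION_TYPE_TO_PARAMETERS[motion_type]
```

Looking up the "gentle" motion type, for instance, would yield the classical/jazz styles together with the 60-100 bpm tempo range.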
After the control parameter information, that is, the control parameters and their values, is determined, the corresponding control may be performed on the control object in the electronic device on that basis.
Still taking "play music" as the control type and the music playing unit as the control object: after the style/type of music to be played and/or its beat range and rhythm range are determined from the user's motion type, music matching them can be searched for at the local end or the cloud end of the electronic device, and the search results provided to the music playing unit for playback, for example, playing the retrieved music in order. Alternatively, after the matching music is found, it may be recommended to the user in the music playing unit and played after the user selects the music of interest; this embodiment does not limit which approach is used.
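The search-and-provide step can be sketched minimally as follows; the track-dictionary format and the in-memory `library` list are assumptions standing in for a real local or cloud music store.

```python
def search_music(library, styles, tempo_range):
    """Return tracks whose style and tempo match the control parameters."""
    low, high = tempo_range
    return [track for track in library
            if track["style"] in styles and low <= track["bpm"] <= high]

# Assumed stand-in for the local-end or cloud-end music store.
library = [
    {"title": "Track A", "style": "rock", "bpm": 150},
    {"title": "Track B", "style": "jazz", "bpm": 90},
    {"title": "Track C", "style": "rock", "bpm": 120},
]

# Only "Track A" is a rock track within the 140-180 bpm range.
playlist = search_music(library, ("rock",), (140, 180))
```

The resulting `playlist` would then be handed to the music playing unit, either played in order or offered as recommendations for the user to pick from.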
In this embodiment, the electronic device is controlled by using the voice control instruction in combination with the application scene information of the device at the time the instruction is obtained. Because the application scene information is referenced during voice control, the control of the device is more intelligent, which overcomes the problem in the prior art that voice control is relatively rigid and not intelligent enough.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts the embodiments may be referred to one another.
For convenience of description, the above system or apparatus is described as divided into various modules or units by function. Of course, when implementing the present application, the functionality of the units may be implemented in one or more pieces of software and/or hardware.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
Finally, it is further noted that, herein, relational terms such as first, second, third, and fourth may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and such improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. A voice control method, applied to an electronic device, the method comprising:
acquiring a voice control instruction, and acquiring application scene information of the electronic device at the time the voice control instruction is acquired;
analyzing a control type corresponding to the voice control instruction;
determining, from different control objects included in the electronic device, the control object indicated by the control type, wherein different control types respectively indicate different control objects in the electronic device;
determining, according to the application scene information, the control parameter information required for controlling the control object indicated by the control type of the voice control instruction; and
determining an object to be processed that matches the control parameter information determined according to the application scene information, providing the object to be processed to the control object determined from the plurality of control objects, and performing, by the determined control object, the control corresponding to the determined control type on the object to be processed;
wherein, when the control type is music playing, the control object is a music playing unit, and determining the control parameter information and determining the matching object to be processed comprise:
determining, according to the application scene information, the control parameter information required for controlling the music playing unit; and
searching for music matching the control parameter information from the local end or the cloud end of the electronic device, and providing the music to the music playing unit.
2. The method of claim 1, wherein acquiring the application scene information of the electronic device at the time the voice control instruction is acquired comprises:
acquiring the application scene information by using a sensing unit and/or a background system of the electronic device.
3. The method of claim 1, wherein the application scene information comprises one or more of a motion time, a motion frequency, and a heartbeat frequency of a user of the electronic device.
4. The method of claim 1, wherein determining, according to the application scene information, the control parameter information required for controlling the control object comprises:
performing a weighted operation on the data of the application scene information to determine the application scene type; and
determining, according to the application scene type, the control parameter information corresponding to that type.
5. An electronic device, comprising:
an obtaining unit, configured to obtain a voice control instruction and to obtain application scene information of the electronic device at the time the voice control instruction is obtained;
an analyzing unit, configured to analyze a control type corresponding to the voice control instruction; and
a control unit, configured to: determine, from different control objects included in the electronic device, the control object indicated by the control type; determine, according to the application scene information, the control parameter information required for controlling the control object indicated by the control type of the voice control instruction; and determine an object to be processed that matches the control parameter information determined according to the application scene information, provide the object to be processed to the control object determined from the plurality of control objects, and perform, by the determined control object, the control corresponding to the determined control type on the object to be processed;
wherein different control types respectively indicate different control objects in the electronic device; and
wherein, when the control type is music playing, the control object is a music playing unit, and determining the control parameter information and determining the matching object to be processed comprise:
determining, according to the application scene information, the control parameter information required for controlling the music playing unit; and
searching for music matching the control parameter information from the local end or the cloud end of the electronic device, and providing the music to the music playing unit.
6. The electronic device of claim 5, wherein the obtaining unit acquiring the application scene information of the electronic device at the time the voice control instruction is acquired specifically comprises:
acquiring the application scene information by using a sensing unit and/or a background system of the electronic device.
7. The electronic device of claim 5, wherein the application scene information acquired by the obtaining unit comprises one or more of a motion time, a motion frequency, and a heartbeat frequency of a user of the electronic device.
8. The electronic device of claim 5, wherein the control unit determining, according to the application scene information, the control parameter information required for controlling the control object specifically comprises:
performing a weighted operation on the data of the application scene information to determine the application scene type; and determining, according to the application scene type, the control parameter information corresponding to that type.
CN201710525132.XA 2017-06-30 2017-06-30 Voice control method and electronic equipment Active CN107316641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710525132.XA CN107316641B (en) 2017-06-30 2017-06-30 Voice control method and electronic equipment


Publications (2)

Publication Number Publication Date
CN107316641A (en) 2017-11-03
CN107316641B (en) 2021-06-15

Family

ID=60181001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710525132.XA Active CN107316641B (en) 2017-06-30 2017-06-30 Voice control method and electronic equipment

Country Status (1)

Country Link
CN (1) CN107316641B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107910003A (en) * 2017-12-22 2018-04-13 智童时刻(厦门)科技有限公司 A kind of voice interactive method and speech control system for smart machine
CN109308912B (en) * 2018-08-02 2024-02-20 平安科技(深圳)有限公司 Music style recognition method, device, computer equipment and storage medium
CN109918040B (en) * 2019-03-15 2022-08-16 阿波罗智联(北京)科技有限公司 Voice instruction distribution method and device, electronic equipment and computer readable medium
CN111885344A (en) * 2020-06-19 2020-11-03 西安万像电子科技有限公司 Data transmission method, equipment and system
CN112182282A (en) * 2020-09-01 2021-01-05 浙江大华技术股份有限公司 Music recommendation method and device, computer equipment and readable storage medium
CN112787899B (en) * 2021-01-08 2022-10-28 青岛海尔特种电冰箱有限公司 Equipment voice interaction method, computer readable storage medium and refrigerator

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100356A (en) * 2015-07-07 2015-11-25 上海斐讯数据通信技术有限公司 Automatic volume adjustment method and system

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003076376A (en) * 2001-09-03 2003-03-14 Hitachi Information & Control Systems Inc Method and device for reproducing event music
JP5049930B2 (en) * 2008-09-09 2012-10-17 株式会社日立製作所 Distributed speech recognition system
CN102543119A (en) * 2011-12-31 2012-07-04 北京百纳威尔科技有限公司 Scene-based music playing processing method and music playing device
CN103714836A (en) * 2012-09-29 2014-04-09 联想(北京)有限公司 Method for playing audio information and electronic equipment
CN103472990A (en) * 2013-08-27 2013-12-25 小米科技有限责任公司 Appliance, and method and device for controlling same
WO2015097831A1 (en) * 2013-12-26 2015-07-02 株式会社東芝 Electronic device, control method, and program
DE112015003279T5 (en) * 2014-07-15 2017-04-06 Asahi Kasei Kabushiki Kaisha Input device, biosensor, program, computer-readable medium and mode setting method
CN104238369B (en) * 2014-09-02 2017-08-18 百度在线网络技术(北京)有限公司 Intelligent electrical appliance control and device
CN104361016B (en) * 2014-10-15 2018-05-29 广东小天才科技有限公司 A kind of method and device that music effect is adjusted according to motion state
CN104506901B (en) * 2014-11-12 2018-06-15 科大讯飞股份有限公司 Voice householder method and system based on tv scene state and voice assistant
CN104506944B (en) * 2014-11-12 2018-09-21 科大讯飞股份有限公司 Interactive voice householder method and system based on tv scene and voice assistant
CN104714648B (en) * 2015-03-25 2017-09-26 广东欧珀移动通信有限公司 A kind of switching method and apparatus of music scene
CN106328129B (en) * 2015-06-18 2020-11-27 中兴通讯股份有限公司 Instruction processing method and device
CN106328143A (en) * 2015-06-23 2017-01-11 中兴通讯股份有限公司 Voice control method and device and mobile terminal
CN105405442B (en) * 2015-10-28 2019-12-13 小米科技有限责任公司 voice recognition method, device and equipment
CN105654950B (en) * 2016-01-28 2019-07-16 百度在线网络技术(北京)有限公司 Adaptive voice feedback method and device
CN106057203A (en) * 2016-05-24 2016-10-26 深圳市敢为软件技术有限公司 Precise voice control method and device
CN106527734A (en) * 2016-11-30 2017-03-22 杭州联络互动信息科技股份有限公司 Music playing control method and device
CN106658854A (en) * 2016-12-28 2017-05-10 重庆金鑫科技产业发展有限公司 LED lamp control method, LED lamp and control system
CN106843882B (en) * 2017-01-20 2020-05-26 联想(北京)有限公司 Information processing method and device and information processing system


Also Published As

Publication number Publication date
CN107316641A (en) 2017-11-03

Similar Documents

Publication Publication Date Title
CN107316641B (en) Voice control method and electronic equipment
US11380316B2 (en) Speech interaction method and apparatus
US7333090B2 (en) Method and apparatus for analysing gestures produced in free space, e.g. for commanding apparatus by gesture recognition
CN104395953B (en) The assessment of bat, chord and strong beat from music audio signal
US8125314B2 (en) Distinguishing between user physical exertion biometric feedback and user emotional interest in a media stream
CN102207954B (en) Electronic equipment, content recommendation method and program thereof
US11511436B2 (en) Robot control method and companion robot
KR101114606B1 (en) Music interlocking photo-casting service system and method thereof
US20190347291A1 (en) Search Media Content Based Upon Tempo
Cheng et al. Convolutional neural networks approach for music genre classification
CN106777115A (en) Song processing method and processing device
US20210225408A1 (en) Content Pushing Method for Display Device, Pushing Device and Display Device
WO2016084453A1 (en) Information processing device, control method and program
EP3654194A1 (en) Information processing device, information processing method, and program
CN106066780B (en) Running data processing method and device
CN108932336A (en) Information recommendation method, electric terminal and computer readable storage medium message
CN111444383B (en) Audio data processing method and device and computer readable storage medium
CN111782858B (en) Music matching method and device
CN112685592B (en) Method and device for generating sports video soundtrack
CN113099305A (en) Play control method and device
CN107944056B (en) Multimedia file identification method, device, terminal and storage medium
US20210065869A1 (en) Versatile data structure for workout session templates and workout sessions
Singh et al. Study on Facial Recognition to Detect Mood and Suggest Songs
CN116955835B (en) Resource screening method, device, computer equipment and storage medium
Sathishkumar et al. EMO Player Using Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant