CN111010320A - Control device of voice equipment, voice interaction method and device and electronic equipment - Google Patents
- Publication number
- CN111010320A (application CN201910989799.4A)
- Authority
- CN
- China
- Prior art keywords
- voice
- control device
- user identity
- information
- identity information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2807—Exchanging configuration information on appliance services in a home automation network
- H04L12/281—Exchanging configuration information on appliance services in a home automation network indicating a format for calling an appliance service function in a home automation network
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/89—Arrangement or mounting of control or safety devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
- G06F16/635—Filtering based on additional data, e.g. user or group profiles
- G06F16/637—Administration of user profiles, e.g. generation, initialization, adaptation or distribution
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2807—Exchanging configuration information on appliance services in a home automation network
- H04L12/2809—Exchanging configuration information on appliance services in a home automation network indicating that an appliance service is present in a home automation network
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L2012/2847—Home automation networks characterised by the type of home appliance used
- H04L2012/2849—Audio/video appliances
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L2012/2847—Home automation networks characterised by the type of home appliance used
- H04L2012/285—Generic home appliances, e.g. refrigerators
Abstract
The application relates to a control device for a voice device, a voice interaction method and apparatus, and an electronic device, where the control device of the voice device comprises an attitude sensor, a controller, and a communication module. The attitude sensor is communicatively connected to the controller and is used for sending the acquired attitude information of the control device to the controller. The controller is used for determining, according to the attitude information, the user identity information of a voice signal to be input, and for sending a control instruction to the voice device, where the control instruction is used for controlling the voice device to operate in an operation mode matched with the user identity information. The communication module is paired with the voice device and is used for transmitting the control instruction sent by the controller to the voice device. By configuring different faces of the control device to correspond to different user identities, and by changing the attitude of the control device during use, the voice device is controlled to execute the preferred voice functions set for the current user identity. The scheme is simple to implement and improves the user experience.
Description
Technical Field
The present application relates to the field of home appliance control technologies, and in particular, to a control device for a voice device, a voice interaction method, a voice interaction device, and an electronic device.
Background
Nowadays, voice products are increasingly popular and increasingly full-featured, and can play songs, read the news, tell stories, look up encyclopedia entries, check the weather, and so on. A typical household consists of people of different ages, whose habits differ, including their habits when using an air conditioner. When a shared air conditioner (such as the one in the living room) is used, its operation mode often has to be adjusted frequently, which is inconvenient. For a voice-controlled smart air conditioner, different family members also prefer different resources. For example, given the same voice command, "play some music", children like children's songs, the elderly like military songs and revolutionary songs, and adults like popular songs. This, too, gives family members a poor experience when using the air conditioner.
Disclosure of Invention
In order to solve the above technical problem, or at least partially solve it, the present application provides a control device for a voice device, a voice interaction method and apparatus, and an electronic device, which determine the user identity information for the voice device according to the attitude of the control device and implement different voice functions according to the preferences of different users.
In a first aspect, the present application provides a control apparatus for a speech device, including an attitude sensor, a controller, and a communication module;
the attitude sensor is in communication connection with the controller and is used for sending the acquired attitude information of the control device to the controller;
the controller is used for determining user identity information of a voice signal to be input according to the posture information and sending a control instruction to the voice equipment, wherein the control instruction is used for controlling the voice equipment to operate according to an operation mode matched with the user identity information;
the communication module is paired with the voice device and is used for transmitting the control instruction sent by the controller to the voice device.
Further, the control device has a plurality of attitudes and a plurality of regions corresponding to the plurality of attitudes, each of the plurality of regions being associated with one of the plurality of attitudes.
Further, each of the plurality of regions is associated with one piece of user identity information, where the user identity information associated with any two of the plurality of regions is different.
Further, the user identity information includes: the user's age and gender, and the voice device parameters matched with the user identity information.
Further, the controller is connected to the mobile terminal, and the controller is further configured to receive control information sent by the mobile terminal, and generate the control instruction for controlling the voice device according to the control information;
or the controller directly receives a voice control signal sent by a user and converts the voice control signal into the control instruction for controlling the voice equipment.
Further, the attitude information is used to indicate the target area of the control device, namely the area facing the target direction, and the controller determines that the user identity information corresponding to the target area is the user identity information of the voice signal to be input.
In a second aspect, the present application provides a voice interaction method, including:
acquiring attitude information of a control device;
determining user identity information of the voice signal to be input according to the attitude information;
and sending a control instruction to voice equipment, wherein the control instruction is used for controlling the voice equipment to operate according to an operation mode matched with the user identity information.
Further, before determining the user identity information of the voice signal to be input according to the attitude information, the method further comprises: establishing an association between each of a plurality of attitudes and one piece of user identity information, where the user identity information associated with any two of the plurality of attitudes is different;
establishing an association between each of a plurality of regions of the control device and an attitude, where the attitudes associated with any two of the plurality of regions are different;
and determining the user identity information of the voice signal to be input according to the attitude information comprises: determining that the user identity information associated with the target area is the user identity information of the current voice signal to be input, where the target area is the area of the control device that faces the target direction in the current attitude.
In another aspect, the present application provides a voice interaction apparatus, including:
the attitude acquisition module is used for acquiring attitude information of the control device;
the identity matching module is used for determining the user identity information of the voice signal to be input according to the attitude information;
and the control module is used for sending a control instruction to the voice equipment, wherein the control instruction is used for controlling the voice equipment to operate according to the operation mode matched with the user identity information.
In another aspect, the present application provides an electronic device including a memory, a processor, and a program stored on the memory and executable on the processor, wherein the processor implements the steps of the method when executing the program.
In another aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
according to the control device and the control method provided by the embodiment of the application, different faces of the control device correspond to different user identities, and in the using process, the voice equipment is controlled to execute the favorite voice function set by the current user identity by changing the posture (which face is upward) of the control device. The realization method is simple and intelligent, and improves the user experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a block diagram of a control apparatus of a speech device according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a control device of a speech apparatus according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a voice interaction method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a voice interaction apparatus according to an embodiment of the present application;
fig. 5 is an internal structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 shows a control device of a voice device provided in an embodiment of the present application, which includes an attitude sensor 11, a controller 13, and a communication module 15;
the attitude sensor 11 is in communication connection with the controller and is used for sending the acquired attitude information of the control device to the controller.
Specifically, the attitude sensor 11 may be a three-axis gyroscope sensor, which can simultaneously measure the position, movement track, and movement speed of the control device along the six directions "up, down, left, right, front, and back" (i.e., the six directions pointed to by the three axes of a three-dimensional coordinate system), measure the current attitude of the control device, determine the orientation of each part of the control device, and transmit the measured attitude data to the controller. The control device may take any shape; preferably, for convenience of description and of operation by the user, it is a regular polyhedron or a sphere, for example a football-like sphere divided into several regions of equal area, with the orientation of each region detected separately. In this embodiment, a cube is taken as an example.
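The orientation test described above can be sketched in code. The following is an illustrative sketch only, not the patent's implementation: it assumes the sensor supplies a gravity vector in the cube's own coordinate frame, and the face labels and axis convention are arbitrary choices for the example.

```python
# Hypothetical sketch: infer which face of a cubic control device points up,
# given a gravity vector (gx, gy, gz) expressed in the device's own frame.
# Face labels A-F and the axis assignments are illustrative assumptions.

def face_up(gx: float, gy: float, gz: float) -> str:
    """Return the label of the face whose outward normal points most nearly up."""
    # Outward normals of the six faces in the device's coordinate frame.
    normals = {
        "A": (0, 0, 1), "B": (0, 0, -1),
        "C": (0, 1, 0), "D": (0, -1, 0),
        "E": (1, 0, 0), "F": (-1, 0, 0),
    }
    # Gravity points down, so the upward face's normal is opposite to gravity.
    up = (-gx, -gy, -gz)
    # Pick the face whose normal has the largest dot product with "up".
    return max(normals, key=lambda f: sum(n * u for n, u in zip(normals[f], up)))
```

With this convention, a device at rest with gravity along −z reports face A up; rotating it so gravity lies along +x reports face F up.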
As shown in fig. 2, as the user rotates the control device, the attitude sensor 11 can measure the attitude of the cube's six faces A, B, C, D, E, F in real time, determine the orientation of each face after rotation, and send the attitude information to the controller, which may be a processor, a programmed single-chip microcomputer, or the like.
The user can preset the attitude information associated with each face of the control device, so that the attitude sensor 11 can accurately detect the attitude of each face, and one orientation can be configured as the target orientation. That is, when the attitude sensor 11 detects that the orientation of a face matches the configured target orientation, the user can configure that face, while the other faces are not connected to the mobile terminal. For example, if the target orientation is configured as upward, the control device can be rotated so that the desired face is up, and the mobile terminal is then connected to that face.
The controller 13 is configured to determine, according to the posture information, user identity information of a voice signal to be input, and send a control instruction to the voice device, where the control instruction is used to control the voice device to operate in an operation mode matched with the user identity information.
Specifically, the controller 13 receives the attitude information of each face of the control device sent by the attitude sensor 11 and determines from it the user identity information of the voice signal to be input. The controller 13 is connected to the communication module 15, which includes a WiFi or Bluetooth unit; the WiFi or Bluetooth unit serves each face of the control device and connects and pairs with the corresponding voice device. For example, if the control device is a cube, the attitude sensor 11 computes the current attitude of the control device, that is, which face is currently up. If face A is up, the controller 13 enables the WiFi or Bluetooth unit for face A, and the user can connect to it with the client application of the mobile terminal to configure face A. Each face of the control device has a display interface, and the configured content is displayed on the display interface of the corresponding face.
When each face of the control device is configured through the communication module 15: the attitude sensor 11 computes the current attitude of the control device, that is, which face is up. If face A is up, the user connects to face A with the client application of the mobile terminal through the WiFi or Bluetooth unit in the control device. A list of user identity information to be configured is then displayed on the client interface of the mobile terminal; user identity information can also be defined manually and added to the list. The required user identity information is selected from the list, and the controller 13 writes the user identity information selected through the mobile terminal into the memory of face A, completing the configuration.
The user identity information includes the user's age and gender, the voice device parameters matched with the user identity information, and the like. For example, the user defines a user identity "Xiaoming" and can specify Xiaoming's age, height, weight, the speech rate used by the voice device for playback, favorite songs, and so on. If the voice device is a smart air conditioner, its fan speed and temperature mode can also be set. After the settings are made, the user identity name is stored in the control device, and the corresponding user information is sent to the voice device.
After this configuration is completed, the control device is rotated, and the attitude sensor 11 collects the current attitude of the control device again; the face now pointing up is connected to the mobile terminal through the WiFi or Bluetooth unit, and the mobile terminal configures that face and defines the user identity information it represents. In this way, the configuration of all six faces is completed.
During configuration, if a face of the control device is configured repeatedly, its user identity information is updated to the latest configuration. For example, if the user identity information of face A is "Xiaoming" and face A is to be reconfigured as "Xiaoming's father", the control device is rotated again so that face A is up, face A is configured, and its user identity information is updated to "Xiaoming's father". After each face is configured, the configured user identity name is displayed on that face's display interface.
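The face-to-identity binding and its update-on-reconfigure behavior can be sketched as a simple mapping. This is a hedged illustration, not the patent's implementation; the function name and identity strings are assumptions for the example.

```python
# Minimal sketch of the per-face configuration step: each face of the cube
# is bound to one user identity, and reconfiguring a face replaces its binding.

face_profiles: dict[str, str] = {}

def configure_face(face: str, identity: str) -> None:
    """Bind a user identity to a face; a repeated configuration updates it."""
    face_profiles[face] = identity

configure_face("A", "Xiaoming")
configure_face("B", "Xiaoming's father")
configure_face("A", "Xiaoming's father")  # face A reconfigured: binding updated
```

A plain dictionary captures the semantics described above: the last configuration of a face wins, and each face holds exactly one identity.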
After the control device has been configured, the user selects the required information type by rotating the corresponding face of the control device to point up, and inputs a voice instruction to the control device. The controller 13 converts the received voice instruction into a control signal and sends the control signal, together with the corresponding user identity information, through the communication module 15 to the voice device connected in advance. The voice device looks up the configured information matching the received user identity information, performs an information search according to the control signal of the corresponding user, and then broadcasts the result by voice or sends the required information to the mobile terminal.
For example, suppose face A of the control device is configured with the user identity "Xiaoming", face B with "Xiaoming's father", and face C with "Xiaoming's grandma". When the user turns face A up, the control device directs the corresponding voice device, according to the identity information matching the current attitude, to match the current identity "Xiaoming" and retrieve its configuration: for example, preferred air-conditioner settings (temperature 25°C, mode: cooling, fan speed: medium), favorite songs, fairy tales, and children's radio; voice broadcast: moderate speech rate, soft tone, high volume.
When the user turns face B up, the control device directs the corresponding voice device to match the current identity "Xiaoming's father" and retrieve its configuration: for example, preferred air-conditioner settings (temperature 27°C, mode: natural wind, fan speed: low), favorite songs of a certain singer, current news, and traffic radio; voice broadcast: moderate speech rate, soft tone, low volume.
When the user turns face C up, the control device directs the corresponding voice device to match the current identity "Xiaoming's grandma" and retrieve its configuration: for example, preferred air-conditioner settings (temperature 28°C, mode: natural wind, fan speed: low), favorite opera and radio; voice broadcast: slow speech rate, soft tone, high volume.
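The three worked examples can be summarized as a lookup from the upward face to a stored preference profile. The sketch below is illustrative only — the data structures, profile fields, and face assignments are assumptions drawn from the examples, not the patent's actual format.

```python
# Hypothetical profile store matching the three example identities above.
PROFILES = {
    "Xiaoming": {"temp_c": 25, "mode": "cooling", "fan": "medium"},
    "Xiaoming's father": {"temp_c": 27, "mode": "natural wind", "fan": "low"},
    "Xiaoming's grandma": {"temp_c": 28, "mode": "natural wind", "fan": "low"},
}

# Which identity each face of the cube was configured with.
FACE_IDENTITY = {"A": "Xiaoming", "B": "Xiaoming's father", "C": "Xiaoming's grandma"}

def control_instruction(face: str) -> dict:
    """Build the control instruction for the identity bound to the upward face."""
    identity = FACE_IDENTITY[face]
    # Merge the identity name with its stored preferences into one payload.
    return {"identity": identity, **PROFILES[identity]}
```

Turning face C up would thus yield an instruction carrying grandma's 28°C natural-wind preferences, which the voice device applies as its operation mode.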
By changing the attitude of the control device (which face is up), the voice device is controlled to execute the preferred voice functions of users with different identities. The implementation is simple and intelligent, and improves the user experience.
Besides using the mobile client to configure the user identity information corresponding to each face, the user identity information can also be configured by inputting voice instructions directly to the control device. Specifically, a voice instruction such as "configure for a certain user" can be spoken to the face of the control device currently facing the target direction; the control device then announces the relevant parameters by voice, according to the voice device connected to it, and the user speaks the parameter values in turn (parameter types can also be added independently) until the information required for that face is configured. The control device is then rotated to another face and the voice instructions continue. The currently configured information type name is displayed on the display interface of each face.
As shown in fig. 3, an embodiment of the present application further discloses a voice interaction method, including:
s31, acquiring attitude information of the control device;
s32, determining the user identity information of the voice signal to be input according to the posture information;
and S33, sending a control instruction to the voice equipment, wherein the control instruction is used for controlling the voice equipment to operate according to the operation mode matched with the user identity information.
Specifically, the control device is first bound to the voice device; then each face of the control device is configured to correspond to a different user identity, and the identity information of each user is configured. The attitude sensor acquires the attitude information of the control device and determines the target orientation (upward, in this embodiment), and sends the acquired attitude information to the controller. When a voice control instruction is sent to the control device, the controller determines from the current attitude information the user identity information corresponding to the upward face, and controls the voice device according to the information type of that face and the operation mode configured for that user identity. The corresponding voice function is thus executed intelligently according to the user's habits, which is convenient for the user and improves the user experience.
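The three method steps S31-S33 can be sketched end to end as follows. This is a hedged sketch under stated assumptions: the face label stands in for the acquired attitude information (S31), the dictionaries stand in for the configured bindings, and `sent` stands in for the communication module's transmission in S33; none of these names come from the patent.

```python
# Assumed example bindings produced by the configuration phase.
FACE_IDENTITY = {"A": "Xiaoming"}
PROFILES = {"Xiaoming": {"temp_c": 25, "mode": "cooling"}}
sent = []  # stand-in for instructions transmitted by the communication module

def run_interaction(upward_face: str) -> dict:
    """S31 has yielded upward_face; S32 resolves the identity; S33 sends."""
    identity = FACE_IDENTITY[upward_face]                       # S32: attitude -> identity
    instruction = {"identity": identity, **PROFILES[identity]}  # S33: build instruction
    sent.append(instruction)                                    # S33: transmit to voice device
    return instruction

result = run_interaction("A")
```

The voice device would then apply the received operation mode (here, 25°C cooling) before handling the user's spoken command.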
As shown in fig. 4, an embodiment of the present application further discloses a voice interaction apparatus, including:
an attitude acquisition module 41, configured to acquire attitude information of the control device;
the identity matching module 45 is used for determining the user identity information of the voice signal to be input according to the attitude information;
and the control module 43 is used for sending a control instruction to the voice device, where the control instruction is used for controlling the voice device to operate in the operation mode matched with the user identity information.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus (device), or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Fig. 5 is an internal structure diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the electronic device stores an operating system and may also store a computer program, which, when executed by the processor, causes the processor to implement the voice interaction method. The internal memory may also have stored therein a computer program that, when executed by the processor, causes the processor to perform the voice interaction method. The display screen of the electronic device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic device can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the electronic device, an external keyboard, a touch pad or a mouse, and the like.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (11)
1. A control device for a voice device, characterized by comprising an attitude sensor, a controller, and a communication module;
the attitude sensor is communicatively connected to the controller and is configured to send acquired attitude information of the control device to the controller;
the controller is configured to determine, according to the attitude information, user identity information for a voice signal to be input, and to send a control instruction to the voice device, wherein the control instruction is used to control the voice device to operate in an operation mode matched with the user identity information;
the communication module is connected in association with the voice device and is configured to transmit the control instruction sent by the controller to the voice device.
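The three components of claim 1 form a simple pipeline: the attitude sensor reports attitude information, the controller maps it to user identity information and builds a control instruction, and the communication module forwards that instruction to the voice device. A minimal Python sketch of this data flow, in which all class and method names (`AttitudeSensor`, `Controller.on_attitude`, etc.) and the example identity map are illustrative assumptions, not anything specified by the patent:

```python
# Illustrative sketch of the claim-1 data flow; names are assumptions.

class AttitudeSensor:
    def read(self) -> dict:
        # In a real device this would come from an IMU/gyroscope.
        return {"orientation": "face_up"}

class CommunicationModule:
    def __init__(self):
        self.sent = []

    def transmit(self, instruction: dict) -> None:
        # Forwards the control instruction to the associated voice device.
        self.sent.append(instruction)

class Controller:
    def __init__(self, comm: CommunicationModule, identity_map: dict):
        self.comm = comm
        self.identity_map = identity_map

    def on_attitude(self, attitude: dict) -> None:
        # Determine user identity from the attitude, then instruct the
        # voice device to run in the matching operation mode.
        identity = self.identity_map[attitude["orientation"]]
        self.comm.transmit({"mode": identity})

comm = CommunicationModule()
controller = Controller(comm, {"face_up": "child_mode"})
controller.on_attitude(AttitudeSensor().read())
```

The point of the sketch is only the separation of concerns the claim describes: sensing, identity resolution, and transmission are independent parts.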
2. The control device according to claim 1,
the control device has a plurality of gestures, and also has a plurality of regions corresponding to the plurality of gestures, each of the plurality of regions being associated with one of the plurality of gestures.
3. The control apparatus according to claim 2, wherein one of the plurality of areas is associated with one of a plurality of the user identity information, wherein the user identity information associated with any two of the plurality of areas is different.
4. The control apparatus of claim 1, wherein the user identity information comprises: the age and the gender of the user and the voice equipment parameters matched with the identity information of the user.
5. The control device according to claim 2,
the controller is connected to a mobile terminal and is further configured to receive control information sent by the mobile terminal and to generate, according to the control information, the control instruction for controlling the voice device;
or the controller directly receives a voice control signal from a user and converts the voice control signal into the control instruction for controlling the voice device.
6. The control device according to claim 1, wherein the attitude information indicates a target region of the control device that faces a target direction, and the controller determines the user identity information associated with the target region as the user identity information of the voice signal to be input.
7. A method of voice interaction, comprising:
acquiring attitude information of a control device;
determining, according to the attitude information, user identity information for a voice signal to be input;
and sending a control instruction to a voice device, wherein the control instruction is used to control the voice device to operate in an operation mode matched with the user identity information.
8. The method of claim 7,
before determining the user identity information for the voice signal to be input according to the attitude information, the method further comprises: establishing an association between each of a plurality of attitudes and one item of user identity information, wherein the user identity information associated with any two of the plurality of attitudes is different;
and establishing an association between each of a plurality of regions of the control device and one attitude, wherein the attitudes associated with any two of the plurality of regions are different;
determining the user identity information for the voice signal to be input according to the attitude information comprises: determining the user identity information associated with a target region as the user identity information of the current voice signal to be input, wherein the target region is a region of the control device that faces a target direction.
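Claims 7 and 8 describe a two-step lookup: each attitude is associated with a region, each region is associated with one item of user identity information, and the identity for the region facing the target direction is selected as the identity of the voice signal to be input. A minimal sketch of that lookup; the table contents, field names, and function names are illustrative assumptions only:

```python
# Hypothetical attitude -> region -> identity lookup (claims 7-8).
# All mappings and parameter names below are illustrative, not from the patent.

ATTITUDE_TO_REGION = {
    "face_up": "region_a",
    "face_down": "region_b",
    "tilt_left": "region_c",
}

# Each region is associated with exactly one item of user identity
# information; no two regions share the same identity.
REGION_TO_IDENTITY = {
    "region_a": {"age": 5, "gender": "female", "params": {"speech_rate": "slow"}},
    "region_b": {"age": 35, "gender": "male", "params": {"speech_rate": "normal"}},
    "region_c": {"age": 70, "gender": "female", "params": {"volume": "high"}},
}

def identify_user(attitude: str) -> dict:
    """Select the identity associated with the region facing the target direction."""
    region = ATTITUDE_TO_REGION[attitude]
    return REGION_TO_IDENTITY[region]

def build_control_instruction(attitude: str) -> dict:
    """Assemble the control instruction sent to the voice device: the
    operation mode is the set of parameters matched with the identity."""
    identity = identify_user(attitude)
    return {"operation_mode": identity["params"], "user": identity}
```

So turning the device to a different attitude simply switches which row of the identity table drives the voice device's operation mode.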
9. A voice interaction apparatus, comprising:
an attitude acquisition module configured to acquire attitude information of a control device;
an identity matching module configured to determine, according to the attitude information, user identity information for a voice signal to be input;
and a control module configured to send a control instruction to a voice device, wherein the control instruction is used to control the voice device to operate in an operation mode matched with the user identity information.
10. An electronic device comprising a memory, a processor, and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method of claim 7 or 8.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of claim 7 or 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910989799.4A CN111010320B (en) | 2019-10-17 | 2019-10-17 | Control device of voice equipment, voice interaction method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910989799.4A CN111010320B (en) | 2019-10-17 | 2019-10-17 | Control device of voice equipment, voice interaction method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111010320A (en) | 2020-04-14 |
CN111010320B (en) | 2021-05-25 |
Family
ID=70111866
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910989799.4A Active CN111010320B (en) | 2019-10-17 | 2019-10-17 | Control device of voice equipment, voice interaction method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111010320B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102184022A (en) * | 2011-02-14 | 2011-09-14 | 徐敬 | Hexahedron wireless remote controller |
CN103529762A (en) * | 2013-02-22 | 2014-01-22 | Tcl集团股份有限公司 | Intelligent household control method and system based on sensor technology |
CN204798814U (en) * | 2015-06-01 | 2015-11-25 | 深圳市贝多福科技有限公司 | A magic cube for intelligent control terminal |
CN106647398A (en) * | 2016-12-23 | 2017-05-10 | 广东美的制冷设备有限公司 | Remote controller, operation control method and device |
CN107295382A (en) * | 2016-04-25 | 2017-10-24 | 深圳Tcl新技术有限公司 | Personal identification method and system based on exercise attitudes |
CN107704895A (en) * | 2017-08-17 | 2018-02-16 | 阿里巴巴集团控股有限公司 | Service execution method and device |
WO2018145309A1 (en) * | 2017-02-13 | 2018-08-16 | 深圳市大疆创新科技有限公司 | Method for controlling unmanned aerial vehicle, unmanned aerial vehicle, and remote control device |
CN109932920A (en) * | 2019-03-28 | 2019-06-25 | 上海雷盎云智能技术有限公司 | Smart home temperature control method and device, and smart panel |
CN110113665A (en) * | 2019-04-25 | 2019-08-09 | 深圳市国华识别科技开发有限公司 | Show equipment autocontrol method, device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111010320B (en) | 2021-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102411124B1 (en) | Electronic device and method for performing task using external electronic device in electronic device | |
WO2018113639A1 (en) | Interaction method between user terminals, terminal, server, system and storage medium | |
CN104049721B (en) | Information processing method and electronic equipment | |
WO2018108050A1 (en) | Intelligent terminal and application program right control method and apparatus therefor, and server | |
CN110308660B (en) | Intelligent equipment control method and device | |
CN107527615B (en) | Information processing method, device, equipment, system and server | |
KR101224351B1 (en) | Method for locating an object associated with a device to be controlled and a method for controlling the device | |
CN109683847A (en) | A kind of volume adjusting method and terminal | |
CN108574515B (en) | Data sharing method, device and system based on intelligent sound box equipment | |
CN107113226A (en) | Electronic installation and its method for recognizing peripheral equipment | |
JP2010503052A (en) | Configurable personal audio / video device for use in networked application sharing systems | |
CN106575142A (en) | Multi-device sensor subsystem joint optimization | |
WO2022160865A1 (en) | Control method, system, and apparatus for air conditioner, and air conditioner | |
WO2015127786A1 (en) | Hand gesture recognition method, device, system, and computer storage medium | |
CN109195201B (en) | Network connection method, device, storage medium and user terminal | |
CN107346115B (en) | Control method and control terminal of intelligent device and intelligent device | |
CN109901698B (en) | Intelligent interaction method, wearable device, terminal and system | |
US10531260B2 (en) | Personal working system capable of being dynamically combined and adjusted | |
CN111817929A (en) | Equipment interaction method and device, household equipment and storage medium | |
WO2023082660A1 (en) | Household appliance control method and control apparatus and household appliance control system | |
CN104318950A (en) | Information processing method and electronic equipment | |
CN113893123A (en) | Massage data processing method and device, electronic equipment and storage medium | |
CN111010320B (en) | Control device of voice equipment, voice interaction method and device and electronic equipment | |
US11562271B2 (en) | Control method, terminal, and system using environmental feature data and biological feature data to display a current movement picture | |
TW201828765A (en) | Launch method for applications with early-time memory reclaim and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |