CN110970023A - Control device of voice equipment, voice interaction method and device and electronic equipment - Google Patents

Control device of voice equipment, voice interaction method and device and electronic equipment Download PDF

Info

Publication number
CN110970023A
CN110970023A (application CN201910990397.6A)
Authority
CN
China
Prior art keywords
voice
equipment
control
control device
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910990397.6A
Other languages
Chinese (zh)
Inventor
韩雪
王慧君
王子
廖湖锋
毛跃辉
梁博
陶梦春
林金煌
Current Assignee
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai filed Critical Gree Electric Appliances Inc of Zhuhai
Priority to CN201910990397.6A
Publication of CN110970023A
Legal status: Pending

Links

Images

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command

Abstract

The application relates to a control device for voice devices, a voice interaction method and apparatus, and an electronic device. The control device comprises an attitude sensor, a controller, and a communication module. The attitude sensor is communicatively connected to the controller and sends the acquired attitude information of the control device to the controller. The controller determines, from the attitude information, the voice device to be controlled and sends it a control instruction that turns the device's voice function on or off. The communication module is associated with the voice device and relays the controller's instruction to the associated device. By assigning different attitudes of the control device to different voice devices, the voice wake-up function of each device can be turned off or on individually, so a specific device can be woken even when the wake-up words of several devices are the same. The scheme is simple to implement and improves the user experience.

Description

Control device of voice equipment, voice interaction method and device and electronic equipment
Technical Field
The present application relates to the field of home appliance control technologies, and in particular, to a control device for a voice device, a voice interaction method, a voice interaction device, and an electronic device.
Background
Voice products are now increasingly popular and capable, offering functions such as playing songs, reading news, telling stories, consulting encyclopedias, and checking the weather. A user's home may contain several voice products, and the wake-up words of some of them may be the same. During use, the user then has no way to wake a particular device. For example, if the wake-up word of both the washing machine and the air conditioner at home is "Hi, Xiaoming", speaking it wakes both devices even when the user only wants to control the washing machine, which is inconvenient. Similarly, in a test environment, several identical or similar voice devices with the same wake-up word may wait in the same test space; a tester who speaks the wake-up word wakes all of them, although they must be tested one at a time. If a device's microphone function cannot be turned off separately, the only option is to power the device off, which makes testing inconvenient. Assigning a different wake-up word to each device is no better: users easily confuse or forget which wake-up word belongs to which device, which degrades the user experience.
Disclosure of Invention
To solve the above technical problem, or at least partially solve it, the present application provides a control device for a voice device, a voice interaction method and apparatus, and an electronic device, in which the attitude of the control device determines which device's voice function is turned on or off, so that a specific device can be activated even when several devices share the same wake-up word.
In a first aspect, the present application provides a control apparatus for a speech device, including an attitude sensor, a controller, and a communication module;
the attitude sensor is in communication connection with the controller and is used for sending the acquired attitude information of the control device to the controller;
the controller is used for determining voice equipment to be controlled according to the attitude information and sending a control instruction to the voice equipment, wherein the control instruction is used for controlling the opening and closing of the voice function of the voice equipment;
the communication module is associated with the voice device and is used for transmitting the control instruction sent by the controller to the associated voice device.
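The three-part device described above can be sketched in Python; all class and device names below are illustrative assumptions, not terms from the application:

```python
from dataclasses import dataclass, field

@dataclass
class AttitudeSensor:
    """Reports the current attitude (here: which face points up)."""
    face_up: str = "A"

    def read(self) -> str:
        return self.face_up

@dataclass
class CommModule:
    """One communication unit associated with a single voice device."""
    device: str

    def forward(self, instruction: str) -> str:
        # In the real device this would be a WiFi/Bluetooth transmission.
        return f"{self.device}:{instruction}"

@dataclass
class Controller:
    """Determines the target device from the attitude and forwards the instruction."""
    sensor: AttitudeSensor
    modules: dict = field(default_factory=dict)  # face label -> CommModule

    def control(self, instruction: str) -> str:
        face = self.sensor.read()
        return self.modules[face].forward(instruction)
```

For a cubic control device, `modules` would hold six entries, one per face.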
Further, the control device has a plurality of postures, and each posture is bound with one voice device.
Further, the control device has a plurality of regions corresponding to the plurality of gestures, each of the plurality of regions being associated with one of the plurality of gestures;
the control device has a plurality of communication modules corresponding to the plurality of areas, each of the plurality of communication modules being associated with one of the plurality of areas, wherein when any one of the plurality of areas is in its associated attitude, the control instruction is forwarded by the communication module associated with that area.
Further, the controller is connected to the mobile terminal, and the controller is further configured to receive control information sent by the mobile terminal, and generate the control instruction for controlling the voice device according to the control information;
or the controller directly receives a voice control signal sent by a user and converts the voice control signal into a control instruction for controlling the voice equipment.
Further, the gesture information is used to indicate a target area on the control device in a direction toward a target, and the controller determines that the voice device corresponding to the target area is the voice device to be controlled.
In a second aspect, the present application provides a voice interaction method, including:
acquiring attitude information of a control device;
determining voice equipment to be controlled according to the attitude information;
and sending a control instruction to the voice equipment, wherein the control instruction is used for controlling the opening and closing of the voice function of the voice equipment.
Further, before determining the voice device to be controlled according to the attitude information, the method further comprises: establishing an association between each of a plurality of attitudes and one of the voice devices, wherein the voice devices associated with any two of the attitudes are different;
establishing an association between each of a plurality of areas of the control device and one of the attitudes, wherein the attitudes associated with any two of the areas are different;
determining the voice device to be controlled according to the attitude information comprises: determining that the voice device associated with the target area is the voice device currently to be controlled, wherein the target area is the area of the control device facing the target direction in the current attitude.
Further, before sending the control instruction to the voice device, the method further includes: establishing an association between each of a plurality of communication modules and one of the plurality of areas, wherein the areas with which any two of the communication modules are associated are different;
sending the control instruction to the voice device includes: sending the control instruction to the voice device using the communication module associated with the target area.
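The three-step method above, together with the associations established beforehand, can be condensed into a short sketch; the dictionary layout and names are assumptions for illustration:

```python
def voice_interaction(attitude: str, face_to_device: dict,
                      face_to_module: dict, instruction: str) -> str:
    """S31 is assumed done: `attitude` names the face currently toward the target.

    S32: look up the voice device associated with the target area.
    S33: send the instruction through that area's own communication module.
    """
    device = face_to_device[attitude]       # S32
    module = face_to_module[attitude]       # per-area module, as in the claims
    return module(device, instruction)      # S33

def fake_module(device: str, instruction: str) -> str:
    # Stand-in for a WiFi/Bluetooth unit; just records the transmission.
    return f"sent {instruction} to {device}"
```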
In another aspect, the present application provides a voice interaction apparatus, including:
the attitude acquisition module is used for acquiring attitude information of the control device;
the equipment matching module is used for determining the voice equipment to be controlled according to the attitude information;
and the control module is used for sending a control instruction to the voice equipment, wherein the control instruction is used for controlling the opening and closing of the voice function of the voice equipment.
In another aspect, the present application provides an electronic device including a memory, a processor, and a program stored on the memory and executable on the processor, wherein the processor implements the steps of the method when executing the program.
In another aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
the control device and the control method provided by the embodiment of the application can control the voice awakening function to be turned off or on by binding different voice devices. In the using process, the voice function of the equipment corresponding to the current upward surface is controlled to be turned on or off through the posture (which surface is upward) of the control device, so that the function of awakening the specific equipment can be controlled when the awakening words of a plurality of equipment are the same. The implementation method is simple, and the user experience is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a block diagram of a control apparatus of a speech device according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a control device of a speech apparatus according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a voice interaction method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a voice interaction apparatus according to an embodiment of the present application;
fig. 5 is an internal structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 shows a control device of a voice device provided in an embodiment of the present application, including an attitude sensor 11, a controller 13, and a communication module 15;
the attitude sensor 11 is in communication connection with the controller and is used for sending the acquired attitude information of the control device to the controller.
Specifically, the attitude sensor 11 may be a three-axis gyroscope sensor that simultaneously measures the position, movement track, and movement speed of the control device along the six directions up, down, left, right, front, and back (i.e., the six directions pointed to by the three axes of a three-dimensional coordinate system), measures the current attitude of the control device, determines the orientation of each of its parts, and transmits the measured attitude data to the controller. The control device may take any shape; preferably, for ease of description and operation by the user, it is a regular polyhedron or a sphere, for example a football-like sphere divided into several regions of equal area whose orientations are detected separately. In this embodiment a cube is taken as the example.
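Determining "which face is up" from three-axis readings reduces to finding the face whose outward normal is most aligned against gravity. A minimal sketch follows; the face-to-axis labelling is an assumption, since the description only requires that the six orientations be distinguished:

```python
def face_up(ax: float, ay: float, az: float) -> str:
    """Return the label of the cube face currently pointing up.

    ax, ay, az are the sensed acceleration components along the body axes;
    the mapping of faces A..F to the +/- axes is illustrative.
    """
    candidates = {
        "A": ax, "B": -ax,   # +x / -x faces
        "C": ay, "D": -ay,   # +y / -y faces
        "E": az, "F": -az,   # +z / -z faces
    }
    # At rest, the face whose outward normal points up has the largest
    # positive reading against gravity.
    return max(candidates, key=candidates.get)
```

For example, a device lying flat with its +z face skyward reads roughly (0, 0, 9.8) and reports face E.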
As shown in fig. 2, as the user rotates the control device, the attitude sensor 11 may measure attitude information of the six faces of the cube A, B, C, D, E, F in real time, determine the orientation of each face after rotation, and send the attitude information to a controller, which may be a processor or a programmed single chip or the like.
The user can preset the attitude information associated with each face of the control device so that the attitude sensor 11 can accurately detect each face's attitude, and can configure one orientation as the target orientation: when the attitude sensor 11 detects that a face's orientation matches the configured target orientation, that face can be controlled, while the other faces are not connected to the mobile terminal. For example, if upward is selected as the target orientation, the control device can be rotated so that the desired face points up, and the mobile terminal then connects to that face.
The controller 13 is configured to determine a voice device to be controlled according to the posture information, and send a control instruction to the voice device, where the control instruction is used to control the on/off of a voice function of the voice device.
Specifically, the controller 13 receives the attitude information of each face of the control device sent by the attitude sensor 11 and determines the voice device to be controlled from it. The controller 13 is connected to the communication module 15, which comprises a plurality of WiFi or Bluetooth units, one for each face of the control device; for a cubic control device there are six such units. The attitude sensor 11 computes and collects the current attitude of the control device, that is, which face is currently up. If face A is up, the controller 13 pairs face A with the WiFi or Bluetooth unit installed in it. After pairing succeeds, whenever face A is up, the user can connect to face A's WiFi or Bluetooth unit with the client application of the mobile terminal and configure face A. Each face of the control device carries a display interface, and the configured contents are shown on the display interface of the corresponding face.
After each face of the control device is paired with its corresponding WiFi or Bluetooth unit, the unit of each face is paired with a voice device. Specifically, the attitude sensor 11 computes and collects the current attitude of the control device, that is, which face is currently up; if face A is up, the mobile terminal connects to face A's WiFi or Bluetooth unit and directs it to pair with the desired voice device.
After that pairing is completed, the control device is rotated, the attitude sensor 11 again collects the current attitude, the WiFi or Bluetooth unit of the face now facing up is connected to the mobile terminal, and the mobile terminal directs that unit to pair with the required voice device; repeating this completes the pairing of all six devices.
During pairing, if a face of the control device is paired again, the currently paired device is replaced. For example, if face A is paired with voice device A and is then paired with voice device B, face A's pairing is updated to voice device B. After each face is paired, the name of the paired voice device is displayed on that face's display interface.
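The re-pairing rule above, where a new pairing on the same face replaces the old one, is naturally a key overwrite; a minimal sketch with hypothetical names:

```python
pairings = {}  # face label -> name of the paired voice device

def pair(face: str, device: str) -> str:
    """Bind `device` to `face`; re-pairing the same face updates the binding."""
    pairings[face] = device
    # In the device, the face's display interface would now show `device`.
    return f"face {face} paired with {device}"

pair("A", "voice device A")
pair("A", "voice device B")   # face A is now paired with voice device B
```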
Once the voice devices are paired, the control function of the control device can be configured, and the corresponding voice devices can then be controlled through it. In this way the voice function of the devices that are needed can be turned on and that of the unneeded devices turned off, preventing several voice devices from responding at once after the user speaks.
For example, the voice function of the voice device corresponding to each face can be configured, through the client application of the mobile terminal, to be turned on or off. Suppose face A corresponds to a smart voice air conditioner, face B corresponds to a smart voice washing machine, and the wake-up word of both is "wake up". When the user wants to enable the smart voice air conditioner, the control device is first rotated so that face A is up. The attitude sensor 11 acquires the attitude of the control device, the WiFi or Bluetooth unit of face A connects to the mobile terminal, and the user selects "turn on voice function" in the application interface. The control signal is sent through face A's WiFi or Bluetooth unit to the controller in the control device, which processes it and sends it, again through face A's unit, to the smart voice air conditioner corresponding to face A, which turns on its voice function. The control device is then rotated so that face B is up; the attitude sensor 11 acquires the new attitude, face B's WiFi or Bluetooth unit connects to the mobile terminal, the user selects "turn off voice function" in the application interface, and the smart voice washing machine corresponding to face B turns off its voice function. When the user comes home and says the wake-up word "wake up", only the smart voice air conditioner is woken and can be controlled with subsequent voice instructions; the smart voice washing machine is not woken.
If the smart voice washing machine needs to be woken instead, its voice-function setting can be changed through the control device.
If the user needs to wake two or more voice devices at once, for example the smart voice air conditioner and a smart television, face C of the control device can be paired with the smart television and configured to turn its voice function on. When the user then says the wake-up word "wake up", the smart voice air conditioner and the smart television wake simultaneously. With face A up, instructions entered into the control device are sent through the communication module to control the smart voice air conditioner; the smart television remains awake but is not affected by those instructions. When the user rotates the control device so that face C is up, the voice control instructions entered control the smart television instead.
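The scenarios above amount to filtering, among the devices that hear the wake-up word, those whose voice function is currently enabled; a small sketch under assumed data shapes and device names:

```python
def woken_devices(devices: dict, spoken_word: str) -> list:
    """Return the names of devices that wake on `spoken_word`.

    `devices` maps a device name to a (wake_word, voice_enabled) pair;
    this layout and the example names below are illustrative assumptions.
    """
    return [name for name, (word, enabled) in devices.items()
            if word == spoken_word and enabled]

home = {
    "smart voice air conditioner": ("wake up", True),    # face A: voice on
    "smart voice washing machine": ("wake up", False),   # face B: voice off
    "smart television": ("wake up", True),               # face C: voice on
}
# Only the devices whose voice function is enabled respond to the shared word.
```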
Besides configuring each face's voice device through the mobile client, the user can also speak a voice instruction directly to the control device to turn a device's voice function on or off. Specifically, speaking "start the voice function" to the face currently in the target orientation turns on the voice function of that face's voice device, and speaking "close the voice function" turns it off. The display interface of each face can also show whether the corresponding voice device currently has its voice function enabled.
According to the voice equipment awakening method and device, different voice equipment is bound through the control device, the awakening function of each voice equipment can be controlled to be turned off or turned on, and the function of awakening specific equipment is controlled when the awakening words of the voice equipment are the same. The implementation method is simple, and the user experience is improved.
The protection scope of the present application is not limited to turning the voice function on or off as described in this embodiment; for example, configuring the control function to raise or lower the playback volume, or to speed up or slow down playback, also falls within the protection scope of the application.
As shown in fig. 3, an embodiment of the present application further discloses a voice interaction method, including:
s31, acquiring attitude information of the control device;
s32, determining voice equipment to be controlled according to the attitude information;
and S33, sending a control instruction to the voice equipment, wherein the control instruction is used for controlling the opening and closing of the voice function of the voice equipment.
Specifically, each face of the control device is first bound to the communication module installed in it, each face is then configured to correspond to a different voice device, and finally the voice function of the device corresponding to each face is configured to be turned on or off. The attitude sensor acquires the attitude information of the control device, determines the target orientation, taken as upward in this embodiment, and sends the acquired attitude information to the controller. The controller determines, from the current attitude information, the voice device corresponding to the upward face, and turns that device's voice function on or off according to the preset control information.
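The configuration-then-control flow of this embodiment can be tied together in a few lines; the face labels, device names, and preset actions are illustrative:

```python
# Per-face configuration established beforehand (binding + preset control info).
face_to_device = {"A": "smart voice air conditioner",
                  "B": "smart voice washing machine"}
face_to_preset = {"A": "voice_on", "B": "voice_off"}

def on_attitude_change(face_now_up: str) -> str:
    """S31's result arrives as `face_now_up`; apply S32 and S33."""
    device = face_to_device[face_now_up]   # S32: device bound to the upward face
    preset = face_to_preset[face_now_up]   # preset on/off control information
    return f"{device} <- {preset}"         # S33: send the control instruction
```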
The method of this embodiment binds different voice devices to different attitudes, so the voice wake-up function of each device can be turned off or on individually. In use, the attitude of the control device (which face is up) turns the voice function of the corresponding device on or off, so a specific device can be woken even when the wake-up words of several devices are the same. The scheme is simple to implement and improves the user experience.
As shown in fig. 4, an embodiment of the present application further discloses a voice interaction apparatus, including:
an attitude acquisition module 41, configured to acquire attitude information of the control device;
a control module 43, configured to send a control instruction to the voice device, where the control instruction is used to control the on/off of a voice function of the voice device;
and the device matching module 45 is used for determining the voice device to be controlled according to the attitude information.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus (device), or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Fig. 5 is an internal structure diagram of an electronic device according to an embodiment of the present application. As shown in fig. 5, the electronic device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the electronic device stores an operating system and may also store a computer program, which, when executed by the processor, causes the processor to implement the voice interaction method. The internal memory may also have stored therein a computer program that, when executed by the processor, causes the processor to perform the voice interaction method. The display screen of the electronic device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic device can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the electronic device, an external keyboard, a touch pad or a mouse, and the like.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. The control device of the voice equipment is characterized by comprising an attitude sensor, a controller and a communication module;
the attitude sensor is in communication connection with the controller and is used for sending the acquired attitude information of the control device to the controller;
the controller is used for determining voice equipment to be controlled according to the attitude information and sending a control instruction to the voice equipment, wherein the control instruction is used for controlling the opening and closing of the voice function of the voice equipment;
the communication module is associated with the voice device and is configured to transmit the control instruction sent by the controller to the associated voice device.
2. The control device according to claim 1,
the control device has a plurality of postures, and each posture is bound with one voice device.
3. The control device according to claim 2,
the control device having a plurality of regions corresponding to the plurality of gestures, each of the plurality of regions being associated with one of the plurality of gestures;
the control device has a plurality of communication modules corresponding to the plurality of areas, each of the plurality of communication modules being associated with one of the plurality of areas, wherein when any one of the plurality of areas is in its associated posture, the control instruction is transferred through the communication module associated with that area.
4. The control device according to claim 1, wherein
the controller is connected to a mobile terminal, and is further configured to receive control information sent by the mobile terminal and to generate, according to the control information, the control instruction for controlling the voice equipment;
or the controller directly receives a voice control signal uttered by a user and converts the voice control signal into the control instruction for controlling the voice equipment.
5. The control device according to claim 1, wherein the attitude information indicates the target region of the control device that faces a target direction, and the controller determines the voice device corresponding to the target region as the voice equipment to be controlled.
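The dispatch logic of claims 1 to 5 can be illustrated outside claim language. The sketch below is an assumption-laden model, not the patented implementation: the region names, device identifiers, and the `handle` interface are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AttitudeInfo:
    """Attitude information: which region of the controller faces the target."""
    target_region: str

class ControlDevice:
    """Minimal model of the claimed controller (claims 1-3 and 5)."""

    def __init__(self, region_to_device):
        # Claims 2 and 3: each posture (here, each region) is bound to one voice device.
        self.region_to_device = region_to_device

    def handle(self, attitude, enable):
        # Claim 5: the device bound to the region facing the target direction
        # is the device to be controlled; claim 1: the instruction switches
        # its voice function on or off.
        device = self.region_to_device[attitude.target_region]
        instruction = "VOICE_ON" if enable else "VOICE_OFF"
        return device, instruction

ctrl = ControlDevice({"face_up": "living_room_speaker", "face_north": "air_conditioner"})
print(ctrl.handle(AttitudeInfo("face_up"), True))
# ('living_room_speaker', 'VOICE_ON')
```

In this reading, the attitude sensor supplies `AttitudeInfo` and the communication module would carry the returned instruction to the selected device.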
6. A voice interaction method, comprising:
acquiring attitude information of a control device;
determining, according to the attitude information, the voice equipment to be controlled; and
sending a control instruction to the voice equipment, wherein the control instruction is used to switch the voice function of the voice equipment on or off.
7. The method according to claim 6, wherein
before determining the voice equipment to be controlled according to the attitude information, the method further comprises: establishing an association between each of a plurality of postures and one voice device, wherein the voice devices associated with any two of the postures are different; and establishing an association between each of a plurality of regions of the control device and one posture, wherein the postures associated with any two of the regions are different;
and determining the voice equipment to be controlled according to the attitude information comprises: determining the voice device associated with the target region as the voice equipment currently to be controlled, wherein the target region is the region of the control device that faces the target direction in the current posture.
8. The method according to claim 7, wherein
before sending the control instruction to the voice equipment, the method further comprises: establishing an association between each of a plurality of communication modules and one of the regions, wherein the regions associated with any two of the communication modules are different;
and sending the control instruction to the voice equipment comprises: sending the control instruction to the voice equipment using the communication module associated with the target region.
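The method of claims 6 to 8 amounts to three lookup tables plus a region-specific transmitter. The following sketch is hypothetical throughout: `CommModule`, the region and device names, and the `bind`/`interact` interface are illustrative assumptions, not terms of the patent.

```python
class CommModule:
    """Stand-in for one of the per-region communication modules of claim 8."""

    def __init__(self, name):
        self.name = name

    def send(self, device, instruction):
        # Placeholder for the actual IR/BLE/Wi-Fi transmission to the device.
        return f"{self.name}->{device}:{instruction}"

class VoiceInteraction:
    def __init__(self):
        self.region_to_posture = {}  # claim 7: each region paired with one posture
        self.posture_to_device = {}  # claim 7: each posture bound to one device
        self.region_to_module = {}   # claim 8: each module bound to one region

    def bind(self, region, posture, device, module):
        self.region_to_posture[region] = posture
        self.posture_to_device[posture] = device
        self.region_to_module[region] = module

    def interact(self, target_region, enable):
        # Claim 6: resolve the device from the attitude information, then
        # send the on/off instruction through the module associated with
        # the target region (claim 8).
        posture = self.region_to_posture[target_region]
        device = self.posture_to_device[posture]
        instruction = "VOICE_ON" if enable else "VOICE_OFF"
        return self.region_to_module[target_region].send(device, instruction)

vi = VoiceInteraction()
vi.bind("top", "tilt_up", "tv", CommModule("ir0"))
print(vi.interact("top", False))  # ir0->tv:VOICE_OFF
```

Keeping the three associations disjoint, as the claims require, guarantees that any orientation of the controller selects exactly one device and one transmitter.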
9. A voice interaction apparatus, comprising:
an attitude acquisition module, configured to acquire attitude information of a control device;
a device matching module, configured to determine, according to the attitude information, the voice equipment to be controlled; and
a control module, configured to send a control instruction to the voice equipment, wherein the control instruction is used to switch the voice function of the voice equipment on or off.
10. An electronic device, comprising a memory, a processor, and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, performs the steps of the method according to any one of claims 6 to 8.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the method according to any one of claims 6 to 8.
CN201910990397.6A 2019-10-17 2019-10-17 Control device of voice equipment, voice interaction method and device and electronic equipment Pending CN110970023A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910990397.6A CN110970023A (en) 2019-10-17 2019-10-17 Control device of voice equipment, voice interaction method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910990397.6A CN110970023A (en) 2019-10-17 2019-10-17 Control device of voice equipment, voice interaction method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN110970023A true CN110970023A (en) 2020-04-07

Family

ID=70029745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910990397.6A Pending CN110970023A (en) 2019-10-17 2019-10-17 Control device of voice equipment, voice interaction method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110970023A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112929724A (en) * 2020-12-31 2021-06-08 海信视像科技股份有限公司 Display device, set top box and far-field pickup awakening control method

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101036106A (en) * 2004-10-04 2007-09-12 皇家飞利浦电子股份有限公司 Lighting device with user interface for light control
CN101621641A (en) * 2008-06-30 2010-01-06 索尼株式会社 Remote control device and remote control method
CN101827317A (en) * 2009-09-07 2010-09-08 上海银贵网络科技服务有限公司 Control method and controller for searching target objects via mobile terminals
CN102184022A (en) * 2011-02-14 2011-09-14 徐敬 Hexahedron wireless remote controller
CN102473040A (en) * 2009-08-11 2012-05-23 英派尔科技开发有限公司 Multi-dimensional controlling device
US20140078311A1 (en) * 2012-09-18 2014-03-20 Samsung Electronics Co., Ltd. Method for guiding controller to move to within recognizable range of multimedia apparatus, the multimedia apparatus, and target tracking apparatus thereof
CN103701981A (en) * 2012-09-27 2014-04-02 中兴通讯股份有限公司 Method and device for implementing voice recognition function
CN105474157A (en) * 2013-05-09 2016-04-06 亚马逊技术股份有限公司 Mobile device interfaces
CN106817396A (en) * 2015-12-02 2017-06-09 联发科技(新加坡)私人有限公司 The method and electronic equipment of selected target equipment
CN107728482A (en) * 2016-08-11 2018-02-23 阿里巴巴集团控股有限公司 Control system, control process method and device
CN108492825A (en) * 2018-03-12 2018-09-04 陈火 A kind of startup method, headset equipment and the speech recognition system of speech recognition
CN108573703A (en) * 2018-03-07 2018-09-25 珠海格力电器股份有限公司 The control method of electric system
US20190075200A1 (en) * 2017-08-10 2019-03-07 Lg Electronics Inc. Electronic device and method for controlling of the same
CN109618059A (en) * 2019-01-03 2019-04-12 北京百度网讯科技有限公司 The awakening method and device of speech identifying function in mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Anonymous: "Xiaomi XiaoAI Speaker Universal Remote Edition hands-on: control traditional home appliances with one sentence", Jidu Web, HTTPS://BAIJIAHAO.BAIDU.COM/S?ID=1635461215867941901&WFR=SPIDER&FOR=PC *

Similar Documents

Publication Publication Date Title
KR102411124B1 (en) Electronic device and method for performing task using external electronic device in electronic device
KR102469565B1 (en) Car control method of electronic apparatus and electronic appparatus thereof
WO2019105227A1 (en) Application icon display method, terminal, and computer readable storage medium
US8150384B2 (en) Methods and apparatuses for gesture based remote control
CN108023934B (en) Electronic device and control method thereof
CN102232211B (en) Handheld terminal device user interface automatic switching method and handheld terminal device
USRE48447E1 (en) Wearable terminal and method for controlling the same
US20180144111A1 (en) Systems and methods for coordinating applications with a user interface
KR20140125078A (en) Electronic device and method for unlocking in the electronic device
CN108920225A (en) Remote assistant control method and device, terminal, storage medium
WO2015180103A1 (en) Method and apparatus for selecting terminal mode
TW201443704A (en) Reconfigurable clip-on modules for mobile computing devices
CN109901698B (en) Intelligent interaction method, wearable device, terminal and system
WO2013034070A1 (en) Display method, terminal device and multi-terminal device system
KR102329761B1 (en) Electronic device for selecting external device and controlling the same and operating method thereof
WO2019218843A1 (en) Key configuration method and device, and mobile terminal and storage medium
KR20170110919A (en) Intelligent electronic device and operating method thereof
US20130298079A1 (en) Apparatus and method for unlocking an electronic device
CN107870674B (en) Program starting method and mobile terminal
CN103970500A (en) Method and device for displaying picture
WO2015043253A1 (en) Method and terminal for quickly entering terminal application
CN108920922A (en) unlocking method, device, mobile terminal and computer-readable medium
CN107562201A (en) Orient exchange method, device, electronic equipment and storage medium
WO2015180030A1 (en) Method and electronic device for recognizing user identity
CN108650408B (en) Screen unlocking method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200407