CN115079579A - Method and device for controlling intelligent voice equipment and intelligent voice equipment - Google Patents


Info

Publication number
CN115079579A
CN115079579A (application CN202210549201.1A)
Authority
CN
China
Prior art keywords: scene mode, target scene, equipment, binding, started
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210549201.1A
Other languages
Chinese (zh)
Inventor
杜亮
陈会敏
吴洪金
国德防
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Air Conditioner Gen Corp Ltd
Qingdao Haier Air Conditioning Electric Co Ltd
Haier Smart Home Co Ltd
Original Assignee
Qingdao Haier Air Conditioner Gen Corp Ltd
Qingdao Haier Air Conditioning Electric Co Ltd
Haier Smart Home Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Air Conditioner Gen Corp Ltd, Qingdao Haier Air Conditioning Electric Co Ltd, Haier Smart Home Co Ltd filed Critical Qingdao Haier Air Conditioner Gen Corp Ltd
Priority to CN202210549201.1A
Publication of CN115079579A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00: Systems controlled by a computer
    • G05B15/02: Systems controlled by a computer electric
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems electric
    • G05B19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00: Program-control systems
    • G05B2219/20: Pc systems
    • G05B2219/26: Pc applications
    • G05B2219/2642: Domotique, domestic, home control, automation, smart house


Abstract

The application relates to the field of intelligent voice device control and discloses a method for controlling an intelligent voice device, comprising the following steps: obtaining a plurality of scene modes stored by the intelligent voice device and the binding state of the device to be started corresponding to each scene mode; determining, among the plurality of scene modes, a target scene mode that the user desires to start; determining a usage label for the target scene mode according to the binding state of the device to be started corresponding to the target scene mode; and, when the usage label of the target scene mode is scene realizable, controlling the intelligent voice device to execute the mode control instruction corresponding to the target scene mode. With this scheme, the intelligent voice device is controlled to execute the corresponding mode control instruction only once the target scene mode is determined to be realizable, so that the user can better experience the scene modes of the intelligent voice device.

Description

Method and device for controlling intelligent voice equipment and intelligent voice equipment
Technical Field
The present application relates to the field of intelligent voice device control technologies, and for example, to a method and an apparatus for controlling an intelligent voice device, and an intelligent voice device.
Background
At the present stage, with the continuing development of science and technology and the rising standard and quality of living, intelligent living has become a trend. Through the smart home, people can experience more intelligent device control. As an important component of the smart home, the intelligent voice device has been widely adopted by users.
At present, many scene modes are preset in intelligent voice devices at the factory, but these scene modes can only be started under specific conditions. Once a starting condition is not met, the user can hardly experience the intended scene effect. How to let users better experience the scene modes of an intelligent voice device has therefore become a technical problem in urgent need of a solution.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delineate the scope of the embodiments; it serves as a prelude to the more detailed description presented later.
The embodiment of the disclosure provides a method and a device for controlling an intelligent voice device and the intelligent voice device, so as to provide a control scheme for enabling a user to better experience a scene mode of the intelligent voice device.
In some embodiments, the method for controlling a smart voice device comprises: the method comprises the steps of obtaining a plurality of scene modes stored by the intelligent voice equipment and the binding state of equipment to be started corresponding to each scene mode; determining a target scene mode which a user desires to start in a plurality of scene modes; determining a use label of the target scene mode according to the binding state of the equipment to be started corresponding to the target scene mode; and under the condition that the use label of the target scene mode is scene-realizable, controlling the intelligent voice equipment to execute a mode control instruction corresponding to the target scene mode.
In some embodiments, the method for controlling a smart voice device comprises: under the condition that the target scene mode corresponds to a plurality of devices to be started, respectively determining the binding states of the plurality of devices to be started; and determining the use label of the target scene mode according to the binding states of the plurality of devices to be started.
In some embodiments, the method for controlling a smart voice device comprises: determining that the usage label of the target scene mode is scene realizable when the binding states of the plurality of devices to be started are all binding success; and determining that the usage label of the target scene mode is scene not realizable when the binding states of the plurality of devices to be started are all binding failure.
In some embodiments, the method for controlling a smart voice device comprises: determining the binding state of the core device in the plurality of devices to be started under the condition that the binding states of the plurality of devices to be started are partially successful; and determining the use label of the target scene mode according to the binding state of the core equipment.
In some embodiments, the method for controlling a smart voice device comprises: determining that the use label of the target scene mode is a scene realizable state under the condition that the binding state of the core equipment is successful; and under the condition that the binding state of the core equipment is binding failure, determining that the use label of the target scene mode is scene unrealizable.
In some embodiments, the method for controlling a smart voice device comprises: sending the selection information of the core equipment to a user under the condition that the binding state of the core equipment is binding failure; and under the condition of receiving the confirmation information of the new core equipment fed back by the user, controlling the intelligent voice equipment to send a binding request to the new core equipment so as to establish a binding relationship between the intelligent voice equipment and the new core equipment.
In some embodiments, the method for controlling a smart voice device comprises: acquiring a scene use habit of a user; among the plurality of scene modes, a scene mode corresponding to the scene use habit is determined as a target scene mode that the user desires to start.
In some embodiments, the apparatus for controlling the smart voice device comprises an obtaining module, a first determining module, a second determining module and a control module. The obtaining module is configured to obtain a plurality of scene modes stored by the intelligent voice device and the binding state of the device to be started corresponding to each scene mode; the first determining module is configured to determine a target scene mode that a user desires to start among the plurality of scene modes; the second determining module is configured to determine a usage label of the target scene mode according to the binding state of the device to be started corresponding to the target scene mode; and the control module is configured to control the intelligent voice device to execute the mode control instruction corresponding to the target scene mode when the usage label of the target scene mode is scene realizable.
In some embodiments, the means for controlling the smart voice device comprises: a processor and a memory storing program instructions, the processor being configured to, upon execution of the program instructions, perform the aforementioned method for controlling a smart voice device.
In some embodiments, the smart voice device comprises: the aforementioned apparatus for controlling a smart voice device.
The method and the device for controlling the intelligent voice equipment and the intelligent voice equipment provided by the embodiment of the disclosure can realize the following technical effects: the method comprises the steps of obtaining a plurality of scene modes stored by the intelligent voice equipment and the binding state of equipment to be started corresponding to each scene mode; determining a target scene mode which is expected to be started by a user in a plurality of scene modes; determining a use label of the target scene mode according to the binding state of the equipment to be started corresponding to the target scene mode; and further controlling the intelligent voice equipment to execute the mode control instruction corresponding to the target scene mode under the condition that the use label of the target scene mode is scene-realizable. According to the scheme, whether the target scene mode can be realized or not can be judged by combining the binding state of the equipment to be started in the target scene mode, and the intelligent voice equipment is controlled to execute the corresponding mode control instruction under the condition that the target scene mode is determined to be realized, so that a user can better experience the scene mode of the intelligent voice equipment, and a more accurate scene control scheme is provided for the user.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings; the illustrations are not limiting, and elements bearing the same reference numerals in the drawings denote like elements, in which:
FIG. 1 is a schematic diagram of a method for controlling a smart voice device according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a method for determining a usage label provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another method for determining a usage label provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a method for sending a binding request according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an apparatus for controlling a smart voice device according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another apparatus for controlling a smart voice device according to an embodiment of the present disclosure.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawing.
The terms "first," "second," and the like in the description and claims of the embodiments of the disclosure and in the drawings are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that data so labeled may be interchanged under appropriate circumstances, so that the embodiments of the present disclosure described herein can be practiced in orders other than those illustrated. Furthermore, the terms "comprising" and "having," and any variations thereof, are intended to cover non-exclusive inclusion.
The term "plurality" means two or more unless otherwise specified.
In the embodiment of the present disclosure, the character "/" indicates an "or" relationship between the objects before and after it. For example, A/B means: A or B.
The term "and/or" is an associative relationship that describes objects, meaning that three relationships may exist. For example, a and/or B, represents: a or B, or A and B.
The term "correspond" may refer to an association or binding relationship, and a corresponds to B refers to an association or binding relationship between a and B.
Fig. 1 is a schematic diagram of a method for controlling a smart voice device according to an embodiment of the present disclosure; referring to fig. 1, a method for controlling an intelligent speech device according to an embodiment of the present disclosure includes:
and S11, the intelligent voice device obtains a plurality of stored scene modes and the binding state of the device to be started corresponding to each scene mode.
And S12, the intelligent voice device determines a target scene mode which is expected to be started by the user in a plurality of scene modes.
And S13, the intelligent voice device determines the use label of the target scene mode according to the binding state of the device to be started corresponding to the target scene mode.
And S14, when the usage label of the target scene mode is scene realizable, the intelligent voice device is controlled to execute the mode control instruction corresponding to the target scene mode.
In this scheme, an intelligent voice device means a smart device with speech recognition and speech playback functions. A smart device here means a household appliance into which a microprocessor, sensor technology and network communication technology have been introduced; it has the characteristics of intelligent control, intelligent perception and intelligent application, and its operation typically relies on modern technologies such as the Internet of Things, the Internet and electronic chips. For example, a smart device can be connected to an electronic device so that the user can remotely control and manage it. Specifically, the intelligent voice device obtains a plurality of stored scene modes and the binding state of the device to be started corresponding to each scene mode. The scene modes may include a sleep mode, a guest mode, an office mode, a purifying mode, a refresh mode, and the like. As examples: if the scene mode is the sleep mode, the corresponding devices to be started may be a smart lamp and a smart air conditioner; if it is the guest mode, a smart cabinet air conditioner, a smart water dispenser and a smart speaker; if it is the office mode, a smart display screen and a smart lamp; if it is the purifying mode, a smart aroma diffuser, a smart humidifier and the like; if it is the refresh mode, a smart fresh-air unit, a smart fan, and the like.
In this scheme, the binding state of a device to be started can be obtained through an application on a mobile device associated with the intelligent voice device. The binding state is either binding success or binding failure. The mobile device may include, for example, a mobile phone, a wearable device, a smart mobile device, a virtual reality device, or the like, or any combination thereof; wearable devices include, for example, smart watches, smart bracelets and pedometers.
Further, the intelligent voice device can also determine a target scene mode which is expected to be started by the user in a plurality of scene modes. In one case, the smart voice device may acquire voice information input by a user and determine a target scene mode that the user desires to initiate among a plurality of scene modes by recognizing the voice information. In another case, the scene use habit of the user can be acquired; and determining a scene mode corresponding to the scene use habit as a target scene mode which is expected to be started by the user in the plurality of scene modes. Here, the scene use habit may include scene use time information and/or scene use frequency information. With the scheme, the target scene mode which the user desires to start can be determined in the plurality of scene modes more accurately.
Further, after the target scene mode the user desires to start has been determined, the intelligent voice device can determine the usage label of the target scene mode from the binding state of the corresponding device to be started. The binding state of a device to be started indicates whether a binding relationship exists between the intelligent voice device and that device: if a binding relationship exists, the binding state is binding success; otherwise, it is binding failure. Specifically, the binding state of the device to be started corresponding to the target scene mode may be determined through an application on the mobile device associated with the intelligent voice device, and the usage label of the target scene mode is then determined from that binding state. The usage label of a scene mode is either scene realizable or scene not realizable. With this scheme, the usage label lets the user see intuitively whether the scene meets its starting conditions. Further, when the usage label of the target scene mode is scene realizable, the scene is determined to meet its starting conditions and the intelligent voice device is controlled to execute the mode control instruction corresponding to the target scene mode.
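The S11–S14 flow described above can be sketched in code. The following is a minimal illustration under assumed data structures; the patent specifies no API, so the `scene_modes` dictionary layout, the label strings and the instruction values are all hypothetical.

```python
REALIZABLE = "scene realizable"
NOT_REALIZABLE = "scene not realizable"

def usage_label(binding_states):
    """S13 (simplified): the label is realizable only when every
    device to be started has binding state 'binding success' (True)."""
    return REALIZABLE if all(binding_states) else NOT_REALIZABLE

def control(scene_modes, target):
    """S11/S12: scene_modes maps each stored scene mode to the binding
    states of its devices to be started and its mode control instruction;
    `target` is the mode the user desires to start."""
    mode = scene_modes[target]
    label = usage_label(mode["bindings"].values())
    # S14: execute the mode control instruction only if realizable.
    return mode["instruction"] if label == REALIZABLE else None

scene_modes = {
    "sleep": {"bindings": {"smart lamp": True, "smart air conditioner": True},
              "instruction": "start-sleep-mode"},
    "guest": {"bindings": {"smart speaker": True, "water dispenser": False},
              "instruction": "start-guest-mode"},
}
print(control(scene_modes, "sleep"))  # start-sleep-mode
print(control(scene_modes, "guest"))  # None
```

This collapses the label rule to "all bindings succeeded"; the embodiments of FIGS. 2–4 refine what happens when only some bindings succeed.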
By adopting the method for controlling the intelligent voice equipment provided by the embodiment of the disclosure, a plurality of scene modes stored by the intelligent voice equipment and the binding state of the equipment to be started corresponding to each scene mode are obtained; determining a target scene mode which is expected to be started by a user in a plurality of scene modes; determining a use label of the target scene mode according to the binding state of the equipment to be started corresponding to the target scene mode; and further controlling the intelligent voice equipment to execute the mode control instruction corresponding to the target scene mode under the condition that the use label of the target scene mode is scene-realizable. According to the scheme, whether the target scene mode can be realized or not can be judged by combining the binding state of the equipment to be started in the target scene mode, and the intelligent voice equipment is controlled to execute the corresponding mode control instruction under the condition that the target scene mode is determined to be realized, so that a user can better experience the scene mode of the intelligent voice equipment, and a more accurate scene control scheme is provided for the user.
FIG. 2 is a schematic diagram of a method for determining a usage label provided by an embodiment of the present disclosure. Referring to fig. 2, in S13, determining, by the intelligent voice device, the usage label of the target scene mode according to the binding state of the device to be started corresponding to the target scene mode includes:
and S21, under the condition that the target scene mode corresponds to a plurality of devices to be started, the intelligent voice device respectively determines the binding states of the devices to be started.
And S22, the intelligent voice device determines the use label of the target scene mode according to the binding states of the devices to be started.
In this scheme, if a user wants to run a target scene mode through the intelligent voice device, the intelligent voice device needs to be bound to the devices to be started in that mode in order to guarantee the scene effect the user experiences. The usage label of the target scene mode can therefore be determined from the binding states of the corresponding devices to be started. Specifically: when the target scene mode corresponds to multiple devices to be started, the intelligent voice device determines the binding state of each of them, and then determines the usage label of the target scene mode from those binding states together. In this way, the user can judge from the usage label whether a target scene mode that involves multiple devices to be started can be realized.
Optionally, in S22, determining, by the intelligent voice device, the usage label of the target scene mode according to the binding states of the multiple devices to be started includes:
and under the condition that the binding states of the equipment to be started are all successfully bound, the intelligent voice equipment determines the use label of the target scene mode as the scene can be realized.
And under the condition that the binding states of the equipment to be started are all binding failures, the intelligent voice equipment determines that the use label of the target scene mode is scene unrealizable.
In this scheme, when the binding states of the multiple devices to be started are all binding success, it is determined that the intelligent voice device has established a binding relationship with every device to be started in the target scene mode, and the intelligent voice device determines that the usage label of the target scene mode is scene realizable. As an optimization, when the usage label is determined to be scene realizable, a usage prompt can be sent to the user through the intelligent voice device, indicating that all devices in the target scene meet the scene usage conditions and the target scene mode can be executed.
When the binding states of the devices to be started are all binding failure, it is determined that the intelligent voice device has not established a binding relationship with any device to be started in the target scene mode, and the intelligent voice device determines that the usage label of the target scene mode is scene not realizable. As an optimization, when the usage label is determined to be scene not realizable, it is determined that the home where the intelligent voice device is located lacks the devices to be started that are required to execute the target scene mode. The next time the intelligent voice device is awakened, it sends a search request for the required devices to its associated gateway device to look for newly added devices to be bound in the home environment. If such a device is found, a binding request between the intelligent voice device and the newly added device to be bound can be sent to the user, and once the intelligent voice device receives the user's confirmation, the binding relationship with the newly added device is established.
With this scheme, when the usage label of the target scene mode is scene not realizable, the label can be corrected by completing the binding relationships of the intelligent voice device.
FIG. 3 is a schematic diagram of another method for determining a usage label provided by an embodiment of the present disclosure. Referring to fig. 3, optionally, in S22, determining, by the intelligent voice device, the usage label of the target scene mode according to the binding states of the multiple devices to be started includes:
and S31, under the condition that the binding states of the multiple devices to be started are partially successful, the intelligent voice device determines the binding states of the core devices in the multiple devices to be started.
And S32, the intelligent voice device determines the use label of the target scene mode according to the binding state of the core device.
In this scheme, when the binding states of the multiple devices to be started are only partially binding success, the intelligent voice device determines the binding state of the core device among them. The core device is the device that realizes the core function of the scene mode; for example, if the scene mode is a cleaning mode, the core device is the smart cleaner. Further, the intelligent voice device may determine the binding state of the core device through an application on its associated mobile device, and once that binding state is determined, determine the usage label of the target scene mode from it. In this way, the usage label of the target scene mode is determined from the binding state of the core device, so the user can judge from the label whether the target scene mode can be realized.
Optionally, in S32, determining, by the intelligent voice device, the usage label of the target scene mode according to the binding state of the core device includes:
and under the condition that the binding state of the core equipment is successful, the intelligent voice equipment determines that the use label of the target scene mode is a scene realizable.
And under the condition that the binding state of the core equipment is binding failure, the intelligent voice equipment determines that the use label of the target scene mode is scene unrealizable.
In this scheme, when the binding state of the core device is binding success, it is determined that the intelligent voice device has established a binding relationship with the core device, and the usage label of the target scene mode is determined to be scene realizable; when the binding state of the core device is binding failure, it is determined that no binding relationship exists between them, and the usage label is determined to be scene not realizable. Determining the usage label from the binding state of the core device makes the label more accurate, so that the user can judge from it whether the target scene mode can be realized.
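Taken together, the rules of FIGS. 2 and 3 read as one decision function. The sketch below is an assumed formalization; the `bindings` dictionary, the device names and the label strings are illustrative, not specified by the patent.

```python
def usage_label(bindings, core_device):
    """bindings maps each device to be started to True (binding success)
    or False (binding failure); core_device names the device that
    realizes the scene mode's core function."""
    states = list(bindings.values())
    if all(states):
        return "scene realizable"       # all bindings succeeded
    if not any(states):
        return "scene not realizable"   # all bindings failed
    # Partial success (S31/S32): the core device's binding state decides.
    return "scene realizable" if bindings[core_device] else "scene not realizable"

# A cleaning mode whose core device is the smart cleaner:
bindings = {"smart cleaner": True, "smart lamp": False}
print(usage_label(bindings, "smart cleaner"))  # scene realizable
```

If the core device is the unbound one, the same partial-success case yields scene not realizable.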
FIG. 4 is a schematic diagram of a method for sending a binding request according to an embodiment of the present disclosure. As shown in fig. 4, optionally, when the binding state of the core device is binding failure, the intelligent voice device sends core-device selection information to the user.
On receiving the user's confirmation of a new core device, the intelligent voice device is controlled to send a binding request to the new core device so as to establish a binding relationship between the intelligent voice device and the new core device.
In this scheme, when the binding state of the core device is binding failure, it is determined that the core device is absent from the home environment where the intelligent voice device is located. The intelligent voice device therefore sends core-device selection information to the user, so that the user can select a new core device from the multiple devices to be started in the target scene mode. Further, on receiving the user's confirmation of the new core device, the intelligent voice device is controlled to send a binding request to the new core device, so that a binding relationship is established between the two. With this scheme, once the binding relationship with the new core device is established, the intelligent voice device can be controlled to run the target scene mode, so that the user experiences a better scene effect.
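The Fig. 4 exchange can be sketched as follows. `VoiceDevice`, `send_binding_request` and the `confirm` callback are hypothetical stand-ins for the device's binding API and the user interaction; the patent does not define them.

```python
class VoiceDevice:
    """Minimal stand-in for the intelligent voice device; `bound` records
    the devices it has established a binding relationship with."""
    def __init__(self):
        self.bound = set()

    def send_binding_request(self, name):
        # Assume the target device accepts the binding request.
        self.bound.add(name)

def replace_core_device(device, candidates, confirm):
    """When the core device's binding has failed: send the selection
    information (candidates) to the user; on confirmation of a new core
    device, have the voice device send it a binding request."""
    choice = confirm(candidates)  # user picks a new core device, or None
    if choice is not None:
        device.send_binding_request(choice)
    return choice

device = VoiceDevice()
new_core = replace_core_device(device,
                               ["smart fan", "fresh-air unit"],
                               lambda options: options[0])
print(new_core, new_core in device.bound)  # smart fan True
```

If the user confirms nothing, no binding request is sent and the usage label remains scene not realizable.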
Optionally, in S12, the intelligent voice device determines, among the plurality of scene modes, the target scene mode that the user desires to start, which includes:
the intelligent voice device obtains the scene usage habit of the user; and
among the plurality of scene modes, the intelligent voice device determines the scene mode corresponding to the scene usage habit as the target scene mode that the user desires to start.
In this scheme, the intelligent voice device can obtain the scene usage habit of the user. Here, the scene usage habit may include scene use time information and/or scene use frequency information. Further, among the plurality of scene modes, the intelligent voice device may determine the scene mode corresponding to the scene usage habit as the target scene mode that the user desires to start. With this scheme, the target scene mode that the user desires to start can be determined more accurately among the plurality of scene modes.
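One way to combine the use-time and use-frequency information mentioned above is a simple weighted count. The doubled weight for the current hour is an illustrative heuristic of my own, not a rule specified by the patent, and the record format is assumed.

```python
from collections import Counter
from datetime import datetime

def habitual_target_mode(usage_records, now=None):
    """Pick the scene mode the user most likely desires to start.

    usage_records is a list of (scene_mode, hour_started) pairs, a
    stand-in for the scene use time/frequency information. Each past
    use counts once; uses in the current hour of day count double.
    """
    now = now or datetime.now()
    scores = Counter()
    for mode, hour in usage_records:
        scores[mode] += 2 if hour == now.hour else 1  # favour this hour
    return scores.most_common(1)[0][0] if scores else None
```

A user who usually starts a sleep mode at 22:00 would thus get the sleep mode proposed at that time even if a movie mode has a similar overall frequency.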
FIG. 5 is a schematic diagram of an apparatus for controlling an intelligent voice device according to an embodiment of the present disclosure. As shown in FIG. 5, the apparatus includes an obtaining module 51, a first determining module 52, a second determining module 53, and a control module 54. The obtaining module 51 is configured to obtain the plurality of scene modes stored by the intelligent voice device and the binding state of the device to be started corresponding to each scene mode; the first determining module 52 is configured to determine, among the plurality of scene modes, the target scene mode that the user desires to start; the second determining module 53 is configured to determine the usage label of the target scene mode according to the binding state of the device to be started corresponding to the target scene mode; and the control module 54 is configured to control the intelligent voice device to execute the mode control instruction corresponding to the target scene mode when the usage label of the target scene mode is scene-realizable.
With the apparatus for controlling an intelligent voice device provided by the embodiment of the present disclosure, the plurality of scene modes stored by the intelligent voice device and the binding state of the device to be started corresponding to each scene mode are obtained; the target scene mode that the user desires to start is determined among the plurality of scene modes; the usage label of the target scene mode is then determined according to the binding state of the device to be started corresponding to the target scene mode; and the intelligent voice device is controlled to execute the mode control instruction corresponding to the target scene mode when that usage label is scene-realizable. With this scheme, whether the target scene mode can be realized is judged from the binding states of the devices to be started in the target scene mode, and the intelligent voice device executes the corresponding mode control instruction only when the target scene mode is determined to be realizable, so that the user experiences the scene modes of the intelligent voice device better and is offered a more accurate scene control scheme.
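The four modules of FIG. 5 can be wired together as in the sketch below. The class and method names, the dictionary-based scene-mode representation, and the device interface are all illustrative assumptions; the patent specifies only the modules' responsibilities.

```python
class ControlApparatus:
    """Sketch of the four-module apparatus of FIG. 5."""

    def __init__(self, voice_device):
        self.device = voice_device  # the intelligent voice device

    def obtain(self):
        # obtaining module 51: stored scene modes and binding states
        return self.device.scene_modes, self.device.binding_states

    def determine_target(self, modes, desired_name):
        # first determining module 52: the mode the user desires to start
        return next(m for m in modes if m["name"] == desired_name)

    def determine_label(self, mode, bindings):
        # second determining module 53: label from binding states
        ok = all(bindings.get(d) == "success" for d in mode["devices"])
        return "scene-realizable" if ok else "scene-unrealizable"

    def control(self, mode, label):
        # control module 54: execute the mode control instruction
        if label == "scene-realizable":
            self.device.execute(mode["instruction"])
```

A caller would invoke the four methods in order, mirroring S11 through S14 of the method embodiment.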
FIG. 6 is a schematic diagram of another apparatus for controlling an intelligent voice device according to an embodiment of the present disclosure. As shown in FIG. 6, the apparatus includes a processor (processor) 100 and a memory (memory) 101. Optionally, the apparatus may further include a communication interface (Communication Interface) 102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 may communicate with each other via the bus 103. The communication interface 102 may be used for information transfer. The processor 100 may invoke logic instructions in the memory 101 to perform the method for controlling an intelligent voice device of the above embodiments.
In addition, when sold or used as an independent product, the logic instructions in the memory 101 may be implemented in the form of software functional units and stored in a computer-readable storage medium.
The memory 101, as a computer-readable storage medium, may be used for storing software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 100 executes functional applications and data processing, that is, implements the method for controlling the intelligent voice device in the above embodiments, by running the program instructions/modules stored in the memory 101.
The memory 101 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 101 may include a high-speed random access memory and may also include a nonvolatile memory.
An embodiment of the present disclosure provides an intelligent voice device, which includes the above apparatus for controlling an intelligent voice device.
With the intelligent voice device provided by the embodiment of the present disclosure, the plurality of scene modes stored by the intelligent voice device and the binding state of the device to be started corresponding to each scene mode are obtained; the target scene mode that the user desires to start is determined among the plurality of scene modes; the usage label of the target scene mode is determined according to the binding state of the device to be started corresponding to the target scene mode; and the intelligent voice device is controlled to execute the mode control instruction corresponding to the target scene mode when that usage label is scene-realizable. With this scheme, whether the target scene mode can be realized is judged from the binding states of the devices to be started in the target scene mode, and the intelligent voice device executes the corresponding mode control instruction only when the target scene mode is determined to be realizable, so that the user experiences the scene modes of the intelligent voice device better and is offered a more accurate scene control scheme.
Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described method for controlling an intelligent voice device.
Embodiments of the present disclosure provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described method for controlling a smart voice device.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, where the computer software product is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or some of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and various other media capable of storing program code, or may be a transitory storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed. Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other like elements in a process, method or apparatus that comprises the element. In this document, each embodiment may be described with emphasis on differences from other embodiments, and the same and similar parts between the respective embodiments may be referred to each other. For methods, products, etc. of the embodiment disclosures, reference may be made to the description of the method section for relevance if it corresponds to the method section of the embodiment disclosure.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It can be clearly understood by the skilled person that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be merely a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (10)

1. A method for controlling a smart voice device, comprising:
the method comprises the steps of obtaining a plurality of scene modes stored by the intelligent voice equipment and the binding state of equipment to be started corresponding to each scene mode;
determining a target scene mode which a user desires to start in the plurality of scene modes;
determining a use label of the target scene mode according to the binding state of the equipment to be started corresponding to the target scene mode;
and under the condition that the use label of the target scene mode is scene-realizable, controlling the intelligent voice equipment to execute a mode control instruction corresponding to the target scene mode.
2. The method according to claim 1, wherein the determining the usage label of the target scene mode according to the binding state of the device to be started corresponding to the target scene mode comprises:
under the condition that the target scene mode corresponds to a plurality of devices to be started, respectively determining the binding states of the plurality of devices to be started;
and determining the use label of the target scene mode according to the binding states of the plurality of devices to be started.
3. The method of claim 2, wherein determining the usage label of the target scene mode according to the binding states of the plurality of devices to be started comprises:
determining that the use label of the target scene mode is scene realizable under the condition that the binding states of the devices to be started are all successfully bound;
and determining that the use label of the target scene mode is scene unrealizable under the condition that the binding states of the devices to be started are all binding failures.
4. The method of claim 2, wherein determining the usage label of the target scene mode according to the binding states of the plurality of devices to be started comprises:
determining the binding state of the core device in the plurality of devices to be started under the condition that the binding states of the plurality of devices to be started are partially successful;
and determining the use label of the target scene mode according to the binding state of the core equipment.
5. The method of claim 4, wherein determining the usage label of the target scenario mode according to the binding status of the core device comprises:
determining that the use label of the target scene mode is scene realizable under the condition that the binding state of the core device is successful;
and determining that the use label of the target scene mode is scene unrealizable under the condition that the binding state of the core device is binding failure.
6. The method of claim 5, further comprising:
sending selection information of the core equipment to a user under the condition that the binding state of the core equipment is binding failure;
and under the condition of receiving confirmation information of the new core equipment fed back by a user, controlling the intelligent voice equipment to send a binding request to the new core equipment so as to establish a binding relationship between the intelligent voice equipment and the new core equipment.
7. The method of claim 1, wherein determining a target scene mode among the plurality of scene modes that a user desires to initiate comprises:
acquiring a scene use habit of a user;
determining a scene mode corresponding to the scene usage habit as a target scene mode that the user desires to start, among the plurality of scene modes.
8. An apparatus for controlling a smart voice device, comprising:
the system comprises an obtaining module, a judging module and a starting module, wherein the obtaining module is configured to obtain a plurality of scene modes stored by the intelligent voice equipment and the binding state of equipment to be started corresponding to each scene mode;
a first determination module configured to determine a target scene mode among the plurality of scene modes that a user desires to initiate;
the second determination module is configured to determine a use label of the target scene mode according to the binding state of the device to be started corresponding to the target scene mode;
and the control module is configured to control the intelligent voice equipment to execute a mode control instruction corresponding to the target scene mode under the condition that the use label of the target scene mode is scene realizable.
9. An apparatus for controlling a smart voice device, comprising a processor and a memory storing program instructions, characterized in that the processor is configured to perform the method for controlling a smart voice device according to any one of claims 1 to 7 when executing the program instructions.
10. An intelligent speech device, characterized in that it comprises means for controlling an intelligent speech device according to claim 8 or 9.
CN202210549201.1A 2022-05-20 2022-05-20 Method and device for controlling intelligent voice equipment and intelligent voice equipment Pending CN115079579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210549201.1A CN115079579A (en) 2022-05-20 2022-05-20 Method and device for controlling intelligent voice equipment and intelligent voice equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210549201.1A CN115079579A (en) 2022-05-20 2022-05-20 Method and device for controlling intelligent voice equipment and intelligent voice equipment

Publications (1)

Publication Number Publication Date
CN115079579A true CN115079579A (en) 2022-09-20

Family

ID=83249796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210549201.1A Pending CN115079579A (en) 2022-05-20 2022-05-20 Method and device for controlling intelligent voice equipment and intelligent voice equipment

Country Status (1)

Country Link
CN (1) CN115079579A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114913851A (en) * 2022-04-19 2022-08-16 青岛海尔空调器有限总公司 Method and device for controlling voice equipment, voice equipment and storage medium

Similar Documents

Publication Publication Date Title
EP3016318B1 (en) Method and apparatus for customizing scene mode of intelligent device
CN108986821B (en) Method and equipment for setting relation between room and equipment
CN110531630A (en) A kind of method and device controlling controlled device
CN105573778A (en) Method for starting application program and terminal
CN113504766B (en) Method, system, apparatus, server and storage medium for scheduling a scene task
CN113111186A (en) Method for controlling household appliance, storage medium and electronic device
CN113515053A (en) Method and device for controlling running of household appliance and household appliance
CN113498594A (en) Control method and device of intelligent household system, electronic equipment and storage medium
CN111724784A (en) Equipment control method and device
CN115079579A (en) Method and device for controlling intelligent voice equipment and intelligent voice equipment
CN113405249B (en) Control method and device for air conditioner, air conditioner and storage medium
CN113687757B (en) Agent control device, agent control method, and non-transitory recording medium
CN113341738A (en) Method, device and equipment for controlling household appliance
CN113329241A (en) Air conditioner and method and device for multimedia playing in air conditioner
WO2023168933A1 (en) Information processing method, device and system
CN105592572A (en) Bluetooth connection control method and terminal
CN113870849A (en) Information processing method, device and system
CN114740739A (en) Method, device, system and storage medium for intelligent household electrical appliance centralized control
CN114137842A (en) Scene configuration method and device, electronic equipment and storage medium
CN112995705A (en) Method and device for video processing and electronic equipment
CN111478831A (en) Intelligent household appliance naming method and intelligent household appliance
CN108206784B (en) Network topology generation method and device for smart home
CN112448870A (en) Household appliance control method, device and equipment
CN113300919A (en) Intelligent household appliance control method based on social software group function and intelligent household appliance
CN112187701A (en) Control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination