CN115327932A - Scene creation method and device, electronic equipment and storage medium
- Publication number: CN115327932A
- Application number: CN202210162733.XA
- Authority: CN (China)
- Prior art keywords: scene, instruction, control, information, creating
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G05B15/02: Systems controlled by a computer electric
- G05B19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
- G05B2219/2642: Domotique, domestic, home control, automation, smart house
- Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The embodiments of the present application disclose a scene creation method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a scene creation instruction; acquiring the current control state of each device in the target area to which the scene creation instruction is directed; and creating, according to the current control state of each device, scene control information corresponding to the scene creation instruction in the target area. Because the scene control information is created from the current control states of the devices in the target area, a scene can be created simply and quickly through that information, improving scene creation efficiency.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a scene creation method and apparatus, an electronic device, and a storage medium.
Background
With the improvement of people's living standards and the development of Internet of Things technology, smart devices have gradually entered countless households; in particular, their application in smart home scenarios is increasingly widespread. To give an intelligent control system richer content, the smart devices under it need to be configured. In the prior art, editing and creating an intelligent control scene through an APP (application) and thereby configuring the intelligent control system is very common, but as the usage scenarios of smart devices multiply, so does the complexity of creating intelligent control scenes.
In the traditional approach, an intelligent control scene is generally edited and created manually in a terminal APP, and the configuration information of each device must be set one by one, making the creation process complex and inefficient.
Disclosure of Invention
The present application provides a scene creation method, a scene creation apparatus, an electronic device, and a storage medium, so as to solve the above problems.
In a first aspect, an embodiment of the present application provides a scene creation method. The method includes: acquiring a scene creation instruction; acquiring the current control state of each device in the target area to which the scene creation instruction is directed; and creating, according to the current control state of each device, scene control information corresponding to the scene creation instruction in the target area.
In a second aspect, an embodiment of the present application further provides a scene creation apparatus. The apparatus includes an instruction acquisition unit, a device state acquisition unit, and a scene creation unit. The instruction acquisition unit is configured to acquire a scene creation instruction; the device state acquisition unit is configured to acquire the current control state of each device in the target area to which the scene creation instruction is directed; and the scene creation unit is configured to create, according to the current control state of each device, the scene control information corresponding to the scene creation instruction in the target area.
In one embodiment, voice information is obtained; intention recognition is performed on the voice information to obtain the instruction intention corresponding to it; if the instruction intention is a scene creation intention, a scene identifier and a target area corresponding to the scene creation intention are identified based on keywords extracted from the voice information; and a scene creation instruction is generated from that scene identifier and target area.
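For illustration only, the mapping from a recognized intention to a scene creation instruction might be sketched as below. This is a minimal example under the assumption of a keyword-based recognizer; every function and field name is hypothetical rather than part of the claimed implementation.

```python
# Hypothetical sketch: turn extracted keywords plus recognized slots into a
# scene creation instruction. The keyword sets follow the examples in the text.
CREATE_WORDS = {"create", "new", "add"}
SCENE_WORDS = {"scene", "mode", "scenario"}

def build_scene_creation_instruction(keywords: list[str],
                                     scene_id: str | None,
                                     target_area: str | None) -> dict | None:
    """Return an instruction dict if the keywords express a scene creation intention."""
    if not (CREATE_WORDS & set(keywords) and SCENE_WORDS & set(keywords)):
        return None  # not a scene creation intention
    return {
        "type": "scene_create",
        "scene_id": scene_id,        # e.g. "dining scene"
        "target_area": target_area,  # e.g. "restaurant"
    }

print(build_scene_creation_instruction(
    ["create", "scene"], "dining scene", "restaurant"))
```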
In one embodiment, the voice information includes initial voice information and voice interaction information. The scene creation unit is further configured to acquire the initial voice information and perform intention recognition on it; if the instruction intention obtained in the current round of recognition is an incomplete intention, output interactive query information based on the current recognition result; and acquire the voice interaction information fed back in response to the interactive query information, performing intention recognition on the initial voice information together with the voice interaction information until a complete instruction intention is recognized.
In one embodiment, the current control state of each device includes the device's current action state and the corresponding control parameters. The scene creation unit is further configured to generate a device execution action set corresponding to the scene creation instruction according to the current action state and the corresponding control parameters of each device, and to generate, from that action set, the scene control information corresponding to the scene creation instruction in the target area.
In one embodiment, the scene creation instruction includes a corresponding scene identifier. The scene creation unit is further configured to generate a device state array corresponding to the scene identifier according to the current action state and the corresponding control parameters of each device, and, by calling a scene creation interface, store the scene identifier and the device state array in association, generating the scene control information corresponding to the scene identifier in the target area.
In one embodiment, the scene creation unit is further configured to acquire the device control recommendation information corresponding to each device matched with the scene creation instruction, and push the current control state of each device together with the device control recommendation information. If a selection instruction for the current control state is received, the scene control information corresponding to the scene creation instruction in the target area is created according to the current control state; if a selection instruction for the device control recommendation information is received, it is created according to the device control recommendation information.
In one embodiment, the apparatus further includes a scene recommendation unit, configured to obtain the historical control data corresponding to each device in the target area, perform preference analysis on the historical control data to obtain the device control information that meets the preference conditions, and determine that device control information as the device control recommendation information matched with the scene creation instruction.
In one embodiment, the apparatus further includes a scene execution unit, configured to acquire a scene control instruction and identify the scene identifier corresponding to it, and to acquire the scene linkage control information corresponding to that scene identifier, where the scene linkage control information includes a target action and control parameters for at least one device. Control instructions are then sent to each device according to the scene linkage control information, so as to control each device to execute its target action according to the control parameters.
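To make the execution flow concrete, the following is a minimal sketch assuming an in-memory lookup table and a placeholder transport; `scene_store` and `send_command` are assumptions for illustration, not the patented interfaces.

```python
# Assumed associated storage: scene identifier -> scene linkage control info.
scene_store = {
    "dining scene": [
        {"device": "restaurant_light", "action": "turn_on",
         "params": {"brightness": 100, "color_temp": 4000}},
        {"device": "restaurant_ac", "action": "turn_on",
         "params": {"temperature": 25}},
    ],
}

def send_command(device: str, action: str, params: dict) -> None:
    # Placeholder for the real transport (e.g. Wi-Fi, Bluetooth, or Zigbee).
    print(f"-> {device}: {action} {params}")

def execute_scene(scene_id: str) -> None:
    """Send each device its target action and control parameters."""
    for entry in scene_store.get(scene_id, []):
        send_command(entry["device"], entry["action"], entry["params"])

execute_scene("dining scene")
```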
In a third aspect, an embodiment of the present application further provides an electronic device, including one or more processors, a memory, and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium in which program code is stored, the program code being called by a processor to execute the method according to the first aspect.
According to the technical solution of the present application, a scene creation instruction is acquired; the current control state of each device in the target area to which the scene creation instruction is directed is acquired; and scene control information corresponding to the scene creation instruction in the target area is created according to the current control state of each device. This avoids the tedious operation of configuring the information of every device one by one when creating an intelligent control scene: the scene control information is created from the current control states of the devices in the target area, scenes are created simply and quickly through that information, and scene creation efficiency is improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a schematic application environment diagram illustrating a scene creation method according to an embodiment of the present application;
fig. 2 shows a schematic flowchart of a method for creating a smart home scene according to another embodiment of the present application;
fig. 3 shows a flowchart of a method for creating a smart home scene according to an embodiment of the present application;
fig. 4 shows a flowchart of a method for creating a smart home scene according to another embodiment of the present application;
fig. 5 shows a flowchart and a sequence diagram of a method for creating a smart home scene according to another embodiment of the present application;
fig. 6 is a flowchart illustrating a method for creating a smart home scene according to another embodiment of the present application;
fig. 7 is a flowchart illustrating a method for creating a smart home scene according to an embodiment of the present application;
fig. 8 is a schematic structural diagram illustrating an apparatus for creating a smart home scene according to an embodiment of the present application;
fig. 9 is a block diagram illustrating an electronic device according to an embodiment of the present application;
fig. 10 shows a block diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
With the improvement of people's living standards and the development of Internet of Things technology, smart devices have gradually entered countless households; in particular, their application in smart home scenarios is increasingly widespread. Smart home devices make a house more comfortable, more convenient, and safer. To give the intelligent control system richer content, it needs to be configured: for example, a smart home automation scene is created through the smart home application of a mobile terminal device, realizing scenes such as lights that turn on automatically when someone arrives home and turn off automatically when they leave. A smart home scene is a set of actions executed by one or more smart home devices. However, as the usage scenarios of smart home devices increase, so does the complexity of configuring the intelligent control system to create them, making the scene creation process cumbersome and inefficient.
For example, the existing technical solution for creating a smart home scene generally involves four parties: the user, the APP, the cloud backend, and the smart home devices. The user performs the scene creation operation through the APP. The APP provides the interaction for creating the smart home scene, including adding, editing, and deleting scene actions; editing room names, scene names, device names, and the like; and uploading the created scene to the cloud backend. The cloud backend stores the intelligent control scenes manually edited and created on the terminal APP and configures the configuration information of each smart home device in the scene area one by one. The smart home device side consists of the execution devices that realize the scene, such as lamps, fans, air conditioners, security systems, curtains, clothes-drying racks, water heaters, and door locks. In the related art, the user creates a scene in the APP, selects the actions of the smart home devices to be added to the scene, adds the scene actions for the devices in the area one by one, and finally saves and names the scene once the additions are complete.
In the traditional scene creation approach, an intelligent control scene is usually edited and created manually in a terminal APP; creating one takes 3 to 5 minutes on average, and an ordinary user can only perform the operation after spending time learning it and mastering certain skills. In the related art, therefore, the process of creating an intelligent control scene is cumbersome, its time and learning costs are high, and its efficiency is low.
The inventor of the present application finds that, as voice interaction technology matures, more and more smart home products integrate a voice interaction component, so that people can control smart home products and set their parameters through voice dialogue. In view of this, the inventor proposes the scene creation method of the embodiments of the present application. The method includes: acquiring a scene creation instruction; acquiring the current control state of each device in the target area to which the scene creation instruction is directed; and creating, according to the current control state of each device, the scene control information corresponding to the scene creation instruction in the target area. With this method, the scene control information is created from the current control states of the devices in the target area and scenes are created simply and quickly through that information, avoiding the tedious operation of configuring each device's information one by one and improving scene creation efficiency.
The scene creation method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. As shown in fig. 1, in the application environment, the terminal 10 communicates with the server 20 through a network, the server 20 communicates with the router 30 in the wireless network through the network, and the router 30 communicates with each intelligent device 40 in the wireless network through the network. Only a schematic view of the terminal 10 being a smartphone is shown in fig. 1.
The terminal 10 may be a device with a display function, or may be a device with a voice assistant, and the terminal 10 may be, but is not limited to, an intelligent control panel, a smart phone, a tablet computer, a notebook computer, a desktop computer, an intelligent watch, an intelligent wearable device, and the like. The server 20 may be an independent physical server, or may be a server cluster or a distributed system including a plurality of physical servers. The terminal 10 and the server 20 may be directly or indirectly connected through wired or wireless communication, and the present application is not limited thereto.
The smart device 40 may be one or more independent smart devices, or a smart device cluster formed by several smart devices. Specifically, a smart device may be a lamp, fan, air conditioner, security system, curtain, clothes-drying rack, water heater, door lock, or other smart device. The smart device 40 may be connected to the gateway 50 through communication modes such as Bluetooth, Wi-Fi (Wireless Fidelity), and Zigbee; the embodiment of the present application does not limit the connection mode between the smart device 40 and the gateway 50.
Illustratively, a smart phone acquires a scene creation instruction through an APP; acquiring the current control state of the intelligent device 40 in the target area to which the scene creation instruction is directed through a network; and creating scene control information corresponding to the scene creating instruction in the target area according to the current control state of the intelligent device 40. The smart phone acquires a scene control instruction through the APP, and identifies a scene identifier corresponding to the scene control instruction; acquiring scene linkage control information corresponding to the scene identification; the scene linkage control information includes target actions and control parameters for the at least one smart device 40; and sending a control instruction to the intelligent device 40 according to the scene linkage control information so as to control the intelligent device 40 to execute the corresponding target action according to the control parameters.
It should be understood that the smart device 40 may also acquire the scene creation instruction and, after acquiring it, execute the subsequent steps of acquiring the current control state of each device in the target area; that is, the smart device 40 may execute the steps of the scene creation method described above for the terminal 10.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 2, an embodiment of the present application provides a scene creation method, described here as applied to an electronic device, specifically an electronic device with a voice processing function; for example, the electronic device may be the terminal in the figure above. This embodiment describes the flow of steps on the electronic device side, and the method may include steps S110 to S130.
Step S110: a scene creation instruction is obtained.
The scene creation instruction is an instruction for creating a scene control scheme for each smart device in the target area; specifically, it may instruct the creation of a scene linkage control scheme, a scene automation control scheme, or the like, so that the corresponding smart devices can automatically execute the created smart scene. In a smart scene, executing a scene control command brings the states of a series of devices to the set expected states.
It is understood that the form of the scene creation instruction may include, but is not limited to, a voice instruction, an interface-triggered instruction, and the like. Specifically, the user may trigger the scene creation instruction for the target area with one tap in an application on an electronic device such as a smartphone, or with one key on an intelligent control panel. Further, the user can also issue voice information, and the scene creation instruction is obtained by recognizing that voice information.
In some embodiments, before acquiring the scene creation instruction, the electronic device may acquire voice information by a voice assistant; performing intention recognition on the voice information to obtain an instruction intention corresponding to the voice information; if the instruction intention is the scene creation intention, identifying a scene identifier and a target area corresponding to the scene creation intention on the basis of keywords extracted from the voice information; and generating a scene creating instruction according to the scene identification and the target area corresponding to the scene creating intention.
In the embodiment of the application, the electronic device may acquire the scene creation instruction by performing voice recognition by a voice assistant. The voice assistant can be composed of five modules, namely a voice recognition module, a voice synthesis module, a natural language understanding module, a dialogue management module and a natural language generation module.
Specifically, the speech recognition module completes the conversion from speech to text, turning the user's spoken voice into text; the natural language understanding module completes semantic parsing of the text, extracting key information and identifying the intention; the dialogue management module maintains the dialogue state, queries the database, manages context, and the like; the natural language generation module generates the corresponding natural-language text; and the speech synthesis module converts the generated text into speech. Intention recognition means recognizing the purpose that the user's voice information is meant to achieve, i.e., recognizing the user's intention.
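As a schematic sketch only, the five modules can be viewed as a pipeline like the one below; each stage is mocked so the flow runs end to end, and all names and return values are illustrative assumptions rather than the patented components.

```python
def asr(audio: bytes) -> str:                 # speech recognition: speech -> text
    return "create a dining scene at the restaurant"  # mocked result

def nlu(text: str) -> dict:                   # understanding: text -> intention + slots
    return {"intent": "scene_create", "scene": "dining scene", "area": "restaurant"}

def dialogue_manager(intent: dict) -> dict:   # maintain state, decide the next action
    missing = [k for k in ("scene", "area") if not intent.get(k)]
    return {"ask": missing} if missing else {"confirm": intent}

def nlg(action: dict) -> str:                 # generate the natural-language reply
    if "ask" in action:
        return f"Please tell me the {action['ask'][0]} of the scene."
    return f"Creating '{action['confirm']['scene']}' in the {action['confirm']['area']}."

def tts(text: str) -> str:                    # synthesis stub: text -> speech
    return f"<audio:{text}>"

print(tts(nlg(dialogue_manager(nlu(asr(b""))))))
```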
The voice assistant may or may not be connected to the Internet. It can acquire the speech received by a pickup device from an associated device through wireless communication technology or an SPI (serial communication interface), or from an associated cloud through wireless communication technology, convert the speech into executable binary information, analyze and process it, and complete the execution of tasks. It can also transmit the binary speech to an associated device through wireless communication technology or the SPI, or to the associated cloud through wireless communication technology; and it can convert binary speech into human voice for playback.
In some embodiments, the electronic device may perform intention recognition on the user's speech by acquiring it and extracting its keywords. The voice assistant can correctly understand the meaning of the acquired voice content, infer the relations within it, and correctly identify the semantic relations across the context, thereby recognizing the user's voice intention. The keywords may include the room, the scene name, the device name, and the like.
The specific process of extracting keywords from speech may be as follows. The audio is received by a microphone, processed by a noise reduction algorithm, recorded on the device, and then compressed and transmitted to the voice assistant. The analog audio signal is sampled at a specified frequency, converting the analog sound wave into digital data; the digital data is analyzed to locate where phonemes appear, and once the phonemes are identified, an algorithm determines the text corresponding to the audio. The text is then processed using natural language understanding techniques: part-of-speech tagging determines which words are adjectives, verbs, nouns, and so on, and these tags are combined with statistical machine learning models to infer the meaning of the sentence. Finally, the dialogue management module confirms whether the information provided by the user is complete; if not, multiple rounds of dialogue are conducted until a complete scene creation instruction is obtained. The noise reduction algorithm, the algorithm that determines the text corresponding to the audio, and the statistical machine learning model can be obtained from third-party experimental data; they may be pre-stored in the voice assistant or the electronic device, or obtained from an associated cloud or electronic device through wireless communication technology.
Specifically, if the instruction intention is a scene creation intention, the electronic device identifies the scene identifier and the target area corresponding to it based on the keywords extracted from the voice. The instruction intention may be determined to be a scene creation intention as follows: the voice assistant acquires the user's voice information and, by extracting its keywords, recognizes that they include a keyword with a creation meaning (such as "create", "new", or "add") and a keyword with a scene meaning (such as "scene", "mode", or "scenario"); it then confirms that the instruction intention of the voice is a scene creation intention, identifies the scene identifier and target area corresponding to that intention based on the extracted keywords, and generates the scene creation instruction from the scene identifier and target area.
The scene identifier is an identifier used to characterize the scene to be created; specifically, it may be the name of the scene to be created. The target area is a specific spatial area, i.e., the area of the scene to be created; for example, it may be a "bedroom", "living room", "kitchen", "bathroom", or "children's room", and it may include one room area or several. Identifying the target area corresponding to the scene creation intention based on the keywords extracted from the voice may mean detecting, among the extracted keywords, a keyword with a regional meaning and taking it as the target area information. For example, the user says "create a scene of the home" or "create a scene of the current home" to the voice assistant; through voice recognition, the assistant detects the sentence keywords, recognizes from the co-occurring keywords "create", "scene", and "home" that the user intends to create a scene for the current home, and obtains the home as the target area. The target area may also be obtained by detecting the keyword "at" in the voice and capturing the words after it: if the user says "create a scene at the restaurant", the voice assistant obtains the restaurant as the target area by detecting the word "restaurant" after the keyword "at".
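Purely as an illustration of the two cues just described, a simplified slot extractor might look like the sketch below; the area vocabulary and the single-word matching are assumptions made for brevity.

```python
# Assumed vocabulary of keywords with regional meaning.
KNOWN_AREAS = {"home", "bedroom", "kitchen", "bathroom", "restaurant"}

def extract_target_area(words: list[str]) -> str | None:
    # Cue 1: a keyword with regional meaning appears directly in the utterance.
    for w in words:
        if w in KNOWN_AREAS:
            return w
    # Cue 2: capture the word following the keyword "at" (articles skipped).
    if "at" in words:
        tail = [w for w in words[words.index("at") + 1:] if w not in {"the", "a"}]
        if tail:
            return tail[0]
    return None

print(extract_target_area("create a scene at the restaurant".split()))  # restaurant
```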
In some embodiments, the obtained voice information may include initial voice information and voice interaction information. The initial voice information refers to the voice information of the user acquired for the first time; the voice interaction information refers to voice interaction information acquired by the electronic equipment and a user through multiple rounds of voice interaction.
The electronic device performs intention recognition on the voice information to obtain the corresponding instruction intention. Specifically, the electronic device may acquire the initial voice information and perform intention recognition on it. If the instruction intention obtained from the initial voice is an incomplete intention, the electronic device outputs interactive query information based on the current recognition result; it then acquires the voice interaction information fed back in response to the interactive query information and performs intention recognition on the initial voice information together with the voice interaction information until a complete instruction intention is recognized.
It should be noted that a complete scene creation intention is one from which both the scene identifier and the target area of the created scene can be obtained.
Specifically, an incomplete intention fails to identify the target area of the created scene or the name of the created scene (e.g., its scene identifier). If the target area cannot be identified, interactive query information asking for the target area is output; if the name cannot be identified, the voice assistant outputs interactive query information asking for the name of the created scene.
Further, the electronic device acquires the voice interaction information fed back in response to the interactive query information and performs intention recognition on the initial voice information together with the voice interaction information until a complete instruction intention is recognized, i.e., one that includes both the target area and the scene identifier of the created scene.
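A minimal sketch of this multi-turn loop is given below, assuming the interactive query step is abstracted as a text prompt; `ask_user` and the slot names are hypothetical.

```python
def ask_user(question: str) -> str:
    # Stands in for outputting interactive query information and receiving
    # the user's voice interaction information.
    return input(question + " ")

def complete_intent(intent: dict) -> dict:
    """Keep querying until the intention holds both a target area and a scene name."""
    if not intent.get("area"):
        intent["area"] = ask_user("Which room should the scene be created in?")
    if not intent.get("scene"):
        intent["scene"] = ask_user("What should the scene be called?")
    return intent

# e.g. complete_intent({"intent": "scene_create", "area": None, "scene": None})
```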
For example, referring to fig. 3, the user says "create a scene" to the voice assistant. By extracting the keywords "create" and "scene", the assistant recognizes the user's intention to create a smart home scene. Since the instruction intention obtained in this round is incomplete (it includes neither the target area nor the scene identifier), the assistant outputs interactive query information based on the current recognition result (such as "which room?" and "what is the scene name?"). In some embodiments, the assistant's screen also displays all rooms of the current home, and the user can reply with one or more room names. The user answers "restaurant" and "dining scene"; the assistant acquires this fed-back voice interaction information and performs intention recognition on the initial voice information and the voice interaction information until the complete instruction intention is recognized (create a smart home scene named "dining scene" in the restaurant).
For example, referring to fig. 4, the user says "create a scene of the home" or "create a scene of the current home" to the voice assistant. By detecting the keywords "create", "home", and "scene" in the voice, the assistant determines that the user intends to create a smart home scene for the home. Since the instruction intention obtained in this round is incomplete (it does not include the scene identifier), the assistant outputs interactive query information based on the current recognition result ("what is the scene name?"). The user answers "dining scene"; the assistant acquires the fed-back voice interaction information and performs intention recognition on the initial voice information and the voice interaction information until the complete instruction intention is recognized (create a smart home scene named "dining scene" for the home).
Illustratively, referring to fig. 5, the user says "create a restaurant scene" to the voice assistant. By detecting the keywords "create", "restaurant", and "scene", the assistant determines that the user intends to create a smart home scene in the "restaurant" area. Since the instruction intention obtained in this round is incomplete (it does not include the scene identifier), the assistant outputs interactive query information based on the current recognition result ("what is the scene name?"). The user answers "dining scene"; the assistant acquires the fed-back voice interaction information and performs intention recognition on the initial voice information and the voice interaction information until the complete instruction intention is recognized (create a smart home scene named "dining scene" in the restaurant).
In some embodiments, acquiring the scene creation instruction may mean acquiring one input by the user on the electronic device; for example, the user manually enters, in the electronic device's APP or on an intelligent control panel, a scene creation instruction that includes the name (scene identifier) of the created scene and the target area.
In some embodiments, if the user manually inputs a scene creation instruction with an incomplete intention in the APP or on the intelligent control panel, the electronic device may generate intention prompt information prompting the user to input a scene creation instruction that includes the name of the created scene and the target area. The prompt may be a voice message, an indicator light, or an interface message. When the electronic device determines from the keywords of the incomplete instruction that it lacks the name of the created scene, it generates a prompt for that name; when it determines that the instruction lacks the target area of the created scene, it generates a prompt for the target area.
Step S120: and acquiring the current control state of each device in the target area to which the scene creation instruction is directed.
The current control state of each device is its current state information. For example, taking the device to be a smart lamp: if the lamp is currently on with 80% brightness, its current control state is "lamp on, brightness 80%".
In the embodiment of the present application, the scene identifier and the target area corresponding to the scene creation intention are identified based on the keywords extracted from the voice information; the device list in the target area and the state of each device in that list are queried according to the target area of the created scene; and the current control state of each device in the target area corresponding to the scene creation instruction is thereby obtained.
After the target area of the created scene is obtained, the device list in the target area and the device state of each device in the list are queried. They may be obtained from an associated cloud through wireless communication technology; the device states in the target area may also be obtained directly through wireless communication technology, with the device list generated accordingly; or they may be obtained from an associated device through wireless communication technology or an SPI (serial communication interface). The device state of each device in the list includes its current working state and control parameters, its historical working states and control parameters, and the working state and control parameters it most frequently keeps, obtained from third-party data.
In some embodiments, the device state of each device in the device list may come from: the user first adjusting the smart home devices to the desired state through manual control, voice control, or the APP; the state each device most frequently stays in, analyzed from its historical device states and the user's living habits; or the most frequently used state of each device obtained from third-party data.
For example, using the scene creation flow shown in fig. 3 to obtain the current control state of each device in the target area: when the target area of the created smart home scene is the restaurant, the voice assistant directly queries the current state of each smart home device in that area through wireless communication technology (Wi-Fi, Bluetooth, Zigbee, and the like), generates the device list for the area, and obtains the device list together with the current state of each device in it (restaurant light = on, brightness = 100%, color temperature = 4000 K; restaurant air conditioner = 25 °C).
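The query step can be pictured with the following sketch, where the registry and the mocked states (matching the worked example above) are assumptions; a real implementation would query the devices over Wi-Fi, Bluetooth, or Zigbee.

```python
AREA_DEVICES = {  # assumed registry: target area -> device ids
    "restaurant": ["restaurant_light", "restaurant_ac"],
}

def query_device_state(device_id: str) -> dict:
    # Mocked current states mirroring the example in the text.
    states = {
        "restaurant_light": {"power": "on", "brightness": 100, "color_temp": 4000},
        "restaurant_ac": {"power": "on", "temperature": 25},
    }
    return states[device_id]

def snapshot_area(area: str) -> dict:
    """Return {device id: current control state} for every device in the area."""
    return {d: query_device_state(d) for d in AREA_DEVICES.get(area, [])}

print(snapshot_area("restaurant"))
```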
Step S130: and creating scene control information corresponding to the scene creation instruction in the target area according to the current control state of each device.
It can be understood that the scene control information may be a scene control scheme corresponding to each device in the target area, and specifically may be a scene linkage control scheme, an automatic scene control scheme, and the like, so as to implement intelligent control on each device in the target area.
In the embodiment of the present application, the current control state of each device includes the device's current action state and the corresponding control parameters; the current action state may specifically be the device's current operating state. Creating the scene control information corresponding to the scene creation instruction in the target area according to the current control state of each device may specifically mean generating a device execution action set corresponding to the scene creation instruction according to the current action state and the corresponding control parameters of each device, and then generating, from that action set, the scene control information corresponding to the scene creation instruction in the target area.
It should be noted that the scene creation instruction includes a corresponding scene identifier. Generating the device execution action set corresponding to the scene creation instruction according to each device's current action state and control parameters may mean generating a device state array corresponding to the scene identifier from those states and parameters; generating the scene control information from the device execution action set may mean calling a scene creation interface to store the scene identifier and the device state array in association, thereby generating the scene control information corresponding to the scene identifier in the target area.
For example, a user may wish to create a "dining scene" in which the device states of the target area are: restaurant lights on, brightness 100%, color temperature 4000 K, and restaurant air conditioner at 25 °C. Before creating the scene, the user manually or through the APP puts the smart home devices into the desired state, i.e., turns on the restaurant lamp, adjusts its brightness to 100% and color temperature to 4000 K, and sets the restaurant air conditioner to 25 °C. With the scene creation flow shown in fig. 3, the target area is obtained as the restaurant, the scene identifier of the created smart home scene as "dining scene", and the current state of each device in the area's device list is acquired accordingly (restaurant lamp = on, brightness = 100%, color temperature = 4000 K; restaurant air conditioner = 25 °C).
In some embodiments, calling the scene creation interface to store the scene identifier and the device state array of the smart home scene in association works as follows. Through the scene creation interface, the scene identifier, the device state array of each device in the device list, and the device list itself are passed into a preset function, algorithm, or model for creating the smart home scene and run; the result is stored in a storage unit of the electronic device, or transmitted through wireless communication technology to the associated cloud or electronic device for storage. That is, by calling the scene creation interface, the scene identifier and the device state array are stored in association to generate the scene control information corresponding to the scene identifier in the target area. The preset functions (such as an API), algorithms (such as association algorithms), or models (such as convolutional neural network models) for creating the smart home scene can be obtained from third-party experimental data; they may be pre-stored in a storage unit of the electronic device, or obtained from an associated cloud or electronic device through an SPI or wireless communication technology.
The device state array may be generated from the current state of each device in the device list of the target area; that is, the array is built from the device list queried for the target area and the state of each device in the list at the current moment. The control parameters of each device are elements of the device state array. It should be understood that one device state array corresponds to one scene identifier.
In some embodiments, after generating the scene control information corresponding to the scene identifier in the target area, the electronic device may generate a prompt message for prompting the user that the scene control information is created. The prompt message can be a voice message, an indicator light message or an interface display message.
For example, referring to fig. 4, the user says "create a scene of the home" or "create a scene of the current home" to the voice assistant, and the assistant recognizes the user's intention to create a smart home scene for the current home by detecting the co-occurring keywords "create", "current", "home", and "scene". Since the instruction intention obtained in this round of recognition is incomplete (for example, it does not include the scene identifier of the created scene), the assistant outputs interactive query information based on the current recognition result (the scene name). The user answers "dining scene"; by acquiring the voice interaction information fed back to the interactive query (for example, "dining scene"), the assistant converts the user's answer into text, and the text "dining scene" becomes the name of the created smart home scene, i.e., its scene identifier (for example, N1). The assistant performs intention recognition on the initial voice information and the voice interaction information until a complete instruction intention is recognized (for example, create a scene named "dining scene" for the home).
The electronic device obtains the device list and device states of all rooms of the current home (e.g., device state 1: restaurant lights on; device state 2: restaurant light brightness 100%; device state 3: restaurant light color temperature 4000 K; device state 4: restaurant air conditioner temperature 25 °C). The electronic device generates a device state array Buf1 from the current state of each device in the device list of the target area (e.g., the home), where each element of Buf1 corresponds in order to device states 1 to 4. The device state array Buf1 and the scene identifier N1 of the smart home scene are taken as the parameters of the API function that creates the smart home scene; running that API function stores the scene identifier N1 and the device state array Buf1 in association and generates the scene control information corresponding to the scene identifier in the target area. After the scene control information is created, the voice assistant generates a voice prompt indicating that creation is complete, for example, "the dining scene was created successfully, you can start using it".
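The Buf1/N1 worked example can be restated in code as below; `create_scene` and the in-memory `scene_table` are assumed stand-ins for the scene creation API function and its associated storage.

```python
scene_table: dict[str, list] = {}  # assumed associated storage

def create_scene(scene_id: str, device_states: list) -> str:
    """Store the scene identifier and device state array in association."""
    scene_table[scene_id] = device_states
    return f"The {scene_id} was created successfully, you can start using it."

N1 = "dining scene"
Buf1 = [  # elements correspond in order to device states 1-4
    ("restaurant_light", "power", "on"),
    ("restaurant_light", "brightness", 100),
    ("restaurant_light", "color_temp", 4000),
    ("restaurant_ac", "temperature", 25),
]

print(create_scene(N1, Buf1))
```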
According to the technical solution provided by this embodiment, the electronic device acquires a scene creation instruction, acquires the current control state of each device in the target area to which the instruction is directed, and creates the scene control information corresponding to the instruction in the target area according to those states. With this method, the scene is created from the current states of the devices in the target area, which is simple and easy to operate: the user can complete the device configuration of a scene without spending time learning complicated creation steps or mastering particular skills, which greatly reduces the operational complexity of scene creation and lowers the threshold for using smart devices in linkage. It also saves time: scene creation work that originally took several minutes in the APP can be completed within ten seconds through voice dialogue, improving scene creation efficiency.
Referring to fig. 6, fig. 6 is a flowchart of a scene creation method according to another embodiment of the present application, described from the device side; the method may include steps S210 to S260.
Step S210: a scene creation instruction is obtained.
Step S220: and acquiring the current control state of each device in the target area to which the scene creating instruction aims.
For the specific implementation process of step S210 to step S220, reference may be made to the foregoing detailed description of step S110 to step S120, which is not repeated herein.
Step S230: and acquiring the device control recommendation information corresponding to each device matched with the scene creation instruction.
In the embodiment of the present application, obtaining the device control recommendation information corresponding to each device matched with the scene creation instruction may include: obtaining the historical control data of each device in the target area; performing preference analysis on the historical control data to obtain the device control information that meets the preference conditions; and determining that device control information as the device control recommendation information matched with the scene creation instruction.
The device control recommendation information may be obtained by analyzing the historical states of each device in the target area and taking the states the user most frequently sets as the recommended control states; the states each device in the target area is most often set to may also be obtained from third-party data as the recommended control states. A device's recommended control state includes its recommended action and the control parameters corresponding to that action.
In some embodiments, the device control recommendation information may include a recommended device list for the created scene and the recommended control state of each recommended device. The device list of the created scene is the list of devices under linkage control in the scene, for example lamps, air conditioners, and fans. The recommended control state of a device may be the state the user most frequently uses, analyzed from the device's historical states and the user's living habits, or the most frequently used state obtained from third-party data. For example, recommended control states obtained from third-party data might be: lamp brightness 90% and color temperature 4000 K; restaurant air conditioner 25 °C.
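A hedged sketch of the preference analysis follows: for each device, the most frequently occurring state in its historical control data is taken as the recommendation. The history data shown is fabricated for illustration only.

```python
from collections import Counter

history = {  # assumed historical control data per device
    "restaurant_light": [{"brightness": 90, "color_temp": 4000},
                         {"brightness": 90, "color_temp": 4000},
                         {"brightness": 100, "color_temp": 3000}],
    "restaurant_ac": [{"temperature": 25}, {"temperature": 25}, {"temperature": 26}],
}

def recommend(history: dict) -> dict:
    """Return the most frequent historical state of each device."""
    rec = {}
    for device, states in history.items():
        counted = Counter(tuple(sorted(s.items())) for s in states)
        rec[device] = dict(counted.most_common(1)[0][0])
    return rec

print(recommend(history))  # light: 90% / 4000 K; air conditioner: 25 degrees
```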
Step S240: and pushing the current control state of each device and the device control recommendation information.
In the embodiment of the present application, pushing the current control state of each device and the device control recommendation information may take several forms: the voice assistant displays a push interface on its screen; the voice assistant pushes them through voice prompts; or the voice assistant pushes them to the associated cloud or electronic device through wireless communication technology.
Step S250: if a selection instruction directed at the current control state is received, creating scene control information corresponding to the scene creation instruction in the target area according to the current control state.
In the embodiment of the application, the user issues a selection instruction in response to the device pushing the current control state of each device and the device control recommendation information. Specifically, the selection instruction may be a voice instruction: after the device's voice assistant obtains the user's voice instruction, it extracts keywords from it to obtain the user's selection. The user may also issue the selection instruction through the device's touch screen or keys, or send it to an associated electronic device, an associated APP or an associated cloud, from which the device obtains the selection instruction through a wireless communication technology or a serial communication interface.
The selection instruction may be a selection instruction for a current control state of the device, or may be a selection instruction for the device control recommendation information.
In some embodiments, if a selection instruction for the current control state is received, a device state array corresponding to the scene identifier is generated according to the current action state of each device and its corresponding control parameters; then, by calling a scene creation interface, the scene identifier and the device state array are stored in association, generating the scene control information corresponding to the scene identifier in the target area.
In the embodiment of the application, generating the scene control information corresponding to the scene identifier in the target area, by calling the scene creation interface and storing the scene identifier and the device state array in association, may proceed as follows: the device state array is generated from the target state of each device in the device list of the target area; the device state array and the smart home scene identifier are passed as parameters to the function for creating a smart home scene; and running that function stores the smart home scene name in association with the target state of each device in the device list, completing the creation of the smart home scene. The target state of a device is the working state required of that device in the target area when the scene is created, and comprises the device's target action and the control parameters corresponding to that action. The control parameters of each device are elements of the device state array, and one device state array corresponds to one scene identifier.
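By way of illustration only, a minimal sketch of such a scene creation interface might look as follows; the in-memory store and the names are assumptions, and a real embodiment might persist the association locally or in an associated cloud:

```python
# In-memory stand-in for the associated storage of scene identifiers
# and device state arrays.
SCENE_STORE = {}

def create_scene(scene_id, target_area, device_state_array):
    """Store the scene identifier in association with its device state array.

    Each element of device_state_array is assumed to be a
    (device_id, target_action, control_params) triple, i.e. the target
    state of one device in the device list of the target area.
    """
    SCENE_STORE[(target_area, scene_id)] = list(device_state_array)

# Example: creating a "dining scene" for the restaurant.
buf1 = [
    ("restaurant_lamp", "on", {"brightness": 100, "color_temp": 4000}),
    ("restaurant_ac", "cool", {"temperature": 25}),
]
create_scene("dining scene", "restaurant", buf1)
```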
Step S260: if a selection instruction directed at the device control recommendation information is received, creating scene control information corresponding to the scene creation instruction in the target area according to the device control recommendation information.
In the embodiment of the application, if a selection instruction directed at the device control recommendation information is received, a device state array corresponding to the scene identifier is generated according to the recommended action state of each device and its corresponding control parameters; the scene identifier and the device state array are then stored in association by calling the scene creation interface, generating the scene control information corresponding to the scene identifier in the target area.
Illustratively, a user wishes to create a "dining scene" that requires the restaurant lamp to be on, with 100% brightness and a 4000K color temperature, and the restaurant air-conditioning temperature to be 25 °C. Before creating the scene, the user adjusts the smart home devices, manually or through an APP, to the desired preset state of the scene: the restaurant lamp is turned on, its brightness is adjusted to 100% and its color temperature to 4000K, and the restaurant air conditioner is set to 25 °C.
The user says 'create a dining scene in the restaurant' to the voice assistant. By extracting keywords from the voice, the voice assistant obtains a scene creation instruction (for example: the intention is to create a scene, the target area of the created scene is the restaurant, and the scene identifier is 'dining scene'). It acquires the current control state of each device in the target area to which the instruction is directed (for example: restaurant lamp = on, lamp brightness = 100%, color temperature = 4000K; restaurant air conditioner = 25 °C), and at the same time acquires the device control recommendation information corresponding to each device matched with the instruction (for example: restaurant lamp = on, lamp brightness = 90%, color temperature = 4000K; restaurant air conditioner = 25 °C). The voice assistant then pushes the current control state of each device and the device control recommendation information through a voice prompt; illustratively, it generates the prompt 'Use the current device states of the restaurant as the execution actions of the scene?' to ask whether the smart home scene should be created from the current state of each device, and the user answers 'yes' or 'no' on that basis.
As one implementation, the user issues a 'yes' selection instruction based on the voice prompt. The voice assistant acquires the current state and control parameters of each device (for example, device state 1: restaurant lamp on; device state 2: restaurant lamp brightness 100%; device state 3: restaurant lamp color temperature 4000K; device state 4: restaurant air-conditioning temperature 25 °C); generates a device state array Buf1 from the current states of the devices in the device list of the target area (the restaurant), the elements of Buf1 corresponding in order to device states 1 to 4; passes the device state array Buf1 and the scene identifier of the smart home scene as parameters to the API function for creating a smart home scene; and runs that function to store the scene identifier in association with the device state array Buf1, generating the scene control information corresponding to the scene identifier in the target area. After the scene control information is created, the voice assistant generates a voice prompt indicating that creation is complete, for example: 'The dining scene has been created successfully; you can start using it.'
In another implementation, the user issues a 'no' selection instruction based on the voice prompt. The voice assistant extracts the keyword of the voice selection instruction and confirms that the user wants the scene created from the device control recommendation information. In some embodiments the voice assistant issues a prompt to this effect, for example: 'OK, a scene will be created for you automatically according to your living habits.' The voice assistant then generates the device state array Buf1 from the device control recommendation information (for example, device state 1: restaurant lamp on; device state 2: restaurant lamp brightness 90%; device state 3: restaurant lamp color temperature 4000K; device state 4: restaurant air-conditioning temperature 25 °C), the elements of Buf1 corresponding in order to device states 1 to 4; passes Buf1 and the scene identifier of the smart home scene as parameters to the API function for creating a smart home scene; and runs that function to store the scene identifier in association with Buf1, generating the scene control information corresponding to the scene identifier in the target area. After the scene control information is created, the voice assistant generates a voice prompt indicating success, for example: 'The dining scene has been created successfully; you can start using it.'
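Continuing the create_scene sketch above, the 'yes'/'no' branch of this dialog might be expressed as follows; the helper name and data shapes are illustrative assumptions:

```python
def on_selection(answer, scene_id, target_area,
                 current_states, recommended_states):
    """Create the scene from whichever states the user selected.

    current_states and recommended_states are assumed to be dicts of
    device_id -> (action, control_params).
    """
    # 'yes': use the devices' current control states; 'no': fall back
    # to the control states recommended from the user's habits.
    chosen = current_states if answer == "yes" else recommended_states
    buf1 = [(dev, action, params)              # device state array Buf1
            for dev, (action, params) in chosen.items()]
    create_scene(scene_id, target_area, buf1)  # associate and store
    return "The %s has been created successfully." % scene_id
```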
According to the technical scheme provided by this embodiment, a scene creation instruction is obtained; the current control state of each device in the target area to which the instruction is directed is acquired, together with the device control recommendation information corresponding to each device matched with the instruction; both are pushed to the user; if a selection instruction directed at the current control state is received, the scene control information corresponding to the scene creation instruction in the target area is created from the current control state; and if a selection instruction directed at the device control recommendation information is received, it is created from the recommendation information instead. The scene is thus created through voice conversation, which is simple and easy to operate, lowers the threshold for linked use of intelligent devices in the created scene, saves scene-creation time and improves scene-creation efficiency; at the same time, the working states of the devices in the created scene are chosen by the user, which improves the user experience.
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a scene creation method according to another embodiment of the present application, where this embodiment may be applied to a device terminal having a voice assistant and an intelligent control panel. The method may include steps S310 to S360.
Step S310: a scene creation instruction is obtained.
Step S320: acquiring the current control state of each device in the target area to which the scene creation instruction is directed.
Step S330: creating scene control information corresponding to the scene creation instruction in the target area according to the current control state of each device.
For the specific implementation process of step S310 to step S330, reference may be made to the detailed description of step S110 to step S130, which is not repeated herein.
It should be understood that the device terminal may set a scene of the target area through the APP or the intelligent control panel, and create the scene control information corresponding to the scene creation instruction in the target area.
Step S340: acquiring a scene control instruction, and identifying the scene identifier corresponding to the scene control instruction.
The scene control instruction is an instruction to start a created scene; specifically, it may be an instruction that causes the devices in the created scene to execute the scene. One scene control instruction may correspond to one or more scene identifiers.
Specifically, the scene control instruction may be obtained, and the corresponding scene identifier identified, as follows: the voice assistant receives the user's voice instruction and extracts the keywords in it; if a keyword corresponds to one or more scene identifiers, the voice instruction is determined to be a scene control instruction.
The scene identifier or identifiers corresponding to a keyword may be pre-stored in a storage unit of the device, obtained from an associated cloud through a wireless communication technology, created autonomously by the user, or created in advance from third-party data. Each scene identifier is stored in association with a corresponding device state array and a corresponding target area; specifically, each device state array contains the action and control parameters of each device.
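As a toy sketch of this keyword matching (the keyword table and the function name are assumptions for illustration):

```python
# Illustrative keyword -> scene identifier table; in the embodiment this
# mapping may be pre-stored on the device, fetched from an associated
# cloud, created by the user, or derived from third-party data.
SCENE_KEYWORDS = {"dining": "dining scene", "sleep": "sleep scene"}

def parse_scene_control(utterance):
    """Return the scene identifiers a voice instruction refers to, if any."""
    # A scene control instruction is assumed to contain a word with
    # scene meaning plus a keyword mapped to a scene identifier.
    if not any(word in utterance for word in ("scene", "mode")):
        return []
    return [scene_id for keyword, scene_id in SCENE_KEYWORDS.items()
            if keyword in utterance]

print(parse_scene_control("start the dining scene"))  # ['dining scene']
```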
Step S350: acquiring the scene linkage control information corresponding to the scene identifier.
In the embodiment of the present application, the scene linkage control information includes a target action and control parameters for at least one device. The target action of a device is the action of that device corresponding to the scene identifier; the target control parameters of the device are the control parameters corresponding to that action.
It should be understood that the scene linkage control information refers to the element information of the device state array corresponding to the scene identifier, that is, the action states and control parameters of all devices in the array. One scene identifier corresponds to one piece of scene linkage control information, and the two are stored in association. The scene linkage control information corresponding to a scene identifier may be pre-stored in a storage unit of the device, obtained from an associated cloud through a wireless communication technology, created autonomously by the user, or created in advance from third-party data; it may also be the scene control information generated, as described above, by calling the scene creation interface to store the scene identifier and the device state array in association in the target area.
Step S360: sending a control instruction to each device according to the scene linkage control information, so as to control each device to execute the corresponding target action according to the control parameters.
In the embodiment of the present application, sending a control instruction to each device according to the scene linkage control information may proceed as follows: the element information of the device state array associated with the scene linkage control information is obtained, yielding the target action state and control parameters of each device; a control instruction for controlling the working state of each device in the target area corresponding to the scene identifier is generated from these; and the corresponding control instruction is sent to each device so that each device operates in its corresponding target state. The target state refers to the target action state and control parameters with which a device operates, as contained in the scene control information corresponding to the scene identifier.
In some embodiments, the corresponding control instruction may be sent to each device directly through a wireless communication technology or an SPI. Alternatively, an instruction to execute the scene corresponding to the scene identifier is sent to the associated cloud through the wireless communication technology; the cloud backend looks up the scene linkage control information stored for that scene, queries the element information of the associated device state array in turn, and sends the control instructions one by one to the devices in the target area corresponding to the scene identifier, so that each device in the target area operates in its corresponding target state.
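Continuing the sketches above, scene execution might look like the following; the send callback stands in for whichever transport (wireless technology, SPI, or a cloud round-trip) an embodiment actually uses:

```python
def execute_scene(scene_id, target_area, send):
    """Send one control instruction per element of the device state array."""
    linkage = SCENE_STORE.get((target_area, scene_id))
    if linkage is None:
        return "execution failed"
    for device_id, action, params in linkage:
        # Drive each device to its target action with the control
        # parameters recorded when the scene was created.
        send(device_id, action, params)
    return "scene executed successfully"

# Example transport stub: print instead of actually transmitting.
print(execute_scene("dining scene", "restaurant",
                    lambda dev, act, p: print("->", dev, act, p)))
```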
In some embodiments, after each device in the target area is operating in its corresponding target state, the voice assistant generates prompt information indicating that the scene corresponding to the scene identifier was executed successfully. The prompt may be a voice prompt, an indicator light prompt or an interface display prompt, for example the voice prompt 'scene executed successfully'. If some device in the target area fails to operate in its corresponding target state, the voice assistant may generate prompt information indicating that execution of the scene corresponding to the scene identifier failed, likewise as a voice prompt, an indicator light prompt or an interface display prompt, for example the voice prompt 'execution failed'.
For example, suppose the 'dining scene' has been created in the manner shown in fig. 5 and the user says 'dining scene' to the voice assistant. The voice assistant detects that the utterance contains a word with scene meaning, such as 'scene' or 'mode', together with the keyword 'dining'; it identifies the scene identifier of the 'dining scene' and determines that the user intends to execute it. The voice assistant sends an instruction to execute the 'dining scene' to the associated cloud backend; on receiving the instruction, the backend finds the scene whose identifier is 'dining scene' in the database and obtains the scene linkage control information corresponding to that identifier. The cloud then sends device-state-setting instructions to the smart home devices in turn, according to the element information of the device state array, that is, the states of the devices in the target area under the dining scene, so that the devices operate in their target states. Once they do, the voice assistant outputs the voice prompt 'scene executed successfully'; otherwise it outputs 'execution failed'.
According to the technical scheme provided by this embodiment, a scene creation instruction is obtained; the current control state of each device in the target area to which the instruction is directed is acquired; scene control information corresponding to the scene creation instruction in the target area is created from the current control state of each device; a scene control instruction is then obtained and the corresponding scene identifier identified; the scene linkage control information corresponding to the scene identifier, comprising a target action and control parameters for at least one device, is acquired; and a control instruction is sent to each device according to the scene linkage control information, controlling each device to execute its target action according to the control parameters. The smart home scene is thus created and started simply and quickly through voice interaction, which reduces the difficulty of creating and using the smart home, shortens the time the user spends creating and starting smart home scenes, spares the user from spending a great deal of time learning special skills to do so, and improves the user experience.
With this method, scenes are created and started simply and quickly through voice interaction; the difficulty of creating intelligent scenes and of linked use of intelligent devices is reduced, the efficiency of scene creation is improved, and the time the user spends creating and starting intelligent scenes is shortened. At the same time, the user can create and use intelligent scenes without spending a great deal of time learning special skills, which improves the user experience.
Referring to fig. 8, which illustrates a scene creation apparatus according to an embodiment of the present application, the apparatus 400 includes an instruction obtaining unit 410, a device state obtaining unit 420 and a scene creating unit 430. Specifically, the instruction obtaining unit 410 is configured to obtain a scene creation instruction; the device state obtaining unit 420 is configured to obtain the current control state of each device in the target area to which the scene creation instruction is directed; and the scene creating unit 430 is configured to create, according to the current control state of each device, the scene control information corresponding to the scene creation instruction in the target area.
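Purely as an illustration of how the three units divide the work, a skeleton of apparatus 400 might look like this; the class and method names are assumptions:

```python
class SceneCreationApparatus:
    """Skeleton mirroring units 410, 420 and 430 of apparatus 400."""

    def obtain_instruction(self):
        # Instruction obtaining unit 410: e.g. parse a voice instruction
        # into a scene identifier and a target area.
        raise NotImplementedError

    def obtain_device_states(self, target_area):
        # Device state obtaining unit 420: current action state and
        # control parameters of each device in the target area.
        raise NotImplementedError

    def create_scene_control_info(self, scene_id, target_area, states):
        # Scene creating unit 430: associate the scene identifier with
        # the device state array, as in the create_scene sketch above.
        raise NotImplementedError
```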
In one embodiment, voice information is obtained; intention recognition is performed on the voice information to obtain the instruction intention corresponding to it; if the instruction intention is a scene creation intention, the scene identifier and target area corresponding to the scene creation intention are identified based on keywords extracted from the voice information; and a scene creation instruction is generated from that scene identifier and target area.
In one embodiment, the voice information comprises initial voice information and voice interaction information. The scene creating unit is further configured to obtain the initial voice information and perform intention recognition on it; if the instruction intention obtained in the current round of recognition is an incomplete intention, output interactive query information based on the current recognition result; and obtain the voice interaction information fed back in response to the query, performing intention recognition on the initial voice information and the voice interaction information together until a complete instruction intention is recognized.
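A toy sketch of this multi-turn recognition loop follows; the parsing is deliberately naive, and the ask/listen callbacks are assumptions standing in for speech synthesis and speech recognition:

```python
def recognize_intent(utterance, ask, listen,
                     known_areas=("restaurant", "bedroom")):
    """Keep querying the user until the scene creation intent is complete."""
    def parse(text, intent):
        if "create" in text and "scene" in text:
            intent["create_scene"] = True
        for area in known_areas:
            if area in text:
                intent["target_area"] = area
        return intent

    intent = parse(utterance, {"create_scene": False, "target_area": None})
    while intent["create_scene"] and intent["target_area"] is None:
        ask("Which area should the scene be created in?")  # interactive query
        intent = parse(listen(), intent)                   # voice interaction info
    return intent

# Example: the first utterance omits the area; the feedback supplies it.
replies = iter(["the restaurant"])
print(recognize_intent("create a dining scene", print, lambda: next(replies)))
```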
In one embodiment, the current control state of each device includes the device's current action state and the corresponding control parameters. The scene creating unit is further configured to generate a device execution action set corresponding to the scene creation instruction according to the current action state and corresponding control parameters of each device, and to generate, from the device execution action set, the scene control information corresponding to the scene creation instruction in the target area.
In one embodiment, the scene creation instruction includes a corresponding scene identifier; the scene creating unit is further configured to generate a device state array corresponding to the scene identifier according to the current action state and corresponding control parameters of each device, and, by calling a scene creation interface, store the scene identifier and the device state array in association, generating the scene control information corresponding to the scene identifier in the target area.
In one embodiment, the scene creating unit is further configured to obtain the device control recommendation information corresponding to each device matched with the scene creation instruction; push the current control state of each device and the device control recommendation information; if a selection instruction directed at the current control state is received, create the scene control information corresponding to the scene creation instruction in the target area from the current control state; and if a selection instruction directed at the device control recommendation information is received, create it from the device control recommendation information.
In one embodiment, the apparatus further includes a scene recommendation unit configured to obtain the historical control data corresponding to each device in the target area; perform preference analysis on the historical control data to obtain device control information meeting a preference condition; and determine the device control information meeting the preference condition as the device control recommendation information matched with the scene creation instruction.
In one embodiment, the apparatus further includes a scene execution unit configured to obtain a scene control instruction and identify the corresponding scene identifier; obtain the scene linkage control information corresponding to the scene identifier, the scene linkage control information comprising a target action and control parameters for at least one device; and send a control instruction to each device according to the scene linkage control information, so as to control each device to execute its corresponding target action according to the control parameters.
It should be noted that the embodiments of the present disclosure are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments may be referred to one another. Since the apparatus embodiment is substantially similar to the method embodiments, its description is brief; for relevant details, refer to the corresponding parts of the method embodiments. Any processing manner described in a method embodiment can be implemented by a corresponding processing module in the apparatus embodiment and is not described again there.
Referring to fig. 9, based on the above scene creation method, the present application further provides an electronic device 500 capable of executing the scene creation method. The electronic device 500 includes one or more processors 510, a memory 520, and one or more application programs; the memory 520 stores programs that can execute the content of the foregoing embodiments, and the processor 510 can execute the programs stored in the memory 520. The electronic device 500 may be an intelligent control panel, a smartphone, an intelligent wearable device, an intelligent robot, a tablet computer, a personal computer, smart home equipment, and the like.
The processor 510 may include one or more processing cores. Using various interfaces and lines, the processor 510 connects the components throughout the electronic device 500, and performs the various functions of the electronic device 500 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 520 and invoking the data stored in the memory 520. Alternatively, the processor 510 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 510 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU renders and draws display content; the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 510 but be implemented by a separate communication chip.
The memory 520 may include Random Access Memory (RAM) or Read-Only Memory (ROM), and may be used to store instructions, programs, code sets or instruction sets. The memory 520 may include a program storage area and a data storage area. The program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as obtaining a scene creation instruction), instructions for implementing the method embodiments described above, and the like; the data storage area may store data created by the terminal in use, such as current device states, historical device states and scene identifiers.
Referring to fig. 10, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 600 stores program code 610 that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 600 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk or a ROM. Optionally, the computer-readable storage medium 600 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 600 has storage space for the program code 610 that performs any of the method steps described above. The program code can be read from or written into one or more computer program products and may, for example, be compressed in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (11)
1. A method for scene creation, the method comprising:
acquiring a scene creating instruction;
acquiring the current control state of each device in a target area to which the scene creation instruction is directed;
and creating scene control information corresponding to the scene creating instruction in the target area according to the current control state of each device.
2. The method of claim 1, wherein before obtaining the scene creation instruction, the method further comprises:
acquiring voice information;
performing intention recognition on the voice information to obtain an instruction intention corresponding to the voice information;
if the instruction intention is a scene creation intention, identifying a scene identification and a target area corresponding to the scene creation intention based on keywords extracted from the voice information;
and generating a scene creating instruction according to the scene identification and the target area corresponding to the scene creating intention.
3. The method according to claim 2, wherein the voice information includes initial voice information and voice interaction information, and the performing intent recognition on the voice information to obtain an instruction intent corresponding to the voice information includes:
acquiring initial voice information, and performing intention recognition on the initial voice information;
if the instruction intention obtained by the intention identification in the current round is an incomplete intention, outputting interactive inquiry information based on the current identification result;
and acquiring voice interaction information fed back according to the interactive inquiry information, and performing intention recognition on the initial voice information and the voice interaction information until a complete instruction intention is obtained through recognition.
4. The method according to claim 1, wherein the current control state of each of the devices includes a current action state of each of the devices and a corresponding control parameter, and the creating scene control information corresponding to the scene creation instruction in the target area according to the current control state of each of the devices includes:
generating a device execution action set corresponding to the scene creation instruction according to the current action state and the corresponding control parameter corresponding to each device;
and generating scene control information corresponding to the scene creating instruction in the target area according to the device execution action set.
5. The method according to claim 4, wherein the scene creation instruction includes a corresponding scene identifier, and the generating, according to the current action state and the corresponding control parameter corresponding to each of the devices, a device execution action set corresponding to the scene creation instruction includes:
generating a device state array corresponding to the scene identifier according to the current action state and the corresponding control parameter corresponding to each device;
the generating of the scene control information corresponding to the scene creation instruction in the target area according to the device execution action set includes:
and by calling a scene creation interface, performing associated storage on the scene identifier and the equipment state array, and generating scene control information corresponding to the scene identifier in the target area.
6. The method according to claim 1, wherein before creating scene control information corresponding to the scene creation instruction under the target area according to the current control state of each of the devices, the method further comprises:
acquiring device control recommendation information corresponding to each device matched with the scene creation instruction;
the creating of the scene control information corresponding to the scene creating instruction in the target area according to the current control state of each device includes:
pushing the current control state of each device and the device control recommendation information;
if a selection instruction aiming at the current control state is received, creating scene control information corresponding to the scene creation instruction in the target area according to the current control state;
and if a selection instruction aiming at the equipment control recommendation information is received, creating scene control information corresponding to the scene creation instruction in the target area according to the equipment control recommendation information.
7. The method according to claim 6, wherein the obtaining of the device control recommendation information corresponding to each device matching the scene creation instruction comprises:
acquiring historical control data corresponding to each device in the target area;
performing preference analysis on the historical control data to obtain equipment control information meeting preference conditions;
and determining the equipment control information meeting the preference condition as equipment control recommendation information matched with the scene creation instruction.
8. The method according to any one of claims 1-7, further comprising:
acquiring a scene control instruction, and identifying a scene identifier corresponding to the scene control instruction;
acquiring scene linkage control information corresponding to the scene identification; the scene linkage control information comprises a target action and a control parameter for at least one device;
and sending a control instruction to each device according to the scene linkage control information so as to control each device to execute a corresponding target action according to the control parameters.
9. A scene creation apparatus, characterized in that the apparatus comprises:
an instruction acquisition unit for acquiring a scene creation instruction;
the device state acquisition unit is used for acquiring the current control state of each device in the target area to which the scene creation instruction aims;
and the scene creating unit is used for creating scene control information corresponding to the scene creating instruction in the target area according to the current control state of each device.
10. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-8.
11. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 8.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210162733.XA | 2022-02-22 | 2022-02-22 | Scene creation method and device, electronic equipment and storage medium
Publications (1)

Publication Number | Publication Date
---|---
CN115327932A | 2022-11-11

Family ID: 83916084

Family Applications (1)

Application Number | Priority Date | Filing Date | Status
---|---|---|---
CN202210162733.XA | 2022-02-22 | 2022-02-22 | Pending

Country Status (1)

Country | Link
---|---
CN | CN115327932A (en)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105099840A (en) * | 2015-07-31 | 2015-11-25 | 小米科技有限责任公司 | Setting method and device of intelligent household scene |
CN108449241A (en) * | 2018-02-09 | 2018-08-24 | 深圳绿米联创科技有限公司 | Configuration method and device, the terminal of Intelligent household scene |
CN109257259A (en) * | 2018-11-30 | 2019-01-22 | 广东美的制冷设备有限公司 | Scene inter-linked controlling method, device and household appliance |
CN110162347A (en) * | 2019-05-15 | 2019-08-23 | 苏州达家迎信息技术有限公司 | A kind of application program launching method, device, equipment and storage medium |
CN110851221A (en) * | 2019-10-30 | 2020-02-28 | 青岛海信智慧家居系统股份有限公司 | Smart home scene configuration method and device |
CN113495489A (en) * | 2020-04-07 | 2021-10-12 | 深圳爱根斯通科技有限公司 | Automatic configuration method and device, electronic equipment and storage medium |
CN112199623A (en) * | 2020-09-29 | 2021-01-08 | 上海博泰悦臻电子设备制造有限公司 | Script execution method and device, electronic equipment and storage medium |
CN112363405A (en) * | 2020-11-09 | 2021-02-12 | 深圳康佳电子科技有限公司 | Smart home linkage control method, smart terminal and storage medium |
CN112306968A (en) * | 2020-11-10 | 2021-02-02 | 珠海格力电器股份有限公司 | Scene establishing method and device |
CN113268004A (en) * | 2021-04-22 | 2021-08-17 | 深圳Tcl新技术有限公司 | Scene creating method and device, computer equipment and storage medium |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115495609A (en) * | 2022-11-21 | 2022-12-20 | 安徽淘云科技股份有限公司 | Sitting posture data acquisition system, sitting posture data acquisition method and device |
CN115495609B (en) * | 2022-11-21 | 2023-03-10 | 安徽淘云科技股份有限公司 | Sitting posture data acquisition system, sitting posture data acquisition method and device |
CN117098065A (en) * | 2023-08-25 | 2023-11-21 | 广东星云开物科技股份有限公司 | Equipment migration method and electronic equipment |
CN117014247A (en) * | 2023-08-28 | 2023-11-07 | 广东金朋科技有限公司 | Scene generation method, system and storage medium based on state learning |
CN117118773A (en) * | 2023-08-28 | 2023-11-24 | 广东金朋科技有限公司 | Scene generation method, system and storage medium |
CN117118773B (en) * | 2023-08-28 | 2024-05-24 | 广东金朋科技有限公司 | Scene generation method, system and storage medium |
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination