CN110851221B - Smart home scene configuration method and device - Google Patents


Info

Publication number
CN110851221B
CN110851221B (application CN201911046592.XA)
Authority
CN
China
Prior art keywords
scene
voice information
user
semantics
script
Prior art date
Legal status
Active
Application number
CN201911046592.XA
Other languages
Chinese (zh)
Other versions
CN110851221A (en)
Inventor
荀晶
李冰
卢炳岐
Current Assignee
Qingdao Hisense Smart Life Technology Co Ltd
Original Assignee
Qingdao Hisense Smart Life Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Hisense Smart Life Technology Co Ltd filed Critical Qingdao Hisense Smart Life Technology Co Ltd
Priority to CN201911046592.XA
Publication of CN110851221A
Application granted
Publication of CN110851221B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • G05B15/02Systems controlled by a computer electric
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/26Pc applications
    • G05B2219/2642Domotique, domestic, home control, automation, smart house
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Quality & Reliability (AREA)
  • Manufacturing & Machinery (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a smart home scene configuration method and device. The method includes: obtaining voice information input by a user; determining the semantics corresponding to the voice information; generating a scene script according to the semantics and a preset device list and instruction set; and generating a scene visualization page from the scene script for display. Because the semantics are obtained by processing the user's voice input and the corresponding scene script is then generated from the device list and instruction set, scenes can be configured rapidly, the difficulty of configuring a scene is reduced, scene configuration efficiency is improved, and the user is given a convenient smart home experience.

Description

Smart home scene configuration method and device
Technical Field
The embodiment of the invention relates to the technical field of smart home, in particular to a method and a device for configuring a smart home scene.
Background
At present, smart home scenes fall into manually triggered, timer-triggered, and condition-triggered scenes. A smart home scene is a group of actions that one or more smart devices execute under specific conditions according to a user's personalized requirements. Scene configuration is completed through manual selection and input in a smart home APP on a mobile terminal. This process is complex, imposes a certain learning cost on users, and sometimes requires installation personnel to assist with configuration. Although the purpose of scene planning is to provide users with a more convenient and intelligent smart home experience, the cumbersome configuration process greatly dampens users' enthusiasm.
Disclosure of Invention
The embodiment of the invention provides a method and a device for configuring smart home scenes, which improve scene configuration efficiency and provide users with a convenient smart home experience.
In a first aspect, an embodiment of the present invention provides a method for configuring a smart home scene, including:
acquiring voice information input by a user;
determining the semantics corresponding to the voice information according to the voice information input by the user;
generating a scene script according to the semantics corresponding to the voice information, a preset device list and an instruction set;
and generating a scene visualization page according to the scene script for display.
In the above technical solution, the semantics are obtained by processing the voice information input by the user, and the corresponding scene script is generated according to the device list, instruction set, and state set. This enables a rapid scene configuration process, reduces the difficulty of configuring a scene, improves scene configuration efficiency, and provides the user with a convenient smart home experience.
Optionally, the determining, according to the voice information input by the user, the semantics corresponding to the voice information includes:
recognizing the voice information input by the user through a speech recognition service to obtain a recognition result;
and performing semantic understanding on the recognition result through a semantic understanding service to obtain the semantics corresponding to the voice information.
Optionally, the generating a scene script according to the semantics corresponding to the voice information and the preset device list, instruction set, and state set includes:
determining, according to the semantics corresponding to the voice information and from the preset device list, instruction set, and state set, the devices involved in the semantics, the areas where the devices are located, and the control instructions or control actions for the devices;
and generating the scene script according to the devices involved in the semantics, the areas where the devices are located, the control instructions or control actions for the devices, and a preset script template.
Optionally, after generating the scene visualization page for display according to the scene script, the method further includes:
acquiring scene modification information of a user;
and modifying the scene visualization page according to the scene modification information.
In a second aspect, an embodiment of the present invention provides an apparatus for configuring a smart home scene, including:
the acquisition unit is used for acquiring voice information input by a user;
the processing unit is used for determining the semantics corresponding to the voice information according to the voice information input by the user; generating a scene script according to the semantics corresponding to the voice information and a preset device list, instruction set and state set; and generating a scene visualization page according to the scene script for display.
Optionally, the processing unit is specifically configured to:
recognizing the voice information input by the user through a speech recognition service to obtain a recognition result;
and performing semantic understanding on the recognition result through a semantic understanding service to obtain the semantics corresponding to the voice information.
Optionally, the processing unit is specifically configured to:
determining, according to the semantics corresponding to the voice information and from the preset device list, instruction set, and state set, the devices involved in the semantics, the areas where the devices are located, and the control instructions or control actions for the devices;
and generating the scene script according to the devices involved in the semantics, the areas where the devices are located, the control instructions or control actions for the devices, and a preset script template.
Optionally, the processing unit is further configured to:
after a scene visualization page is generated according to the scene script and displayed, scene modification information of a user is obtained;
and modifying the scene visualization page according to the scene modification information.
In a third aspect, embodiments of the present invention also provide a computing device, comprising:
a memory for storing program instructions;
and a processor for calling the program instructions stored in the memory and executing the smart home scene configuration method described above according to the obtained program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable nonvolatile storage medium, including computer-readable instructions, which when read and executed by a computer, cause the computer to perform the method for configuring a smart home scenario described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a system architecture according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for configuring a smart home scenario according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a scene visualization page according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a device for configuring smart home scenarios according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 exemplarily shows a system architecture to which an embodiment of the present invention is applied, which may include a terminal device 100, a voice server 200, and a smart home server 300.
A smart home APP is installed on the terminal device 100. The smart home APP can register and associate with the voice server 200 and the smart home server 300, and communicates with both servers.
The voice server 200 may receive the audio transmitted from the terminal device 100, recognize and parse the audio, and return the semantic understanding result to the terminal device 100. The voice server 200 also communicates with the smart home server 300, acquires a device list, an instruction set, and a status set stored in the smart home server 300, and generates a scenario script based on the device list, the instruction set, and the status set, and the semantic understanding result.
The smart home server 300 is used for storing scene information configured by the terminal device 100 and providing data support for the terminal device 100.
It should be noted that the structure shown in fig. 1 is merely an example, and the embodiment of the present invention is not limited thereto.
Based on the above description, fig. 2 shows in detail a flow of a method for configuring a smart home scenario according to an embodiment of the present invention, where the flow may be executed by a device for configuring a smart home scenario, and the device may be located in the terminal device 100 shown in fig. 1 or may be the terminal device 100.
As shown in fig. 2, the process specifically includes:
step 201, obtaining voice information input by a user.
In the embodiment of the invention, when a user needs to configure a smart home scene, voice information can be input through the smart home APP on the terminal device. It should be noted that a scene in the embodiment of the present invention may refer to the linked operations performed by the devices to be controlled under a set scene name. For example, a user may predefine an away-from-home scene, which links the operations of the devices to be controlled after the user leaves home.
When the user speaks the scene to be configured, the terminal device can acquire the voice information input by the user.
Step 202, determining the semantics corresponding to the voice information according to the voice information input by the user.
Specifically, the voice information input by the user can be recognized through a speech recognition service to obtain a recognition result, and semantic understanding is then performed on the recognition result through a semantic understanding service to obtain the semantics corresponding to the voice information. It should be noted that, in the embodiment of the present invention, the speech recognition service performs recognition in the scene configuration process. In a specific implementation, when the terminal device performs speech recognition on the voice information input by the user, it also needs to determine the application that the voice information is intended for, for example, scene configuration, device control, and so on.
For example, the voice information input by the user is: "Set up a condition-triggered scene 'owner returns home': when the door lock is opened, the living room plays Sky City, the living room air conditioner starts in cooling mode at 26 degrees, and the living room lamp turns on." The speech recognition service can recognize this and obtain the scene name: owner returns home; the devices involved in the scene: door lock, living room speaker, living room air conditioner, and living room lamp; and the related operations: unlocking, playing Sky City, starting the air conditioner, cooling at 26 degrees, and turning on the lamp.
After the recognition result is obtained through speech recognition, it can be parsed to obtain the corresponding semantics. The semantic understanding service may use an existing general semantic understanding method, which is not limited by the embodiment of the present invention.
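As a minimal illustrative sketch of this step (not the patent's actual semantic understanding service — the keyword tables, identifiers, and output structure below are all hypothetical), the recognition result could be mapped to structured semantics roughly as follows:

```python
# Hypothetical sketch: extract a structured "semantics" object from the
# recognized text of a scene-configuration utterance. A real system would
# call a semantic understanding service; this keyword lookup is only
# illustrative of the kind of output that step 202 produces.
import re

# Hypothetical vocabulary mapping utterance phrases to device identifiers.
DEVICE_KEYWORDS = {
    "door lock": "door_lock",
    "living room speaker": "livingroom_speaker",
    "living room air conditioner": "livingroom_ac",
    "living room lamp": "livingroom_lamp",
}
# Hypothetical mapping from utterance phrases to (device, action) pairs.
ACTION_KEYWORDS = {
    "play": ("livingroom_speaker", "play"),
    "cooling": ("livingroom_ac", "set_mode_cool"),
    "turn on the living room lamp": ("livingroom_lamp", "switch_on"),
}

def understand(text: str) -> dict:
    """Return the scene name, involved devices, and operations found in text."""
    semantics = {"scene_name": None, "devices": [], "operations": []}
    m = re.search(r'scene "([^"]+)"', text)
    if m:
        semantics["scene_name"] = m.group(1)
    for phrase, device in DEVICE_KEYWORDS.items():
        if phrase in text:
            semantics["devices"].append(device)
    for phrase, (device, action) in ACTION_KEYWORDS.items():
        if phrase in text:
            semantics["operations"].append({"device": device, "action": action})
    return semantics

utterance = ('Set up a condition-triggered scene "owner returns home": when the '
             'door lock is opened, the living room speaker plays Sky City, the '
             'living room air conditioner starts in cooling mode, and turn on '
             'the living room lamp.')
result = understand(utterance)
```

The structured result (scene name, devices, operations) is what the next step consumes when querying the device list and instruction set.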
Step 203, generating a scene script according to the semantics corresponding to the voice information and the preset device list, instruction set, and state set.
After the semantics corresponding to the voice information are obtained, the devices involved in the semantics, the areas where the devices are located, and the control instructions or control actions for the devices are determined from the preset device list, instruction set, and state set according to the semantics. The scene script is then generated from these together with a preset script template. The control instructions are located in the instruction set, and the states are located in the state set.
The process of generating the scene script is equivalent to compiling the devices involved in the semantics, the areas where they are located, and the control instructions for the devices. The finally generated scene script may be as follows:
{"sceneName":"owner home","sceneType":3,"status":1,"cmdList":[
{"cmdOrder":1,"deviceId":"008b9681019d","wifiId":"86100c00fffeffd00000fffe9681019d","cmdId":1,"cmdParm":11,"delayTime":0},
{"cmdOrder":2,"deviceId":"a00500900100124b0017239175","wifiId":"86100c0090040010000002044e8f21d1","cmdId":18,"cmdParm":2,"delayTime":0},
{"cmdOrder":3,"deviceId":"1kk00000yy0000j66pf0026","wifiId":"86100c00900200200000230123cf86f2","cmdId":4,"cmdParm":1,"delayTime":0},
{"cmdOrder":4,"deviceId":"1kk00000yy0000j66pf0026","wifiId":"86100c00900200200000230123cf86f2","cmdId":3,"cmdParm":2,"delayTime":0},
{"cmdOrder":5,"deviceId":"1kk00000yy0000j66pf0026","wifiId":"86100c00900200200000230123cf86f2","cmdId":6,"cmdParm":26,"delayTime":0}],
"sceneTrigCondition":[{"deviceId":"a0050460018308c012004b1200","wifiId":"86100c00a0050010000000054d10339f","statusValue":150,"statusParamValue":1,"operateType":1}],
"conditionRelationship":0,"effectiveTimeList":[{"start":"00:00:00","end":"24:00:00"}]}
From the above scene script, the following can be determined:
(1) By querying the device list, "deviceId":"008b9681019d", "wifiId":"86100c00fffeffd00000fffe9681019d" is a background music device; by querying the instruction set, "cmdId":1, "cmdParm":11 means playing the 11th song in the first partition; by querying the data, the first partition is the living room and the 11th song is Sky City.
(2) By querying the device list, "deviceId":"a00500900100124b0017239175", "wifiId":"86100c0090040010000002044e8f21d1" is the living room lamp; by querying the instruction set, "cmdId":18, "cmdParm":2 means switching it on.
(3) By querying the device list, "deviceId":"1kk00000yy0000j66pf0026", "wifiId":"86100c00900200200000230123cf86f2" is the living room air conditioner; by querying the instruction set, "cmdId":4, "cmdParm":1 turns the air conditioner on, "cmdId":3, "cmdParm":2 sets it to cooling mode, and "cmdId":6, "cmdParm":26 sets the temperature to 26 degrees.
(4) By querying the device list, "deviceId":"a0050460018308c012004b1200", "wifiId":"86100c00a0050010000000054d10339f" is the smart door lock; by querying the state set, "statusValue":150, "statusParamValue":1, "operateType":1 means the door lock is in the unlocked state.
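The "compilation" into a scene script can be sketched as a pair of table lookups followed by filling a script template. This is a hedged illustration, not the patent's implementation: the device IDs and command numbers are taken from the example script above, but the lookup tables, action names, and helper function are hypothetical.

```python
# Hypothetical sketch of step 203: resolve each semantic operation against a
# device list and an instruction set, then fill a scene-script template.
# The tables below are illustrative, not the patent's actual data.
DEVICE_LIST = {
    "livingroom_ac": {"deviceId": "1kk00000yy0000j66pf0026",
                      "wifiId": "86100c00900200200000230123cf86f2"},
    "livingroom_lamp": {"deviceId": "a00500900100124b0017239175",
                        "wifiId": "86100c0090040010000002044e8f21d1"},
}
INSTRUCTION_SET = {
    "switch_on": {"cmdId": 18, "cmdParm": 2},   # lamp on
    "ac_on": {"cmdId": 4, "cmdParm": 1},        # air conditioner on
    "ac_cool": {"cmdId": 3, "cmdParm": 2},      # cooling mode
}

def compile_script(scene_name, operations):
    """Fill a scene-script template from resolved devices and instructions."""
    cmd_list = []
    for order, (device, action) in enumerate(operations, start=1):
        entry = dict(DEVICE_LIST[device])       # deviceId / wifiId lookup
        entry.update(INSTRUCTION_SET[action])   # cmdId / cmdParm lookup
        entry["cmdOrder"] = order
        entry["delayTime"] = 0
        cmd_list.append(entry)
    return {"sceneName": scene_name, "sceneType": 3, "status": 1,
            "cmdList": cmd_list,
            "effectiveTimeList": [{"start": "00:00:00", "end": "24:00:00"}]}

script = compile_script("owner returns home",
                        [("livingroom_lamp", "switch_on"),
                         ("livingroom_ac", "ac_on"),
                         ("livingroom_ac", "ac_cool")])
```

Each entry in the resulting cmdList has the same shape as the example script above: device identity from the device list, command semantics from the instruction set, and ordering from the template.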
Step 204, generating a scene visualization page according to the scene script for display.
After determining the scene script, a scene visualization page may be generated, for example, according to the scene script generated in step 203, a scene visualization page as shown in fig. 3 may be obtained.
The scene visualization page is displayed on a display interface of the smart home APP of the terminal equipment, and a scene can be generated after confirmation of a user.
In addition, after the scene visualization page is displayed, the user can also modify the page. In this case, the user's scene modification information can be acquired, and the scene visualization page is modified according to the scene modification information.
For example, if the user feels that the set temperature of the living room air conditioner is too low, the user can adjust the set temperature, and the scene visualization page is modified accordingly.
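As a hedged sketch of applying such a modification back to the scene script (the field names follow the example script above; the helper function and the chosen new temperature are hypothetical):

```python
# Hypothetical sketch: apply a user's modification from the visualization
# page onto the scene script by updating the matching command entry.
def apply_modification(script: dict, device_id: str, cmd_id: int, new_parm: int) -> dict:
    """Set cmdParm of the command matching (device_id, cmd_id)."""
    for cmd in script["cmdList"]:
        if cmd["deviceId"] == device_id and cmd["cmdId"] == cmd_id:
            cmd["cmdParm"] = new_parm
    return script

# Example: raise the living room air conditioner's set temperature.
# In the example script above, cmdId 6 is the temperature command.
script = {"cmdList": [
    {"deviceId": "1kk00000yy0000j66pf0026", "cmdId": 6, "cmdParm": 26},
]}
script = apply_modification(script, "1kk00000yy0000j66pf0026", 6, 27)
```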
Compared with the existing configuration mode in which the user selects manually, the scene configuration method provided by the embodiment of the invention saves many operation steps, thereby improving scene configuration efficiency.
The above embodiment shows that voice information input by the user is acquired, the semantics corresponding to the voice information are determined, a scene script is generated according to the semantics and the preset device list and instruction set, and a scene visualization page is generated from the scene script for display. Because the semantics are obtained by processing the user's voice input and the corresponding scene script is then generated from the device list and instruction set, scenes can be configured rapidly, the difficulty of configuring a scene is reduced, scene configuration efficiency is improved, and the user is given a convenient smart home experience.
Based on the same technical concept, fig. 4 illustrates an exemplary structure of a device for configuring a smart home scene according to an embodiment of the present invention, where the device may execute a flow of configuring a smart home scene, and the device may be located in the terminal device 100 shown in fig. 1 or may be the terminal device 100.
As shown in fig. 4, the apparatus specifically includes:
an acquiring unit 401, configured to acquire voice information input by a user;
a processing unit 402, configured to determine, according to the voice information input by the user, semantics corresponding to the voice information; generating a scene script according to the semantics corresponding to the voice information and a preset device list, instruction set and state set; and generating a scene visualization page according to the scene script for display.
Optionally, the processing unit 402 is specifically configured to:
recognizing the voice information input by the user through a speech recognition service to obtain a recognition result;
and performing semantic understanding on the recognition result through a semantic understanding service to obtain the semantics corresponding to the voice information.
Optionally, the processing unit 402 is specifically configured to:
determining, according to the semantics corresponding to the voice information and from the preset device list, instruction set, and state set, the devices involved in the semantics, the areas where the devices are located, and the control instructions or control actions for the devices;
and generating the scene script according to the devices involved in the semantics, the areas where the devices are located, the control instructions or control actions for the devices, and a preset script template.
Optionally, the processing unit 402 is further configured to:
after a scene visualization page is generated according to the scene script and displayed, scene modification information of a user is obtained;
and modifying the scene visualization page according to the scene modification information.
Based on the same technical concept, the embodiment of the invention further provides a computing device, which comprises:
a memory for storing program instructions;
and a processor for calling the program instructions stored in the memory and executing the smart home scene configuration method described above according to the obtained program.
Based on the same technical concept, the embodiment of the invention also provides a computer-readable nonvolatile storage medium, which comprises computer-readable instructions, wherein when the computer reads and executes the computer-readable instructions, the computer executes the method for configuring the smart home scene.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (8)

1. A method for smart home scene configuration, comprising:
acquiring voice information input by a user;
determining the semantics corresponding to the voice information according to the voice information input by the user;
generating a scene script according to the semantics corresponding to the voice information and a preset device list, instruction set and state set;
generating a scene visualization page according to the scene script for display;
wherein the generating a scene script according to the semantics corresponding to the voice information and the preset device list, instruction set, and state set comprises:
determining, according to the semantics corresponding to the voice information and from the preset device list, instruction set, and state set, the devices involved in the semantics, the areas where the devices are located, and the control instructions or control actions for the devices;
and generating the scene script according to the devices involved in the semantics, the areas where the devices are located, the control instructions or control actions for the devices, and a preset script template, wherein the control instructions are located in the instruction set and the states are located in the state set.
2. The method of claim 1, wherein the determining the semantics corresponding to the voice information according to the voice information input by the user comprises:
recognizing the voice information input by the user through a speech recognition service to obtain a recognition result;
and performing semantic understanding on the recognition result through a semantic understanding service to obtain the semantics corresponding to the voice information.
3. The method of any of claims 1 to 2, further comprising, after generating a scene visualization page for display from the scene script:
acquiring scene modification information of a user;
and modifying the scene visualization page according to the scene modification information.
4. An apparatus for smart home scene configuration, comprising:
the acquisition unit is used for acquiring voice information input by a user;
the processing unit is used for determining the semantics corresponding to the voice information according to the voice information input by the user; generating a scene script according to the semantics corresponding to the voice information and a preset device list, instruction set and state set; generating a scene visualization page according to the scene script for display;
the processing unit is specifically configured to:
determining, according to the semantics corresponding to the voice information and from the preset device list, instruction set, and state set, the devices involved in the semantics, the areas where the devices are located, and the control instructions or control actions for the devices;
and generating the scene script according to the devices involved in the semantics, the areas where the devices are located, the control instructions or control actions for the devices, and a preset script template, wherein the control instructions are located in the instruction set and the states are located in the state set.
5. The apparatus of claim 4, wherein the processing unit is specifically configured to:
recognize the voice information input by the user through a speech recognition service to obtain a recognition result;
and perform semantic understanding on the recognition result through a semantic understanding service to obtain the semantics corresponding to the voice information.
6. The apparatus of claim 4 or 5, wherein the processing unit is further configured to:
after generating the scene visualization page for display according to the scene script, acquire scene modification information of the user;
and modify the scene visualization page according to the scene modification information.
7. A computing device, comprising:
a memory for storing program instructions;
a processor, configured to invoke the program instructions stored in the memory and perform the method according to any one of claims 1 to 3 in accordance with the obtained program.
8. A computer-readable non-transitory storage medium comprising computer-readable instructions which, when read and executed by a computer, cause the computer to perform the method according to any one of claims 1 to 3.
CN201911046592.XA 2019-10-30 2019-10-30 Smart home scene configuration method and device Active CN110851221B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911046592.XA CN110851221B (en) 2019-10-30 2019-10-30 Smart home scene configuration method and device


Publications (2)

Publication Number Publication Date
CN110851221A CN110851221A (en) 2020-02-28
CN110851221B true CN110851221B (en) 2023-06-30

Family

ID=69598956


Country Status (1)

Country Link
CN (1) CN110851221B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113495489A (en) * 2020-04-07 2021-10-12 深圳爱根斯通科技有限公司 Automatic configuration method and device, electronic equipment and storage medium
CN114528064A (en) * 2020-11-23 2022-05-24 深圳Tcl新技术有限公司 Scene configuration method, storage medium and terminal equipment
CN113055255A (en) * 2020-12-25 2021-06-29 青岛海尔科技有限公司 Scene configuration method and device of intelligent household appliance, storage medium and electronic equipment
CN113488041A (en) * 2021-06-28 2021-10-08 青岛海尔科技有限公司 Method, server and information recognizer for scene recognition
CN113341754A (en) * 2021-06-30 2021-09-03 青岛海尔科技有限公司 Scene configuration method, scene engine, user terminal and intelligent home system
CN115695065A (en) * 2021-07-30 2023-02-03 青岛海尔科技有限公司 Scene creating method and device, storage medium and electronic equipment
CN114237063B (en) * 2021-12-16 2024-10-11 深圳绿米联创科技有限公司 Scene control method, device and system, electronic equipment and medium
CN115327932A (en) * 2022-02-22 2022-11-11 深圳绿米联创科技有限公司 Scene creation method and device, electronic equipment and storage medium
CN115083399A (en) * 2022-05-11 2022-09-20 深圳绿米联创科技有限公司 Equipment control method, device, equipment and storage medium
CN114968036A (en) * 2022-05-27 2022-08-30 中国第一汽车股份有限公司 Scene mode generation method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104777751A (en) * 2015-03-19 2015-07-15 珠海格力电器股份有限公司 Intelligent household system and WIFI module configuration method
CN106454898A (en) * 2016-10-20 2017-02-22 北京小米移动软件有限公司 Intelligent scene configuration method and device
CN108337139A (en) * 2018-01-29 2018-07-27 广州索答信息科技有限公司 Home appliance voice control method, electronic equipment, storage medium and system
CN108683574A (en) * 2018-04-13 2018-10-19 青岛海信智慧家居系统股份有限公司 A kind of apparatus control method, server and intelligent domestic system
CN109920413A (en) * 2018-12-28 2019-06-21 广州索答信息科技有限公司 A kind of implementation method and storage medium of kitchen scene touch screen voice dialogue
CN110246499A (en) * 2019-08-06 2019-09-17 苏州思必驰信息科技有限公司 The sound control method and device of home equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 266100 Songling Road, Laoshan District, Qingdao, Shandong Province, No. 399

Applicant after: Qingdao Hisense Smart Life Technology Co.,Ltd.

Address before: 266100 Songling Road, Laoshan District, Qingdao, Shandong Province, No. 399

Applicant before: QINGDAO HISENSE SMART HOME SYSTEMS Co.,Ltd.

GR01 Patent grant