CN115004641A - Setting method and device - Google Patents

Setting method and device

Info

Publication number
CN115004641A
CN115004641A CN202080093272.4A
Authority
CN
China
Prior art keywords
information
setting
instruction
generating corresponding
instruction information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080093272.4A
Other languages
Chinese (zh)
Inventor
茹昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN115004641A publication Critical patent/CN115004641A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]

Abstract

An embodiment of the present application provides a setting method and device. The method includes the following steps: acquiring instruction information; and generating corresponding setting information according to the instruction information, where the setting information is used for setting scene resources and/or rule resources. The method and device can reduce the operational difficulty of setting scene resources and/or rule resources.

Description

Setting method and device

Technical Field
The present application relates to the field of internet of things, and more particularly, to a setup method and apparatus.
Background
In an Internet of Things system, a smart home system comprising devices, a network, a platform and applications is typically used to build device automation and device linkage, so as to realize specific applications and services.
In an Open Connectivity Foundation (OCF) protocol, automation control of equipment is generally realized by creating and operating Scene resources (Scene resources), and linkage of the equipment is realized by creating Rule resources (Rule resources).
When setting scene resources and rule resources, a user needs to select or input multiple items of information, which makes the operation cumbersome and inconvenient. Moreover, the application program is difficult to use, because the user is required to understand Internet of Things scenes/rules and their structure.
Disclosure of Invention
The embodiment of the application provides a setting method and device, which can automatically generate setting information for setting scene resources and/or rule resources according to instruction information of a user, and the user does not need to select or input multiple items of information, so that the operation difficulty can be reduced.
The embodiment of the application provides a setting method, which comprises the following steps:
acquiring instruction information;
and generating corresponding setting information according to the instruction information, wherein the setting information is used for setting scene resources and/or rule resources.
The embodiment of the application provides a setting device, which comprises:
the acquisition module is used for acquiring instruction information;
and the setting information generating module is used for generating corresponding setting information according to the instruction information, and the setting information is used for setting the scene resources and/or the rule resources.
The embodiment of the application provides a setting device, including: a processor and a memory for storing a computer program, said processor being adapted to call and run the computer program stored in said memory, performing the setting method as described above.
An embodiment of the present application provides a chip, including: a processor, configured to call and run a computer program from a memory, so that a device on which the chip is installed performs the setting method described above.
An embodiment of the present application provides a computer-readable storage medium for storing a computer program, where the computer program enables a computer to execute the setting method.
The embodiment of the present application provides a computer program product, which includes computer program instructions, and the computer program instructions enable a computer to execute the setting method.
An embodiment of the present application provides a computer program, which enables a computer to execute the setting method.
According to the method and device of the embodiments of the present application, when the setting device acquires instruction information, it generates corresponding setting information according to the instruction information, and the setting information is used for setting scene resources and/or rule resources. This avoids requiring the user to select or input the various items of information needed to create or modify scene resources and/or rule resources, thereby reducing the operational difficulty.
Drawings
Fig. 1 is a flow chart of an implementation of a setup method 100 according to an embodiment of the present application.
Fig. 2 is a flowchart implemented in step S120 in a setting method according to an embodiment of the present application.
Fig. 3 is a flowchart of an implementation of the first embodiment of the present application.
Fig. 4 is a schematic structural diagram of a setup device 400 according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of the setting information generation submodule 520 according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a setup device 600 according to an embodiment of the present application.
Fig. 7 is a schematic configuration diagram of a setting apparatus 700 according to an embodiment of the present application.
Fig. 8 is a schematic block diagram of a chip 800 according to an embodiment of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
The technical solutions of the embodiments of the present application can be applied to various communication systems, for example: a Global System for Mobile communications (GSM) system, a Code Division Multiple Access (CDMA) system, a Wideband Code Division Multiple Access (WCDMA) system, a General Packet Radio Service (GPRS) system, a Long Term Evolution (LTE) system, an LTE-Advanced (LTE-A) system, a New Radio (NR) system, an evolution system of the NR system, an LTE-based access to unlicensed spectrum (LTE-U) system, an NR-based access to unlicensed spectrum (NR-U) system, a Universal Mobile Telecommunications System (UMTS), a Wireless Local Area Network (WLAN), Wireless Fidelity (WiFi), a 5th-Generation (5G) system, other communication systems, and the like.
Generally, conventional communication systems support a limited number of connections and are easy to implement. However, with the development of communication technology, mobile communication systems will support not only conventional communication but also, for example, Device-to-Device (D2D) communication, Machine-to-Machine (M2M) communication, Machine Type Communication (MTC) and Vehicle-to-Vehicle (V2V) communication, and the embodiments of the present application can also be applied to these communication systems.
Optionally, the communication system in the embodiments of the present application may be applied to a Carrier Aggregation (CA) scenario, a Dual Connectivity (DC) scenario, or a Standalone (SA) networking scenario.
The embodiments of the present application do not limit the applied frequency spectrum. For example, the embodiments of the present application may be applied to licensed spectrum, and may also be applied to unlicensed spectrum.
The embodiments of the present application are described with reference to a setting device. The setting device may also be referred to as a User Equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, a user apparatus, or the like. The setting device may be a Station (ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, an in-vehicle device, a wearable device, a terminal device in a next-generation communication system such as an NR network, a terminal device in a future-evolved Public Land Mobile Network (PLMN), or the like. The setting device may also be an intelligent terminal, a personal computer, a tablet computer, or another device. A tablet computer, which may also be referred to as a Tablet PC, uses a touch screen, a camera, a microphone, etc. as basic input devices.
By way of example and not limitation, in the embodiments of the present application, the setting device may also be a wearable device. A wearable device, also called a wearable smart device, is a general term for devices that apply wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not merely a hardware device; it also realizes powerful functions through software support, data interaction and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-sized devices that can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used in cooperation with other devices such as smartphones, for example various smart bracelets and smart jewelry for physical sign monitoring.
It should be understood that the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
The embodiments of the present application can be applied to the field of the Internet of Things. The Internet of Things uses various devices and technologies, such as information sensors, radio frequency identification, global positioning systems, infrared sensors and laser scanners, to monitor, connect and interact with any object or process in real time, collecting the required information such as sound, light, heat, electricity, mechanics, chemistry, biology and position. Through all possible network accesses, it realizes wide connection between things and between things and people, as well as intelligent sensing, identification and management of objects and processes. Internet of Things technology can be applied to the smart home field. A smart home system connects various devices in a home (such as audio and video equipment, lighting systems, curtain control, air conditioner control, security systems, digital cinema systems, audio/video servers and networked home appliances) through Internet of Things technology, and provides multiple functions and means such as home appliance control, lighting control, telephone remote control, indoor and outdoor remote control, anti-theft alarm, environment monitoring, heating and ventilation control, infrared forwarding and programmable timing control.
In the internet of things system, a smart home system including devices, a network, a platform and applications is generally used to construct specific device automation and device linkage so as to realize specific applications and services.
In the OCF protocol, device automation control is realized by operating Scene Resources that have been created (either user-created or system-predefined). A Scene Resource is an aggregate resource that references one or more Scene Member Resources, each of which is also an aggregate resource that references a local or remote device resource. Meanwhile, in the OCF protocol, device linkage is realized by defining Rule Resources. A Rule Resource is an aggregate resource that references a Rule Input Collection Resource, which in turn references one or more resources (typically, functional resources of a device that can generate notifications of the device's status data). The Rule Resource also references a Rule Expression Resource, as well as a Rule Actions Collection Resource that references one or more scene resources.

At present, in order to create or modify scene resources and/or rule resources, a user needs to select or input multiple items of information, which is cumbersome to operate and inconvenient to use. In order to reduce the operational difficulty, an embodiment of the present application provides a setting method. Fig. 1 is a flowchart of an implementation of a setting method 100 according to an embodiment of the present application, including the following steps:
S110: acquiring instruction information;
S120: generating corresponding setting information according to the instruction information, where the setting information is used for setting scene resources and/or rule resources.
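The scene and rule resource hierarchy described in the OCF background above can be sketched as plain Python data structures. This is a simplified, hypothetical model: real OCF resources carry additional properties such as resource types, interfaces and links, and rule expressions are evaluated by the framework rather than by a hand-written check.

```python
# Simplified, hypothetical model of the OCF resource hierarchy described above;
# real OCF resources carry additional properties (resource types, interfaces, links).

scene_member = {"href": "/light/1", "value": {"on": True}}   # references a device resource
scene = {"name": "evening", "members": [scene_member]}       # Scene Resource aggregating Scene Members

rule = {
    "inputs": ["/tv/1/power"],         # Rule Input Collection: resources emitting status notifications
    "expression": "tv_power == true",  # Rule Expression: condition over the inputs
    "actions": [scene],                # Rule Actions Collection: scene resources to apply
}

def fire(rule, status):
    # Apply the rule's scenes when its (toy) expression is satisfied.
    if status.get("tv_power") is True:  # stand-in for evaluating the expression string
        return [m["value"] for s in rule["actions"] for m in s["members"]]
    return []
```

When the television's power status notification arrives, the rule fires and the referenced scene members' values are applied to the devices.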
Optionally, the method may further include: setting the scene resource and/or the rule resource according to the setting information.
Optionally, as shown in fig. 2, the step S120 may include:
S210: generating corresponding setting parameters according to the instruction information;
S220: generating corresponding setting information according to the setting parameters.
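The two-step flow of S210 and S220 can be sketched as follows. The function names and the trivial keyword matching are illustrative stand-ins for the speech and semantic services discussed later, not part of the specification:

```python
# Sketch of the two-step flow S210/S220 (function names are illustrative, not from the spec).

def extract_setting_parameters(instruction_text):
    # S210: derive setting parameters (trigger, devices) from instruction text.
    # A trivial keyword match stands in for a real speech/semantic service.
    params = {}
    if "when going home" in instruction_text:
        params["trigger"] = "go home"
    if "living room light" in instruction_text:
        params.setdefault("devices", []).append("living room light")
    if "air conditioner" in instruction_text:
        params.setdefault("devices", []).append("living room air conditioner")
    return params

def build_setting_information(params):
    # S220: turn the parameters into setting information for scene/rule resources.
    return {
        "name": params.get("trigger", "scene 1"),
        "devices": [{d: {"on": True}} for d in params.get("devices", [])],
    }

info = build_setting_information(
    extract_setting_parameters("turn on the living room light and air conditioner when going home"))
```

The resulting `info` dictionary plays the role of the JSON setting information shown in the embodiments below.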
In some embodiments, the instruction information acquired in step S110 may include at least one of voice information, image information, motion information and gesture information. The instruction information may be a control instruction sent by the user to the smart device, and includes the control effect that the user wishes to achieve. For example, the user speaks instructions such as "turn on the living room light and air conditioner when going home", "turn off the living room light when the television is turned on", "turn on the air conditioner when it is 30 degrees", or "wake me for the ball game at 2 o'clock in the middle of the night". In the case that the user sends the instruction information in voice form, the embodiment of the present application may use a sound receiving device to obtain the instruction information sent by the user. As another example, the user draws a picture of the effect to be achieved, for example a picture indicating "turn on the living room light and air conditioner when going home", or writes the words "turn on the living room light and air conditioner when going home". In the case that the user sends the instruction information in image form, the embodiment of the present application may use an image acquisition device to obtain the instruction information sent by the user. As a further example, the user sends a control instruction by means of a motion or a gesture, and the image acquisition device may be used to capture an image containing the user's motion or gesture.
In the case that the instruction information includes voice information, the embodiment of the present application may use a voice semantic service to convert the instruction information into corresponding text information and then convert the text information into corresponding semantic information, where the semantic information may include setting parameters. The voice semantic service includes a speech recognition service and a semantic recognition service; the latter may also be referred to as a natural language processing service.
In the case that the instruction information includes image information, the embodiment of the present application may use an image recognition service to extract image features from the image information, and look up a preset first correspondence with the image features to obtain the setting parameters corresponding to those features.
In the case that the instruction information includes motion information or gesture information, the embodiment of the present application may capture an image of the user's motion or gesture, use an image recognition service to recognize the motion features in the motion/gesture information, and look up a preset second correspondence with the motion features to obtain the setting parameters corresponding to those features.
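As an illustration of the first and second correspondences described above, a feature-to-parameter lookup might look like the following. The feature names and table contents are hypothetical; a real system would populate these tables from its recognition services:

```python
# Illustrative lookup of the "first/second correspondence" described above:
# recognized image or gesture features are mapped to setting parameters.
# The feature names and table contents are hypothetical.

IMAGE_FEATURE_TABLE = {    # first correspondence: image feature -> setting parameters
    "drawing:home+light": {"trigger": "go home", "device": "living room light", "on": True},
}
GESTURE_FEATURE_TABLE = {  # second correspondence: motion feature -> setting parameters
    "wave:down": {"device": "living room light", "on": False},
}

def parameters_for(feature, table):
    # Return the setting parameters for a recognized feature, or None if unmapped.
    return table.get(feature)
```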
In some embodiments, the manner of generating the corresponding setting information according to the setting parameter in step S220 may include at least one of the following:
(1) generating corresponding setting information according to the setting parameters and predefined rules;
(2) generating corresponding setting information according to the setting parameters and smart device information, where the smart device information includes operation records and/or status data of the smart device, such as the temperature usually set when the air conditioner is turned on, or the brightness usually set for the lighting at different periods of time;
(3) generating corresponding setting information according to the setting parameters and user information, where the user information includes the user's preference information for the smart device, such as the user's preferred brightness, humidity, temperature, television channel or background music. The preference information may be formed by an Artificial Intelligence (AI) system after analyzing the user's behavior over a period of time.
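Option (3) can be sketched as follows: the same setting parameters yield different setting information for different users depending on stored preference data. The user identifiers and preferred values here are illustrative, mirroring the users A and B discussed in the embodiments below:

```python
# Sketch of option (3): the same setting parameters yield different setting
# information per user, filled from stored preference data. Names are illustrative.

USER_PREFERENCES = {
    "A": {"living room light": {"bright": 70}, "living room air conditioner": {"temperature": 25}},
    "B": {"living room light": {"bright": 60}, "living room air conditioner": {"temperature": 25}},
}

def apply_preferences(devices, user):
    # Merge each device's base command ({'on': True}) with the user's preferred values.
    prefs = USER_PREFERENCES.get(user, {})
    return [{d: {"on": True, **prefs.get(d, {})}} for d in devices]
```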
In some embodiments, setting the scene resources and/or rule resources according to the setting information includes at least one of:
creating scene resources and/or rule resources according to the setting information;
modifying existing scene resources and/or rule resources according to the setting information.
For example, when the setting information is converted into a scene or rule: if a scene or rule of the same name already exists, the existing setting is overwritten with the new setting; if not, a new scene resource and/or rule resource is created.
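The create-or-overwrite behaviour described above can be sketched as a simple upsert; the flat name-keyed store is an illustrative simplification of the scene/rule resource collection:

```python
# Sketch of the create-or-overwrite behaviour described above: if a scene/rule
# with the same name exists it is replaced, otherwise a new one is created.

def upsert(store, setting_info):
    # store maps scene/rule names to their setting information.
    created = setting_info["name"] not in store
    store[setting_info["name"]] = setting_info   # overwrite or create
    return created
```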
Further, the embodiment of the present application may also convert the setting information into feedback information, where the feedback information includes voice information or image information, and play or display the feedback information.
Through the foregoing process, the feedback information may be played in voice form or displayed in image form, etc., so as to inform the user of the setting result and prompt the user to confirm it or make further adjustments.
The setting method may be implemented by multiple devices, each of which undertakes different functions and exchanges information with the others. The present application is described in detail below with reference to specific embodiments.
In this embodiment, an application program on the smart device is used to receive instruction information in voice form from the user; a voice semantic service is used to convert the instruction information into semantic information (which may include setting parameters); an AI system is used to convert the setting parameters into setting information; and a scene/rule management program is used to set scene resources and/or rule resources using the setting information. The smart device may be an intelligent terminal, such as a smartphone. In the following embodiments, a smartphone (referred to simply as a mobile phone) is taken as an example. The setting device in the embodiments of the present application is not limited to a smartphone; the embodiments are also applicable to other forms of terminal devices.
The voice semantic service can run on a server and provides functions such as speech recognition, semantic analysis and voice broadcast to the application program through a HyperText Transfer Protocol (HTTP) or HTTP over Secure Socket Layer (HTTPS) interface. Specifically, the speech recognition function receives voice input, processes it and converts it into text output; the semantic analysis function extracts, according to a specific context, semantic information such as the meaning, subject, category and similarity of the text, for example "what, who, when, where and why" information; the voice broadcast function receives text input and, after processing, converts it into voice output.
The AI system analyzes and simulates user behavior by collecting user and environment data. In particular, in an Internet of Things system, the AI system collects the user's operation records for smart devices, device status data and the like, and can analyze user information such as the user's preferred temperature, humidity, brightness and television channels; the user information includes the user's preference information for the smart devices.
Fig. 3 is a flowchart of an implementation of the first embodiment of the present application, including:
S301: the user naturally states the control effect to be achieved, such as "turn on the living room light and the air conditioner when going home", "turn off the living room light when the television is turned on", "turn on the air conditioner when it is 30 degrees", or "wake me for the ball game at 2 o'clock in the middle of the night".
S302: the mobile phone application sends the user's instruction information in voice form to the voice semantic service. Optionally, the mobile phone application sends the instruction information through an HTTP/HTTPS interface to the voice semantic service running on the server.
S303: the voice semantic service converts the voice into text and analyzes the semantic information, i.e. the key semantics, in the text, such as "what, who, when, where and why" information. The semantic information may contain setting parameters for generating the setting information.
In one embodiment, when the instruction information in voice form is "turn on the living room light and air conditioner when going home", the semantic information analyzed by the voice semantic service includes: "go home, turn on the living room light, turn on the living room air conditioner".
In another embodiment, the instruction information in voice form is "turn off the living room light when the television is turned on", and the semantic information analyzed by the voice semantic service includes: "when the television is turned on, turn off the living room light".
In another embodiment, the instruction information in voice form is "turn on the air conditioner when it is 30 degrees", and the semantic information analyzed by the voice semantic service includes: "at 30 degrees, turn on the air conditioner".
In another embodiment, the instruction information in voice form is "wake me for the ball game at 2 o'clock in the middle of the night", and the semantic information analyzed by the voice semantic service includes: "2 o'clock in the middle of the night, ball game".
The semantic information in the above embodiments includes at least one of "what, who, when, where, and why" information.
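A toy illustration of extracting such key semantics from a "when X, Y" instruction follows; a real deployment would rely on the speech recognition and semantic analysis services described above rather than a regular expression:

```python
# Toy extraction of "when ..., do ..." key semantics; a stand-in for the
# semantic analysis (natural language processing) service described above.

import re

def split_when_what(text):
    # Split an instruction of the form "when X, Y" into its trigger and action parts.
    m = re.match(r"when (.+?), (.+)", text)
    if m:
        return {"when": m.group(1), "what": m.group(2)}
    return {"what": text}   # no trigger part recognized
```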
S304: the voice semantic service feeds the semantic information back to the mobile phone application. Optionally, the voice semantic service feeds the semantic information back to the mobile phone application through an HTTP/HTTPS interface.
S305: the mobile phone application sends the semantic information to the AI system and requests the AI system to process the user's semantic information.
S306: the AI system converts the setting parameters contained in the semantic information into setting information that conforms to the user's habits.
In one embodiment, for user a, the semantic information includes: "go home, get a guest room light, get a guest room air conditioner", change and set up the information and include:
{"name": "go home", "desc": "turn on the living room light and air conditioner when going home", "trigger": {"datetime": {"season": "winter", "time": "19:00"}, …}, "devices": [{"living room light": {"on": true, "bright": 70}}, {"living room air conditioner": {"on": true, "temperature": 25}}, …]}
In the above setting information, the operation objects are the living room light and the living room air conditioner. The triggering condition for the living room light is: the time is 19:00 in winter; the operation is: turn on the light and set its brightness to 70. The triggering condition for the living room air conditioner is: the time is 19:00 in winter; the operation is: turn on the air conditioner and set its temperature to 25 degrees.
In another embodiment, for user B, the semantic information includes: "go home, turn on the living room light, turn on the living room air conditioner", and the converted setting information includes:
{"name": "go home", "desc": "turn on the living room light and air conditioner when going home", "trigger": {"door sensor": {"on": true}, …}, "devices": [{"living room light": {"on": true, "bright": 60}}, {"living room air conditioner": {"on": true, "temperature": 25}}, …]}
In the above setting information, the operation objects are the living room light and the living room air conditioner. The triggering condition for the living room light is: the door sensor is opened; the operation is: turn on the light and set its brightness to 60. The triggering condition for the living room air conditioner is: the door sensor is opened; the operation is: turn on the air conditioner and set its temperature to 25 degrees.
In another embodiment, for user C, the semantic information includes: "go home, turn on the living room light, turn on the living room air conditioner", and the converted setting information includes:
{"name": "go home", "desc": "turn on the living room light and air conditioner when going home", "trigger": {"human body sensor": {"on": true}, "datetime": {"season": "winter", "time": "17:00"}, …}, "devices": [{"living room light": {"on": true, "bright": 80}}, {"living room air conditioner": {"on": true, "temperature": 26}}, …]}
In the above setting information, the operation objects are the living room light and the living room air conditioner. The triggering condition for the living room light is: the time is 17:00 in winter and the human body sensor is triggered; the operation is: turn on the light and set its brightness to 80. The triggering condition for the living room air conditioner is: the time is 17:00 in winter and the human body sensor is triggered; the operation is: turn on the air conditioner and set its temperature to 26 degrees.
In another embodiment, for user D, the semantic information includes: "go home, turn on the living room light, turn on the living room air conditioner", and the converted setting information includes:
{"name": "go home", "desc": "turn on the living room light and air conditioner when going home", "trigger": {"microphone": {"voice": "I'm back"}, …}, "devices": [{"living room light": {"on": true, "bright": 80}}, {"living room air conditioner": {"on": true, "mode": "cool", "temperature": 26}}, …]}
In the above setting information, the operation objects are the living room light and the living room air conditioner. The triggering condition for the living room light is: the sound receiving device receives the voice "I'm back"; the operation is: turn on the light and set its brightness to 80. The triggering condition for the living room air conditioner is: the sound receiving device receives the voice "I'm back"; the operation is: turn on the air conditioner, set it to cooling mode, and set its temperature to 26 degrees.
In some embodiments, the setting information is described in JSON (JavaScript Object Notation).
It can be seen that, in the above embodiments, although the semantic information is the same, the setting information converted from the setting parameters differs for different users, because each user's setting preferences for home devices are different. The AI system collects the user's operation records for smart devices, analyzes the user's preferences, and generates the setting information accordingly. For example, user A usually returns home at 19:00 and is accustomed to setting the living room light brightness to 70 and the living room air conditioner to 25 degrees. User B has a door sensor installed on the entrance door, and is accustomed to setting the living room light brightness to 60 and the living room air conditioner to 25 degrees. User C has a human body sensor installed at home, and is accustomed to setting the living room light brightness to 80 and the living room air conditioner to 26 degrees. User D has a sound sensing device installed at home, and is accustomed to setting the living room light brightness to 80 and the living room air conditioner to cool at 26 degrees.
In the above embodiment, the AI system is used to generate the setting information. In other embodiments of the present application, the semantic information may be converted into setting information using predefined rules. The aforementioned predefined rules may include: a correspondence between the setting parameter representing "what" and the device operations in a scene, a correspondence between the setting parameters representing "when"/"why" and the name or input information of a rule, and so on. With the predefined rules, "when"/"why" is converted into the name and the input of a rule, and the generated scene is the set of operations in the rule. Without "when"/"why" information, only a scene is generated and an automatic name (e.g., "scene 1") is used.
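The predefined-rule conversion described above can be sketched as follows: "when"/"why" parameters become the rule's name and input, and without them only a scene with an automatic name is produced. The output structure is illustrative, not the OCF resource format:

```python
# Sketch of the predefined-rule conversion: "when"/"why" becomes the rule's name
# and input; without it, only a scene with an automatic name is generated.
# The output structure is illustrative.

import itertools

_auto_names = itertools.count(1)   # source of automatic scene names: "scene 1", "scene 2", ...

def convert(params):
    operations = [{d: {"on": True}} for d in params.get("devices", [])]
    if "when" in params:           # "when"/"why" -> name and input of a rule
        return {"type": "rule", "name": params["when"],
                "input": params["when"], "scene": {"operations": operations}}
    return {"type": "scene", "name": "scene %d" % next(_auto_names),
            "operations": operations}
```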
In one embodiment, the semantic information includes: "when going home, turn on the living room light and the living room air conditioner". Converting the setting parameters contained in the semantic information into setting information yields:
{ "name": "go home", "desc": "when going home, turn on the living room light and air conditioner", "trigger": { "datetime": { "date": "date", "time": "19:00" } }, "devices": [ { "living room light": { "on": true } }, { "living room air conditioner": { "on": true } }, … ] }
In another embodiment, the semantic information includes: "turn off the living room light when the television is turned on". Converting the setting parameters contained in the semantic information into setting information yields:
{ "name": "turn on the television", "desc": "turn off the living room light when the television is turned on", "trigger": { "television": { "on": true } }, "devices": [ { "living room light": { "on": false } }, … ] }
In another embodiment, the semantic information includes: "when it is 30 degrees, turn on the air conditioner". Converting the setting parameters contained in the semantic information into setting information yields:
{ "name": "30 degrees", "desc": "when it is 30 degrees, turn on the air conditioner", "trigger": { "temperature sensor": { "temperature": 30 } }, "devices": [ { "air conditioner": { "on": true, "mode": "cool", "temperature": 25 } }, … ] }
In another embodiment, the semantic information includes: "at 2 o'clock in the middle of the night, there is a ball game". Converting the setting parameters contained in the semantic information into setting information yields:
{ "name": "2 o'clock in the middle of the night", "desc": "at 2 o'clock in the middle of the night, there is a ball game", "trigger": { "datetime": { "date": "2020-02-20", "time": "2:00" } }, "devices": [ { "alarm": { "time": "2:00", "alarm": true } }, { "television": { "on": true, "channel": "5" } }, … ] }
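The four examples above share one structure: a name, a description, a trigger condition, and a list of device operations. A minimal validity check of that shape can be sketched as follows (the required-field choices are inferred from the examples, not specified by the text):

```python
def is_valid_setting_info(info):
    """Check the common shape of the setting-information examples:
    'name' must be a string, 'trigger' a dict, and 'devices' a
    non-empty list of per-device operation dicts."""
    if not isinstance(info.get("name"), str):
        return False
    if not isinstance(info.get("trigger"), dict):
        return False
    devices = info.get("devices")
    return isinstance(devices, list) and len(devices) > 0

example = {
    "name": "turn on the television",
    "desc": "turn off the living room light when the television is turned on",
    "trigger": {"television": {"on": True}},
    "devices": [{"living room light": {"on": False}}],
}
# is_valid_setting_info(example) returns True
```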
S307: the AI system returns the setting information to the mobile phone application.
S308: the mobile phone application issues setting information to the scene/rule management program.
S309: the scene/rule management program sets the scene resources and/or rule resources according to the setting information. Optionally, when the setting information is converted into a scene or a rule, if a scene or rule with the same name already exists, the existing setting is overwritten with the new one; otherwise, a new scene or rule is created.
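The overwrite-or-create behavior in S309 can be sketched as a simple upsert keyed by name. The in-memory dict store is an assumption for illustration; a real scene/rule management program would persist the resources.

```python
def apply_setting(store, setting_info):
    """Set scene/rule resources from setting information: overwrite
    an existing scene or rule of the same name with the new setting,
    otherwise create a new entry. Returns True when a new entry was
    created."""
    name = setting_info["name"]
    created = name not in store
    store[name] = setting_info  # overwrite or create
    return created

store = {}
first = apply_setting(store, {"name": "go home", "devices": []})
second = apply_setting(store, {"name": "go home",
                               "devices": [{"living room light": {"on": True}}]})
# first is True (new entry); second is False (same name, overwritten)
```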
S310: the scene/rule management program returns a feedback message, such as a setting-success message carrying the setting information, to the mobile phone application.
S311: the mobile phone application sends the setting information to the voice semantic service and requests the voice semantic service to convert the setting information into voice.
S312: the voice semantic service converts the setting information into voice information and returns the voice information to the mobile phone application. For example: "when you say 'i am back', the living room light will be on and set to brightness 80 and the living room air conditioner will be on and set to cool down 26 degrees".
S313: the mobile phone application broadcasts the voice information, informs the user as a result set by the user, and prompts the user to confirm the setting result or further adjust. If the user selects further adjustment, then the process jumps to step S301.
In other embodiments of the present application, the user may input the instruction information in the form of an image. In this case, an image recognition service may be used to recognize image features in the image information and obtain the setting parameters corresponding to those image features. Alternatively, the user may input instruction information that includes motion information or gesture information. In this case, an image containing the user's motion or gesture may be captured, and the image recognition service may be used to recognize the motion features in the motion/gesture information and obtain the setting parameters corresponding to those motion features. In addition, in other embodiments of the present application, a predetermined rule may also be used to convert the semantic information into the corresponding setting information.
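The preset first and second correspondences mentioned above (image feature → setting parameter, motion/gesture feature → setting parameter) amount to table lookups. The table contents and feature names below are illustrative assumptions only:

```python
# Both correspondence tables and their feature names are hypothetical;
# a real system would populate them from configuration or training.
FIRST_CORRESPONDENCE = {    # image feature -> setting parameter
    "sleeping person": {"living room light": {"on": False}},
}
SECOND_CORRESPONDENCE = {   # motion/gesture feature -> setting parameter
    "wave hand": {"television": {"on": True}},
}

def lookup_setting_parameter(feature, correspondence):
    """Search a preset correspondence table with a recognized feature
    to obtain the matching setting parameter (None if not found)."""
    return correspondence.get(feature)

# lookup_setting_parameter("wave hand", SECOND_CORRESPONDENCE)
# returns {"television": {"on": True}}
```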
With the embodiments of the present application, when creating or modifying scene resources and rule resources, the user can input instruction information by natural language, image, motion, gesture, and similar means. The operation is simple: the user only needs to describe naturally the control effect to be achieved, without understanding scene rules or the structure of the Internet of Things, so the application program is easy to use.
An embodiment of the present application further provides a setting device, and fig. 4 is a schematic structural diagram of a setting device 400 according to an embodiment of the present application, including:
an obtaining module 410, configured to obtain instruction information;
and the setting information generating module 420 is configured to generate corresponding setting information according to the instruction information, where the setting information is used to set the scene resource and/or the rule resource.
Fig. 5 is a schematic structural diagram of a setting information generating module 420 in a setting device according to an embodiment of the present application, including:
the setting parameter generating sub-module 510 is configured to generate a corresponding setting parameter according to the instruction information;
and the setting information generating submodule 520 is configured to generate corresponding setting information according to the setting parameter.
In some embodiments, the instruction information includes at least one of voice information, image information, motion information, and gesture information.
Optionally, in a case that the instruction information includes voice information, the setting parameter generating sub-module 510 is configured to generate corresponding text information according to the voice information; and generating corresponding setting parameters according to the text information.
Optionally, in a case that the instruction information includes image information, the setting parameter generating sub-module 510 is configured to extract an image feature in the image information; and search a preset first correspondence using the image feature to obtain the setting parameter corresponding to the image feature.
Optionally, in a case that the instruction information includes motion information or gesture information, the setting parameter generating sub-module 510 is configured to identify a motion feature in the motion information/gesture information; and searching a preset second corresponding relation by adopting the action characteristics to obtain a setting parameter corresponding to the action characteristics.
As shown in fig. 5, optionally, the setting information generating sub-module 520 includes:
a rule conversion unit 521, configured to generate corresponding setting information according to the instruction information and a predetermined rule; and/or,
an artificial intelligence conversion unit 522, configured to generate corresponding setting information according to the instruction information and the intelligent device information, where the intelligent device information includes an operation record and/or status data of an intelligent device; and/or generating corresponding setting information according to the instruction information and the user information, wherein the user information comprises preference information of the user to the intelligent equipment.
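The rule conversion unit 521 and the artificial intelligence conversion unit 522 can operate alone or together. One possible combination, sketched below, lets the AI result (which reflects device state and user preferences) refine the rule-based result; the merge order and the callable interface are assumptions for illustration.

```python
def generate_setting_info(instruction, rule_fn=None, ai_fn=None):
    """Generate setting information via a predetermined rule and/or
    an AI conversion. When both are supplied, the AI output is merged
    over the rule-based output (merge order is an assumption)."""
    info = {}
    if rule_fn is not None:
        info.update(rule_fn(instruction))
    if ai_fn is not None:
        info.update(ai_fn(instruction))
    return info

rule_fn = lambda ins: {"name": ins["name"],
                       "devices": [{"living room light": {"on": True}}]}
ai_fn = lambda ins: {"devices": [{"living room light": {"on": True,
                                                        "brightness": 70}}]}
info = generate_setting_info({"name": "go home"}, rule_fn, ai_fn)
# info keeps the rule-based name, with AI-refined device settings
```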
Fig. 6 is a schematic structural diagram of a setting apparatus 600 according to an embodiment of the present application, and as shown in fig. 6, the apparatus further includes:
a feedback module 630, configured to convert the setting information into feedback information, where the feedback information includes voice information or image information; and playing or displaying the feedback information.
It should be understood that the above and other operations and/or functions of the modules in the setting device according to the embodiment of the present application are respectively for implementing the corresponding flows in the method 100 of fig. 1, and are not described herein again for brevity.
The setting device provided by the embodiments of the present application may be implemented as a single stand-alone device, or its modules may be distributed across multiple devices. For example, the acquisition module 410 may be provided in an intelligent terminal; the setting parameter generation sub-module 510 may be provided in a voice semantic server, or in a server providing image recognition and semantic recognition services; and the setting information generation sub-module 520 may be provided in an AI system.
Fig. 7 is a schematic structural diagram of a setting device 700 according to an embodiment of the present application. The setting device 700 shown in fig. 7 includes a processor 710, which can call and run a computer program from a memory to implement the methods in the embodiments of the present application.
Optionally, as shown in fig. 7, the setting device 700 may further include a memory 720, from which the processor 710 can call and run a computer program to implement the methods in the embodiments of the present application.
The memory 720 may be a separate device from the processor 710, or may be integrated into the processor 710.
Optionally, as shown in fig. 7, the setting device 700 may further include a transceiver 730, and the processor 710 may control the transceiver 730 to communicate with other devices, and in particular, may transmit information or data to the other devices or receive information or data transmitted by the other devices.
The transceiver 730 may include a transmitter and a receiver, among others. The transceiver 730 may further include an antenna, and the number of antennas may be one or more.
Optionally, the setting device 700 may implement a corresponding process implemented by the setting device in each method of the embodiment of the present application, and for brevity, details are not described here again.
Fig. 8 is a schematic block diagram of a chip 800 according to an embodiment of the application. The chip 800 shown in fig. 8 includes a processor 810, and the processor 810 can call and run a computer program from a memory to implement the method in the embodiment of the present application.
Optionally, as shown in fig. 8, the chip 800 may further include a memory 820. From the memory 820, the processor 810 can call and run a computer program to implement the method in the embodiment of the present application.
The memory 820 may be a separate device from the processor 810, or may be integrated into the processor 810.
Optionally, the chip 800 may further include an input interface 830. The processor 810 can control the input interface 830 to communicate with other devices or chips, and in particular, can obtain information or data transmitted by other devices or chips.
Optionally, the chip 800 may further include an output interface 840. The processor 810 can control the output interface 840 to communicate with other devices or chips, and in particular, can output information or data to other devices or chips.
Optionally, the chip may be applied to the setting device in the embodiment of the present application, and the chip may implement the corresponding process implemented by the setting device in each method in the embodiment of the present application, and for brevity, details are not described here again.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a system-on-chip or a system-on-chip, etc.
The aforementioned processors may be general purpose processors, Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), or other programmable logic devices, transistor logic devices, discrete hardware components, etc. The general purpose processor mentioned above may be a microprocessor or any conventional processor etc.
The above-mentioned memories may be volatile or nonvolatile memories or may include both volatile and nonvolatile memories. The non-volatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM).
It should be understood that the above memories are exemplary but not limiting illustrations, for example, the memories in the embodiments of the present application may also be Static Random Access Memory (SRAM), dynamic random access memory (dynamic RAM, DRAM), Synchronous Dynamic Random Access Memory (SDRAM), double data rate SDRAM (double data rate SDRAM), enhanced SDRAM (enhanced SDRAM, ESDRAM), Synchronous Link DRAM (SLDRAM), Direct Rambus RAM (DR RAM), and so on. That is, the memory in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (21)

  1. A setting method, comprising:
    acquiring instruction information;
    and generating corresponding setting information according to the instruction information, wherein the setting information is used for setting scene resources and/or rule resources.
  2. The method according to claim 1, wherein the generating corresponding setting information according to the instruction information comprises:
    generating corresponding setting parameters according to the instruction information;
    and generating corresponding setting information according to the setting parameters.
  3. The method of claim 1 or 2, wherein the instruction information comprises at least one of voice information, image information, motion information, and gesture information.
  4. The method of claim 3, wherein, in the case that the instruction information includes voice information, the generating corresponding setting parameters according to the instruction information includes:
    generating corresponding text information according to the voice information;
    and generating corresponding setting parameters according to the text information.
  5. The method of claim 3, wherein, in the case that the instruction information includes image information, the generating corresponding setting parameters according to the instruction information comprises:
    extracting image features in the image information;
    and searching a preset first corresponding relation by adopting the image characteristics to obtain a setting parameter corresponding to the image characteristics.
  6. The method according to claim 3, wherein in a case where the instruction information includes motion information or gesture information, the generating the corresponding setting parameter according to the instruction information includes:
    identifying motion features in the motion information/gesture information;
    and searching a preset second corresponding relation by adopting the action characteristics to obtain a setting parameter corresponding to the action characteristics.
  7. The method according to any one of claims 2 to 6, wherein the generating corresponding setting information according to the setting parameters comprises:
    generating corresponding setting information according to the setting parameters and a predetermined rule; and/or,
    generating corresponding setting information according to the setting parameters and the intelligent equipment information, wherein the intelligent equipment information comprises operation records and/or state data of the intelligent equipment; and/or,
    and generating corresponding setting information according to the setting parameters and the user information, wherein the user information comprises preference information of the user to the intelligent equipment.
  8. The method of any of claims 1 to 7, further comprising:
    converting the setting information into feedback information, wherein the feedback information comprises voice information or image information;
    and playing or displaying the feedback information.
  9. A setting device, comprising:
    the acquisition module is used for acquiring instruction information;
    and the setting information generating module is used for generating corresponding setting information according to the instruction information, and the setting information is used for setting the scene resources and/or the rule resources.
  10. The apparatus of claim 9, wherein the setting information generating module comprises:
    the setting parameter generating submodule is used for generating corresponding setting parameters according to the instruction information;
    and the setting information generation submodule is used for generating corresponding setting information according to the setting parameters.
  11. The apparatus according to claim 9 or 10, wherein the instruction information includes at least one of voice information, image information, motion information, and gesture information.
  12. The device according to claim 11, wherein in a case where the instruction information includes voice information, the setting parameter generation sub-module is configured to generate corresponding text information from the voice information; and generating corresponding setting parameters according to the text information.
  13. The apparatus according to claim 11, wherein in a case where the instruction information includes image information, the setting parameter generation sub-module is configured to extract an image feature in the image information; and searching a preset first corresponding relation by adopting the image characteristics to obtain a setting parameter corresponding to the image characteristics.
  14. The device according to claim 11, wherein in the case that the instruction information includes motion information or gesture information, the setting parameter generation sub-module is configured to identify a motion feature in the motion information/gesture information; and searching a preset second corresponding relation by adopting the action characteristic to obtain a setting parameter corresponding to the action characteristic.
  15. The apparatus according to any one of claims 10 to 14, wherein the setting information generation sub-module includes:
    the rule conversion unit is used for generating corresponding setting information according to the instruction information and a preset rule; and/or,
    the artificial intelligence conversion unit is used for generating corresponding setting information according to the instruction information and the intelligent equipment information, and the intelligent equipment information comprises operation records and/or state data of the intelligent equipment; and/or generating corresponding setting information according to the instruction information and the user information, wherein the user information comprises preference information of the user to the intelligent equipment.
  16. The apparatus of any of claims 9 to 15, further comprising:
    the feedback module is used for converting the setting information into feedback information, and the feedback information comprises voice information or image information; and playing or displaying the feedback information.
  17. A setting device, comprising: a processor and a memory for storing a computer program, the processor being configured to invoke and execute the computer program stored in the memory to perform the method of any of claims 1 to 8.
  18. A chip, comprising: a processor for calling and running a computer program from a memory so that a device on which the chip is installed performs the method of any one of claims 1 to 8.
  19. A computer-readable storage medium storing a computer program for causing a computer to perform the method of any one of claims 1 to 8.
  20. A computer program product comprising computer program instructions to cause a computer to perform the method of any one of claims 1 to 8.
  21. A computer program for causing a computer to perform the method of any one of claims 1 to 8.
CN202080093272.4A 2020-03-09 2020-03-09 Setting method and device Pending CN115004641A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/078485 WO2021179148A1 (en) 2020-03-09 2020-03-09 Setting method and device

Publications (1)

Publication Number Publication Date
CN115004641A true CN115004641A (en) 2022-09-02

Family

ID=77671073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080093272.4A Pending CN115004641A (en) 2020-03-09 2020-03-09 Setting method and device

Country Status (2)

Country Link
CN (1) CN115004641A (en)
WO (1) WO2021179148A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106537849A (en) * 2014-05-28 2017-03-22 三星电子株式会社 Apparatus and method for controlling internet of things devices
CN208402155U (en) * 2018-03-20 2019-01-18 深圳市宇昊电子科技有限公司 Bluetooth remote control lamp with voice control
CN109240116A (en) * 2018-11-02 2019-01-18 合肥吴亦科技有限公司 A kind of intelligent lighting curtain Controller for smart home
CN110147047A (en) * 2019-06-19 2019-08-20 深圳聚点互动科技有限公司 Smart home device screening technique, device, computer equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170015622A (en) * 2015-07-29 2017-02-09 삼성전자주식회사 User terminal apparatus and control method thereof
CN106778310A (en) * 2016-12-26 2017-05-31 北京恒华伟业科技股份有限公司 A kind of data managing method and system


Also Published As

Publication number Publication date
WO2021179148A1 (en) 2021-09-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination