CN118034073A - Intelligent scene processing method and electronic equipment - Google Patents

Intelligent scene processing method and electronic equipment

Info

Publication number
CN118034073A
Authority
CN
China
Prior art keywords
scene
intelligent
intelligent scene
resource
smart
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211413878.9A
Other languages
Chinese (zh)
Inventor
殷佳欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202211413878.9A priority Critical patent/CN118034073A/en
Publication of CN118034073A publication Critical patent/CN118034073A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00: Systems controlled by a computer
    • G05B15/02: Systems controlled by a computer electric

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Telephone Function (AREA)

Abstract

The application relates to the field of intelligent life and discloses a processing method for intelligent scenes and an electronic device, which provide a way of controlling the execution order of multiple intelligent scenes. In the method, the electronic device, in response to an execution instruction for executing a second intelligent scene, acquires the resource types associated with the second intelligent scene and their corresponding resource tags, where a resource tag indicates the occupation relationship of the second intelligent scene to the resource type. Then, according to the resource types and corresponding resource tags associated with each intelligent scene, the execution result of the multiple intelligent scenes is determined, for example executing multiple intelligent scenes in parallel, or selecting at least one of them for execution.

Description

Intelligent scene processing method and electronic equipment
Technical Field
The embodiment of the application relates to the field of intelligent life, in particular to a processing method of an intelligent scene and electronic equipment.
Background
In application fields such as intelligent life or whole-house intelligence, an intelligent scene can link multiple electronic devices to realize coordinated control of the electronic devices in the user's daily life. For example, when the temperature is higher than 30 ℃, the air conditioner is turned on for cooling; when the outdoor wind force exceeds level 3, the window is closed; and so on.
However, as intelligent scenes increase, the electronic devices involved also increase. Moreover, different intelligent scenes may correspond to the same electronic device, and when such intelligent scenes are executed at the same time, the scenes may conflict with each other; even when they correspond to different electronic devices, executing multiple intelligent scenes simultaneously may cause interference and other problems. Therefore, how to execute multiple scenes is worth studying.
Disclosure of Invention
The embodiment of the application provides a processing method of intelligent scenes and electronic equipment, which are used for providing a method for controlling the execution sequence of a plurality of intelligent scenes.
In a first aspect, an embodiment of the present application provides a method for processing an intelligent scene. In the method, the electronic equipment can receive and respond to an execution instruction for executing a second intelligent scene to acquire a resource type and a corresponding resource tag associated with the second intelligent scene; the resource tag is used for indicating the occupation relation of the second intelligent scene to the resource type; in one possible scenario, the electronic device may determine that there is no occupation conflict between the resources of the second smart scenario and the first smart scenario according to the resource type associated with the second smart scenario and the corresponding resource tag thereof, and the resource type associated with the first smart scenario and the corresponding resource tag thereof, and control the second smart scenario to execute in parallel with the first smart scenario; the first intelligent scene is any intelligent scene being executed; or in another possible scenario, the electronic device may further determine that the second smart scenario has an occupation conflict with the resources of the first smart scenario according to the resource type associated with the second smart scenario and the corresponding resource tag thereof, and the resource type associated with the first smart scenario and the corresponding resource tag thereof, so as to control stopping executing the first smart scenario or control keeping executing the first smart scenario.
In this method, scene orchestration of each intelligent scene determines the resource types it needs and the resource tags corresponding to those types. After the electronic device receives an execution instruction for an intelligent scene, it can determine, from the one or more resource types associated with each intelligent scene and their corresponding resource tags, whether multiple intelligent scenes conflict in resource occupation at resource granularity. When resource occupation conflicts, the intelligent scene to execute can be selected from the multiple intelligent scenes; when resource occupation does not conflict, multiple intelligent scenes can be executed in parallel. Therefore, by dividing an intelligent scene at resource granularity, more accurate and finer-grained execution conflict detection can be realized: on the one hand, execution conflicts and interference between intelligent scenes can be avoided, and on the other hand, multiple non-conflicting intelligent scenes can be executed in parallel, so that both the execution accuracy and the execution efficiency of intelligent scenes are improved.
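As a non-authoritative illustration of the resource-granularity conflict check described above, the following Python sketch shows one way such logic could look; the class and function names (ResourceClaim, Tag, conflicts) and the string form of the resource identifiers are assumptions made for illustration and are not taken from the disclosed implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Tag(Enum):
    OCCUPY = "occupy"   # the resource type must be occupied by one scene alone
    SHARE = "share"     # the resource type may be occupied together with other scenes

@dataclass(frozen=True)
class ResourceClaim:
    resource: str       # e.g. "space:living_room", "subsystem:lighting", "device:tv"
    tag: Tag

def conflicts(running: set[ResourceClaim], incoming: set[ResourceClaim]) -> bool:
    """Two scenes conflict when they claim the same resource and at least one
    of the two claims is exclusive (OCCUPY); two SHARE claims can coexist."""
    for a in running:
        for b in incoming:
            if a.resource == b.resource and Tag.OCCUPY in (a.tag, b.tag):
                return True
    return False
```

If conflicts(...) returns False, the second intelligent scene can be executed in parallel with the first; otherwise the electronic device chooses which scene to keep, for example by priority as described in the designs below.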
In one possible design, the determining that the second smart scenario has an occupancy conflict with the resources of the first smart scenario, controlling to stop executing the first smart scenario or controlling to keep executing the first smart scenario includes: determining the priority of the second intelligent scene based on the triggering mode of the execution instruction and a pre-configured priority rule; and controlling to stop executing the first intelligent scene or controlling to keep executing the first intelligent scene according to the priority of the second intelligent scene and the priority of the first intelligent scene.
This design provides a way of selecting which intelligent scene to execute from multiple intelligent scenes; by taking the priorities of the intelligent scenes into account, the selected scene better matches the user's usage habits, which can improve user experience.
In one possible design, the method further comprises: determining the priority of the second intelligent scene according to an indication in the execution instruction; when the priority of the second intelligent scene is determined to be a first priority, determining to execute the second intelligent scene; when the priority of the second intelligent scene is determined to be a second priority, controlling to execute the second intelligent scene when it is determined to execute in parallel with the first intelligent scene or when execution of the first intelligent scene is stopped; the first priority is higher than the second priority.
In this design, the execution order of intelligent scenes can be configured based on their priorities, and the execution timing of the second intelligent scene is controlled by considering the priority of the second intelligent scene, which was triggered later in time. That is, when the second intelligent scene has high priority, it can be executed first, and the electronic device then controls whether other executing intelligent scenes need to stop, so as to avoid conflicts with the execution of the second intelligent scene; when the second intelligent scene has low priority, the electronic device can further control whether the second intelligent scene may be executed according to the execution results of the multiple intelligent scenes. In this way, a later-triggered, higher-priority intelligent scene can be executed preferentially, improving its execution efficiency and reducing the delay caused by the control process.
In one possible design, the determining the priority of the second smart scenario includes: when the triggering mode is active triggering of a user, determining that the second intelligent scene is of a first priority; when the triggering mode is indirect automatic triggering, determining that the second intelligent scene is of a second priority; the first priority is higher than the second priority.
In this design, when the priority of an intelligent scene is determined from the triggering mode of the execution instruction, an intelligent scene that is actively triggered by the user can be given a higher priority, and an intelligent scene triggered indirectly and automatically can be given a lower priority. In this way, the execution result determined by the electronic device better matches the user's expectation, which can improve user experience.
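A minimal sketch of the trigger-mode-based priority rule in this design; the enum values and the trigger-mode strings are illustrative assumptions rather than the disclosed implementation:

```python
from enum import IntEnum

class Priority(IntEnum):
    SECOND = 1   # lower priority, e.g. indirect automatic triggering
    FIRST = 2    # higher priority, e.g. active triggering by the user

def priority_from_trigger(trigger_mode: str) -> Priority:
    # "user" stands for an active user trigger (APP tap, voice command, panel key);
    # anything else is treated as an indirect automatic trigger (sensor, timer).
    return Priority.FIRST if trigger_mode == "user" else Priority.SECOND
```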
In addition, in another possible design, the priority of an intelligent scene in the present application may be determined according to one factor or a combination of factors such as the user's manual setting, a default configuration, and the type of terminal device involved in the intelligent scene; for example, the electronic device may by default treat an air-quality-alarm intelligent scene as higher priority even if it is triggered indirectly and automatically. In this way, more possible execution results of intelligent scenes can be supported.
In one possible design, the controlling to stop executing the first smart scene or to keep executing the first smart scene comprises: when the priority of the second intelligent scene is higher than or equal to the priority of the first intelligent scene, controlling to stop executing the first intelligent scene; and when the priority of the second intelligent scene is lower than that of the first intelligent scene, controlling to keep executing the first intelligent scene.
In this design, the electronic device may determine the execution results of the multiple intelligent scenes based on a comparison of their priorities. It can be understood that when resource occupation conflicts, executing the higher-priority intelligent scene first, or executing the later-triggered intelligent scene first, not only resolves the resource occupation conflict but also better matches the execution result the user expects, ensuring the user's intelligent-life experience.
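Combining the designs above, the decision taken when an occupation conflict is detected might be sketched as follows, continuing the hypothetical Priority type from the previous snippet; the rule shown (the later scene wins on equal or higher priority) follows the description in this design:

```python
def resolve_conflict(first: Priority, second: Priority) -> str:
    """Decide what to do when the second (later-triggered) scene's resources
    conflict with those of the first (already executing) scene."""
    if second >= first:
        return "stop_first_and_run_second"    # later scene has equal or higher priority
    return "keep_first_and_reject_second"     # earlier scene has strictly higher priority
```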
In one possible design, before the receiving of and responding to the execution instruction for executing the second smart scene, the method further comprises: receiving a new instruction for creating the second smart scene; in response to the new instruction, parsing one or more resource types associated with the second smart scene and the resource tag corresponding to each resource type; and storing the one or more resource types of the second smart scene and the resource tags corresponding to the resource types.
In this design, during creation of an intelligent scene, the resource types required by the scene can be obtained by parsing the scene orchestration, and the resource tag corresponding to each resource type is further determined. Thus, when the intelligent scene is executed, the one or more resource types associated with it and the resource tags corresponding to those types can be obtained, so that resource conflicts can be detected and the execution results of multiple intelligent scenes determined.
In one possible design, the parsing, in response to the new instruction, of one or more resource types associated with the second intelligent scene and of the resource tag corresponding to each resource type includes: obtaining, according to the new instruction, one or more resource types configured by the user for the second intelligent scene and the resource tags corresponding to the resource types; or parsing the new instruction to match one or more resource types associated with the second intelligent scene and the resource tags corresponding to the resource types.
In this design, the resource types and corresponding resource tags configured by the user can be received through a user interface, so that the resource types and tags associated with the intelligent scene better match the user's usage habits. Alternatively, the resource types and corresponding resource tags can be obtained through automatic analysis by the electronic device, which reduces user operations and improves the efficiency of creating intelligent scenes. In addition, the electronic device can also accept the user's adjustments to the automatically analyzed resource types and tags, which preserves creation efficiency while improving how accurately the resource types and tags reflect the user's intent.
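As a hedged sketch of how the resource types and tags might be obtained when a scene is created, either from explicit user configuration or by automatic analysis of the orchestrated actions; the dictionary schema, its keys, and the inference rules are assumptions, and ResourceClaim and Tag come from the earlier sketch:

```python
def parse_resource_claims(scene: dict) -> set[ResourceClaim]:
    """Derive the resource claims of a newly created intelligent scene."""
    if "resource_claims" in scene:
        # User-configured: take the types and tags exactly as configured.
        return {ResourceClaim(item["resource"], Tag(item["tag"]))
                for item in scene["resource_claims"]}

    # Automatic analysis: infer claims from the devices referenced by the actions.
    claims: set[ResourceClaim] = set()
    for action in scene.get("actions", []):
        claims.add(ResourceClaim(f"space:{action['room']}", Tag.SHARE))
        claims.add(ResourceClaim(f"subsystem:{action['subsystem']}", Tag.OCCUPY))
    return claims
```

The derived claims would then be stored together with the scene so that they can be looked up directly when an execution instruction for the scene is later received.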
In one possible design, the controlling to stop executing the first smart scene includes: stopping execution of the first smart scene through a target electronic device that manages the first smart scene, where the target electronic device is the electronic device itself or another electronic device. The execution of a managed intelligent scene can be controlled by a scene engine in each electronic device; moreover, the electronic device can interact with other electronic devices to obtain the execution state of the intelligent scenes of one or more associated electronic devices, so that resource conflicts across scene engines can be resolved. The one or more associated electronic devices may be other electronic devices that can control the same devices as the electronic device.
In one possible design, the resource types include one or a combination of the following types: space, subsystem, equipment; the space is obtained by grouping a plurality of electronic devices based on a position relation, and the subsystem is obtained by grouping the plurality of electronic devices based on a device function.
In this design, by considering the positional relationship, and/or the device function, and/or the device itself, conflicts in resource occupation between two intelligent scenes in the same space, subsystem, or device can be avoided; in particular, occupation interference can be avoided through the space resource type. It can be understood that if two intelligent scenes need resources belonging to different spaces and different subsystems, or directly to different devices, their execution will not affect each other.
In one possible design, the resource tag includes: occupying the tag, sharing the tag; the occupation tag is used for indicating that the corresponding resource type needs to be occupied independently, and the sharing tag is used for indicating that the corresponding resource type can be occupied together with other intelligent scenes.
In this design, resource occupation conflicts between two intelligent scenes can be detected through the occupation state of the resource tags. Based on the occupation state indicated by the resource tags, execution conflicts between intelligent scenes can be avoided on the one hand, and on the other hand non-conflicting scenes can be executed in parallel, improving execution efficiency.
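A short illustrative example of how the three resource granularities (space, subsystem, device) and the two tag kinds could be combined for a single scene, reusing the hypothetical ResourceClaim and Tag definitions from the earlier sketch; the concrete values are assumptions:

```python
example_scene_claims = {
    ResourceClaim("space:living_room", Tag.SHARE),             # space: shared occupation
    ResourceClaim("subsystem:lighting", Tag.OCCUPY),           # subsystem: exclusive occupation
    ResourceClaim("device:living_room_speaker", Tag.OCCUPY),   # device: exclusive occupation
}
```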
In a second aspect, the present application provides an electronic device including a plurality of functional modules; the plurality of functional modules interact to implement the method performed by the electronic device in any of the aspects and embodiments thereof. The plurality of functional modules may be implemented based on software, hardware, or a combination of software and hardware, and the plurality of functional modules may be arbitrarily combined or divided based on the specific implementation.
In a third aspect, the present application provides an electronic device comprising at least one processor and at least one memory, the at least one memory storing computer program instructions that, when executed by the electronic device, perform the method performed by the electronic device in any of the above aspects and embodiments thereof.
In a fourth aspect, the present application also provides a processing system of a smart scenario, which may comprise an electronic device as described in the second aspect or an electronic device as described in the third aspect. Optionally, the system may further include at least one other electronic device, for example, a device for detecting a user instruction and sending an execution instruction to the electronic device, or a device for controlling execution of a first smart scene or a second smart scene, such as a bluetooth gateway, or an electronic device for executing the first smart scene or the second smart scene, such as a bluetooth device.
In a fifth aspect, the present application also provides a computer-readable storage medium having stored therein a computer program which, when executed by a computer, causes the computer to perform the method performed by the electronic device in any of the above aspects and their respective possible designs.
In a sixth aspect, the present application provides a computer program product comprising a computer program (which may also be referred to as code, or instructions) that, when executed, causes a computer to perform the method performed by the electronic device in any of the above aspects and their respective possible designs.
In a seventh aspect, embodiments of the present application also provide a graphical user interface on an electronic device with a display screen, one or more memories, and one or more processors to execute one or more computer programs stored in the one or more memories, the graphical user interface comprising a graphical user interface displayed when the electronic device performs any of the above aspects and their respective possible designs.
In an eighth aspect, the present application also provides a chip for reading a computer program stored in a memory and performing the method performed by the electronic device in any of the above aspects and their respective possible designs.
In a ninth aspect, the present application further provides a chip system, which includes a processor for supporting a computer device to implement the method performed by the electronic device in any of the above aspects and their respective possible designs. In one possible design, the chip system further includes a memory for storing programs and data necessary for the computer device. The chip system may be formed of a chip or may include a chip and other discrete devices.
The advantages of any one of the second aspect to the ninth aspect and the possible designs thereof are specifically referred to the advantages of the various possible designs of the first aspect, and are not described herein.
Drawings
FIG. 1a is an interface diagram of a processing method of an intelligent scene according to an embodiment of the present application;
Fig. 1b is a schematic view of scene composition of a processing method of an intelligent scene according to an embodiment of the present application;
FIG. 1c is a schematic diagram of a system to which a method for processing an intelligent scene according to an embodiment of the present application is applicable;
FIG. 1d is a diagram illustrating a smart scenario execution conflict;
fig. 2 is a schematic hardware structure of a possible electronic device according to an embodiment of the present application;
fig. 3 is a block diagram of a software system architecture of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an application scenario of a smart scenario processing method according to an embodiment of the present application;
FIG. 5 is a second application scenario diagram of a smart scenario processing method according to an embodiment of the present application;
FIG. 6 is a third application scenario diagram of a smart scenario processing method according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an application scenario of a smart scenario processing method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another system architecture to which the processing method of an intelligent scenario according to the embodiment of the present application is applicable;
FIG. 9 is a flowchart of a processing method of an intelligent scene according to an embodiment of the present application;
FIG. 10 is a second flowchart of a processing method of an intelligent scene according to the embodiment of the application;
FIG. 11 is a third flow chart of a processing method of an intelligent scene according to the embodiment of the application;
Fig. 12 is a flowchart of a processing method of an intelligent scene according to an embodiment of the application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The technical solution provided by the embodiments of the present application can be applied to fields such as intelligent life, and in particular to application fields related to the Internet of Things such as whole-house intelligence and intelligent office. With the development and popularization of intelligent-life technology, users make increasingly wide use of intelligent scenes (also referred to as "smart scenes"). Generally, smart scenes can be broadly classified into manual scenes and automatic scenes. A manual scene is a scene triggered by a user operation instruction, such as an operation on a scene card in a specific application (APP) on an electronic device such as a mobile phone, a voice instruction, or a click on a key of a panel such as the whole-house central control screen. An automatic scene is a scene triggered in response to a change in the operating state of an electronic device, the device state of an electronic device, or the environmental state.
In the embodiment of the application, the method provided by the application is introduced by taking the application field of whole house intelligence as an example. The intelligent scene can be configured by factory defaults or can be created by a user; the user can also adjust the intelligent scene of the factory default configuration. The created one or more scenes can be displayed in an APP such as smart life included in an electronic device such as a mobile phone or on a full house central control screen. In this way, the user can manage the created scenes, create new scenes, etc. in the smart life APP or on the control screen in the whole house.
For example, fig. 1a is an interface schematic diagram of a processing method of an intelligent scene according to an embodiment of the present application. In fig. 1a, the user creates a "smart home" scene in a display interface 11 that the smart life APP can include. As can be seen from interface 11, the user can set or adjust the validation conditions of the scene through a plurality of controls 101; for example, the conditions that can be set may include, but are not limited to: time conditions (e.g., "workday 19:00-21:00"), device conditions (e.g., "smart door lock"), person conditions (e.g., "Xiao Ming"), place conditions (e.g., "home"), environmental conditions (e.g., "indoor temperature greater than 28 ℃"), and device operating conditions (e.g., the handset's "video APP" has been playing for "2 hours"). The user may also select and set one or more execution actions for the scene through control 102; for example, the execution actions may include, but are not limited to: turning on the "living room light", turning on the "living room air conditioner" and cooling to 23 ℃, turning on the "smart screen" and playing the "latest play" in the "video APP". Finally, the user can confirm, through control 103, creation of the intelligent scene corresponding to interface 11. It will be appreciated that when the processing system of the smart scene detects a trigger event and determines that the trigger event satisfies the validation conditions set in interface 11, the execution actions set in the smart scene will be executed.
Fig. 1b is a schematic view of scene composition of a processing method of an intelligent scene according to an embodiment of the present application. In general, a smart scene may include at least the following:
(1) A trigger event (event), which is the start of the entire scene and may represent an instantaneous state change. When this change occurs, the entire scene flow may begin to execute. As shown in fig. 1b, an event may include, for example but not limited to: a user operation on the smart life APP or the whole-house central control screen, a custom key of the whole-house central control screen, a voice instruction, a device event, a host rule event, a whole-house scene event, or a scene state event. For example, an event may be the user manually turning on the party-mode light scene in the smart life APP, the user pressing a key on the whole-house central control screen to turn off all the lights in the house, or the scene state in which a change in indoor temperature is detected as introduced in fig. 1a, and so on.
In the embodiment of the present application, a detection rule for detecting an event may be preconfigured in a device or a sensor, and information such as a temperature rise or drop, a time change, a device state change, a user operation, or a voice instruction may be used in the detection rule for detecting a trigger event. The triggering of an event may be monitored by a process or thread in the processor, so that an intelligent scene that may need to be executed can be identified.
(2) Validation condition (condition): after an event occurs, it is judged whether the event satisfies the condition corresponding to the intelligent scene; if the condition is confirmed to be satisfied, the intelligent scene is executed. As shown in fig. 1b, the condition may include, for example but not limited to: system and environment status, time information, location information, device status, whole-house scene, and whole-house data map. A whole-house scene may be a condition detected comprehensively by multiple sensors and/or multiple electronic devices, for example whether anyone is at home; the whole-house data map may be average or weighted-average data computed comprehensively from multiple sensors and/or multiple electronic devices, for example an average temperature computed from multiple temperature sensors, or a comprehensive evaluation of the human comfort level based on information such as the temperature measured by a temperature sensor, the humidity measured by a humidity sensor, and the air quality measured by an air detection device.
In the scene shown in fig. 1a, when a change in indoor temperature is detected, the changed indoor temperature is further compared with the set indoor-temperature condition, i.e., compared with the set 28 ℃. As another example, when a device event is detected, it is further compared with a preset device event; for example, if an unlocking event of the smart door lock is detected, it is checked whether the person unlocking is the configured Xiao Ming.
(3) Execution actions (actions), such as turning on the air conditioner, turning on the living room light, turning on the smart screen and playing the latest play in the video APP, etc. It will be appreciated that when a smart scene is created, event, condition, and action can be orchestrated together by flow control to obtain the smart scene. For example, the event may be an unlocking event of the smart door lock, the conditions may be "workday 19:00-21:00", "Xiao Ming" and "home", and the action may be turning on the "living room light". Thus, by scene orchestration, a smart scene can be realized in which the living room light is turned on when the smart door lock detects Xiao Ming coming home on a workday between 19:00 and 21:00, as illustrated in the sketch below. An action can be realized in the following ways: local device control, host rules, whole-house services, Internet services, user interaction, etc.; for example, user interaction may implement an action via a pop-up window, voice, or a notification message.
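A hedged sketch of how the example scene from fig. 1a (the smart door lock detects Xiao Ming coming home on a workday evening, and the living room light is turned on) could be orchestrated as an event/condition/action structure; the dictionary schema and the helper function are assumptions for illustration only:

```python
import datetime

smart_home_scene = {
    "event": {"type": "device", "device": "smart_door_lock", "state": "unlocked"},
    "conditions": [
        {"type": "time", "days": "workday", "range": ("19:00", "21:00")},
        {"type": "person", "name": "Xiao Ming"},
        {"type": "place", "value": "home"},
    ],
    "actions": [
        {"device": "living_room_light", "command": "on"},
    ],
}

def should_run(scene: dict, event: dict, now: datetime.datetime, person: str) -> bool:
    """Return True when the detected event matches the scene's trigger event
    and the (simplified) validation conditions hold."""
    in_window = now.weekday() < 5 and "19:00" <= now.strftime("%H:%M") <= "21:00"
    return event == scene["event"] and in_window and person == "Xiao Ming"
```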
Fig. 1c is a schematic diagram of a system suitable for a processing method of an intelligent scene according to an embodiment of the present application. The system at least comprises: terminal device 110, full house central control panel 120, full house host 130, power Line Communication (PLC) gateway 140, PLC device 141 connected to PLC gateway 140, bluetooth gateway 150, bluetooth device 151 connected to bluetooth gateway 150, and wireless-fidelity (Wi-Fi) device 161. Wherein, a host scenario engine may be deployed on the whole house host 130, a gateway scenario engine 1 may be deployed on the PLC gateway 140, and a gateway scenario engine 2 may be deployed on the bluetooth gateway 150. The scenario engine (host scenario engine, gateway scenario engine 1, or gateway scenario engine 2) may be used to implement execution of the smart scenario, for example, by a process in a processor included in the device, or may also be implemented by a processing unit, which is not limited in the present application.
The intelligent scene can be deployed in a host scene engine, a gateway scene engine 1 or a gateway scene engine 2 when the intelligent scene is deployed. Wherein, when executing the smart scene deployed in the host scene engine, the PLC device 141, the bluetooth device 151, and the Wi-Fi device 161 may be correspondingly controlled. The PLC device 141 may be correspondingly controlled when executing the smart scene deployed in the gateway scene engine 1. The bluetooth device 151 may be correspondingly controlled when executing the smart scene deployed in the gateway scene engine 2.
As can be seen from the above description, in the processing system of the smart scene, different scene engines can be deployed on different devices, and different scene engines can also control the same electronic device. Thus, even if the conflict problem of executing smart scenes can be resolved within one scene engine by a control strategy in one possible implementation, there is still a conflict when different scene engines need to control the same electronic device while executing smart scenes. For example, fig. 1d is a schematic diagram of a smart-scene execution conflict. Assume that at the 0th second, a "party mode" smart scene is triggered; in this scene, from the 1st second, action1 (playing music for 90 seconds) and action2 (a continuous light change for 90 seconds) corresponding to the smart scene are executed. During the execution of the "party mode" smart scene, assume that execution of a "conference mode" smart scene is triggered at the 2nd second; then, from the 3rd second, action1' (a gradual change to the conference light effect within 10 seconds) corresponding to the "conference mode" smart scene is executed at the same time. As a result, from the 3rd to the 91st second, the lights may flash chaotically due to the execution conflict of the two smart scenes, leading to a poor user experience.
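Under the resource-granularity approach of this application, the conflict in fig. 1d would be detected before the 3rd second, because both scenes would carry an exclusive claim on the lighting subsystem. A brief illustration, reusing the hypothetical ResourceClaim, Tag and conflicts helpers sketched earlier; the claim sets are assumptions:

```python
party_mode_claims = {ResourceClaim("subsystem:lighting", Tag.OCCUPY),
                     ResourceClaim("subsystem:audio", Tag.OCCUPY)}
conference_mode_claims = {ResourceClaim("subsystem:lighting", Tag.OCCUPY)}

# Both scenes claim the lighting subsystem exclusively, so only one may drive the lights.
assert conflicts(party_mode_claims, conference_mode_claims)
```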
In addition, in another possible implementation, only one smart scene is executed at a time to avoid execution conflicts; although this avoids conflicts, it leads to poor execution efficiency. For example, while smart scene 1 controlling the living room light is being executed, even if smart scene 2 controlling the bedroom light is triggered at the same time, smart scene 2 is executed only after smart scene 1 has finished, which results in a poor user experience.
Therefore, in the embodiment of the application, a processing method of an intelligent scene is provided. The method can control execution of a plurality of different intelligent scenes based on consideration of resource occupation granularity. Therefore, not only can execution conflict and execution interference of the intelligent scene be avoided, but also the execution efficiency of the intelligent scene can be improved, and thus the user experience can be improved.
The scheme can be applied to electronic equipment included in a processing system of the intelligent scene. The electronic device may be, for example, a whole house host 130, or a PLC gateway 140, or a bluetooth gateway 150, etc. in the system as shown in fig. 1 c. The electronic device may also be, for example, an electronic device with processing capabilities such as a cell phone, computer, tablet computer, etc.
The processing method of the intelligent scene provided by the embodiments of the present application is applicable to electronic devices. The electronic device may be a portable electronic device such as a mobile phone, a tablet computer, or a notebook computer; a wearable device such as a watch or a band; a smart home device such as a television or a refrigerator; a vehicle-mounted device; or a virtual reality (VR) device, an augmented reality (AR) device, a mixed reality (MR) device, etc. In any case, the embodiments of the present application do not limit the specific type of electronic device. In some examples, the processing method of the intelligent scene provided by the embodiments of the present application can be a function, service, or application in the electronic device. The application may be pre-installed on the electronic device or downloaded from a network, such as the smart life APP.
The processing method of the intelligent scene provided by the embodiment of the application can be also applied to a system, and the system comprises at least one terminal device and at least one network side device. Each terminal device can be connected with the network side device in a wireless or wired mode. For example, the at least one terminal device may be the terminal device 110, the full house central control screen 120, the PLC device 141, the bluetooth device 151, the Wi-Fi device 161 as shown in fig. 1 c; the at least one network side device may be a whole house host 130, a PLC gateway 140 and a bluetooth gateway 150 as shown in fig. 1 c. The network side device can be used for responding to the occurrence of the event based on one or more pre-deployed intelligent scenes, and controlling the corresponding action to trigger when the event is determined to meet the condition corresponding to the intelligent scene. The terminal device 110 and the full house central control screen 120 may be used to detect events, assist in creating smart scenes on network side devices, etc. The PLC device 141, the bluetooth device 151, the Wi-Fi device 161, and other devices may be configured to receive control from a network side device, so as to implement execution of an action corresponding to a smart scene, for example, the PLC device 141 may be an electronic device connected by a wired manner, such as a living room lamp, and the network side device may control to turn on the living room lamp when detecting that an event satisfies a condition shown in fig. 1 a.
In summary, the processing method of the intelligent scene provided by the embodiment of the application can be completed by one device alone or by a system. For ease of understanding, the following description will be primarily made by way of example with respect to the system shown in fig. 1 c.
The embodiments of the present application can be applied, by way of example and not limitation, to electronic devices running various operating systems.
Fig. 2 shows a schematic diagram of a possible hardware architecture of an electronic device. Wherein, the electronic device 200 comprises: radio Frequency (RF) circuitry 210, power supply 220, processor 230, memory 240, input unit 250, display unit 260, audio circuitry 270, communication interface 280, and Wi-Fi module 290. It will be appreciated by those skilled in the art that the hardware structure of the electronic device 200 shown in fig. 2 does not constitute a limitation of the electronic device 200, and the electronic device 200 provided in the embodiment of the present application may include more or less components than those illustrated, may combine two or more components, or may have different component configurations. The various components shown in fig. 2 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The following describes the components of the electronic device 200 in detail with reference to fig. 2:
The RF circuitry 210 may be used for receiving and transmitting data during a communication or session. Specifically, the RF circuit 210 receives downlink data of a base station and then sends the downlink data to the processor 230 for processing; in addition, uplink data to be transmitted is transmitted to the base station. Typically, the RF circuitry 210 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (low noise amplifier, LNA), a duplexer, and the like.
In addition, RF circuit 210 may also communicate with other devices via a wireless communication network. The wireless communication may use any communication standard or protocol, including but not limited to: global system for mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), email, short message service (SMS), and the like.
Wi-Fi technology belongs to a short-distance wireless transmission technology, and the electronic device 200 can be connected with an Access Point (AP) through a Wi-Fi module 290, so as to realize access of a data network. The Wi-Fi module 290 may be used for receiving and transmitting data during communication. In the embodiment of the present application, in a scenario where the electronic device 200 is the whole-house host 130 as shown in fig. 1c, the Wi-Fi module 290 may be used to communicate with the Wi-Fi device 161, and the Wi-Fi device 161 may be controlled to execute actions corresponding to the smart scenario. Taking Wi-Fi device 161 as an example of a smart speaker, the smart speaker may be controlled to perform actions such as playing music.
The electronic device 200 may be physically connected to other devices through the communication interface 280. Optionally, the communication interface 280 is connected with the communication interfaces of other devices through a cable, so as to realize data transmission between the electronic device 200 and the other devices. In the embodiment of the present application, in the scenario where the electronic device 200 is the PLC gateway 140 as shown in fig. 1c, the electronic device may communicate with the PLC device 141 through the communication interface 280 and control the PLC device 141 to execute actions corresponding to the smart scene. Taking the PLC device 141 being a living room light as an example, actions such as turning on the living room light can be controlled.
The electronic device 200 can also realize communication services and interact with other electronic devices, so the electronic device 200 needs to have a data transmission function, that is, the electronic device 200 needs to include a communication module. Although fig. 2 illustrates the RF circuit 210, the Wi-Fi module 290, and the communication interface 280, it is understood that at least one of the above components or other communication modules (e.g., bluetooth module) for implementing communication are present in the electronic device 200 for data transmission.
For example, when the electronic device 200 is a mobile phone, the electronic device 200 may include the RF circuit 210, may also include the Wi-Fi module 290, or may include a bluetooth module (not shown in fig. 2); when the electronic device 200 is a computer or the whole house host 130 as shown in fig. 1c, the electronic device 200 may include the communication interface 280, may further include the Wi-Fi module 290, or may include a bluetooth module (not shown in fig. 2); when the electronic device 200 is a tablet computer, the electronic device 200 may include the Wi-Fi module, or may include a bluetooth module (not shown in fig. 2).
The memory 240 may be used to store software programs and modules. The processor 230 executes various functional applications and data processing of the electronic device 200 by running the software programs and modules stored in the memory 240. Optionally, the memory 240 may mainly include a program storage area and a data storage area. The program storage area may store an operating system (mainly including software programs or modules corresponding to a kernel layer, a system layer, an application framework layer, an application layer, and so on).
In addition, the memory 240 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In the embodiment of the present application, the memory 240 may store one or more pre-deployed smart scenes. A smart scene may include the corresponding condition, action, resources to be used, and so on; the resources may include, but are not limited to: space, subsystem, and device. For example, the space may be divided into living room, bedroom 1, bedroom 2, bathroom, kitchen, etc.; the subsystems may include, but are not limited to, a lighting subsystem, a sun-shading subsystem, etc.; and the devices may be the one or more devices that each subsystem includes, e.g., the lighting subsystem includes the living room light, etc.
The input unit 250 may be used to receive editing operations of a plurality of different types of data objects such as numeric or character information inputted by a user, and to generate key signal inputs related to user settings and function controls of the electronic device 200. Alternatively, the input unit 250 may include a touch panel 251 and other input devices 252.
The touch panel 251, also referred to as a touch screen, may collect touch operations on or near it (such as operations by the user using a finger, a stylus, or any other suitable object or accessory on or near the touch panel 251) and drive the corresponding connection device according to a preset program. In the embodiment of the present application, the touch panel 251 may collect user operations on or near it, where the user operations may be used to create a smart scene or trigger its execution using the electronic device 200; for example, a user operation may be an operation for creating the conditions and actions of a smart scene in the smart life APP included in the electronic device 200.
Alternatively, the other input devices 252 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 260 may be used to display information input by the user or provided to the user, and the various menus of the electronic device 200. The display unit 260 is the display system of the electronic device 200 and is used to present interfaces for human-computer interaction. The display unit 260 may include a display panel 261. Optionally, the display panel 261 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. In the embodiment of the present application, the display unit 260 may be used to display interfaces related to creating a smart scene for the user, for example the interface 11 shown in fig. 1a, and may also be used to display a list of created smart scenes for the user to review.
The processor 230 is a control center of the electronic device 200, connects various components using various interfaces and lines, and performs various functions of the electronic device 200 and processes data by running or executing software programs and/or modules stored in the memory 240 and calling data stored in the memory 240, thereby realizing various services based on the electronic device 200. In the embodiment of the present application, the processor 230 may be configured to implement the method provided in the embodiment of the present application.
The electronic device 200 also includes a power source 220 (such as a battery) for powering the various components. Optionally, the power supply 220 may be logically connected to the processor 230 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
As shown in fig. 2, the electronic device 200 further includes audio circuitry 270, a microphone 271, and a speaker 272, which may provide an audio interface between a user and the electronic device 200. The audio circuit 270 may be configured to convert audio data into a signal recognizable by the speaker 272, and transmit the signal to the speaker 272 for conversion by the speaker 272 into a sound signal output. The microphone 271 is used for collecting external sound signals (such as the voice of a person speaking, or other sounds, etc.), converting the collected external sound signals into signals recognizable by the audio circuit 270, and transmitting the signals to the audio circuit 270. Audio circuit 270 may also be used to convert the signal sent by microphone 271 to audio data, which is then output to RF circuit 210 for transmission to, for example, another electronic device, or to memory 240 for subsequent further processing. In the embodiment of the present application, the speaker 272 may be used to execute a smart scene that needs to output a sound signal, for example, when the sound box executes an action of playing music according to the smart scene, the music may be output through the speaker 272; the microphone 271 may be used to perform acquisition of user voice instructions, etc., for example, the microphone 271 may acquire voice instructions of a user for performing an intelligent scene of "party mode", such as "on party mode".
Although not shown, the electronic device 200 may further include a camera, at least one sensor, etc., which will not be described herein. The at least one sensor may include, but is not limited to, a pressure sensor, a barometric pressure sensor, an acceleration sensor, a distance sensor, a fingerprint sensor, a touch sensor, a temperature sensor, and the like.
An Operating System (OS) to which an embodiment of the present application relates is the most basic system software that runs on the electronic device 200. The software system of the electronic device 200 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of the present application exemplifies the software structure of the electronic device 200 by taking an operating system adopting a hierarchical architecture as an example.
Fig. 3 is a software structure block diagram of an electronic device according to an embodiment of the present application. As shown in fig. 3, the software structure of the electronic device may be a hierarchical architecture, for example, the software may be divided into several layers, each layer having a distinct role and division of work. The layers communicate with each other through a software interface. In some embodiments, the operating system is divided into five layers, from top to bottom, an application layer, an application framework layer (FWK), runtime and system libraries, a kernel layer, and a hardware layer, respectively.
The application layer may include a series of application packages. As shown in fig. 3, the application layer may include a camera, settings, skin modules, user Interfaces (UIs), third party applications, and the like. The third party applications may include, for example, wireless local area networks (wireless local area network, WLAN), music, talk, bluetooth, video, etc.
In one possible implementation, the application may be developed using the java language, by calling an application programming interface (application programming interface, API) provided by the application framework layer, through which the developer may interact with the underlying layers of the operating system (e.g., hardware layer, kernel layer, etc.) to develop its own application. The application framework layer is essentially a series of services and management systems for the operating system.
The application framework layer provides an application programming interface and programming framework for the application of the application layer. The application framework layer includes some predefined functions. As shown in FIG. 3, the application framework layer may include an activity manager, a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The activity manager is used for managing the life cycle of each application program and providing a common navigation rollback function, and provides an interactive interface for windows of all programs.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like. The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device, such as the management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar; it can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to give message reminders, etc. The notification manager may also present notifications that appear in the system top status bar in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light blinks.
The runtime includes a core library and a virtual machine. The runtime is responsible for the scheduling and management of the operating system.
The core library consists of two parts: one part is a function which needs to be called by java language, and the other part is a core library of an operating system. The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media frame (media frame), three-dimensional graphics processing library (e.g., openGL ES), two-dimensional graphics engine (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of two-dimensional and 3D layers for multiple applications.
The media framework supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, etc. The media framework may support a variety of audio and video coding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
A two-dimensional graphics engine is a drawing engine that draws two-dimensional drawings.
In some embodiments, a three-dimensional graphics processing library may be used to render three-dimensional motion trail images and a two-dimensional graphics engine may be used to render two-dimensional motion trail images.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
The hardware layer may include various types of sensors, such as acceleration sensors, gravity sensors, touch sensors, and the like.
Typically, the electronic device 200 may run multiple applications simultaneously. In a simple case, an application may correspond to one process; in a more complex case, an application may correspond to multiple processes. Each process has a process number (process ID).
It should be understood that the expression "at least one of the following" or the like in the embodiments of the present application refers to any combination of the listed items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be singular or plural. "Plurality" means two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
In addition, it should be understood that in the description of the present application, the words "first," "second," and the like are used merely for distinguishing between the descriptions and not for indicating or implying any relative importance or order.
It should be understood that, the hardware structure of the electronic device may be shown in fig. 2, the software system architecture may be shown in fig. 3, where a software program and/or a module corresponding to the software system architecture in the electronic device may be stored in the memory 240, and the processor 230 may execute the software program and the application stored in the memory 240 to perform a flow of a processing method of an intelligent scenario provided by an embodiment of the present application.
In order to facilitate understanding of the processing method of the smart scenario provided by the present application, the implementation procedure of the method provided by the present application will be described below with reference to fig. 4 to 12.
The embodiment of the application is suitable for application scenes such as intelligent life and the like. First, the processing effect of the method provided by the embodiment of the application is described through the following multiple application scenarios. It can be appreciated that the embodiments of the present application are not limited to the following application scenarios.
Application scenario A, fig. 4 is a schematic diagram of an application scenario of a processing method of an intelligent scene according to an embodiment of the present application. Taking a smart scene identified as the guest mode and a smart scene identified as the party mode as examples, the resource conditions that the two smart scenes need to use are: the spaces of the two intelligent scenes both belong to the living room space and carry a space sharing label, where the space sharing label is used for indicating that the intelligent scene can occupy the living room space together with other intelligent scenes; both smart scenes require control of the lighting subsystem and carry a lighting subsystem occupancy label, where the occupancy label is used for indicating that the smart scene needs to occupy the lighting subsystem alone.
For example, if the intelligent scene of the guest mode is triggered based on the user's active triggering, the priority of the intelligent scene of the guest mode may be determined to be high. If the intelligent scene of the party mode is triggered for execution based on the user's active triggering mode while the intelligent scene of the guest mode is being executed, the execution order of the two intelligent scenes can be determined with reference to Table 1-1 below:
TABLE 1-1

| Smart scene identification | Resource tag | Priority | Occupancy duration |
|---|---|---|---|
| Guest mode | {(occupancy, lighting subsystem), (shared, living room space)} | High | 5 seconds |
| Party mode | {(occupancy, lighting subsystem), (shared, living room space)} | High | 5 seconds |
As can be seen from Table 1-1, since both intelligent scenes carry an occupancy label for the lighting subsystem, meaning that each needs to occupy the lighting subsystem alone, it is determined that executing the two intelligent scenes simultaneously would cause a resource occupation conflict; that is, the two intelligent scenes cannot be executed at the same time.
Based on this, when the intelligent scene of the guest mode is executed first and has not yet finished, since the intelligent scene of the party mode also has a high priority, the handling for two intelligent scenes of the same priority can be to stop executing the guest mode and switch to executing the party mode; that is, the intelligent scene of the party mode preempts the resources from the intelligent scene of the guest mode.
It can be appreciated that an alternative implementation is that, when smart scenes of the same priority need to be executed simultaneously but only one can be executed, the smart scene with the later trigger time is executed, so that the most recent smart scene is executed in time; in other words, if the resources required by multiple smart scenes of the same priority conflict, the smart scene whose execution was triggered later is the one executed. In this way, the executed smart scene better matches the user's latest operation instruction or the most recently automatically triggered execution instruction, and the processing accuracy of smart scenes can be improved.
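For ease of understanding, the following is a minimal illustrative sketch in Python of the rule described above for two same-priority smart scenes; the data structures and names (for example, conflicts and trigger_time) are assumptions made only for illustration and are not the implementation of this application.

```python
# Illustrative sketch only: two smart scenes of equal priority both declare an
# "occupy" tag on the lighting subsystem, so they conflict; the scene whose
# execution was triggered later preempts the resources.

def conflicts(scene_a, scene_b):
    """Scenes conflict if, for any resource they both declare, at least one
    of them holds an 'occupy' tag on that resource."""
    for resource, tag_a in scene_a["tags"].items():
        tag_b = scene_b["tags"].get(resource)
        if tag_b is not None and "occupy" in (tag_a, tag_b):
            return True
    return False

running = {"id": "guest mode", "priority": "high", "trigger_time": 100,
           "tags": {"lighting subsystem": "occupy", "living room space": "share"}}
incoming = {"id": "party mode", "priority": "high", "trigger_time": 105,
            "tags": {"lighting subsystem": "occupy", "living room space": "share"}}

if conflicts(running, incoming) and incoming["priority"] == running["priority"]:
    # Equal priority: the scene with the later trigger time is the one executed.
    winner = max((running, incoming), key=lambda s: s["trigger_time"])
    print(f"stop '{running['id']}', execute '{winner['id']}'")  # stop guest mode
```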
Application scenario B, fig. 5 is a schematic diagram of another application scenario of a processing method of an intelligent scene according to an embodiment of the present application. Taking a smart scene identified as the guest mode and a smart scene identified as presence lighting (the lights are turned on when someone is detected) as examples, the resource conditions that the two smart scenes need to use are as follows: the spaces of the two intelligent scenes both belong to the living room space and carry a space sharing label; both smart scenes require control of the lighting subsystem and carry a lighting subsystem occupancy label.
By way of example, in connection with the introduction of application scenario A, the priority of the intelligent scene of the guest mode is still taken to be high. If, while the intelligent scene of the guest mode is being executed, the presence lighting scene is triggered for execution based on an indirect automatic triggering mode, the execution order of the two intelligent scenes can be determined with reference to Table 1-2 below:
TABLE 1-2

| Smart scene identification | Resource tag | Priority | Occupancy duration |
|---|---|---|---|
| Guest mode | {(occupancy, lighting subsystem), (shared, living room space)} | High | 5 seconds |
| Presence lighting | {(occupancy, lighting subsystem), (shared, living room space)} | Low | NA |
As can be seen from Table 1-2, since the intelligent scene of the guest mode has a high priority and the presence lighting scene has a low priority, when the intelligent scene of the guest mode is executed first and has not yet finished, the lower-priority presence lighting scene can be refused execution.
Application scenario B differs from application scenario A in that, based on the different triggering modes, the priorities of the intelligent scenes are determined to be different, so that the execution order of the intelligent scenes is controlled differently. It will be appreciated that an alternative implementation is that when a low-priority smart scene conflicts with a high-priority smart scene that is being executed, the low-priority smart scene is not executed. In addition, it can also be understood that when a high-priority smart scene conflicts with a low-priority smart scene that is being executed, the high-priority smart scene can preempt resources from the low-priority smart scene, i.e., the executing low-priority smart scene stops executing and execution switches to the high-priority smart scene.
Application scenario C, fig. 6 is a schematic diagram of another application scenario of a processing method of an intelligent scene according to an embodiment of the present application. Taking a smart scene identified as the guest mode and a smart scene identified as all lights on as examples, the resource conditions that the two smart scenes need to use are: the spaces of the two intelligent scenes both belong to the living room space and carry a space sharing label; both smart scenes need to control the lighting subsystem, but the smart scene of the guest mode carries a lighting subsystem occupancy tag, while the all-lights-on smart scene carries a lighting subsystem sharing tag, which is used for indicating that the smart scene can occupy the lighting subsystem together with other smart scenes.
By way of example, in connection with the introduction of application scenario A, the priority of the intelligent scene of the guest mode is still taken to be high. If, while the intelligent scene of the guest mode is being executed, the all-lights-on intelligent scene is triggered for execution based on the user's active triggering mode, the execution order of the two intelligent scenes can be determined with reference to Table 1-3 below:
TABLE 1-3

| Smart scene identification | Resource tag | Priority | Occupancy duration |
|---|---|---|---|
| Guest mode | {(occupancy, lighting subsystem), (shared, living room space)} | High | 5 seconds |
| All lights on | {(shared, lighting subsystem), (shared, living room space)} | High | 5 seconds |
As can be seen from Table 1-3, although the all-lights-on smart scene carries a sharing tag for the lighting subsystem, since the guest mode already occupies the lighting subsystem, it is still determined that the two smart scenes have an execution conflict. Based on this, when the intelligent scene of the guest mode is executed first and has not yet finished, even though the all-lights-on intelligent scene carries a sharing label for the subsystem, the handling for two intelligent scenes of the same priority can be to stop the guest mode and switch to executing the all-lights-on scene; that is, the intelligent scene of the guest mode no longer occupies the resource, and the resource is released to the all-lights-on intelligent scene.
Application scenario C differs from application scenario A in that the resource labels for the lighting subsystem are different between the two scenes; what is the same is that one of the two intelligent scenes still carries an occupancy label, so the two scenes still conflict over occupation of the subsystem; also the same is that the two intelligent scenes have the same triggering mode and therefore the same priority. It will be appreciated that an alternative implementation is that, for different smart scenes involving the same subsystem, as long as at least one of them carries an occupancy tag for that subsystem, it can be determined that the smart scenes conflict.
In addition, it can also be understood that two intelligent scenes that each carry a sharing label for the same subsystem may be executed in parallel. That is, if two smart scenes do not conflict over the occupation of resources, the two smart scenes may be executed in parallel. For example, a music-playing smart scene and a friendly-reminder smart scene can be executed in parallel: while the music-playing smart scene is being executed through the speaker subsystem, the friendly reminder for the top of the hour can also be played simultaneously through the speaker subsystem.
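The pairwise rule implied by application scenarios A and C can be sketched as follows (illustrative only; the tag names "occupy" and "share" are assumptions standing in for the occupancy label and the sharing label):

```python
# Illustrative sketch: for one resource declared by two scenes, the pair of
# tags conflicts as soon as either tag is an occupancy tag.

def tags_conflict(tag_a, tag_b):
    return "occupy" in (tag_a, tag_b)

print(tags_conflict("share", "share"))    # False -> scenes may run in parallel
print(tags_conflict("occupy", "share"))   # True  -> application scenario C
print(tags_conflict("occupy", "occupy"))  # True  -> application scenario A
```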
Application scenario D, fig. 7 is a schematic diagram of another application scenario of a processing method of an intelligent scene according to an embodiment of the present application. Taking a smart scene identified as the guest mode and a smart scene identified as intelligent sunshade as examples, the resource conditions that the two smart scenes need to use are as follows: the spaces of the two intelligent scenes both belong to the living room space and carry a space sharing label; the intelligent scene of the guest mode needs to control the lighting subsystem and carries a lighting subsystem occupancy tag, while the intelligent sunshade scene needs to control the sunshade subsystem and carries a sunshade subsystem sharing tag, which is used for indicating that the intelligent scene can occupy the sunshade subsystem together with other intelligent scenes.
By way of example, in connection with the introduction of application scenario A, the priority of the intelligent scene of the guest mode is still taken to be high. If, while the intelligent scene of the guest mode is being executed, the intelligent sunshade scene is triggered for execution based on an indirect automatic triggering mode, the execution order of the two intelligent scenes can be determined with reference to Table 1-4 below:
TABLE 1-4

| Smart scene identification | Resource tag | Priority | Occupancy duration |
|---|---|---|---|
| Guest mode | {(occupancy, lighting subsystem), (shared, living room space)} | High | 5 seconds |
| Intelligent sunshade | {(shared, sunshade subsystem), (shared, living room space)} | Low | NA |
As can be seen from Table 1-4, since the intelligent scene of the guest mode needs to occupy the lighting subsystem while the intelligent sunshade scene needs to use the sunshade subsystem, the subsystems used by the two intelligent scenes are unrelated, so the two intelligent scenes can be triggered and executed in parallel. Based on this, even when the intelligent scene of the guest mode is executed first and has not yet finished, and the priority of the intelligent sunshade scene is lower than that of the guest mode scene, the intelligent sunshade scene can still be executed simultaneously.
Application scenario D is the same as application scenario B in that the triggering modes of the two intelligent scenes are different and therefore their priorities are different; the difference is that in application scenario D the two intelligent scenes use different subsystems, so the two intelligent scenes have no conflict over the occupation of resources and can be executed in parallel. It will be appreciated that an alternative implementation is that when the subsystems used by two smart scenes are unrelated, the two scenes may be executed in parallel; it can also be understood that when the resources required by multiple intelligent scenes do not conflict, they may be executed in parallel. In this way, the execution efficiency of intelligent scenes can be guaranteed and the user experience improved.
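A minimal sketch of the check for application scenario D is given below (illustrative only; the dictionaries are assumed declarations, not the actual configuration format): resources declared by only one of the two scenes cannot cause a conflict, so the check is restricted to the overlapping resources.

```python
# Illustrative sketch: the guest mode occupies the lighting subsystem, the
# intelligent sunshade shares the sunshade subsystem; only the living room
# space is declared by both, and both declare it as shared, so no conflict.

guest_mode = {"lighting subsystem": "occupy", "living room space": "share"}
sunshade = {"sunshade subsystem": "share", "living room space": "share"}

overlap = set(guest_mode) & set(sunshade)
conflict = any("occupy" in (guest_mode[r], sunshade[r]) for r in overlap)
print(overlap, conflict)  # {'living room space'} False -> execute in parallel
```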
In addition, it can be understood that, in the embodiment of the present application, by setting the occupancy relationship in the resource tag corresponding to the space resource type, interference among multiple smart scenes can also be avoided even when the smart scenes correspond to different electronic devices. For example, if at least one of the multiple smart scenes occupies the living room space, this indicates that the multiple smart scenes have a conflict in occupying resources.
Based on the introduction of the processing effects in the above application scenarios, it can be seen that, in the method provided by the embodiment of the present application, on the one hand, when scene arrangement is performed, a corresponding resource tag is preconfigured for each intelligent scene, so that each intelligent scene's requirements on resources can be determined based on the resource tags, for example, which resources the scene needs to occupy alone and which resources the scene can occupy together with other intelligent scenes.
It should be noted that the embodiment of the present application does not limit the resource types required by each smart scene; for example, the resource types may include, but are not limited to, one or a combination of the following resources: space, subsystem, device. Wherein,
(1) The space can be obtained by grouping a plurality of electronic devices included in the processing system of the intelligent scene according to positions, so that the processing control of the intelligent scene can be realized more accurately; in addition, one electronic device may belong to one or more spaces, for example, a living room light may belong to not only a living room space but also a whole house space.
(2) The subsystem can be obtained by grouping the plurality of electronic devices included in the processing system of the intelligent scene according to device type, and it can also enable more accurate processing control of the intelligent scene. In addition, in a specific implementation, the electronic devices included in the system can be divided into finer groups by considering the space and the subsystem together, such as the lighting subsystem of the living room space, the lighting subsystem of the bedroom space, and the like.
(3) The device may represent each electronic device included in the system. Alternatively, the device may be an electronic device belonging to the spatial resource type and/or the subsystem resource type described in the introduction above.
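As an illustration of the space and subsystem resource types (an assumed sketch, not the actual grouping logic of the system), devices can be grouped by position to obtain spaces and by device type to obtain subsystems, and a device may additionally belong to the whole-house space:

```python
# Illustrative sketch: deriving space and subsystem groupings from a device list.
devices = [
    {"name": "living room light",   "room": "living room", "type": "lighting"},
    {"name": "bedroom light",       "room": "bedroom",     "type": "lighting"},
    {"name": "living room speaker", "room": "living room", "type": "speaker"},
]

spaces, subsystems = {}, {}
for d in devices:
    spaces.setdefault(d["room"], []).append(d["name"])      # group by position
    subsystems.setdefault(d["type"], []).append(d["name"])  # group by device type

# One device may belong to more than one space, e.g. the whole-house space.
spaces["whole house"] = [d["name"] for d in devices]

print(spaces["living room"])   # ['living room light', 'living room speaker']
print(subsystems["lighting"])  # ['living room light', 'bedroom light']
```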
On the other hand, when an event of a smart scene is detected, the priority of the smart scene may be determined based on the trigger mode of the event. Optionally, the priority of an intelligent scene triggered by a user's active triggering mode may be high; for example, the user's active triggering mode may include, but is not limited to: tapping a smart scene card included in the smart life APP, instructing the electronic device through a voice command, or operating a button on the whole-house central control screen. Optionally, the priority of an intelligent scene triggered by an indirect automatic triggering mode may be low; for example, the indirect automatic triggering mode may include, but is not limited to: device events (e.g., sensor detection), host rule events (e.g., time events, place events, etc.), whole-house scene events, and scene status events. Optionally, the priority of a smart scene may also be determined in a pre-configured manner; for example, the user may pre-configure the priority of a specific smart scene (assumed to be smart scene A) to be high, and the priority of smart scene A is then high regardless of the trigger mode it adopts.
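The priority rule described above can be sketched as follows (illustrative only; the trigger names and the two-level high/low split are assumptions for the example, since more priority levels or a pre-configured priority are also possible):

```python
# Illustrative sketch: a pre-configured priority wins; otherwise user-initiated
# triggers map to high priority and indirect automatic triggers to low priority.

USER_TRIGGERS = {"scene card tap", "voice command", "central control screen button"}
AUTO_TRIGGERS = {"device event", "time event", "place event", "scene status event"}

def scene_priority(trigger, preconfigured=None):
    if preconfigured is not None:
        return preconfigured
    return "high" if trigger in USER_TRIGGERS else "low"

print(scene_priority("voice command"))          # high
print(scene_priority("device event"))           # low
print(scene_priority("device event", "high"))   # pre-configured override -> high
```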
Based on the above, by the method provided by the embodiment of the application, the electronic device can determine the execution results of a plurality of intelligent scenes according to one or a combination of the information such as the resource type, the resource label corresponding to the resource type, the priority of the intelligent scene and the like, so that not only the execution conflict and the execution interference of the intelligent scenes can be avoided, but also the parallel execution of a plurality of intelligent scenes without conflict can be realized, and the execution accuracy of the intelligent scenes can be improved.
Fig. 8 is a schematic diagram of a system architecture suitable for a processing method of an intelligent scene according to an embodiment of the present application. Compared with the system architecture shown in fig. 1c, the difference is that fig. 8 includes a whole-house host 800. In addition to a host scene engine, the whole-house host 800 may include an occupancy decision service. The occupancy decision service may be implemented by an application process or thread in the whole-house host 800, for example, it may be processed in a processor included in the whole-house host 800; or it may be implemented by one of the processing units in the whole-house host 800. The occupancy decision service may be used to perform conflict detection and execution order decisions for multiple intelligent scenes, and the like. In the following embodiments, the occupancy decision service being deployed in the whole-house host 800 is taken as an example; in a specific implementation, the occupancy decision service may be processed by any electronic device in the system, for example, it may be implemented by the PLC gateway 140, the bluetooth gateway 150, or the terminal device 110, which is not limited in the embodiment of the present application.
Referring to fig. 9, a flowchart of a processing method of a smart scenario according to an embodiment of the present application is provided, where the method may be applied to an electronic device or a system including a plurality of electronic devices (or processing units), and the system shown in fig. 8 is taken as an example in the following embodiment. The process may include the steps of:
Step 901, arranging the intelligent scene. In an alternative example, the terminal device 110 may detect and respond to a new event of the user creating the smart scene, and send the new event to the whole-house host 800; the whole house host 800 receives and analyzes the new event. Illustratively, the whole-house host 800 may parse out the resource types needed by the smart scenario from the new event.
Alternatively, the whole-house host 800 may directly read the resource types required for the smart scenario through the configuration of the user. For example, the user may configure the resource type required for the smart scene, for example, the living room space and the lighting subsystem, through the configuration interface of the terminal device 110 for creating the smart scene, as shown in the interface 11.
Alternatively, the whole-house host 800 may automatically parse, from the smart scene created by the user, the resource types required by the smart scene. For example, in connection with the smart scene shown in fig. 1a in which the living room light is turned on when the smart door lock detects someone coming home during 19:00-21:00 on a workday, the whole-house host 800 may parse from it the resource types including the living room space and the lighting subsystem. The whole-house host 800 may store a resource matching relationship in advance, and based on the resource matching relationship and the content of the new event, the resource types corresponding to the new event can be matched; the resource matching relationship may be manually configured or learned based on a neural network model, which is not limited in the present application. In addition, in the embodiment of the present application, the whole-house host 800 may also display the automatically parsed resource types to the user through the terminal device 110 or the like, so that the user can learn of them and manually adjust the corresponding resource types.
It will be appreciated that after the smart scenes are created in any manner, the whole-house host 800 can perform scene arrangement for each smart scene, so as to match the corresponding required resource types. Therefore, the analysis of resource granularity of the intelligent scene can be realized through scene arrangement, and more accurate execution conflict detection can be conveniently carried out.
In addition, it can be understood that the terminal device 110 or the whole-house host 800 may parse the conditions and the actions of the smart scene from the new event, so that the actions are executed when an event is detected and determined to satisfy the conditions. For example, in connection with what is shown in fig. 1a, the terminal device 110 or the whole-house host 800 may parse conditions including "weekday 19:00-21:00", "smart door lock", "min", and "go home", and may also parse the action of turning on the "living room light". Thus, when an event satisfying the conditions for executing the intelligent scene is detected, the related electronic devices in the system can be controlled to execute the corresponding actions.
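A minimal sketch of this parsing result is given below (illustrative only; the field names and the matching logic are assumptions): the scene is split into conditions and an action list, and the actions are executed only when a detected event satisfies the conditions.

```python
# Illustrative sketch: conditions and actions parsed from the scene of fig. 1a.
scene = {
    "id": "come-home lighting",
    "conditions": {"time_range": ("19:00", "21:00"), "weekday": True,
                   "event": ("smart door lock", "come home")},
    "actions": [{"device": "living room light", "command": "turn on"}],
}

def conditions_met(scene, event, now):
    c = scene["conditions"]
    start, end = c["time_range"]
    return (event == c["event"]
            and now["weekday"] == c["weekday"]
            and start <= now["time"] <= end)

event = ("smart door lock", "come home")
now = {"time": "19:30", "weekday": True}
print(conditions_met(scene, event, now))  # True -> execute the actions
```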
Step 902, determining a resource tag of the intelligent scene. Illustratively, after step 901, the whole-house host 800 may further determine a resource tag corresponding to each resource type of the smart scenario.
Optionally, the whole-house host can directly read, from the user's configuration, the resource tag corresponding to each resource type required by the intelligent scene. For example, in the configuration interface of the terminal device 110 for creating the smart scene, as shown in interface 11, the user may also configure the resource tag of each resource type required by the smart scene, for example, configure the resource tag of the living room space required by the smart scene as shared and the resource tag of the lighting subsystem as occupied.
Alternatively, the whole-house host 800 may automatically parse, from the smart scene created by the user, the resource tags required by the smart scene; for example, the tags may be directly set to default resource tags, or the corresponding resource tags may be matched according to the smart scene. The whole-house host 800 may also store a tag matching relationship in advance, and based on the tag matching relationship and the resource types determined in step 901, the resource tags corresponding to the smart scene can be matched; the tag matching relationship may be manually configured or learned based on a neural network model, which is not limited in the present application. Similarly, in the embodiment of the present application, the whole-house host 800 may also display the automatically parsed resource tags to the user through the terminal device 110 or the like, so that the user can learn of them and manually adjust the corresponding resource tags.
Based on what is described in fig. 4 to 7, in the embodiment of the present application, resource tags may include, but are not limited to, occupancy tags and sharing tags. The sharing tag is used for indicating that the intelligent scene can occupy a resource together with other intelligent scenes. In addition, it should be noted that the embodiment of the present application does not limit the types of resource tags or the number of categories into which they are divided, and finer tag types may be configured; for example, the sharing tag may further include a tag shared with one other smart scene, a tag shared with two other smart scenes, and/or a tag shared with a specific smart scene.
In addition, the whole-house host may also provide a tag configuration interface to the user through the terminal device 110 or the whole-house central control screen 120, where the tag configuration interface may be used for user configuration of resource tags. In this way, the classification of resource tags can better fit the user's usage habits, thereby ensuring user experience.
Optionally, the whole-house host 800 may further pre-configure a default configuration of each resource type, and if the resource tag corresponding to the resource type is not matched in the tag matching relationship, the resource tag of the default configuration may be used as the tag of the resource type. For example, the lighting subsystem may be configured as a shared tag by default, and if the resource tag of the newly added smart scene is not matched, the lighting subsystem is confirmed as the shared tag.
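The tag determination order described in this step can be sketched as follows (illustrative only; the dictionaries are assumed stand-ins for the user configuration, the stored tag matching relationship, and the default configuration):

```python
# Illustrative sketch: user configuration first, then the tag matching
# relationship, then the pre-configured default tag of the resource type.

DEFAULT_TAGS = {"lighting subsystem": "share"}

def resolve_tag(resource, user_config, tag_matching):
    if resource in user_config:
        return user_config[resource]
    if resource in tag_matching:
        return tag_matching[resource]
    return DEFAULT_TAGS.get(resource, "share")  # assumed fallback for the sketch

print(resolve_tag("lighting subsystem", {}, {}))                                # share
print(resolve_tag("lighting subsystem", {"lighting subsystem": "occupy"}, {}))  # occupy
```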
In addition, in the embodiment of the present application, after step 901 and step 902 are executed, the scene arrangement result and the resource tag determination result of the smart scene may be stored, so that they can be directly invoked when the smart scene is executed again, thereby reducing the amount of calculation. It will be appreciated that if the smart scene is an already-created smart scene, steps 901 and 902 may not be performed; instead, the scene arrangement result and the resource tag determination result of the smart scene stored in the memory may be invoked.
Step 903, performing execution conflict detection of the intelligent scene, and determining an execution result.
In an alternative embodiment, referring to fig. 10, another flow chart of a processing method of an intelligent scene according to an embodiment of the application is provided. The process may include the steps of:
In step 9030, the scenario engine determines a priority of the second smart scenario in response to the second smart scenario execution instruction. The scene engine may be the host scene engine, or the gateway scene engine 1, or the gateway scene engine 2 shown in fig. 8, which is not limited by the present application.
For example, if the second smart scene has a predefined priority, the priority of the second smart scene may be directly determined. For another example, if the second smart scene does not have a predefined priority, the priority of the second smart scene may be determined based on the trigger mode of the second smart scene; optionally, if the triggering manner is active triggering by the user, the second smart scene may be determined to be of the first priority (which may be understood as a high priority in the embodiment of the present application); optionally, if the triggering manner is indirect automatic triggering, the second smart scene may be determined to be of the second priority (which may be understood as a low priority in the embodiment of the present application). In addition, the embodiment of the present application is not limited to dividing smart scenes into two priorities; in a specific implementation, three or more priorities may be used.
In step 903a1, if the priority of the second intelligent scene is the first priority, the scene engine executes the second intelligent scene. For example, when the second smart scenario is determined to be of high priority, the second smart scenario may be directly executed, so that the execution efficiency of the smart scenario may be improved.
Step 903a2, the scene engine sends the first indication information to the occupancy decision service. Wherein the first indication information may include one or a combination of the following information: the identification of the second intelligent scene, the declarative occupation of the second intelligent scene, the priority of the second intelligent scene and the occupation duration of the second intelligent scene.
The declarative occupation of the second intelligent scene can be used for indicating the resources that the second intelligent scene needs to use, and may include the resource types used and the corresponding resource labels; for example, it can be expressed as: {(occupied, lighting subsystem), (shared, living room space)}, or {(shared, device 1), (occupied, device 2)}, or {(occupied, lighting subsystem), (shared, living room space), (shared, device 1), (occupied, device 2)}. It can be appreciated that, in the embodiment of the present application, the electronic devices involved in the smart scene may be determined through different resource types, which is not limited in the present application. The occupancy duration of the second smart scene may be determined based on the execution time of one or more actions included in the second smart scene and the time delay (delay) between actions. For example, the second smart scene includes three actions, where there is a delay of 5 seconds between action2 and action3. If the execution time of action1, action2, and action3 configured in the second intelligent scene is ignored, the predicted execution duration of the second intelligent scene is 5 seconds.
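The occupancy duration estimate in the example above can be sketched as follows (illustrative only; the per-action fields are assumptions): ignoring the execution time of each action, the duration is the sum of the configured delays between actions.

```python
# Illustrative sketch: three actions with a 5-second delay between action2 and
# action3; the predicted occupancy duration of the scene is 5 seconds.
actions = [
    {"name": "action1", "delay_after_s": 0},
    {"name": "action2", "delay_after_s": 5},
    {"name": "action3", "delay_after_s": 0},
]
occupancy_duration_s = sum(a["delay_after_s"] for a in actions)
print(occupancy_duration_s)  # 5
```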
It is to be appreciated that the occupancy decision service can determine, based on the priority of the second smart scene and the default configuration, whether the second smart scene has already been executed; optionally, if the second smart scene is of high priority, the second smart scene may have been executed directly; optionally, if the second smart scene is of low priority, the second smart scene is not yet executing and waits for an execution instruction.
In step 903a3, the occupancy decision service determines, according to the first indication information, whether there is a resource conflict between the second smart scenario and the first smart scenario. Wherein the first smart scene is an executing smart scene.
For example, the occupancy decision service may maintain a corresponding execution state list for each scene engine; optionally, if it is determined that the execution of a smart scene has ended, the corresponding smart scene record may be deleted from the execution state list. Detection of execution conflicts among multiple smart scenes is performed based on the execution state list. The form of the execution state list may be as shown in Table 2 below:
TABLE 2

| Smart scene identification | Resource tag | Priority | Occupancy duration |
|---|---|---|---|
| First smart scene | {(occupancy, lighting subsystem), (shared, living room space)} | High | 5 seconds |
| Second smart scene | {(occupancy, lighting subsystem), (shared, living room space)} | High | 5 seconds |
Based on the execution state list shown in Table 2, the occupancy decision service can compare the conflict situation of multiple intelligent scenes for each resource. Optionally, if, for a subsystem resource used by two intelligent scenes, at least one of them holds an occupancy label, this indicates that the two intelligent scenes have a resource conflict. Thus, as can be seen from Table 2, there is a resource conflict between the second smart scene and the executing first smart scene.
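A minimal sketch of this check against the execution state list is given below (illustrative only; the list layout and field names are assumptions): the occupancy decision service keeps one list per scene engine and compares the incoming scene with every running scene.

```python
# Illustrative sketch: per-engine execution state lists and the conflict check.
execution_state = {
    "gateway scene engine 1": [
        {"id": "first smart scene", "priority": "high", "duration_s": 5,
         "tags": {"lighting subsystem": "occupy", "living room space": "share"}},
    ],
}

def find_conflicts(incoming, execution_state):
    hits = []
    for engine, running_scenes in execution_state.items():
        for running in running_scenes:
            shared = set(incoming["tags"]) & set(running["tags"])
            if any("occupy" in (incoming["tags"][r], running["tags"][r]) for r in shared):
                hits.append((engine, running["id"]))
    return hits

incoming = {"id": "second smart scene", "priority": "high", "duration_s": 5,
            "tags": {"lighting subsystem": "occupy", "living room space": "share"}}
print(find_conflicts(incoming, execution_state))
# [('gateway scene engine 1', 'first smart scene')] -> resource conflict exists
```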
Step 903a4, the occupancy decision service sends a first instruction to the scene engine. The first instruction is used for instructing the scene engine to stop executing the first intelligent scene so as to avoid conflict between the execution of the first intelligent scene and the execution of the second intelligent scene. In addition, the occupancy decision service may determine the scene engine corresponding to each smart scene, so a first instruction may be sent to the target scene engine, e.g. if the target scene engine is the gateway scene engine 1, the first instruction may be sent to the gateway scene engine 1.
In step 903a5, the scene engine stops executing the first smart scene according to the first instruction.
For another example, based on the determination in step 903a3, if it is determined that there is no resource conflict between the second smart scene and the first smart scene, the occupancy decision service may leave the executing first smart scene unprocessed. For example, if the resource labels of the two smart scenes for the subsystem are both sharing labels, the two smart scenes can be executed in parallel, so the occupancy decision service does not need to perform any processing.
In another alternative embodiment, referring to fig. 11, a further flowchart of a processing method of an intelligent scene according to an embodiment of the present application is shown. The process may include the steps of:
step 9030, which is the same as that described in fig. 10, is not repeated here.
In step 903b1, the scene engine determines that the priority of the second smart scene is the second priority. For example, when the second smart scene is determined to be of low priority, the occupancy decision service may determine whether the second smart scene is executable, and the second smart scene is executed after it is determined to be executable.
Step 903b2, the scene engine sends the second indication information to the occupancy decision service. Similar to the first indication information, the second indication information may include one or a combination of the following information: the identification of the second intelligent scene, the declarative occupation of the second intelligent scene, the priority of the second intelligent scene and the occupation duration of the second intelligent scene.
Step 903b3, the occupation decision service determines whether there is a resource conflict between the second smart scenario and the first smart scenario according to the second instruction information. By way of example, there may be several execution results at this time:
(1) Execution result one: if the occupancy decision service determines that the second intelligent scene conflicts with the first intelligent scene, and the priority of the second intelligent scene is not lower than that of the first intelligent scene, it can be determined that the second intelligent scene preempts resources from the first intelligent scene. Based on execution result one, the following steps 903b41 and 903b51 may be performed:
step 903b41, the occupancy decision service sends a second instruction to the scene engine.
Step 903b51, the scene engine stops executing the first intelligent scene and executes the second intelligent scene according to the second instruction.
(2) Execution result two: if the occupancy decision service determines that the second intelligent scene conflicts with the first intelligent scene, and the priority of the second intelligent scene is lower than that of the first intelligent scene, it can be determined that the second intelligent scene cannot preempt resources from the first intelligent scene. Based on execution result two, the following steps 903b42 and 903b52 may be performed:
step 903b42, the occupancy decision service sends a third instruction.
Step 903b52, the scene engine determines not to execute the second smart scene according to the third instruction.
(3) Execution result three: if the occupation decision service determines that there is no resource conflict between the second intelligent scene and the first intelligent scene, it can be determined that the second intelligent scene can occupy resources together with the first intelligent scene. Based on execution result three, the following steps 903b43 and 903b53 may be performed:
step 903b43, the occupancy decision service sends a fourth instruction.
Step 903b53, the scene engine executes the second smart scene according to the fourth instruction. It will be appreciated that the first intelligent scenario being executed at this time may still continue to execute.
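The three execution results above can be combined into a single decision sketch (illustrative only; the instruction names simply mirror the second, third, and fourth instructions described above, and the two-level priority ordering is an assumption):

```python
# Illustrative sketch of the occupancy decision for the second smart scene.
PRIORITY = {"low": 0, "high": 1}

def decide(second_scene, first_scene, has_conflict):
    if not has_conflict:
        return "fourth instruction: execute the second scene in parallel"
    if PRIORITY[second_scene["priority"]] >= PRIORITY[first_scene["priority"]]:
        return "second instruction: stop the first scene, execute the second scene"
    return "third instruction: do not execute the second scene"

print(decide({"priority": "high"}, {"priority": "low"}, has_conflict=True))
print(decide({"priority": "low"},  {"priority": "high"}, has_conflict=True))
print(decide({"priority": "low"},  {"priority": "high"}, has_conflict=False))
```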
In yet another alternative embodiment, referring to fig. 12, a further flowchart of a processing method of an intelligent scene according to an embodiment of the present application is shown. The process may include the steps of:
Step 1201, the scene engine receives a second intelligent scene execution instruction.
Step 1202, the scene engine sends third indication information. Unlike the implementations described in fig. 10 and fig. 11, the scene engine does not adopt different processing based on the priority of the second smart scene; instead, the execution result is determined directly by the occupancy decision service. Similarly to the first indication information or the second indication information, the third indication information may include one or a combination of the following information: the identification of the second intelligent scene, the declarative occupation of the second intelligent scene, the priority of the second intelligent scene, and the occupation duration of the second intelligent scene.
It is understood that, unlike step 903a2, the occupation decision service does not judge whether the second smart scene is already being executed, but directly determines the execution result based on the third indication information and controls the execution of the smart scene.
In step 1203, the occupation decision service determines whether there is a resource conflict between the second smart scenario and the first smart scenario according to the third indication information. The implementation process of step 1203 may refer to step 903a3, and the description thereof will not be repeated here.
Steps 12041 to 12053 are the same as steps 903b41 to 903b53 described in fig. 11, and are not described herein.
In the method provided by the application, the resource type is obtained and the resource label is determined by carrying out scene arrangement analysis on the intelligent scene, so that the division of the resource granularity can be realized on a plurality of terminal devices included in the system, and more accurate resource conflict detection can be realized. And when resource conflict exists between the two intelligent scenes, the execution result can be further determined based on the resource information of the two intelligent scenes. Therefore, on one hand, execution conflict of the intelligent scene can be avoided, and on the other hand, a plurality of intelligent scenes without conflict can be executed in parallel, so that the execution accuracy and the execution efficiency of the intelligent scenes can be improved. In addition, the embodiment of the application can also respectively maintain the execution state lists of different scene engines through the occupation decision service, thereby avoiding resource conflict and interference caused by multiple scene engines.
Based on the above embodiments, the present application further provides an electronic device, including a plurality of functional modules; the functional modules interact to realize the functions executed by the electronic device in the methods described in the embodiments of the present application. The plurality of functional modules may be implemented based on software, hardware, or a combination of software and hardware, and may be arbitrarily combined or divided based on the specific implementation. For example, the electronic device performs steps 901 to 903 in the embodiment shown in fig. 9; or steps 9030 to 903a5 in the embodiment shown in fig. 10; or steps 9030 to 903b53 in the embodiment shown in fig. 11; or steps 1201 to 12053 in the embodiment shown in fig. 12. The electronic device in fig. 10, fig. 11, or fig. 12 may be divided into two functional modules: a scene engine and an occupancy decision service.
Based on the above embodiments, the present application further provides an electronic device, where the electronic device includes at least one processor and at least one memory, the at least one memory stores computer program instructions, and when the electronic device runs, the at least one processor executes the functions executed by the electronic device in the methods described in the embodiments of the present application. For example, the electronic device performs steps 901 to 903 in the embodiment shown in fig. 9; or steps 9030 to 903a5 in the embodiment shown in fig. 10; or steps 9030 to 903b53 in the embodiment shown in fig. 11; or steps 1201 to 12053 in the embodiment shown in fig. 12. The electronic device in fig. 10, fig. 11, or fig. 12 may be divided into two functional modules: a scene engine and an occupancy decision service.
Based on the above embodiment, the present application further provides a processing system for an intelligent scene, where the system may include an electronic device as in the above embodiment. Optionally, the system may further include one or more other electronic devices, where the one or more other electronic devices may further include two types, one type is an electronic device for controlling the execution of the smart scenario, for example, the PLC gateway, the bluetooth gateway, and the like in the foregoing, and the other type may be an electronic device for executing the smart scenario, for example, the PLC device, the bluetooth device, the Wi-Fi device, the terminal device, and the electronic device such as a full-house central control screen in the foregoing.
Based on the above embodiments, the present application also provides a computer program product comprising: a computer program (which may also be referred to as code, or instructions), when executed, causes a computer to perform the methods described in embodiments of the present application.
Based on the above embodiments, the present application also provides a computer-readable storage medium having stored therein a computer program which, when executed by a computer, causes the computer to execute the methods described in the embodiments of the present application.
Based on the above embodiment, the present application further provides a chip, where the chip is configured to read a computer program stored in a memory, and implement the methods described in the embodiments of the present application.
Based on the above embodiments, the present application provides a chip system, which includes a processor for supporting a computer device to implement the methods described in the embodiments of the present application. In one possible design, the chip system further includes a memory for storing programs and data necessary for the computer device. The chip system can be composed of chips, and can also comprise chips and other discrete devices. It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (13)

1. A processing method of an intelligent scene, which is applied to an electronic device, and comprises the following steps:
Receiving and responding to an execution instruction for executing a second intelligent scene, and acquiring a resource type associated with the second intelligent scene and a corresponding resource tag thereof; the resource tag is used for indicating the occupation relation of the second intelligent scene to the resource type;
According to the resource type and the corresponding resource label associated with the second intelligent scene, and the resource type and the corresponding resource label associated with the first intelligent scene, determining that the resources of the second intelligent scene and the first intelligent scene do not have occupation conflict, and controlling the second intelligent scene and the first intelligent scene to execute in parallel; the first intelligent scene is any intelligent scene being executed; or alternatively
And determining that the second intelligent scene and the resources of the first intelligent scene have occupation conflict according to the resource type associated with the second intelligent scene and the corresponding resource label thereof, the resource type associated with the first intelligent scene and the corresponding resource label thereof, and controlling to stop executing the first intelligent scene or controlling to keep executing the first intelligent scene.
2. The method of claim 1, wherein the determining that the second smart scenario has an occupancy conflict with the resources of the first smart scenario, controlling to stop executing the first smart scenario or controlling to keep executing the first smart scenario, comprises:
determining the priority of the second intelligent scene based on the triggering mode of the execution instruction and a pre-configured priority rule;
And controlling to stop executing the first intelligent scene or controlling to keep executing the first intelligent scene according to the priority of the second intelligent scene and the priority of the first intelligent scene.
3. The method according to claim 1, wherein the method further comprises:
determining the priority of the second intelligent scene according to the instruction of the execution instruction;
When the priority of the second intelligent scene is determined to be the first priority, determining that the second intelligent scene is executed;
When the priority of the second intelligent scene is determined to be a second priority, controlling to execute the second intelligent scene when the second intelligent scene is controlled to be executed in parallel with the first intelligent scene or when the control stops executing the first intelligent scene; the first priority is higher than the second priority.
4. A method according to claim 2 or 3, wherein said determining the priority of the second smart scenario comprises:
When the triggering mode is active triggering of a user, determining that the second intelligent scene is of a first priority;
When the triggering mode is indirect automatic triggering, determining that the second intelligent scene is of a second priority; the first priority is higher than the second priority.
5. The method according to any one of claims 2 to 4, wherein the controlling to stop executing the first smart scenario or controlling to keep executing the first smart scenario comprises:
When the priority of the second intelligent scene is higher than or equal to the priority of the first intelligent scene, controlling to stop executing the first intelligent scene;
And when the priority of the second intelligent scene is lower than that of the first intelligent scene, controlling to execute the first intelligent scene.
6. The method of any one of claims 1 to 5, wherein prior to receiving and in response to an execution instruction for executing a second smart scenario, the method further comprises:
receiving a new instruction for creating the second smart scene;
Responding to the new instruction, analyzing one or more resource types associated with the second intelligent scene, and analyzing resource labels corresponding to the resource types;
And storing one or more resource types of the second intelligent scene and resource labels corresponding to the resource types.
7. The method of claim 6, wherein resolving one or more resource types associated with the second smart scenario and resolving a resource tag corresponding to each resource type in response to the add instruction comprises:
According to the new instruction, one or more resource types configured by a user for the second intelligent scene and resource labels corresponding to the resource types are obtained; or alternatively
And according to the new instruction, analyzing, matching to one or more resource types associated with the second intelligent scene, and analyzing resource labels corresponding to the resource types.
8. The method according to any one of claims 1 to 7, wherein the controlling to stop executing the first smart scenario comprises:
stopping executing the first smart scene by controlling, by a target electronic device for managing the first smart scene, the target electronic device being the electronic device or other electronic devices other than the electronic device.
9. The method according to any one of claims 1 to 8, wherein the resource types include one or a combination of the following types: space, subsystem, equipment; the space is obtained by grouping a plurality of electronic devices based on a position relation, and the subsystem is obtained by grouping the plurality of electronic devices based on a device function.
10. The method according to any one of claims 1 to 9, wherein the resource tag comprises: occupying the tag, sharing the tag; the occupation tag is used for indicating that the corresponding resource type needs to be occupied independently, and the sharing tag is used for indicating that the corresponding resource type can be occupied together with other intelligent scenes.
11. An electronic device comprising at least one processor coupled to at least one memory, the at least one processor configured to read a computer program stored by the at least one memory to perform the method of any one of claims 1-10.
12. A computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any of claims 1 to 10.
13. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of claims 1 to 10.
CN202211413878.9A 2022-11-11 2022-11-11 Intelligent scene processing method and electronic equipment Pending CN118034073A (en)

Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211413878.9A | 2022-11-11 | 2022-11-11 | Intelligent scene processing method and electronic equipment |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211413878.9A | 2022-11-11 | 2022-11-11 | Intelligent scene processing method and electronic equipment |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN118034073A | 2024-05-14 |

Family

ID=90993944

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211413878.9A | Intelligent scene processing method and electronic equipment (Pending) | 2022-11-11 | 2022-11-11 |

Country Status (1)

Country Link
CN (1) CN118034073A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination