CN116009438A - Control scene generation method and device, storage medium and electronic device - Google Patents

Info

Publication number
CN116009438A
Authority
CN
China
Prior art keywords
control
action
instruction
user
instructions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211635359.7A
Other languages
Chinese (zh)
Inventor
温兴超
Current Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Haier Smart Home Co Ltd
Haier Uplus Intelligent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd, Haier Smart Home Co Ltd, Haier Uplus Intelligent Technology Beijing Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN202211635359.7A
Publication of CN116009438A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Selective Calling Equipment (AREA)

Abstract

The application discloses a control scene generation method and device, a storage medium and an electronic device, relating to the technical field of the smart home. The control scene generation method comprises the following steps: acquiring a plurality of action instructions initiated by a user to a plurality of smart home devices, together with the corresponding user space position when each action instruction is initiated; performing association analysis on the plurality of action instructions based on the user space position, and determining an associated action instruction set among the plurality of action instructions; and generating, based on the associated action instruction set, a control scene corresponding to that set. The control scene generation method provided by the application can generate control scenes that meet the personalized needs of the user, lays a foundation for personalized control of smart home devices based on those scenes, and can improve user experience and satisfaction.

Description

Control scene generation method and device, storage medium and electronic device
Technical Field
The application relates to the technical field of smart home, in particular to a control scene generation method, a control scene generation device, a storage medium and an electronic device.
Background
The smart home is an important development direction of the smart family, bringing intelligence, convenience and artistry to the living environment. Home devices are the indispensable carriers of the smart home and can be controlled intelligently through various sensors, images or voice.
In the related art, current smart homes support automatic control: an automatic control system generally provides a number of scenes, each of which contains a continuous set of actions.
However, the scenes used by current smart home automatic control are often fixed, and if the user's continuous action set changes, these control scenes can no longer meet the user's needs well. Finding a control scene generation method that can meet users' personalized needs is therefore a current research hotspot.
Disclosure of Invention
The control scene generation method and device, storage medium and electronic device provided by the application can obtain an associated action instruction set corresponding to user preferences, so that a control scene meeting the personalized needs of the user can be generated, laying a foundation for personalized control of smart home devices based on that control scene.
The application provides a control scene generation method, comprising: acquiring a plurality of action instructions initiated by a user to a plurality of smart home devices, together with the corresponding user space position when each action instruction is initiated; performing association analysis on the plurality of action instructions based on the user space position, and determining an associated action instruction set among the plurality of action instructions; and generating, based on the associated action instruction set, a control scene corresponding to that set.
According to the control scene generation method provided by the application, performing association analysis on the plurality of action instructions based on the user space position and determining an associated action instruction set among the plurality of action instructions specifically comprises: screening the plurality of action instructions based on the user space positions to obtain a plurality of first action instructions, the first action instructions being action instructions whose corresponding user space position is a preset spatial position; and performing association analysis on the plurality of first action instructions based on an association rule discovery algorithm, and determining the associated action instruction set among the plurality of first action instructions.
According to the control scene generation method, the control scene comprises a target control scene, the occurrence position of the target control scene being the occurrence position corresponding to the preset spatial position. Before the association analysis is performed on the plurality of first action instructions by the association rule discovery algorithm, the control scene generation method further comprises: acquiring the occurrence time of each first action instruction. Performing association analysis on the plurality of first action instructions based on the association rule discovery algorithm and determining the associated action instruction set among them specifically comprises: screening the plurality of first action instructions based on the occurrence times to obtain a plurality of second action instructions, the second action instructions being first action instructions whose occurrence time interval to the adjacent action instruction is less than or equal to a preset time interval; performing association analysis on the plurality of second action instructions based on the association rule discovery algorithm to obtain a plurality of candidate associated action instruction sets; determining the occurrence frequency of each group of candidate associated action instruction sets within a preset time period; and determining, based on the occurrence frequencies, the candidate associated action instruction set with the highest occurrence frequency, and taking it as the associated action instruction set. Generating a control scene corresponding to the associated action instruction set based on that set specifically comprises: determining the target control scene corresponding to the associated action instruction set.
According to the control scene generation method provided by the application, after the control scene corresponding to the associated action instruction set is generated, the method further comprises: in the case that the user confirms a save instruction for the control scene, storing an execution action sequence and a control instruction, so that when the control instruction is received, the plurality of smart home devices are controlled to execute in order according to the execution action sequence; the execution action sequence is the sequence of execution actions of the plurality of smart home devices corresponding to the control scene, and the control instruction is the instruction that controls the plurality of smart home devices to execute in order according to the execution action sequence.
According to the control scene generation method provided by the application, after the control scene corresponding to the associated action instruction set is generated, the method further comprises: in the case that the user confirms deletion of a saved control scene, deleting the saved execution action sequence and the saved control instruction; the saved control scene is a control scene the user previously confirmed saving, the saved execution action sequence is the stored sequence corresponding to the saved control scene, and the saved control instruction is the instruction that controls the plurality of smart home devices to execute in order according to the saved execution action sequence.
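The save, replay and delete behaviour described in the two paragraphs above can be sketched minimally as follows. All names (`scene_store`, `save_scene`, and so on) and the example data are illustrative assumptions, not part of the application:

```python
scene_store = {}  # control instruction -> ordered execution action sequence


def save_scene(control_instruction, action_sequence):
    """Persist a user-confirmed scene so its control instruction can replay it."""
    scene_store[control_instruction] = list(action_sequence)


def delete_scene(control_instruction):
    """Remove a saved scene when the user confirms deletion."""
    scene_store.pop(control_instruction, None)


def handle(control_instruction, execute):
    """On receiving a saved control instruction, run its actions in sequence."""
    for device, action in scene_store.get(control_instruction, []):
        execute(device, action)


save_scene("shower time", [("speaker", "music_on"), ("mirror", "on")])
ran = []
handle("shower time", lambda d, a: ran.append((d, a)))  # replays in stored order
delete_scene("shower time")  # the instruction no longer triggers anything
```

In a real system the store would be persisted by the cloud system and `execute` would dispatch commands to the devices; here it only records the call order.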
According to the control scene generation method provided by the application, the control instruction is determined in the following way: obtaining a plurality of candidate control instructions, the candidate control instructions being preset in advance, and determining, based on a selection instruction of the user, the target candidate control instruction corresponding to the selection instruction among the plurality of candidate control instructions and taking it as the control instruction; or receiving a custom control voice uttered by the user, and obtaining the control instruction based on the custom control voice.
According to the control scene generation method provided by the application, after the target control scene corresponding to the associated action instruction set is determined, the method further comprises: determining, based on the occurrence positions of a plurality of target control scenes, a first target control scene and a second target control scene among them, where the distance between the first occurrence position of the first target control scene and the second occurrence position of the second target control scene is less than or equal to a distance threshold; and, when the time interval between a first occurrence time and a second occurrence time is less than or equal to an interval threshold and the control instruction corresponding to the first target control scene is received again, controlling the smart home devices, after a preset time interval, to execute in order according to the execution action sequence corresponding to the second target control scene; the first occurrence time is the occurrence time of the control instruction corresponding to the first target control scene, and the second occurrence time is the occurrence time of the control instruction corresponding to the second target control scene.
The application also provides a control scene generation device, which comprises: the first module is used for acquiring a plurality of action instructions initiated by a user to a plurality of intelligent home devices and a corresponding user space position when the user initiates the action instructions; the second module is used for carrying out association analysis on a plurality of action instructions based on the user space position, and determining an association action instruction set in the action instructions; and the third module is used for generating a control scene corresponding to the associated action instruction set based on the associated action instruction set.
The present application also provides an electronic device comprising a memory in which a computer program is stored, and a processor arranged to implement a control scene generation method as described in any of the above by execution of the computer program.
The present application also provides a computer-readable storage medium including a stored program, wherein the program when executed performs a control scene generation method according to any one of the above.
The present application also provides a computer program product comprising a computer program which when executed by a processor implements a control scene generation method as described in any one of the above.
According to the control scene generation method and device, storage medium and electronic device provided by the application, an associated action instruction set corresponding to user preferences can be obtained by performing association analysis on the plurality of action instructions based on the user space position, and a control scene meeting the personalized needs of the user can be generated based on that set, laying a foundation for personalized control of smart home devices based on the control scene and improving user experience and satisfaction.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below; it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic diagram of a hardware environment of a control scenario generation method according to an embodiment of the present application;
FIG. 2 is one of the flow diagrams of the control scenario generation method provided in the present application;
FIG. 3 is a second flow chart of the control scenario generation method provided in the present application;
FIG. 4 is a third flow chart of the control scenario generation method provided in the present application;
FIG. 5 is a fourth flow chart of the control scenario generation method provided in the present application;
fig. 6 is a schematic structural diagram of a control scenario generating device provided in the present application;
fig. 7 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one aspect of the embodiments of the present application, a control scene generation method is provided. The control scene generation method is widely applicable to whole-house intelligent digital control scenarios such as the Smart Home, the smart home device ecosystem, and the Intelligence House ecosystem. Optionally, in this embodiment, the control scene generation method may be applied to a hardware environment formed by the terminal device 102 and the server 104 shown in fig. 1. As shown in fig. 1, the server 104 is connected to the terminal device 102 through a network and may be used to provide services (such as application services) for the terminal or for a client installed on the terminal. A database may be set up on the server or independently of it to provide data storage services for the server 104, and cloud computing and/or edge computing services may be configured on the server or independently of it to provide data computing services for the server 104.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network; the wireless network may include, but is not limited to, at least one of: Wi-Fi (Wireless Fidelity), Bluetooth. The terminal device 102 may be, but is not limited to, a PC, a mobile phone, a tablet computer, an intelligent air conditioner, an intelligent smoke machine, an intelligent refrigerator, an intelligent oven, an intelligent cooking range, an intelligent washing machine, an intelligent water heater, an intelligent washing device, an intelligent dish washer, an intelligent projection device, an intelligent television, an intelligent clothes hanger, an intelligent curtain, an intelligent video, an intelligent socket, an intelligent sound box, an intelligent fresh air device, an intelligent kitchen and toilet device, an intelligent bathroom device, an intelligent sweeping robot, an intelligent window cleaning robot, an intelligent mopping robot, an intelligent air purifying device, an intelligent steam box, an intelligent microwave oven, an intelligent kitchen appliance, an intelligent purifier, an intelligent water dispenser, an intelligent door lock, and the like.
In yet another embodiment, the control scene generation method provided by the application can be applied to smart home appliances. A smart home appliance is a home appliance product formed by introducing microprocessors, sensor technology and network communication technology into the appliance; it can automatically sense the state of the dwelling space, the state of the appliance itself and the state of its service, and can automatically control and receive control instructions from users within the dwelling or remotely. It can be understood that the smart home appliance is an integral part of the smart home.
In order to further describe the control scenario generation method provided in the present application, the following will be described with reference to fig. 2.
Fig. 2 is a schematic flow chart of a control scenario generation method provided in the present application.
In an exemplary embodiment of the present application, the control scene generation method may be applied to a cloud system. In an example, the cloud system may learn the user's habits from the action instructions the user initiates for smart home devices, so as to generate a control scene that matches both the user's personalized needs and habits. As shown in fig. 2, the control scene generation method may include steps 210 to 230, each of which is described below.
In step 210, a plurality of action instructions initiated by a user on a plurality of smart home devices and corresponding user space positions when the user initiates the action instructions are obtained.
In one embodiment, a plurality of action instructions initiated by a user on a plurality of smart home devices may be obtained. In an example, the cloud system may obtain, through communication transmission, a plurality of action instructions initiated by a user to a plurality of smart home devices.
In yet another embodiment, the user space position may be obtained by an infrared detection device, a sensor or the like, and the information about the user space position may be uploaded to the cloud system. It should be noted that the user space position may also be obtained in other ways; this embodiment does not limit the specific manner of obtaining it.
The action instructions and user space positions acquired by the cloud system can be stored in the form of log data; in the application process, with the permission of the user security agreement, this log data can serve as the data from which control scenes are mined.
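As an illustration only, such log data might be represented as records pairing each action instruction with its user space position and occurrence time. The field names and example values below are assumptions for the sketch, not drawn from the application:

```python
from dataclasses import dataclass


@dataclass
class ActionLogEntry:
    """One user-initiated action instruction, as the cloud system might log it."""
    device_id: str   # which smart home device received the instruction
    action: str      # e.g. "turn_on", "close_curtain"
    room: str        # user space position when the instruction was initiated
    timestamp: float # occurrence time, seconds since some epoch


# Example log, analogous to the pre-shower scenario described later in the text.
log = [
    ActionLogEntry("speaker", "play_music", "bathroom", 0.0),
    ActionLogEntry("mirror", "turn_on", "bathroom", 120.0),
    ActionLogEntry("shower", "turn_on", "bathroom", 300.0),
]
```

Records of this shape carry everything the later screening steps need: a device, an action, a position, and a time.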
In step 220, a correlation analysis is performed on the plurality of action instructions based on the user spatial location, and a set of correlated action instructions is determined from the plurality of action instructions.
In one embodiment, an association rule discovery algorithm may perform association analysis on the acquired plurality of action instructions in combination with the user space position, to determine an associated action instruction set among the plurality of action instructions. The association rule discovery algorithm may be the Apriori algorithm or the FP-Growth algorithm and may be chosen according to the actual situation; this embodiment does not specifically limit the algorithm.
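As a minimal, brute-force illustration of the support-counting idea behind such algorithms (the candidate pruning that real Apriori performs is omitted for brevity, and all transaction contents are invented examples):

```python
from itertools import combinations
from collections import Counter


def frequent_itemsets(transactions, min_support):
    """Brute-force frequent-itemset mining in the spirit of Apriori.

    Each transaction is a set of action instructions observed together.
    Returns every itemset of size >= 2 whose support (fraction of
    transactions containing it) meets min_support.
    """
    counts = Counter()
    for t in transactions:
        for size in range(2, len(t) + 1):
            for combo in combinations(sorted(t), size):
                counts[combo] += 1
    n = len(transactions)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}


# Three observed "sessions" of bathroom actions (invented data):
transactions = [
    {"play_music", "mirror_on", "shower_on"},
    {"play_music", "mirror_on", "shower_on"},
    {"play_music", "shower_on"},
]
sets = frequent_itemsets(transactions, min_support=0.6)
# ("play_music", "shower_on") occurs in all three transactions (support 1.0)
```

A production system would use a library implementation with proper candidate pruning; the point here is only that frequently co-occurring action instructions surface as high-support itemsets.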
It can be understood that the associated action instruction set can characterize the user's preferences and smart home control habits; that is, the user prefers to control the plurality of smart home devices in sequence according to the order of the action instructions in the associated action instruction set, so that the personalized needs of the user are met.
In step 230, a control scenario corresponding to the associated action instruction set is generated based on the associated action instruction set.
In one embodiment, a control scene may be generated from the associated action instruction set. A control scene may be understood as a set of associated action instructions that completes a specific task, where the specific task is a task or event meeting the personalized needs of the user. For example, it may be a viewing mode meeting the user's personalized needs, which includes sequentially closing the curtain, turning off the lights and turning on the display; or a shower mode meeting the user's personalized needs, which includes sequentially turning on the music, turning on the magic mirror and turning off the toilet, and so on. In the application process, when the user wants to start a control scene, each smart home device can be controlled to run in sequence according to the associated action instruction set.
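The "run in sequence" behaviour can be sketched as iterating over an ordered scene, with the viewing-mode actions above used as illustrative data (the names are assumptions, not the application's terminology):

```python
def run_scene(scene, execute):
    """Execute each (device, action) pair of a control scene, in order."""
    for device, action in scene:
        execute(device, action)


# Illustrative "viewing mode" scene taken from the description above.
viewing_mode = [
    ("curtain", "close"),
    ("light", "turn_off"),
    ("display", "turn_on"),
]

executed = []
run_scene(viewing_mode, lambda d, a: executed.append((d, a)))
# executed now holds the device/action pairs in the stored scene order
```

The essential property is that the stored order of the associated action instruction set is preserved at execution time.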
According to the control scene generation method, performing association analysis on the plurality of action instructions based on the user space position yields the associated action instruction set corresponding to the user's preferences, and a control scene meeting the personalized needs of the user can be generated based on that set, laying a foundation for personalized control of smart home devices based on the control scene and improving user experience and satisfaction.
Fig. 3 is a second flowchart of the control scenario generation method provided in the present application.
The procedure of the control scenario generation method provided in the present application will be described with reference to fig. 3.
In an exemplary embodiment of the present application, as can be seen in fig. 3, the control scene generation method may include steps 310 to 340, wherein step 310 is the same as or similar to step 210 and step 340 is the same as or similar to step 230; their detailed description and beneficial effects are given above and are not repeated in this embodiment. Steps 320 and 330 are described below.
In step 320, the plurality of action instructions are filtered based on the user space position, so as to obtain a plurality of first action instructions, where the first action instructions are action instructions in which the corresponding user space position is a preset space position.
In step 330, a correlation analysis is performed on the plurality of first action instructions based on a correlation rule discovery algorithm, and a set of correlation action instructions is determined from the plurality of first action instructions.
It can be appreciated that within the same control scene, the smart home devices operated by the user are usually in the same area. In order to mine more accurately a control scene that meets the user's needs, in an example, first action instructions can be screened out of the plurality of action instructions according to the user space position, a first action instruction being an action instruction whose corresponding user space position is the preset spatial position. It should be noted that the preset spatial position may be determined according to the actual situation; in an example, it may refer to one identical spatial position, or to spatial positions whose mutual position difference is less than or equal to a position difference threshold. This embodiment does not specifically limit the preset spatial position.
Further, association analysis can be performed on the screened first action instructions based on the association rule discovery algorithm, and the associated action instruction set can be determined among the plurality of first action instructions. In this embodiment, a first action instruction is an action instruction initiated from the preset spatial position, which ensures that the first action instructions are instructions likely to have an association relationship; performing association analysis on them therefore allows the associated action instruction set to be determined more accurately, laying a foundation for obtaining a control scene that meets the user's personalized needs.
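The spatial screening step can be sketched as follows, assuming user space positions are available as planar coordinates (a simplification; the application does not specify any coordinate representation, and all names and data are illustrative):

```python
import math


def filter_by_position(entries, anchor, max_distance=0.0):
    """Keep action instructions initiated at (or near) a preset spatial position.

    `anchor` is an (x, y) coordinate. With max_distance == 0 only the exact
    position matches, mirroring the "same spatial position" case; a positive
    threshold implements the "position difference <= threshold" case.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    return [e for e in entries if dist(e["pos"], anchor) <= max_distance]


entries = [
    {"action": "music_on", "pos": (1.0, 1.0)},   # bathroom
    {"action": "light_on", "pos": (8.0, 2.0)},   # another room
    {"action": "mirror_on", "pos": (1.2, 1.1)},  # bathroom
]
near = filter_by_position(entries, anchor=(1.0, 1.0), max_distance=0.5)
# keeps music_on and mirror_on; light_on was initiated elsewhere
```

The surviving entries correspond to the "first action instructions" of step 320.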
Fig. 4 is a third flowchart of the control scenario generation method provided in the present application.
The procedure of the control scenario generation method provided in the present application will be described with reference to fig. 4.
In an exemplary embodiment of the present application, the control scenario may include a target control scenario, where an occurrence position of the target control scenario is an occurrence position corresponding to a preset spatial position.
Referring to fig. 4, the control scenario generation method may include steps 410 to 480, wherein steps 410 to 420 are the same as or similar to steps 310 to 320, and step 480 is the same as or similar to step 340, and the detailed description and the beneficial effects thereof are shown in the foregoing, which are not repeated in the present embodiment, and steps 430 to 470 will be described respectively.
In step 430, the time of occurrence of the first action instruction is obtained.
In step 440, the plurality of first action instructions are filtered based on the occurrence time, so as to obtain a plurality of second action instructions, where the second action instructions are first action instructions with the occurrence time interval of adjacent action instructions being less than or equal to the preset time interval.
In step 450, association analysis is performed on the plurality of second action instructions based on the association rule discovery algorithm, so as to obtain a plurality of candidate association action instruction sets.
It should be noted that action instructions belonging to the same control scene are usually instructions whose occurrence time interval to the adjacent instruction is less than or equal to a preset time interval. In the application process, to screen the associated action instruction set out of the plurality of action instructions more accurately and quickly, the screening can therefore be based on the intervals between the occurrence times of the action instructions.
In one embodiment, the occurrence time of each first action instruction may be obtained, and the plurality of first action instructions may then be screened based on the occurrence times to obtain a plurality of second action instructions, a second action instruction being a first action instruction whose occurrence time interval to the adjacent action instruction is less than or equal to the preset time interval. Further, association analysis is performed on the plurality of second action instructions based on the association rule discovery algorithm, obtaining multiple groups of candidate associated action instruction sets.
It should be noted that the preset time interval may be adjusted according to the actual situation, for example to 10 minutes; this embodiment does not specifically limit the preset time interval.
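The time-interval screening can be sketched as splitting time-ordered instructions into sessions whenever the gap between adjacent instructions exceeds the preset interval. Below, 600 s corresponds to the 10-minute example, and all event data is invented:

```python
def split_sessions(timed_actions, max_gap=600.0):
    """Group time-ordered action instructions into sessions.

    A new session starts whenever the gap between adjacent instructions
    exceeds max_gap; the instructions inside one session play the role of
    the "second action instructions" described above.
    """
    sessions = []
    current = []
    prev_t = None
    for t, action in sorted(timed_actions):
        if prev_t is not None and t - prev_t > max_gap:
            sessions.append(current)
            current = []
        current.append(action)
        prev_t = t
    if current:
        sessions.append(current)
    return sessions


events = [(0, "music_on"), (120, "mirror_on"), (300, "shower_on"),
          (5000, "light_off")]
# The first three events are within 600 s of each other; light_off,
# arriving 4700 s later, starts a new session.
```

Each resulting session is then a transaction fed to the association analysis of step 450.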
In step 460, the occurrence frequency of the multiple candidate associated action instruction sets within the preset time period is determined.
In step 470, the candidate associated action instruction set with the highest occurrence frequency is determined based on the occurrence frequency, and the candidate associated action instruction set with the highest occurrence frequency is used as the associated action instruction data set.
In one embodiment, association analysis of the second action instructions yields multiple groups of candidate associated action instruction sets. Because the occurrence position of the target control scene coincides with the occurrence position information corresponding to these candidate sets, namely the occurrence position corresponding to the preset spatial position, the associated action instruction set corresponding to the target control scene can be screened out of the multiple groups of candidate associated action instruction sets.
In yet another embodiment, the occurrence frequency of each of the multiple groups of candidate associated action instruction sets within a preset time period may be determined, and the group with the highest occurrence frequency may be used as the associated action instruction set. Further, a target control scene is determined based on this associated action instruction set, i.e., the group of candidate associated action instruction sets with the highest occurrence frequency.
In one embodiment, mining of a user's pre-shower scene may be taken as an example. The action instructions may be mined based on the Apriori algorithm to find the associated actions that precede the user's action of turning on the shower, so as to obtain a set of associated actions (corresponding to the associated action instruction set).
In the application process, the linked actions may be limited to a preset position, for example the bathroom: filtering is performed based on the instruction generation position information of the action instructions (corresponding to the user spatial position when the user initiates an action instruction), so as to obtain multiple groups of action instructions with the same instruction generation position information. Further, the generation time interval of the action instructions with the same instruction generation position information is limited to 10 minutes for filtering, so as to obtain the screened action instructions. Association analysis is then performed on the multiple groups of action instructions that have the same instruction generation position information and a generation time interval within 10 minutes, so as to obtain multiple groups of candidate associated action instruction sets; the multiple groups of candidate associated action instruction sets are then sorted by frequency, and the group with the highest frequency is defined as the pre-bath scene. According to this embodiment, a control scene meeting the personalized requirements of the user can be generated based on the associated action instruction set, laying a foundation for personalized control of the smart home devices based on the control scene, so that the experience and satisfaction of the user can be improved.
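The frequency-counting stage of this mining pipeline can be sketched as below. This is an Apriori-style illustration under stated assumptions, not the patent's implementation: the session layout (one set of action names per 10-minute bathroom window) and all action names are hypothetical.

```python
from itertools import combinations
from collections import Counter

# Assumed data layout: each session is the set of action instructions observed
# in the same position (e.g. the bathroom) within one 10-minute window.
def frequent_itemsets(sessions, min_support=2, size=2):
    """Count itemsets of the given size across sessions and keep the frequent ones
    (the "candidate associated action instruction sets")."""
    counts = Counter()
    for session in sessions:
        for combo in combinations(sorted(session), size):
            counts[combo] += 1
    return {combo: n for combo, n in counts.items() if n >= min_support}

def most_frequent(itemsets):
    """Pick the candidate set with the highest occurrence frequency."""
    return max(itemsets, key=itemsets.get)
```

The group returned by `most_frequent` plays the role of the associated action instruction set that is promoted to the pre-bath scene.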
It should be noted that a user's movie-watching scene may also be mined; the mining process may refer to the mining process of the pre-shower scene, and is not specifically limited in this embodiment.
In order to further describe the control scenario generation method provided by the present application, the following embodiments will be described.
In an exemplary embodiment of the present application, continuing with the embodiment illustrated in fig. 2, after generating a control scenario corresponding to the associated action instruction set (corresponding step 230), the control scenario generating method may further include the following steps:
In a case that a save instruction by which the user confirms saving of the control scene is received, an execution action sequence and a control instruction are stored, so that in a case that the control instruction is received, the plurality of smart home devices are controlled to execute in order according to the execution action sequence, wherein the execution action sequence is the execution action sequence of the plurality of smart home devices corresponding to the control scene, and the control instruction is an instruction for controlling the plurality of smart home devices to execute in order according to the execution action sequence.
In one embodiment, after the control scene is automatically mined, in order not to disturb the user, prompt information asking whether to save the control scene may be presented to the user. In an example, the prompt information may be pushed to a mobile terminal device of the user that is communicatively connected to the cloud system. In the application process, if a save instruction confirming that the control scene should be saved is received, the execution action sequence and the control instruction may be stored, so that when the control instruction is received, the plurality of smart home devices are controlled to execute in order according to the execution action sequence.
The execution action sequence is an execution action sequence of a plurality of intelligent home devices corresponding to the control scene, and the control instruction is an instruction for controlling the plurality of intelligent home devices to execute according to the execution action sequence in sequence.
In the application process, the associated action instruction set is integrated into a control scene. Therefore, when the user initiates a control instruction for the control scene, the plurality of smart home devices can be controlled, upon receipt of the control instruction, to execute in order according to the execution action sequence. According to this embodiment, control scenes can be automatically mined according to the preferences and habits of the user, and when the user initiates the corresponding control instruction, all the smart home devices are automatically controlled to execute in order according to the execution action sequence, so that the actual scene requirements of the user are met in a personalized manner and the experience and satisfaction of the user are improved.
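The save-then-replay behavior described above can be sketched as a small scene store. This is an illustrative sketch only; the class and method names are assumptions, and a real system would dispatch the actions to devices rather than return them.

```python
# Hypothetical sketch: a store that keeps an execution action sequence under a
# control instruction, replays it in order on demand, and supports deletion
# (taking an old scene offline).
class SceneStore:
    def __init__(self):
        self._scenes = {}

    def save(self, control_instruction, action_sequence):
        """Store the execution action sequence for a confirmed control scene."""
        self._scenes[control_instruction] = list(action_sequence)

    def delete(self, control_instruction):
        """Delete a saved scene per the user's deletion instruction."""
        self._scenes.pop(control_instruction, None)

    def execute(self, control_instruction):
        """Return the device actions in sequence; a deleted or unknown scene yields nothing."""
        return [f"{device}:{action}"
                for device, action in self._scenes.get(control_instruction, [])]
```

Saving a "pre_bath" scene and later issuing its control instruction replays the stored sequence in order; deleting the scene frees the entry, mirroring the cloud-side storage-space argument in the text.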
In still another exemplary embodiment of the present application, continuing with the embodiment illustrated in fig. 2, after generating a control scenario corresponding to the associated action instruction set (corresponding step 230), the control scenario generation method may further include the steps of:
Deleting the saved execution action sequence and the saved control instruction in a case that the user confirms deletion of a saved control scene, wherein the saved control scene is a control scene that the user has previously confirmed to save; the saved execution action sequence is the stored execution action sequence corresponding to the saved control scene, and the saved control instruction is an instruction for controlling the plurality of smart home devices to execute in order according to the saved execution action sequence.
In this embodiment, the stored execution action sequence and the stored control instruction corresponding to the old scene (corresponding to the saved control scene) may be deleted from the cloud system according to the deletion instruction of the user, so as to take the old scene offline in a timely manner and save the storage space of the cloud system, thereby ensuring that intelligent control of the smart home devices is achieved according to the requirements of the user.
In yet another exemplary embodiment of the present application, the control instructions may be determined in the following manner:
obtaining a plurality of candidate control instructions, wherein the candidate control instructions are preset in advance;
determining a target candidate control instruction corresponding to the selection instruction from a plurality of candidate control instructions based on the selection instruction of the user, and taking the target candidate control instruction as the control instruction, or
Receiving user-defined control voice sent by a user;
and obtaining a control instruction based on the custom control voice.
In one embodiment, the description continues with the example in which the user sets the control instruction via the mobile terminal device. During application, a plurality of candidate control instructions may be provided for the control scene. When the control APP on the mobile terminal device receives a selection instruction from the user, a target candidate control instruction corresponding to the selection instruction may be determined from among the plurality of candidate control instructions and used as the control instruction. In this embodiment, a plurality of candidate control instructions are provided for the user to select from, so that the user avoids spending excessive time on setting the control instruction, and the experience and satisfaction of the user can be improved.
In yet another embodiment, in order to meet the personalized requirements of the user, the control instruction may be a user-defined control instruction set by the user. In the application process, the custom control voice uttered by the user may be received, and the control instruction corresponding to the custom control voice may be obtained based on it.
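The two ways of determining the control instruction can be sketched together as below. This is a hedged illustration; the function name and the trivial normalization of the voice phrase are assumptions (a real system would run speech recognition, which is out of scope here).

```python
# Hypothetical sketch: resolve the control instruction either from a preset
# candidate list (user selection) or from a user-supplied custom voice phrase.
def resolve_control_instruction(candidates, selection=None, custom_phrase=None):
    if selection is not None:
        return candidates[selection]          # target candidate control instruction
    if custom_phrase:
        return custom_phrase.strip().lower()  # instruction derived from custom voice
    raise ValueError("no selection or custom phrase provided")
```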
Fig. 5 is a flow chart of a control scenario generation method provided in the present application.
In order to further describe the control scenario generation method provided by the present application, the following embodiments will be described.
In an exemplary embodiment of the present application, as can be seen in fig. 5, the control scenario generation method may include steps 501 to 510, wherein steps 501 to 508 are the same as or similar to steps 410 to 480; for their detailed description and beneficial effects, reference is made to the foregoing description, which is not repeated in this embodiment. Step 509 and step 510 are described below.
In step 509, a first target control scene and a second target control scene are determined among the plurality of target control scenes based on the occurrence positions of the target control scenes.
In step 510, when the time interval between the first occurrence time and the second occurrence time is less than or equal to the interval threshold, and the control instruction corresponding to the first target control scene is received again, the plurality of smart home devices are controlled to execute sequentially according to the execution action sequence corresponding to the second target control scene after a preset time interval.
The distance difference between the first occurrence position of the first target control scene and the second occurrence position of the second target control scene is smaller than or equal to a distance threshold. The first occurrence time is the occurrence time of a control instruction corresponding to the first target control scene, and the second occurrence time is the occurrence time of a control instruction corresponding to the second target control scene.
In one embodiment, when the first target control scene and the second target control scene are determined from the plurality of target control scenes according to the occurrence positions of the target control scenes, it may be determined that a certain association relationship may exist between them. A further judgment is then made by combining the occurrence times of the first target control scene and the second target control scene. In the application process, when the time interval between the first occurrence time and the second occurrence time is less than or equal to the interval threshold, an association relationship between the first target control scene and the second target control scene is indicated. In an example, when the control instruction corresponding to the first target control scene is received again, the plurality of smart home devices may be controlled, after a preset time interval, to execute in order according to the execution action sequence corresponding to the second target control scene.
According to this embodiment, once the occurrence positions and occurrence times of two target control scenes (corresponding to the first target control scene and the second target control scene) have been mined, the second target control scene is automatically executed after a preset time interval whenever the control instruction of the first target control scene is initiated again, so that more intelligent control of the smart home devices is realized. According to the control scene generation method provided by the present application, an associated action instruction set corresponding to the user's preferences can be obtained by performing association analysis on the plurality of action instructions based on the user spatial position, and a control scene meeting the personalized requirements of the user can be generated based on the associated action instruction set, laying a foundation for personalized control of the smart home devices based on the control scene, so that the experience and satisfaction of the user can be improved.
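The linkage rule of steps 509 and 510 can be sketched as below. This is an illustrative simplification under stated assumptions: positions are collapsed to one scalar coordinate and the thresholds are arbitrary example values, not the patent's parameters.

```python
# Hypothetical sketch: link a (first, second) scene pair when their occurrence
# positions differ by at most distance_threshold and the second scene was
# triggered within interval_threshold seconds after the first.
def link_scenes(scenes, distance_threshold=5.0, interval_threshold=600):
    """scenes: list of (name, position, trigger_time); returns ordered linked pairs."""
    links = []
    for a in scenes:
        for b in scenes:
            if a is b:
                continue
            if abs(a[1] - b[1]) <= distance_threshold and 0 <= b[2] - a[2] <= interval_threshold:
                links.append((a[0], b[0]))
    return links
```

Once a pair is linked, re-receiving the first scene's control instruction would schedule the second scene's execution action sequence after the preset delay.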
Based on the same inventive concept, the application also provides a control scene generation device.
The control scenario generating device provided in the present application is described below, and the control scenario generating device described below and the control scenario generating method described above may be referred to correspondingly to each other.
Fig. 6 is a schematic structural diagram of a control scenario generating apparatus provided in the present application.
In an exemplary embodiment of the present application, the control scenario generating apparatus may be applied to an intelligent home device. As can be seen in conjunction with fig. 6, the control scenario generating device may include a first module 610 to a third module 630, and each module will be described below.
The first module 610 may be configured to obtain a plurality of action instructions initiated by a user for a plurality of smart home devices, and the user spatial position corresponding to the user when initiating each action instruction;
a second module 620, which may be configured to perform association analysis on a plurality of action instructions based on the user spatial location, and determine an associated action instruction set from the plurality of action instructions;
the third module 630 may be configured to generate a control scenario corresponding to the set of associated action instructions based on the set of associated action instructions.
In an exemplary embodiment of the present application, the second module 620 may perform association analysis on a plurality of action instructions based on the spatial location of the user, and determine an associated action instruction set from the plurality of action instructions in the following manner:
screening the plurality of action instructions based on the user space position to obtain a plurality of first action instructions, wherein the first action instructions are action instructions with the corresponding user space position being a preset space position;
and carrying out association analysis on the plurality of first action instructions based on an association rule discovery algorithm, and determining an association action instruction set from the plurality of first action instructions.
In an exemplary embodiment of the present application, the control scene may include a target control scene, where an occurrence position of the target control scene is an occurrence position corresponding to a preset spatial position;
the second module 620 may also be configured to:
acquiring the occurrence time of a first action instruction;
the second module 620 may implement, in the following manner, performing association analysis on the plurality of first action instructions based on the association rule discovery algorithm and determining an associated action instruction set from the plurality of first action instructions:
screening the plurality of first action instructions based on the occurrence time to obtain a plurality of second action instructions, wherein the second action instructions are first action instructions with the occurrence time interval of adjacent action instructions smaller than or equal to a preset time interval;
Performing association analysis on a plurality of second action instructions based on an association rule discovery algorithm to obtain a plurality of candidate association action instruction sets;
determining occurrence frequency of a plurality of groups of candidate associated action instruction sets in a preset time period;
based on the occurrence frequency, determining a candidate associated action instruction set with the highest occurrence frequency, and taking the candidate associated action instruction set with the highest occurrence frequency as an associated action instruction data set;
the third module 630 may implement generating a control scenario corresponding to the associated action instruction set based on the associated action instruction set in the following manner:
based on the associated action instruction data set, a target control scenario corresponding to the associated action instruction set is determined.
In an exemplary embodiment of the present application, the third module 630 may be further configured to:
storing an execution action sequence and a control instruction in a case that a save instruction by which the user confirms saving of the control scene is received, so that in a case that the control instruction is received, the plurality of smart home devices are controlled to execute in order according to the execution action sequence, wherein the execution action sequence is the execution action sequence of the plurality of smart home devices corresponding to the control scene, and the control instruction is an instruction for controlling the plurality of smart home devices to execute in order according to the execution action sequence.
In an exemplary embodiment of the present application, the third module 630 may be further configured to:
deleting the saved execution action sequence and the saved control instruction in a case that the user confirms deletion of a saved control scene, wherein the saved control scene is a control scene that the user has previously confirmed to save; the saved execution action sequence is the stored execution action sequence corresponding to the saved control scene, and the saved control instruction is an instruction for controlling the plurality of smart home devices to execute in order according to the saved execution action sequence.
In an exemplary embodiment of the present application, the third module 630 may determine the control instruction in the following manner:
obtaining a plurality of candidate control instructions, wherein the candidate control instructions are preset in advance;
determining a target candidate control instruction corresponding to the selection instruction from a plurality of candidate control instructions based on the selection instruction of the user, and taking the target candidate control instruction as the control instruction, or
Receiving user-defined control voice sent by a user;
and obtaining a control instruction based on the custom control voice.
In an exemplary embodiment of the present application, the third module 630 may be further configured to:
Determining a first target control scene and a second target control scene in the plurality of target control scenes based on the occurrence positions of the target control scenes, wherein the difference in distance between the first occurrence position of the first target control scene and the second occurrence position of the second target control scene is less than or equal to a distance threshold;
when the time interval between the first occurrence time and the second occurrence time is smaller than or equal to the interval threshold, and when the control instruction corresponding to the first target control scene is received again, the plurality of intelligent home devices are controlled to sequentially execute according to the execution action sequence corresponding to the second target control scene after the preset time interval, wherein the first occurrence time is the occurrence time of the control instruction corresponding to the first target control scene, and the second occurrence time is the occurrence time of the control instruction corresponding to the second target control scene.
Fig. 7 illustrates a physical schematic diagram of an electronic device, as shown in fig. 7, which may include: processor 710, communication interface (Communications Interface) 720, memory 730, and communication bus 740, wherein processor 710, communication interface 720, memory 730 communicate with each other via communication bus 740. Processor 710 may invoke logic instructions in memory 730 to perform a control scene generation method comprising: acquiring a plurality of action instructions initiated by a user to a plurality of intelligent home devices and corresponding user space positions when the user initiates the action instructions; performing association analysis on a plurality of action instructions based on the user space position, and determining an association action instruction set in the action instructions; and generating a control scene corresponding to the associated action instruction set based on the associated action instruction set.
Further, the logic instructions in the memory 730 may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application may be embodied, essentially or in the part contributing to the prior art, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
In another aspect, the present application further provides a computer program product, where the computer program product includes a computer program, where the computer program can be stored on a computer readable storage medium, where the computer program, when executed by a processor, can perform a control scenario generation method provided by the above methods, where the control scenario generation method includes: acquiring a plurality of action instructions initiated by a user to a plurality of intelligent home devices and corresponding user space positions when the user initiates the action instructions; performing association analysis on a plurality of action instructions based on the user space position, and determining an association action instruction set in the action instructions; and generating a control scene corresponding to the associated action instruction set based on the associated action instruction set.
In still another aspect, the present application further provides a computer readable storage medium, where the computer readable storage medium includes a stored program, where the program executes a control scenario generation method provided by the above methods, where the control scenario generation method includes: acquiring a plurality of action instructions initiated by a user to a plurality of intelligent home devices and corresponding user space positions when the user initiates the action instructions; performing association analysis on a plurality of action instructions based on the user space position, and determining an association action instruction set in the action instructions; and generating a control scene corresponding to the associated action instruction set based on the associated action instruction set.
The apparatus embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A control scene generation method, characterized in that the control scene generation method comprises:
acquiring a plurality of action instructions initiated by a user to a plurality of intelligent home devices and corresponding user space positions when the user initiates the action instructions;
performing association analysis on a plurality of action instructions based on the user space position, and determining an association action instruction set in the action instructions;
and generating a control scene corresponding to the associated action instruction set based on the associated action instruction set.
2. The control scenario generation method according to claim 1, wherein the performing association analysis on the plurality of action instructions based on the user space position, and determining an association action instruction set from the plurality of action instructions, specifically includes:
screening the action instructions based on the user space positions to obtain a plurality of first action instructions, wherein the first action instructions are action instructions with corresponding user space positions being preset space positions;
and carrying out association analysis on the plurality of first action instructions based on an association rule discovery algorithm, and determining an association action instruction set from the plurality of first action instructions.
3. The control scene generation method according to claim 2, wherein the control scene includes a target control scene, wherein an occurrence position of the target control scene is an occurrence position corresponding to the preset spatial position;
before the association rule discovery algorithm performs association analysis on the plurality of first action instructions, the control scene generation method further includes:
acquiring the occurrence time of the first action instruction;
the association rule discovery algorithm is based on performing association analysis on a plurality of first action instructions, and determining an association action instruction set from the plurality of first action instructions, wherein the association action instruction set specifically comprises:
screening the plurality of first action instructions based on the occurrence time to obtain a plurality of second action instructions, wherein the second action instructions are first action instructions with the occurrence time interval of adjacent action instructions smaller than or equal to a preset time interval;
performing association analysis on a plurality of second action instructions based on an association rule discovery algorithm to obtain a plurality of candidate association action instruction sets;
determining occurrence frequency of a plurality of groups of candidate associated action instruction sets in a preset time period;
Determining a candidate associated action instruction set with highest occurrence frequency based on the occurrence frequency, and taking the candidate associated action instruction set with highest occurrence frequency as the associated action instruction data set;
the generating a control scene corresponding to the associated action instruction set based on the associated action instruction set specifically comprises:
and determining the target control scene corresponding to the associated action instruction set based on the associated action instruction data set.
4. The control scenario generation method according to claim 1, characterized in that after the generation of the control scenario corresponding to the associated action instruction set, the control scenario generation method further comprises:
and storing an execution action sequence and a control instruction under the condition that a save instruction by which the user confirms saving of the control scene is received, so that, under the condition that the control instruction is received, the plurality of smart home devices are controlled to execute in order according to the execution action sequence, wherein the execution action sequence is the execution action sequence of the plurality of smart home devices corresponding to the control scene, and the control instruction is an instruction for controlling the plurality of smart home devices to execute in order according to the execution action sequence.
5. The control scenario generation method according to claim 1, characterized in that after the generation of the control scenario corresponding to the associated action instruction set, the control scenario generation method further comprises:
deleting the saved execution action sequence and the saved control instruction under the condition that the user confirms deletion of a saved control scene, wherein the saved control scene is a control scene that the user has confirmed to save; the saved execution action sequence is the stored execution action sequence corresponding to the saved control scene, and the saved control instruction is an instruction for controlling the plurality of smart home devices to execute in order according to the saved execution action sequence.
6. The control scene generation method according to claim 4, wherein the control instruction is determined by:
obtaining a plurality of candidate control instructions, wherein the candidate control instructions are preset in advance;
determining a target candidate control instruction corresponding to a selection instruction in a plurality of candidate control instructions based on the selection instruction of a user, and taking the target candidate control instruction as the control instruction, or
Receiving user-defined control voice sent by a user;
and obtaining the control instruction based on the custom control voice.
7. The control scenario generation method according to claim 3, characterized in that after the determination of the target control scenario corresponding to the associated action instruction set based on the associated action instruction data set, the control scenario generation method further comprises:
determining a first target control scene and a second target control scene among a plurality of target control scenes based on the occurrence positions of the target control scenes, wherein the distance between a first occurrence position of the first target control scene and a second occurrence position of the second target control scene is less than or equal to a distance threshold;
and, in a case where the time interval between a first occurrence time and a second occurrence time is less than or equal to an interval threshold, when the control instruction corresponding to the first target control scene is received again, controlling the smart home devices, after a preset time interval, to execute in order according to the execution action sequence corresponding to the second target control scene, wherein the first occurrence time is the occurrence time of the control instruction corresponding to the first target control scene, and the second occurrence time is the occurrence time of the control instruction corresponding to the second target control scene.
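The chaining condition of claim 7 can be sketched with two thresholds. The concrete threshold values, the 1-D position model, and all names are illustrative assumptions; the patent specifies only that a distance threshold and an interval threshold exist:

```python
# Hypothetical sketch of claim 7: two target control scenes whose trigger
# positions are within a distance threshold and whose trigger times are
# within an interval threshold get chained, so re-receiving the first
# scene's control instruction also runs the second scene's sequence.
# Threshold values and the 1-D position model are assumptions.

DISTANCE_THRESHOLD = 2.0   # metres, assumed
INTERVAL_THRESHOLD = 60.0  # seconds, assumed

def should_chain(first, second):
    """Each scene is a dict with the 'position' and 'time' of its instruction."""
    close_in_space = abs(first["position"] - second["position"]) <= DISTANCE_THRESHOLD
    close_in_time = abs(first["time"] - second["time"]) <= INTERVAL_THRESHOLD
    return close_in_space and close_in_time

def on_repeat_instruction(first, second):
    # Run the first scene's sequence; if the scenes are chained, append
    # the second scene's sequence (a real system would insert the
    # preset time interval between the two).
    plan = list(first["actions"])
    if should_chain(first, second):
        plan += list(second["actions"])
    return plan
```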
8. A control scene generation device, characterized in that the control scene generation device includes:
the first module is used for acquiring a plurality of action instructions initiated by a user to a plurality of smart home devices and the corresponding user spatial position at the time each action instruction is initiated;
the second module is used for performing association analysis on the plurality of action instructions based on the user spatial position, and determining an associated action instruction set among the action instructions;
and the third module is used for generating a control scene corresponding to the associated action instruction set based on the associated action instruction set.
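The three modules of claim 8 can be sketched as one class with three methods. The position-proximity rule used for the association analysis is an assumption chosen for illustration; the patent does not disclose a specific analysis algorithm, and every name here is invented:

```python
# Hypothetical sketch of the device in claim 8: a first module that
# records action instructions with the user's spatial position, a second
# module that groups instructions issued from nearby positions into an
# associated action instruction set, and a third module that wraps that
# set as a control scene. The 1-D position model, the threshold, and the
# anchor-based grouping rule are all illustrative assumptions.

class ControlSceneGenerator:
    def __init__(self, distance_threshold=1.0):
        self.records = []  # first module's output: (device, action, position)
        self.distance_threshold = distance_threshold

    def acquire(self, device, action, position):
        # First module: record each action instruction together with the
        # user's spatial position when it was initiated.
        self.records.append((device, action, position))

    def associate(self):
        # Second module: a simple association analysis keeping the
        # instructions issued within the distance threshold of the
        # first recorded position.
        if not self.records:
            return []
        anchor = self.records[0][2]
        return [(d, a) for d, a, p in self.records
                if abs(p - anchor) <= self.distance_threshold]

    def generate_scene(self, name):
        # Third module: wrap the associated set as a named control scene.
        return {"name": name, "actions": self.associate()}
```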
9. A computer-readable storage medium, characterized in that the computer-readable storage medium includes a stored program, wherein, when run, the program performs the control scene generation method of any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the control scenario generation method according to any one of claims 1 to 7 by means of the computer program.
CN202211635359.7A 2022-12-19 2022-12-19 Control scene generation method and device, storage medium and electronic device Pending CN116009438A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211635359.7A CN116009438A (en) 2022-12-19 2022-12-19 Control scene generation method and device, storage medium and electronic device


Publications (1)

Publication Number Publication Date
CN116009438A true CN116009438A (en) 2023-04-25

Family

ID=86024140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211635359.7A Pending CN116009438A (en) 2022-12-19 2022-12-19 Control scene generation method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN116009438A (en)

Similar Documents

Publication Publication Date Title
US11546186B2 (en) Automatic device orchestration and configuration
CN115167164A (en) Method and device for determining equipment scene, storage medium and electronic device
CN115327934A (en) Intelligent household scene recommendation method and system, storage medium and electronic device
CN114755931A (en) Control instruction prediction method and device, storage medium and electronic device
CN115343962A (en) Intelligent household appliance control method and system, intelligent screen and storage medium
CN116009438A (en) Control scene generation method and device, storage medium and electronic device
CN116540556A (en) Equipment control method and device based on user habit
CN115309062A (en) Device control method, device, storage medium, and electronic apparatus
CN115167160A (en) Device control method and apparatus, device control system, and storage medium
CN115631832A (en) Cooking plan determination method and device, storage medium and electronic device
CN114691731A (en) Usage preference determination method and apparatus, storage medium, and electronic apparatus
CN118226764A (en) Control method and device of mobile terminal equipment, storage medium and electronic device
CN115001885B (en) Equipment control method and device, storage medium and electronic device
CN116088334A (en) Smart home device joint control method, storage medium and electronic device
CN115148204B (en) Voice wakeup processing method and device, storage medium and electronic device
CN117914635A (en) Method and device for controlling smart home equipment to leave home based on vehicle terminal
CN115930378A (en) Control instruction sending method and device and electronic device
CN117240874A (en) Equipment linkage method and device, storage medium and electronic device
CN115473755A (en) Control method and device of intelligent equipment based on digital twins
CN114691730A (en) Storage position prompting method and device, storage medium and electronic device
CN115987935A (en) Notification message pushing method and device, storage medium and electronic device
CN116382110A (en) Equipment scheduling method and device, storage medium and electronic device
CN115616930A (en) Control instruction sending method and device, storage medium and electronic device
CN117376042A (en) Functional platform configuration method and device of intelligent equipment and electronic device
CN115562056A (en) Instruction sending method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination