CN110687802A - Intelligent household electrical appliance control method and intelligent household electrical appliance control device - Google Patents
- Publication number
- CN110687802A (application CN201810734605.1A)
- Authority
- CN
- China
- Prior art keywords
- comfort
- evaluation result
- control action
- parameter information
- state
- Prior art date
- Legal status (assumed; not a legal conclusion)
- Pending
Classifications
- G05B15/02: Systems controlled by a computer, electric
- G05B19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
- G05B2219/2642: Domotique, domestic, home control, automation, smart house
Abstract
The application relates to an intelligent household appliance control method and an intelligent household appliance control device, and belongs to the field of intelligent household appliance control. The intelligent household appliance control method comprises the following steps: acquiring parameter information; acquiring, based on the parameter information, a control action corresponding to the parameter information through a preset model, wherein the preset model comprises a reinforcement learning model, and the reinforcement learning model can be adjusted according to a comfort evaluation result; and controlling operation according to the control action. Based on the acquired parameter information, the method and the device output the control action with a good evaluation result to control the operation of the intelligent household appliance, so that the control action executed by the intelligent household appliance can meet the user's comfort requirement, comfort control is achieved, and user experience is improved.
Description
Technical Field
The application belongs to the field of intelligent household appliance control, and particularly relates to an intelligent household appliance control method and an intelligent household appliance control device.
Background
The intelligent household appliance can improve the comfort of home life. Taking an air conditioner as an example, it can provide a comfortable ambient temperature for the user.
Currently, a typical air conditioner control method is that the user sets an operating temperature, and the air conditioner performs feedback adjustment according to the ambient temperature of the room where it is located, so as to keep the room at the set temperature. For comfort control, the user usually adjusts the air conditioner according to bodily sensation. In this case, the operating state set by the user each time may not be the most comfortable operating state, resulting in poor user experience.
Therefore, there is still room for improvement in comfort control and user experience in the operation of intelligent household appliances.
Disclosure of Invention
In order to overcome the problems in the related art at least to a certain extent, the present application provides an intelligent household appliance control method and an intelligent household appliance control device.
To achieve this purpose, the present application adopts the following technical solutions:
an intelligent household appliance control method comprises the following steps:
acquiring parameter information;
acquiring a control action corresponding to the parameter information through a preset model based on the parameter information, wherein the preset model comprises a reinforcement learning model, and the reinforcement learning model can be adjusted according to a comfort evaluation result;
and controlling the operation according to the control action.
Further, the acquiring the parameter information includes:
acquiring environmental parameter information; and/or
acquiring parameter information of the intelligent household appliance itself.
Further, the acquiring the environmental parameter information includes:
acquiring environmental parameter information collected and/or configured by the intelligent household appliance itself; and/or
acquiring environmental parameter information collected and/or configured by a device external to the intelligent household appliance.
Further, the preset model further comprises a state transition model;
the obtaining of the control action corresponding to the parameter information through a preset model based on the parameter information includes:
acquiring a state parameter corresponding to the parameter information through the state transition model based on the parameter information, wherein the state transition model is used for representing the corresponding relation between the parameter information and the state parameter;
and generating a control action through the reinforcement learning model based on the state parameter, wherein the reinforcement learning model is used for representing the corresponding relation between the state parameter and the control action.
Further, the state transition model comprises one or more of a state comparison table, a neural network model and a preset logic rule.
Further, the reinforcement learning model can be adjusted according to the comfort evaluation result, and the method comprises the following steps:
the probability of the control action output by the reinforcement learning model can be adjusted according to a comfort evaluation result.
Further, the method further includes:
obtaining the comfort evaluation result after the control operation according to the control action;
and updating the reinforcement learning model according to the comfort evaluation result.
Further, the obtaining the comfort evaluation result after the operation according to the control action includes:
acquiring state parameters before and after executing corresponding operation according to the control action;
calculating a first comfort value and a second comfort value according to a preset comfort evaluation algorithm, wherein the first comfort value is a comfort value corresponding to the state parameter before the corresponding operation is executed according to the control action, and the second comfort value is a comfort value corresponding to the state parameter after the corresponding operation is executed according to the control action;
and obtaining the comfort evaluation result according to the first comfort value and the second comfort value.
Further, in the comfort evaluation algorithm, the same or different weights are set for the respective state parameters.
Further, the obtaining the comfort evaluation result after the operation according to the control action includes:
and obtaining a comfort evaluation result fed back by the user after executing corresponding operation according to the control action.
Further, the comfort evaluation result includes a positive evaluation result or a negative evaluation result, and the updating the reinforcement learning model according to the comfort evaluation result includes:
if the comfort evaluation result is a positive evaluation result, increasing the output probability of the control action; or,
if the comfort evaluation result is a negative evaluation result, reducing the output probability of the control action.
An intelligent appliance control device comprising:
the first acquisition module is used for acquiring parameter information;
the second acquisition module is used for acquiring control actions corresponding to the parameter information through a preset model based on the parameter information, wherein the preset model comprises a reinforcement learning model, and the reinforcement learning model can be adjusted according to comfort evaluation results;
and the control module is used for controlling the operation according to the control action.
Further, the first obtaining module is specifically configured to:
obtaining environmental parameter information, and/or,
and acquiring parameter information of the intelligent household appliance.
Further, in the second obtaining module, the preset model further includes a state transition model;
the obtaining of the control action corresponding to the parameter information through a preset model based on the parameter information includes:
acquiring a state parameter corresponding to the parameter information through the state transition model based on the parameter information, wherein the state transition model is used for representing the corresponding relation between the parameter information and the state parameter;
and generating a control action through the reinforcement learning model based on the state parameter, wherein the reinforcement learning model is used for representing the corresponding relation between the state parameter and the control action.
Further, the intelligent household electrical appliance control device further comprises:
the evaluation module is used for obtaining the comfort evaluation result after the control operation according to the control action;
and the updating module is used for updating the reinforcement learning model according to the comfort evaluation result.
Further, the evaluation module is specifically configured to:
acquiring state parameters before and after executing corresponding operation according to the control action;
calculating a first comfort value and a second comfort value according to a preset comfort evaluation algorithm, wherein the first comfort value is a comfort value corresponding to the state parameter before the corresponding operation is executed according to the control action, and the second comfort value is a comfort value corresponding to the state parameter after the corresponding operation is executed according to the control action;
and obtaining the comfort evaluation result according to the first comfort value and the second comfort value.
Further, the evaluation module is further specifically configured to:
and obtaining a comfort evaluation result fed back by the user after executing corresponding operation according to the control action.
Further, the update module is specifically configured to:
the comfort evaluation result comprises a positive evaluation result or a negative evaluation result;
if the comfort evaluation result is a positive evaluation result, increasing the output probability of the control action; or,
if the comfort evaluation result is a negative evaluation result, reducing the output probability of the control action.
By adopting the above technical solutions, the present application has at least the following beneficial effects:
Based on the acquired parameter information, a control action corresponding to the parameter information is obtained through a preset model, where the preset model includes a reinforcement learning model that can be adjusted according to a comfort evaluation result. By outputting the control action with a good evaluation result to control the operation of the intelligent household appliance, the control action executed by the intelligent household appliance can meet the user's comfort requirement, thereby achieving comfort control of the intelligent household appliance and improving user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an intelligent household appliance control method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a state transition model provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of an intelligent household appliance control method according to another embodiment of the present application;
fig. 4 is a schematic flowchart of an intelligent household appliance control method according to another embodiment of the present application;
fig. 5 is a schematic structural diagram of an intelligent appliance control device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an intelligent appliance control device according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail below. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a schematic flow chart of an intelligent household appliance control method according to an embodiment of the present application, and as shown in fig. 1, the intelligent household appliance control method includes the following steps:
step S101, acquiring parameter information.
It can be understood that the parameter information is related to the comfort of the control action of the intelligent household appliance, and the parameter information may be environmental parameter information. For example, the intelligent household appliance may use environmental parameter information collected by its own sensors, such as indoor temperature, humidity, and particulate matter information, or room information configured in the appliance, such as the size, orientation, and lighting of the room. Alternatively, the environmental parameter information may be collected and/or configured by a device external to the intelligent household appliance. The intelligent household appliance can receive environmental parameter information sent by external devices; for example, it may receive local weather information, such as temperature, humidity, rain, and snow, sent by a cloud server over a network. The intelligent household appliance may also be associated with other intelligent household appliances and sensors and receive the environmental parameter information they collect: for example, it may receive temperature and humidity information collected by other intelligent household appliances; it may be associated with a door and window sensor, which obtains the open or closed state of the doors and windows, and then receive that state information from the sensor; or it may receive information sent by the control center of a smart home system, such as room information configured in that control center.
From the viewpoint of obtaining, the environmental parameter information may be obtained by the intelligent appliance itself, or obtained by the intelligent appliance from another device.
From the perspective of specific information, in one specific embodiment, the environmental parameter information may include at least one of the following: local weather information, such as temperature, humidity, rain, and snow; room information of the room in which the intelligent household appliance is located, such as space size, orientation, and lighting conditions; and information provided by other devices in the room where the intelligent household appliance is located, such as door and window state information, for example, doors and windows being open or closed.
In addition, the parameter information may be parameter information of the intelligent home appliance itself, such as runtime length information of the intelligent home appliance.
Through the above embodiments, diversified acquisition of parameter information can be achieved. The diversified parameter information is used comprehensively to control the intelligent household appliance; by responding to diversified parameter information instead of requiring the user to operate the appliance in person, user experience can be improved.
In a specific embodiment, the intelligent household appliance includes, but is not limited to, an intelligent air conditioner. Taking the intelligent air conditioner as an example, it can respond to diversified parameter information and operate without the user's personal intervention, improving user experience; in some cases it can also save energy. For example, when hot local weather suddenly turns cool and the user is unaware, the intelligent air conditioner can adjust its operation according to the local weather at that moment, thereby saving energy.
And S102, acquiring a control action corresponding to the parameter information through a preset model based on the parameter information, wherein the preset model comprises a reinforcement learning model, and the reinforcement learning model can be adjusted according to a comfort evaluation result.
In the scheme, the control action can be generated through the reinforcement learning model, and the reinforcement learning model can be adjusted according to the comfort evaluation result, so that the control action generated by adjustment meets the comfort experience of a user.
In some embodiments, the preset model may further include a state transition model, and accordingly, the obtaining, based on the parameter information, a control action corresponding to the parameter information through the preset model includes:
acquiring a state parameter corresponding to the parameter information through the state transition model based on the parameter information, wherein the state transition model is used for representing the corresponding relation between the parameter information and the state parameter;
and generating a control action through the reinforcement learning model based on the state parameter, wherein the reinforcement learning model is used for representing the corresponding relation between the state parameter and the control action.
Fig. 2 is a schematic structural diagram of a state transition model according to an embodiment of the present application. As shown in fig. 2, the input of the state transition model 20 is parameter information and the output is state parameters. For example, the parameter information includes a door and window closing condition 201, a weather environment condition 202 (including temperature, humidity, rain, snow, and the like), and room information 203 (including space size, orientation, lighting, and the like). The number of state parameters is preset and fixed; fig. 2 takes three state parameters as an example. The specific state parameters may be set according to actual conditions. For example, taking the intelligent household appliance as an air conditioner, the state parameters may include temperature, humidity, and lighting, each determined comprehensively from the various items of parameter information.
The state transition model 20 may be a manually set logic rule, a state lookup table, a neural network structure, or a mixture of the three. Its output is a simplified mapping of the input information, and the specific output parameter types are determined according to the actual control target. Typically, establishing this mapping requires extraction from, or training on, extensive actual case data. For example, the state parameter B corresponding to parameter information A can be determined from actual case data; then, when the acquired parameter information is A, the corresponding state parameter B can be obtained through the state transition model.
State transition is suited to cases where the parameter information is complex and numerous: it converts the complex parameter information into state parameters with a known mapping relation, so that the complex information is summarized by the state parameters. This simplifies data processing and avoids the processing pressure the reinforcement learning model would otherwise face when handling complex and numerous parameter information.
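The logic-rule form of such a state transition model can be sketched as follows. This is a minimal, hypothetical illustration in Python; all parameter names, thresholds, and state labels are assumptions made for illustration and are not taken from the patent itself.

```python
# Hypothetical state transition model: maps raw parameter information
# (weather, humidity, room orientation, window status) to a small, fixed
# set of state parameters via hand-written logic rules. Names and
# thresholds are illustrative assumptions only.

def state_transition(params: dict) -> tuple:
    """Map parameter information to a fixed-size state tuple."""
    # Temperature state: discretize indoor temperature into three levels.
    temp = params.get("indoor_temp_c", 25.0)
    if temp < 22:
        temp_state = "cold"
    elif temp <= 27:
        temp_state = "comfortable"
    else:
        temp_state = "hot"

    # Humidity state: a simple two-level rule.
    humidity_state = "humid" if params.get("humidity_pct", 50) > 65 else "dry"

    # Lighting state: combine room orientation and window status.
    bright = params.get("orientation") == "south" and params.get("window_open", False)
    light_state = "bright" if bright else "dim"

    return (temp_state, humidity_state, light_state)

print(state_transition({"indoor_temp_c": 29, "humidity_pct": 70,
                        "orientation": "south", "window_open": True}))
# → ('hot', 'humid', 'bright')
```

In practice the rules above would be replaced or supplemented by a state lookup table or a trained neural network, as the description notes; the fixed-size output tuple is what matters for the downstream reinforcement learning model.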
The input of the reinforcement learning model is a state parameter, and the output is a control action. Taking an air conditioner as an example, the state parameter may be a temperature decrease and the control action a temperature increase. The output probability of each control action of the reinforcement learning model can be adjusted according to the comfort evaluation result, so that each time the reinforcement learning model outputs a control action, the control action with the best comfort evaluation result can be obtained.
In one embodiment, the reinforcement learning model can be adjusted according to the comfort evaluation result, and includes: the probability of the control action output by the reinforcement learning model can be adjusted according to a comfort evaluation result.
It will be appreciated that the output probability of the control action may be adjusted according to the comfort assessment result, such that the resulting control action may be adapted to the comfort experience of the user with the maximum probability.
Step S103, controlling the operation according to the control action.
It can be understood that the reinforcement learning model is obtained by continuously and cyclically updating with the comfort evaluation results of control actions, and its control actions can be adjusted according to those results. For a specific state parameter, the reinforcement learning model outputs the control action with the best evaluation result corresponding to that state parameter to control the operation of the intelligent household appliance, so that the control action executed by the intelligent household appliance achieves comfort control and further improves user experience.
Fig. 3 is a schematic flow chart of an intelligent household appliance control method according to another embodiment of the present application, and as shown in fig. 3, the intelligent household appliance control method further includes the following steps:
and step S104, obtaining the comfort evaluation result after the control operation according to the control action.
The comfort evaluation result reflects the comfort experience degree given to the user after the control action is executed, and it can be understood that after the control action has a new comfort evaluation result, the control action is adjusted according to the new comfort evaluation result when the intelligent household appliance runs later.
The comfort evaluation result can be a feedback result of the user, and/or the comfort evaluation result can be calculated according to a preset algorithm.
For example, in an embodiment, the obtaining the comfort evaluation result after the intelligent appliance is controlled to operate according to the control action includes:
and obtaining a comfort evaluation result fed back by the user after executing corresponding operation according to the control action.
Taking the air conditioner as an example, after the air conditioner executes the control action, the user can perform intuitive comfort experience evaluation on the environment, and the result is taken as the comfort evaluation result of the air conditioner control action.
For another example, in another embodiment, the obtaining the comfort evaluation result after the operation according to the control action includes:
acquiring state parameters before and after executing corresponding operation according to the control action;
calculating a first comfort value and a second comfort value according to a preset comfort evaluation algorithm, wherein the first comfort value is a comfort value corresponding to the state parameter before the corresponding operation is executed according to the control action, and the second comfort value is a comfort value corresponding to the state parameter after the corresponding operation is executed according to the control action;
and obtaining the comfort evaluation result according to the first comfort value and the second comfort value.
Taking an air conditioner as an example: before the air conditioner executes a new control action, its state parameters correspond to a first comfort value; after it executes the new control action, its new state parameters correspond to a second comfort value. The two values are compared: if the second comfort value is larger than the first, the comfort evaluation result is a positive evaluation result and the comfort evaluation of the control action is good; if the second comfort value is smaller than the first, the comfort evaluation result is a negative evaluation result and the comfort evaluation of the control action is poor.
In the above scheme, the comfort value corresponding to the state parameters is obtained through a preset comfort evaluation algorithm. The comfort evaluation algorithm may be a lookup table from state parameters to comfort values, a formula, or the like. The same or different weights can be set for the respective state parameters in the comfort evaluation algorithm, and the state parameters are quantified through the weight proportions to obtain the corresponding comfort value.
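A weighted comfort evaluation algorithm of the kind just described can be sketched as follows. The weights, score tables, and state labels below are illustrative assumptions, not values from the patent; the point is the weighted sum and the before/after comparison that yields a positive or negative evaluation result.

```python
# Hypothetical comfort evaluation algorithm: each state parameter gets a
# weight, a per-parameter score is looked up, and the weighted sum is the
# comfort value. Comparing the value before and after a control action
# yields a positive or negative evaluation result.

WEIGHTS = {"temp": 0.5, "humidity": 0.3, "light": 0.2}
SCORES = {
    "temp": {"cold": 0.3, "comfortable": 1.0, "hot": 0.2},
    "humidity": {"dry": 1.0, "humid": 0.4},
    "light": {"bright": 0.8, "dim": 0.6},
}

def comfort_value(state: dict) -> float:
    """Weighted sum of per-parameter comfort scores."""
    return sum(WEIGHTS[k] * SCORES[k][v] for k, v in state.items())

def evaluate(before: dict, after: dict) -> str:
    """Compare the first and second comfort values."""
    first, second = comfort_value(before), comfort_value(after)
    return "positive" if second > first else "negative"

before = {"temp": "hot", "humidity": "humid", "light": "dim"}
after = {"temp": "comfortable", "humidity": "dry", "light": "dim"}
print(evaluate(before, after))  # → positive
```

Here the first comfort value is 0.34 and the second is 0.92, so the control action receives a positive evaluation result.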
And step S105, updating the reinforcement learning model according to the comfort evaluation result.
It can be understood that, after the reinforcement learning model is updated according to the comfort evaluation result, the control action of the reinforcement learning model is also adjusted accordingly, and the control action with the best evaluation result output by the reinforcement learning model can be realized.
In one embodiment, the comfort evaluation result includes a positive evaluation result or a negative evaluation result, and the updating the reinforcement learning model according to the comfort evaluation result includes:
if the comfort evaluation result is a positive evaluation result, increasing the output probability of the control action; alternatively, the first and second electrodes may be,
and if the comfort evaluation result is a negative evaluation result, reducing the output probability of the control action.
Taking an air conditioner as an example: if the comfort evaluation result is a positive evaluation result, the control action increased the comfort of the indoor environment for the user, and the probability of that control action occurring in the later operation of the air conditioner is increased; if the comfort evaluation result is a negative evaluation result, the control action reduced the comfort of the indoor environment for the user, and the probability of that control action occurring in later operation is reduced. In the actual operation of the air conditioner, this process is repeated many times. If a control action receives positive evaluations many times, the indoor environment after executing it genuinely gives the user a good comfort experience, and the probability of the air conditioner executing that control action becomes very high. The control actions of the air conditioner are thus adjusted toward optimal comfort, improving the comfort control of air conditioner operation and improving user experience.
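The repeated raise-or-lower update can be sketched as a simple preference adjustment, in the spirit of the probability adjustment described above. The fixed step size and dictionary storage are illustrative assumptions; a real implementation might instead update neural network weights.

```python
# Hypothetical update step: a positive evaluation raises the preference
# (and hence the output probability) of the executed control action for
# that state; a negative evaluation lowers it. Step size is an assumption.

STEP = 0.1

def update_preferences(prefs: dict, state, action, evaluation: str) -> None:
    """Raise or lower the (state, action) preference per the evaluation result."""
    delta = STEP if evaluation == "positive" else -STEP
    prefs[(state, action)] = prefs.get((state, action), 0.0) + delta

prefs = {}
state = ("hot", "humid", "dim")
# Repeated positive feedback steadily increases the action's preference,
# so the action is output with ever higher probability.
for _ in range(3):
    update_preferences(prefs, state, "lower_temp", "positive")
print(round(prefs[(state, "lower_temp")], 2))  # → 0.3
```

Combined with a softmax (or similar) mapping from preferences to probabilities, repeatedly well-evaluated control actions come to dominate the output, which is the convergence behavior the paragraph above describes.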
Fig. 4 is a schematic flowchart of an intelligent household appliance control method according to another embodiment of the present application, and as shown in fig. 4, the intelligent household appliance control method includes the following steps:
S21, acquiring parameter information, including: acquiring environmental parameter information and/or acquiring parameter information of the intelligent household appliance.
S22, acquiring, based on the parameter information, a control action corresponding to the parameter information through a preset model, wherein the preset model includes a state transition model and a reinforcement learning model, and the reinforcement learning model is adjustable according to the comfort evaluation result:
acquiring a state parameter corresponding to the parameter information through the state transition model, wherein the state transition model represents the correspondence between the parameter information and the state parameter;
generating a control action through the reinforcement learning model based on the state parameter, wherein the reinforcement learning model represents the correspondence between the state parameter and the control action.
S23, controlling operation according to the control action.
S24, obtaining the comfort evaluation result after controlling operation according to the control action, including:
acquiring the state parameters before and after the corresponding operation is executed according to the control action;
calculating a first comfort value and a second comfort value according to a preset comfort evaluation algorithm, wherein the first comfort value is the comfort value corresponding to the state parameter before the operation is executed according to the control action, and the second comfort value is the comfort value corresponding to the state parameter after the operation is executed;
obtaining the comfort evaluation result according to the first comfort value and the second comfort value.
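Step S24 above can be sketched as follows; the `comfort_value` scoring function and its 24-degree set point are hypothetical stand-ins for the preset comfort evaluation algorithm:

```python
# Hypothetical comfort scoring: the closer the temperature is to an
# assumed 24 degree set point, the higher the comfort value.
def comfort_value(state):
    return -abs(state["temperature"] - 24.0)

def comfort_evaluation(state_before, state_after):
    first = comfort_value(state_before)    # comfort before executing the action
    second = comfort_value(state_after)    # comfort after executing the action
    return "positive" if second > first else "negative"

# Cooling from 29 to 25 degrees moves closer to the set point.
result = comfort_evaluation({"temperature": 29.0}, {"temperature": 25.0})
```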
S25, the comfort evaluation result includes a positive evaluation result or a negative evaluation result, and the reinforcement learning model is updated according to the comfort evaluation result, including: if the comfort evaluation result is a positive evaluation result, increasing the output probability of the control action; or, if the comfort evaluation result is a negative evaluation result, reducing the output probability of the control action.
It is to be understood that step S24, obtaining the comfort evaluation result after controlling operation according to the control action, may further include:
obtaining a comfort evaluation result fed back by the user after the corresponding operation is executed according to the control action. The evaluation fed back by the user is taken as the evaluation of the control action.
It can be understood that, in the actual operation of the intelligent household appliance, the above steps are repeated many times, so that the control actions of the intelligent household appliance are adjusted toward the direction optimal for user experience. This improves the comfort control of the intelligent household appliance's operation, makes the control more accurate, and thereby improves the user experience.
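Under illustrative assumptions (a fixed two-action policy, a 24-degree comfort target, and a 2-degree change per action, none of which come from the patent), the repetition of steps S21 through S25 can be sketched as a closed loop:

```python
# Toy closed loop over steps S21-S25; in a real appliance, S25 would
# feed each evaluation back into the reinforcement learning model.
def control_loop(start_temp=30.0, steps=5):
    temp = start_temp
    history = []
    for _ in range(steps):
        params = {"temperature": temp}                          # S21: acquire parameters
        state = "hot" if params["temperature"] > 26 else "ok"   # S22: state transition
        action = "lower" if state == "hot" else "hold"          # S22: model outputs action
        temp_after = temp - 2.0 if action == "lower" else temp  # S23: control operation
        # S24: positive if comfort (closeness to 24) improved, otherwise negative
        result = "positive" if abs(temp_after - 24) < abs(temp - 24) else "negative"
        history.append((action, result))                        # S25: update would use this
        temp = temp_after
    return temp, history

final_temp, history = control_loop()  # settles at 26, the "ok" threshold here
```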
It should be noted that the embodiments of the intelligent household appliance control method are not limited to intelligent air conditioners; they can also be applied to other intelligent household appliances, such as intelligent air purifiers.
Fig. 5 is a schematic structural diagram of an intelligent home appliance control device according to an embodiment of the present application, and as shown in fig. 5, the intelligent home appliance control device 5 includes:
a first obtaining module 51, configured to obtain parameter information;
a second obtaining module 52, configured to obtain, based on the parameter information, a control action corresponding to the parameter information through a preset model, where the preset model includes a reinforcement learning model, and the reinforcement learning model is adjustable according to a comfort evaluation result;
and the control module 53 is used for controlling the operation according to the control action.
It can be understood that, in the above intelligent household appliance control device 5, the first obtaining module 51 acquires the parameter information and the second obtaining module 52 obtains the control action of the intelligent household appliance based on that information. Because the reinforcement learning model in the second obtaining module 52 is adjustable according to the comfort evaluation result, the control action it generates can be the one with the best comfort evaluation result, so the operation controlled by the control module 53 according to that action suits the user's comfort experience.
In an embodiment, the first obtaining module 51 is specifically configured to:
acquiring environmental parameter information; and/or
acquiring parameter information of the intelligent household appliance.
It can be understood that the parameter information relates to the comfort of the intelligent household appliance's control actions, and it may be environmental parameter information. For example, the intelligent household appliance may use environmental parameter information acquired by its own sensors, such as indoor temperature, humidity, and particulate information, or information configured in the appliance, such as the size, orientation, and lighting of the room. Alternatively, the environmental parameter information may be collected and/or configured by a device external to the intelligent household appliance, and the appliance receives it from that device. For example, the appliance may receive local weather information, such as temperature, humidity, rain, and snow, sent by a cloud server over a network. The intelligent household appliance may also be associated with other intelligent household appliances and sensors and receive the environmental parameter information they acquire: for example, it may receive temperature and humidity information acquired by other intelligent household appliances; or it may be associated with a door and window sensor that acquires the open/closed state of doors and windows and sends that state information to the appliance; or it may receive information sent by the control center of a smart home system, such as room information configured in that control center.
From the perspective of how it is obtained, the environmental parameter information may be acquired by the intelligent household appliance itself or obtained by the appliance from another device.
From the perspective of content, in one specific embodiment, the environmental parameter information may include at least one of: local weather information, such as temperature, humidity, rain, and snow; room information for the room in which the intelligent household appliance is located, such as space size, orientation, and lighting conditions; and information from other devices in that room, such as door and window state information, for example whether doors and windows are open or closed.
In addition, the parameter information may be parameter information of the intelligent household appliance itself, such as how long the appliance has been running.
Through the above embodiments, diversified acquisition of parameter information can be achieved. The diversified parameter information is used comprehensively to control the intelligent household appliance; by responding to it automatically instead of requiring the user to operate the appliance in person, the user experience can be improved.
In a specific embodiment, the intelligent household appliance includes, but is not limited to, an intelligent air conditioner. Taking the intelligent air conditioner as an example, it can control itself in response to diversified parameter information instead of requiring the user to operate it in person, improving the user experience; in some situations it can also save energy. For example, when hot local weather suddenly cools down and the user is unaware of it, the intelligent air conditioner can adjust its operation according to the local weather at that moment, thereby conserving energy.
In one embodiment, in the second obtaining module 52, the preset model further includes a state transition model;
the obtaining of the control action corresponding to the parameter information through a preset model based on the parameter information includes:
acquiring a state parameter corresponding to the parameter information through the state transition model based on the parameter information, wherein the state transition model is used for representing the correspondence between the parameter information and the state parameter;
and generating a control action through the reinforcement learning model based on the state parameter, wherein the reinforcement learning model is used for representing the corresponding relation between the state parameter and the control action.
Fig. 2 is a schematic structural diagram of a state transition model according to an embodiment of the present application. As shown in Fig. 2, the input of the state transition model 20 is parameter information and the output is a state parameter. For example, the parameter information includes the door and window closing condition 201, the weather environment condition 202 (including temperature, humidity, rain, snow, and the like), and the room information 203 (including space size, orientation, lighting, and the like). The number of state parameters is preset and fixed; Fig. 2 takes three state parameters as an example. The specific state parameters may be set according to actual conditions. For example, taking the intelligent household appliance as an air conditioner, the state parameters may include temperature, humidity, and lighting, three states obtained by comprehensively evaluating the various items of parameter information.
The state transition model 20 may be a manually defined logic rule, a state lookup table, a neural network, or a mixture of the three. Its output is a simplified mapping of the input information, and the specific output parameter types are determined by the actual control target. Typically, establishing this mapping requires extensive extraction from, or training on, actual case data. For example, it can be determined from actual case data that the state parameter B corresponds to the parameter information A; then, whenever the parameter information A is acquired, the corresponding state parameter B can be obtained through the state transition model.
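In its simplest, manually defined logic-rule form, such a model might look like the sketch below; the parameter keys and the three output states are hypothetical:

```python
# Sketch of a logic-rule state transition model: rich parameter
# information is reduced to one of a small fixed set of states.
def to_state(params):
    if params["windows_open"]:
        return "ventilating"
    if params["outdoor_temp"] >= 30:
        return "hot"
    return "mild"

# Acquiring parameter information A yields its corresponding state B.
state = to_state({"windows_open": False, "outdoor_temp": 33, "room_size": 20})
```

A lookup-table or neural-network form would replace these rules with, respectively, a precomputed table or a trained mapping distilled from case data.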
This state conversion is suited to situations where the parameter information is complex and numerous. Converting complex parameter information into state parameters through this mapping lets the state parameters summarize the parameter information, simplifies data processing, and relieves the reinforcement learning model of the pressure of processing complex and numerous parameter information directly.
The input of the reinforcement learning model is a state parameter and its output is a control action. Taking an air conditioner as an example, the state parameter may be a temperature decrease and the control action a temperature increase. The output probabilities of the reinforcement learning model's output parameters can be adjusted according to the comfort evaluation result, so that each time the reinforcement learning model outputs a control action, it can be the control action with the best comfort evaluation result.
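One way to picture this correspondence is as a per-state table of action output probabilities, which the comfort evaluation results would later adjust; the states, actions, and numbers here are illustrative assumptions, not values from the patent:

```python
import random

# Per-state output probabilities of the reinforcement learning model.
action_probs = {
    "hot":  {"lower_temp": 0.7, "hold": 0.2, "raise_temp": 0.1},
    "mild": {"lower_temp": 0.1, "hold": 0.8, "raise_temp": 0.1},
}

def choose_action(state):
    # Sample a control action according to the state's probabilities.
    probs = action_probs[state]
    return random.choices(list(probs), weights=list(probs.values()))[0]

action = choose_action("hot")  # most often "lower_temp"
```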
Fig. 6 is a schematic structural diagram of an intelligent household appliance control device according to another embodiment of the present application, and as shown in fig. 6, the intelligent household appliance control device 5 further includes:
the evaluation module 54 is used for obtaining the comfort evaluation result after the control operation according to the control action;
and the updating module 55 is used for updating the reinforcement learning model according to the comfort evaluation result.
It can be understood that, after the evaluation module 54 obtains a new comfort evaluation result for a control action, how that control action occurs in the future operation of the intelligent household appliance can be adjusted according to the new result. After the updating module 55 updates the reinforcement learning model according to the comfort evaluation result, the control actions of the reinforcement learning model are adjusted accordingly, so that the control action with the best comfort evaluation result is output. Through these modules, in the actual operation of the intelligent household appliance, the above process is repeated many times, so that the control actions of the appliance are adjusted toward the direction optimal for the user's comfort experience. This improves the comfort control of the appliance's operation, makes the control more accurate, and thereby improves the user experience.
In one embodiment, the evaluation module 54 is specifically configured to:
acquiring state parameters before and after executing corresponding operation according to the control action;
calculating a first comfort value and a second comfort value according to a preset comfort evaluation algorithm, wherein the first comfort value is a comfort value corresponding to the state parameter before the corresponding operation is executed according to the control action, and the second comfort value is a comfort value corresponding to the state parameter after the corresponding operation is executed according to the control action;
and obtaining the comfort evaluation result according to the first comfort value and the second comfort value.
Taking an air conditioner as an example, before the air conditioner executes a new control action its state parameters correspond to a first comfort value, and after it executes the new control action its new state parameters correspond to a second comfort value. The two values are compared: if the second comfort value is greater than the first, the comfort evaluation result is a positive evaluation result and the control action's comfort evaluation is good; if the second comfort value is smaller than the first, the comfort evaluation result is a negative evaluation result and the control action's comfort evaluation is poor.
In the above scheme, the comfort value corresponding to the state parameters is obtained through a preset comfort evaluation algorithm, which may be a lookup table mapping state parameters to comfort values, a formula, or the like. The comfort evaluation algorithm may assign each state parameter the same or a different weight, and the state parameters are quantified through these weights to obtain the corresponding comfort value.
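A weighted version of such a comfort evaluation algorithm could be sketched as follows; the parameter names, weights, and the 0-to-1 per-parameter scores are assumptions for illustration:

```python
# Illustrative weights: each state parameter contributes to the overall
# comfort value in proportion to its assumed importance.
WEIGHTS = {"temperature": 0.5, "humidity": 0.3, "lighting": 0.2}

def weighted_comfort_value(scores):
    # 'scores' maps each state parameter to a comfort score in [0, 1].
    return sum(WEIGHTS[name] * score for name, score in scores.items())

value = weighted_comfort_value(
    {"temperature": 0.9, "humidity": 0.6, "lighting": 1.0}
)
```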
In one embodiment, the evaluation module 54 is further specifically configured to:
and obtaining a comfort evaluation result fed back by the user after executing corresponding operation according to the control action.
Taking the air conditioner as an example, after the air conditioner executes the control action, the user can give an intuitive evaluation of the comfort experienced in the environment, and this is taken as the comfort evaluation result of the air conditioner's control action.
In one embodiment, the update module 55 is specifically configured to:
the comfort evaluation result comprises a positive evaluation result or a negative evaluation result;
if the comfort evaluation result is a positive evaluation result, increasing the output probability of the control action; or,
if the comfort evaluation result is a negative evaluation result, reducing the output probability of the control action.
Taking an air conditioner as an example, a positive evaluation result means that, for the user, the comfort of the indoor environment increased after the control action was executed; based on this positive evaluation, the probability of that control action occurring in the later operation of the air conditioner is increased. A negative evaluation result means that the comfort of the indoor environment decreased after the control action was executed; based on this negative evaluation, the probability of that control action occurring in the later operation of the air conditioner is reduced. It can be understood that, in actual operation, this process is repeated many times: if a control action receives positive evaluations repeatedly, the indoor environment after that action genuinely gives the user a good comfort experience, and the probability of the air conditioner executing it becomes high. The control actions of the air conditioner are thereby adjusted toward optimal comfort, improving the comfort control of the air conditioner's operation and improving the user experience.
It should be noted that the application examples of the intelligent household appliance control device 5 include, but are not limited to, intelligent air conditioners; the device may also be applied to other intelligent household appliances, such as intelligent air purifiers.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Claims (18)
1. An intelligent household appliance control method is characterized by comprising the following steps:
acquiring parameter information;
acquiring a control action corresponding to the parameter information through a preset model based on the parameter information, wherein the preset model comprises a reinforcement learning model, and the reinforcement learning model can be adjusted according to a comfort evaluation result;
and controlling the operation according to the control action.
2. The intelligent appliance control method according to claim 1, wherein the acquiring parameter information comprises:
acquiring environmental parameter information; and/or
acquiring parameter information of the intelligent household appliance.
3. The intelligent appliance control method according to claim 2, wherein the acquiring environmental parameter information comprises:
acquiring environmental parameter information collected and/or configured by the intelligent household appliance; and/or
acquiring environmental parameter information collected and/or configured by a device external to the intelligent household appliance.
4. The intelligent appliance control method of any one of claims 1 to 3, wherein the preset model further comprises a state transition model;
the obtaining of the control action corresponding to the parameter information through a preset model based on the parameter information includes:
acquiring a state parameter corresponding to the parameter information through the state transition model based on the parameter information, wherein the state transition model is used for representing the correspondence between the parameter information and the state parameter;
and generating a control action through the reinforcement learning model based on the state parameter, wherein the reinforcement learning model is used for representing the corresponding relation between the state parameter and the control action.
5. The intelligent appliance control method according to claim 4,
the state transition model comprises one or more of a state comparison table, a neural network model and a preset logic rule.
6. The intelligent appliance control method according to claim 4, wherein the reinforcement learning model is adjustable according to a comfort evaluation result, and comprises:
the probability of the control action output by the reinforcement learning model can be adjusted according to a comfort evaluation result.
7. The intelligent appliance control method of claim 4, further comprising:
obtaining the comfort evaluation result after the control operation according to the control action;
and updating the reinforcement learning model according to the comfort evaluation result.
8. The intelligent appliance control method of claim 7, wherein the obtaining the comfort evaluation result after the operation according to the control action comprises:
acquiring state parameters before and after executing corresponding operation according to the control action;
calculating a first comfort value and a second comfort value according to a preset comfort evaluation algorithm, wherein the first comfort value is a comfort value corresponding to the state parameter before the corresponding operation is executed according to the control action, and the second comfort value is a comfort value corresponding to the state parameter after the corresponding operation is executed according to the control action;
and obtaining the comfort evaluation result according to the first comfort value and the second comfort value.
9. The intelligent home appliance control method according to claim 8, wherein the comfort evaluation algorithm sets the same or different weights for each state parameter.
10. The intelligent appliance control method according to claim 7 or 8, wherein the obtaining of the comfort evaluation result after the operation according to the control action includes:
and obtaining a comfort evaluation result fed back by the user after executing corresponding operation according to the control action.
11. The intelligent household appliance control method according to claim 7 or 8, wherein the comfort evaluation result comprises a positive evaluation result or a negative evaluation result, and the updating the reinforcement learning model according to the comfort evaluation result comprises:
if the comfort evaluation result is a positive evaluation result, increasing the output probability of the control action; or,
if the comfort evaluation result is a negative evaluation result, reducing the output probability of the control action.
12. An intelligent household appliance control device, comprising:
the first acquisition module is used for acquiring parameter information;
the second acquisition module is used for acquiring control actions corresponding to the parameter information through a preset model based on the parameter information, wherein the preset model comprises a reinforcement learning model, and the reinforcement learning model can be adjusted according to comfort evaluation results;
and the control module is used for controlling the operation according to the control action.
13. The intelligent appliance control device of claim 12, wherein the first obtaining module is specifically configured to:
acquiring environmental parameter information; and/or
acquiring parameter information of the intelligent household appliance.
14. The intelligent appliance control device according to claim 12, wherein in the second obtaining module, the preset model further comprises a state transition model;
the obtaining of the control action corresponding to the parameter information through a preset model based on the parameter information includes:
acquiring a state parameter corresponding to the parameter information through the state transition model based on the parameter information, wherein the state transition model is used for representing the correspondence between the parameter information and the state parameter;
and generating a control action through the reinforcement learning model based on the state parameter, wherein the reinforcement learning model is used for representing the corresponding relation between the state parameter and the control action.
15. The intelligent appliance control device according to any one of claims 12 to 14, further comprising:
the evaluation module is used for obtaining the comfort evaluation result after the control operation according to the control action;
and the updating module is used for updating the reinforcement learning model according to the comfort evaluation result.
16. The intelligent appliance control device of claim 15, wherein the evaluation module is specifically configured to:
acquiring state parameters before and after executing corresponding operation according to the control action;
calculating a first comfort value and a second comfort value according to a preset comfort evaluation algorithm, wherein the first comfort value is a comfort value corresponding to the state parameter before the corresponding operation is executed according to the control action, and the second comfort value is a comfort value corresponding to the state parameter after the corresponding operation is executed according to the control action;
and obtaining the comfort evaluation result according to the first comfort value and the second comfort value.
17. The intelligent appliance control device of claim 15, wherein the evaluation module is further configured to:
and obtaining a comfort evaluation result fed back by the user after executing corresponding operation according to the control action.
18. The intelligent appliance control device of claim 15,
the update module is specifically configured to:
the comfort evaluation result comprises a positive evaluation result or a negative evaluation result;
if the comfort evaluation result is a positive evaluation result, increasing the output probability of the control action; or,
if the comfort evaluation result is a negative evaluation result, reducing the output probability of the control action.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810734605.1A CN110687802A (en) | 2018-07-06 | 2018-07-06 | Intelligent household electrical appliance control method and intelligent household electrical appliance control device |
PCT/CN2018/122256 WO2020006993A1 (en) | 2018-07-06 | 2018-12-20 | Intelligent household electrical appliance control method and intelligent household electrical appliance control device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110687802A true CN110687802A (en) | 2020-01-14 |
Family
ID=69060037
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810734605.1A Pending CN110687802A (en) | 2018-07-06 | 2018-07-06 | Intelligent household electrical appliance control method and intelligent household electrical appliance control device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110687802A (en) |
WO (1) | WO2020006993A1 (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102213966A (en) * | 2010-04-09 | 2011-10-12 | 宁波中科集成电路设计中心有限公司 | Wireless intelligent measurement and control system of greenhouse |
CN103186677A (en) * | 2013-04-15 | 2013-07-03 | 北京百纳威尔科技有限公司 | Information display method and information display device |
CN106549833A (en) * | 2015-09-21 | 2017-03-29 | 阿里巴巴集团控股有限公司 | Control method and device for intelligent home devices |
CN106598058A (en) * | 2016-12-20 | 2017-04-26 | 华北理工大学 | Intrinsically motivated extreme learning machine autonomous development system and operating method thereof |
CN106842925A (en) * | 2017-01-20 | 2017-06-13 | 清华大学 | Locomotive intelligent driving method and system based on deep reinforcement learning |
US20170206615A1 (en) * | 2012-01-23 | 2017-07-20 | Earth Networks, Inc. | Optimizing and controlling the energy consumption of a building |
CN107797459A (en) * | 2017-09-15 | 2018-03-13 | 珠海格力电器股份有限公司 | Control method and device for terminal equipment, storage medium and processor |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6308466B2 (en) * | 2014-07-24 | 2018-04-11 | パナソニックIpマネジメント株式会社 | Environmental control device, program |
CN104597761B (en) * | 2014-12-31 | 2018-01-23 | 珠海格力电器股份有限公司 | Control method and device for intelligent home equipment |
CN105068515B (en) * | 2015-07-16 | 2017-08-25 | 华南理工大学 | Voice control method for intelligent home devices based on a self-learning algorithm |
CN105371425A (en) * | 2015-10-12 | 2016-03-02 | 美的集团股份有限公司 | Air conditioner |
CN106369739A (en) * | 2016-08-23 | 2017-02-01 | 海信(山东)空调有限公司 | Air conditioner control method, air conditioner controller and air conditioner system |
2018
- 2018-07-06: CN application CN201810734605.1A filed (legal status: Pending)
- 2018-12-20: PCT application PCT/CN2018/122256 filed (legal status: Application Filing)
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113375300A (en) * | 2020-03-09 | 2021-09-10 | 青岛海尔空调器有限总公司 | Intelligent control method and intelligent control equipment of air conditioner |
WO2021179958A1 (en) * | 2020-03-09 | 2021-09-16 | 青岛海尔空调器有限总公司 | Intelligent control method for air conditioner, and intelligent control device for air conditioner |
CN113495488A (en) * | 2020-04-07 | 2021-10-12 | 佛山市顺德区美的洗涤电器制造有限公司 | Control method of household appliance, household appliance and computer readable storage medium |
CN111338227A (en) * | 2020-05-18 | 2020-06-26 | 南京三满互联网络科技有限公司 | Electronic appliance control method and control device based on reinforcement learning and storage medium |
CN111338227B (en) * | 2020-05-18 | 2020-12-01 | 南京三满互联网络科技有限公司 | Electronic appliance control method and control device based on reinforcement learning and storage medium |
CN111913400A (en) * | 2020-07-28 | 2020-11-10 | 深圳Tcl新技术有限公司 | Information fusion method and device and computer readable storage medium |
CN111913400B (en) * | 2020-07-28 | 2024-04-30 | 深圳Tcl新技术有限公司 | Information fusion method, device and computer readable storage medium |
CN114675552A (en) * | 2022-03-10 | 2022-06-28 | 深圳亿思腾达集成股份有限公司 | Intelligent home management method and system based on deep learning algorithm |
Also Published As
Publication number | Publication date |
---|---|
WO2020006993A1 (en) | 2020-01-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110687802A (en) | Intelligent household electrical appliance control method and intelligent household electrical appliance control device | |
US11562296B2 (en) | Machine learning device, machine learning method, and storage medium | |
KR102553299B1 (en) | Data learning server and method for generating and using thereof | |
CN106322656B (en) | A kind of air conditioning control method and server and air-conditioning system | |
US20180195752A1 (en) | Air-conditioning control method, air-conditioning control apparatus, and storage medium | |
CN110186170B (en) | Thermal comfort index PMV control method and equipment | |
US11976835B2 (en) | Air conditioner, data transmission method, and air conditioning system | |
CN109269036B (en) | Cloud control method of multi-split air conditioner and multi-split air conditioner system | |
US20160321564A1 (en) | Operational parameter value learning device, operational parameter value learning method, and controller for learning device | |
CN106322669B (en) | A kind of air conditioner intelligent swing flap control method and system | |
CN104075402A (en) | Intelligent air conditioner control method and system | |
CN110966733A (en) | Intelligent control method and system for indoor environment and storage medium | |
CN108063701B (en) | Method and device for controlling intelligent equipment | |
JP2020154785A (en) | Prediction method, prediction program, and model learning method | |
CN112432345B (en) | Air conditioner, control method of starting mode of air conditioner and storage medium | |
CN111338227B (en) | Electronic appliance control method and control device based on reinforcement learning and storage medium | |
CN111121237A (en) | Air conditioner, control method thereof, server, and computer-readable storage medium | |
CN112308209A (en) | Personalized air conditioner intelligent learning method based on deep learning | |
CN113760024B (en) | Environmental control system based on 5G intelligent space | |
CN109298643A (en) | Apparatus control method, device, smart home unit and storage medium | |
CN110736232A (en) | Air conditioner control method and device | |
CN110925942B (en) | Air conditioner control method and device | |
CN115585538A (en) | Indoor temperature adjusting method and device, electronic equipment and storage medium | |
CN114556027B (en) | Air conditioner control device, air conditioner system, air conditioner control method, and recording medium | |
JP2020517856A (en) | Windlessness control method, device and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200114 ||