CN113251557B - Scene state control method, device, system, equipment and storage medium - Google Patents


Info

Publication number
CN113251557B
CN113251557B (application CN202110522235.7A)
Authority
CN
China
Prior art keywords: scene state, target, parameter, designated space, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110522235.7A
Other languages
Chinese (zh)
Other versions
CN113251557A
Inventor
许洋艺
Current Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai and Zhuhai Lianyun Technology Co Ltd
Priority to CN202110522235.7A
Publication of CN113251557A
Application granted
Publication of CN113251557B
Legal status: Active

Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F24: HEATING; RANGES; VENTILATING
    • F24F: AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
    • F24F11/30: Control or safety arrangements for purposes related to the operation of the system, e.g. for safety or monitoring
    • F24F11/56: Remote control (control or safety arrangements characterised by user interfaces or communication)
    • F24F11/64: Electronic processing using pre-stored data
    • F24F11/70: Control systems characterised by their outputs; constructional details thereof
    • F24F11/88: Electrical aspects, e.g. circuits
    • F24F2120/10: Occupancy (control inputs relating to users or occupants)
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application relates to a scene state control method, device, system, equipment and storage medium. The method comprises the following steps: acquiring identity information of objects entering a designated space at the current moment, the designated space being the space to which the scene state applies; determining target objects from those objects based on the identity information, each target object being bound to scene state parameters for different time periods; determining a target scene state parameter according to the current moment and the scene state parameters corresponding to the target objects; and controlling at least one intelligent device to operate according to the target scene state parameter, so that the designated space enters the target scene state. The method addresses the poor accuracy and poor timeliness with which scene states are adjusted in the prior art.

Description

Scene state control method, device, system, equipment and storage medium
Technical Field
The present application relates to the field of smart home, and in particular, to a method, an apparatus, a system, a device, and a storage medium for controlling a scene state.
Background
Different people have different requirements for the state of an indoor scene (e.g. temperature, humidity, lighting). In the prior art, when someone enters a room, each intelligent device must be adjusted manually to bring the indoor scene to the preferred state; the operation is cumbersome, wastes the user's time, and makes adjustment of the scene state untimely. Moreover, because each adjustment relies on the user's own experience and intuition, its accuracy is poor.
Disclosure of Invention
The application provides a method, a device, a system, equipment and a storage medium for controlling a scene state, which are used for solving the problem that in the prior art, the accuracy and timeliness of adjusting the scene state are poor.
In a first aspect, an embodiment of the present application provides a method for controlling a scene state, including:
acquiring identity information of an object entering a specified space at the current moment; wherein the designated space is a space to which the scene state applies;
determining a target object from the objects based on the identity information; the target object is correspondingly bound with scene state parameters corresponding to different time periods;
determining a target scene state parameter according to the current moment and a scene state parameter corresponding to the target object;
and controlling at least one intelligent device to operate according to the target scene state parameters so as to enable the designated space to enter a target scene state.
Optionally, the determining a target scene state parameter according to the current time and the scene state parameter corresponding to the target object includes:
if it is determined that no object exists in the designated space before the object enters it, determining the target scene state parameter based on the current moment and the scene state parameter corresponding to the target object;
if objects already exist in the designated space before the object enters it, and all of them are target objects, acquiring the current scene state parameters, and determining the target scene state parameter based on the current scene state parameters and the scene state parameters corresponding to the current moment and the target objects.
Optionally, the scene state parameters include: at least one environmental parameter and a parameter value corresponding to the environmental parameter; the determining the target scene state parameter based on the current time and the scene state parameter corresponding to the target object includes:
calculating an average environmental parameter value corresponding to each environmental parameter according to the number of the target objects entering the designated space at the current moment, the environmental parameters corresponding to the target objects and the parameter values; determining the target scene state parameter based on the average environmental parameter value;
alternatively,
and for each environmental parameter, performing weighted calculation on the parameter value and the weight corresponding to the target object to obtain a weighted environmental parameter value, and determining the target scene state parameter based on the weighted environmental parameter value.
Optionally, the scene state parameters include: at least one environmental parameter and a parameter value corresponding to the environmental parameter; the determining the target scene state parameter based on the current scene state parameter and the scene state parameter corresponding to the current time and the target object includes:
calculating an average environmental parameter value corresponding to each environmental parameter according to the number of the target objects entering the designated space at the current moment, the environmental parameters corresponding to the target objects and the parameter values; calculating the average value of the average environmental parameter value and a target environmental parameter value in the current scene state parameter to obtain an average target environmental parameter value, and determining the target scene state parameter based on the average target environmental parameter value;
alternatively,
for each environmental parameter, performing weighted calculation on the parameter value and the weight corresponding to the target object to obtain a weighted environmental parameter value; calculating the average value of the weighted environment parameter value and a target environment parameter value in the current scene state parameter to obtain an average target environment parameter value, and determining the target scene state parameter based on the average target environment parameter value;
and the target environmental parameter value and the parameter value refer to the same environmental parameter of the same intelligent device.
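The blending described in this alternative can be sketched minimally: the newly computed value (the average or weighted value over the entering target objects) is averaged with the value the same parameter of the same intelligent device currently runs at. The function name and example values below are illustrative assumptions, not anything specified by the application.

```python
def blend_with_current(new_value, current_value):
    """Average the newly computed environmental parameter value with the
    current value of the same parameter of the same intelligent device,
    yielding the target environmental parameter value."""
    return (new_value + current_value) / 2

# The entering target objects average to 25 deg C; the air conditioner
# currently runs at 27 deg C, so the target value becomes 26.0.
target_temp = blend_with_current(25, 27)
```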
Optionally, the determining a target scene state parameter according to the current time and the scene state parameter corresponding to the target object includes:
if it is determined that a non-target object exists in the designated space before the object enters the designated space, acquiring current scene state parameters of the designated space;
and determining the target scene state parameters according to the current scene state parameters.
Optionally, the determining a target scene state parameter according to the current time and the scene state parameter corresponding to the target object includes:
when it is determined that non-target objects exist in the objects entering the designated space at the current moment, acquiring preset scene state parameters corresponding to the current moment;
determining the target scene state parameter according to the preset scene state parameter.
In a second aspect, an embodiment of the present application provides a scene state control system, including: the system comprises at least one intelligent device, a collection device and a central control device; the intelligent equipment and the acquisition equipment are in communication connection with the central control equipment;
the acquisition equipment is used for acquiring the identity information of the object entering the designated space at the current moment; wherein the designated space is a space to which the scene state applies;
the central control equipment is used for determining a target object from the objects based on the identity information; the target object is correspondingly bound with scene state parameters corresponding to different time periods; determining a target scene state parameter according to the current moment and a scene state parameter corresponding to the target object; and controlling at least one intelligent device to operate according to the target scene state parameters so as to enable the designated space to enter a target scene state.
In a third aspect, an embodiment of the present application provides a device for controlling a scene state, including:
the acquisition module is used for acquiring the identity information of an object entering the designated space at the current moment; wherein the designated space is a space to which the scene state applies;
a first determining module for determining a target object from the objects based on the identity information; the target object is correspondingly bound with scene state parameters corresponding to different time periods;
the second determining module is used for determining a target scene state parameter according to the current moment and the scene state parameter corresponding to the target object;
and the control module is used for controlling at least one intelligent device to operate according to the target scene state parameters so as to enable the designated space to enter a target scene state.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: the system comprises a processor, a memory and a communication bus, wherein the processor and the memory are communicated with each other through the communication bus;
the memory for storing a computer program;
the processor is configured to execute the program stored in the memory, and implement the method for controlling a scene state according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the method for controlling a scene state according to the first aspect.
Compared with the prior art, the technical solution provided by the embodiment of the present application has the following advantages. After an object enters the designated space, the target scene state parameter is determined from the target objects among the entering objects and their scene state parameters, and the intelligent devices are controlled based on the target scene state parameter so that the designated space enters the target scene state. Because adjustment of the scene state is triggered by the object entering the designated space, the timeliness of adjustment is improved; moreover, because the scene state is adjusted according to the scene state parameters of the target objects rather than the user's experience and intuition, the accuracy of adjustment is high.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic diagram of a system architecture of a method for controlling a scene state according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for controlling a scene state according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a scene state control device according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, after a user enters a room, each intelligent device must be adjusted manually to bring the indoor scene state to the preferred state, for example: setting the air-conditioner temperature to a certain value so that the indoor temperature reaches the temperature the user wants. The process is cumbersome, wastes the user's time, and makes adjustment of the scene state untimely; moreover, because each adjustment relies on the user's own experience and intuition, its accuracy is poor.
In order to solve the above technical problem, an embodiment of the present application provides a method for controlling a scene state, so as to solve the problem in the prior art that accuracy and timeliness of adjusting the scene state are poor.
First, the system architecture of the scene state control method disclosed in an embodiment of the present application is described with reference to fig. 1. The system architecture includes: a central control device 101, a collection device 102 and at least one intelligent device 103; the collection device 102 and the at least one intelligent device 103 are both communicatively connected to the central control device 101 through a network. The network includes, but is not limited to: a local Wi-Fi network, a Zigbee network, or a Bluetooth Mesh network.
The intelligent device includes, but is not limited to, an air conditioner, a humidifier, an air purifier, an indoor lamp, a television, and the like.
The acquisition equipment 102 is used for acquiring the identity information of an object entering the designated space at the current moment; the designated space is a space applied by a scene state;
the central control device 101 is used for determining a target object from the objects based on the identity information; the target object is correspondingly bound with scene state parameters corresponding to different time periods; determining a target scene state parameter according to the current time and a scene state parameter corresponding to the target object; and controlling at least one intelligent device 103 to operate according to the target scene state parameters so as to enable the designated space to enter the target scene state.
In the embodiment of the present application, after an object enters the designated space, the target scene state parameter is determined from the target objects among the entering objects and their scene state parameters, and the intelligent devices are controlled based on the target scene state parameter so that the designated space enters the target scene state. Because adjustment is triggered by the object entering the designated space, the timeliness of adjusting the scene state is improved; and because the adjustment follows the scene state parameters of the target objects rather than the user's experience and intuition, its accuracy is high.
A method for controlling a scene state according to an embodiment of the present application is described below with reference to fig. 2, where the method includes:
step 201, acquiring identity information of an object entering the designated space at the current moment; the designated space is the space to which the scene state applies;
the object may be a human, a pet, a mobile device, or the like. When the object is a person, the identity information may be face information and/or fingerprint information, and in a specific implementation, a face recognition device and/or a fingerprint recognition device may be installed at a doorway of the designated space to acquire the identity information of the object. When the object is a pet or a mobile device, the identity information may be an identification on the pet or the mobile device, such as: when the two-dimensional code or the radio frequency card carried by the pet or the mobile device is specifically implemented, a camera or a radio frequency card reader can be installed at a doorway of a designated space so as to acquire identity information of an object.
The current moment is the exact time at which the object enters the designated space, comprising year, month, day, hour, minute and second, for example: 18:45:30 on May 9, 2021.
The designated space may be a space in which the entire home of the object is located, a room in which the home of the object or a company is located, or a space in which the entire company of the object is located.
Step 202, determining a target object from the objects based on the identity information; the target object is correspondingly bound with scene state parameters corresponding to different time periods;
the scene state parameters mainly include two categories, the first category is control parameters of a control authority category, such as: turning on or off certain intelligent equipment; the second type is environmental parameters, wherein each environmental parameter is configured with a parameter value, such as: the parameter value of the temperature of the air conditioner is 26 ℃, that is, the desired temperature of the air conditioner is set to 26 ℃.
A target object is an object whose scene state parameters for different time periods have been recorded in advance for the designated room. In a specific implementation, the correspondence between the identity information of the target object and the scene state parameters for the different time periods is established in advance.
For ease of understanding, an example is given here: the scene state parameter set by object A for 18:00:00 to 22:59:59 is A1, and for 23:00:00 to 05:59:59 is A2. The scene state parameter set by object B for 18:00:00 to 22:59:59 is B1, and for 23:00:00 to 05:59:59 is B2. The scene state parameters of object A and object B for the different time periods are stored in the control list in advance.
Suppose object A returns home at 18:45:31 on May 9, 2021. When object A enters the room, its identity information is recognized (by face recognition or fingerprint recognition when the object is a person), and the scene state parameter of object A is determined to be A1 according to the current moment and the identity information of object A.
Another example: if an object's scene state parameter is M1 for the period from February to April, and M2 for the period from April to June, the month to which the current moment belongs is determined when the object comes home, and the corresponding scene state parameter is then determined. For the next year, the scene state parameters are determined from those of the corresponding months of the previous year.
The scene state parameters comprise control parameters of at least one intelligent device. For example, the scene state parameters of an object may include control parameters for an air conditioner, a humidifier, an air purifier, lamp 1, lamp 2, a television, and so on. The control parameters for the air conditioner include a permission-class parameter for turning the air conditioner on and a parameter value for the air-conditioner temperature; the control parameters for the television include a permission-class parameter for turning the television on and parameter values for its brightness and volume.
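The binding described above, between a target object's identity and per-time-period scene state parameters, can be sketched as a simple lookup table. Every identifier, parameter name and value below is a hypothetical illustration, not taken from the application.

```python
from datetime import time

# Hypothetical bindings: identity -> list of (period start, period end,
# scene state parameters).  Values mirror the A1/A2, B1/B2 example above.
PROFILES = {
    "object_A": [
        (time(18, 0, 0), time(22, 59, 59), {"ac_temp_c": 26, "humidifier_mist": 40}),  # A1
        (time(23, 0, 0), time(5, 59, 59),  {"ac_temp_c": 28, "humidifier_mist": 30}),  # A2
    ],
    "object_B": [
        (time(18, 0, 0), time(22, 59, 59), {"ac_temp_c": 24, "humidifier_mist": 50}),  # B1
        (time(23, 0, 0), time(5, 59, 59),  {"ac_temp_c": 27, "humidifier_mist": 35}),  # B2
    ],
}

def scene_params_for(identity, now):
    """Return the scene state parameters bound to `identity` for the
    time period containing `now`, or None if no binding exists
    (i.e. the object is a non-target object)."""
    for start, end, params in PROFILES.get(identity, []):
        if start <= end:
            in_period = start <= now <= end
        else:
            # Period wraps past midnight, e.g. 23:00:00 to 05:59:59.
            in_period = now >= start or now <= end
        if in_period:
            return params
    return None
```

For instance, `scene_params_for("object_A", time(18, 45, 31))` selects the A1 parameters, and an unknown identity yields `None`, marking a visitor.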
Step 203, determining a target scene state parameter according to the current time and the scene state parameter corresponding to the target object;
specifically, the corresponding relationship between the identity information of the target object and the scene state parameters at different time periods is established in advance. When an object enters the designated space, determining the object entering the designated space as a target object according to the identified identity information, determining scene state parameters corresponding to the object according to the identity information of the target object and the current time of entering the designated space, and determining the target scene state parameters according to the scene state parameters of each target object.
Before the object enters the designated space, according to different conditions that the object exists and the object does not exist in the designated space, the embodiment of the application provides different methods for determining the state parameters of the target scene.
The first method comprises the following steps: before the object enters the designated space, no object exists in the designated space.
At this time, none of the intelligent devices in the designated space is running, and control of the scene state is performed according to the objects entering it. Specifically, according to the identities of the entering objects, the cases are divided into:
(1) All of the objects entering the designated space are target objects.
At the moment, determining respective scene state parameters of the target object according to the current moment and the identity information of the target object; and determining the target scene state parameters according to the scene state parameters of the target objects.
Specifically, the scene state parameters include: at least one environmental parameter and a parameter value corresponding to the environmental parameter; determining a target scene state parameter based on the current time and a scene state parameter corresponding to the target object, including:
calculating average environmental parameter values corresponding to various environmental parameters according to the number of objects entering the specified space at the current moment, the environmental parameters and the parameter values corresponding to the target objects; target scene state parameters are determined based on the average environmental parameter values.
It should be noted that, when calculating an average environmental parameter value, the average may only be taken across the parameter values that different target objects set for the same environmental parameter of the same intelligent device; parameter values of different environmental parameters of the same intelligent device (for example, the brightness and the volume of a television), or of environmental parameters of different intelligent devices, must not be averaged together.
For ease of understanding, the example is continued here. When object A and object B enter a room, both are determined to be target objects through face recognition or fingerprint recognition. If the current moment is 18:45:31 on May 9, 2021, then, according to the pre-established correspondence between the identity information of target objects and the scene state parameters for different time periods, the parameter value of the air-conditioner temperature corresponding to object A is C1 and the parameter value of the humidifier mist output is S1; the parameter value of the air-conditioner temperature corresponding to object B is C2 and the parameter value of the humidifier mist output is S2. When calculating the average environmental parameter values, the average of C1 and C2 is taken as the average environmental parameter value of the air-conditioner temperature, and the average of S1 and S2 as the average environmental parameter value of the humidifier mist output.
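The averaging step above can be sketched as follows; the parameter names and the example values are assumptions for illustration, and averaging is deliberately confined to matching parameter names, in line with the restriction just noted.

```python
def average_scene_params(target_param_sets):
    """Per environmental parameter, average the values that the target
    objects entering the space have bound to that parameter.  Averaging
    happens only within the same parameter name (same parameter of the
    same device), never across different parameters or devices."""
    totals, counts = {}, {}
    for params in target_param_sets:
        for name, value in params.items():
            totals[name] = totals.get(name, 0) + value
            counts[name] = counts.get(name, 0) + 1
    return {name: totals[name] / counts[name] for name in totals}

# Object A wants 26 deg C and mist level 40; object B wants 24 deg C
# and mist level 50.  The averages are 25.0 and 45.0 respectively.
avg = average_scene_params([
    {"ac_temp_c": 26, "humidifier_mist": 40},
    {"ac_temp_c": 24, "humidifier_mist": 50},
])
```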
The embodiment of the application thus provides a method for determining the target scene state parameters: when no object exists in the designated room before entry and the intelligent devices are not running, the target scene state parameter is determined from the average of the parameter values of each environmental parameter, so that a scene state reasonably suitable for all of the target objects can be provided.
In practical applications, the elderly and children among family members have stricter requirements for the home environment (e.g. temperature and humidity) and need extra care. To this end, different weights may be assigned to different target objects, for example: a larger weight for the parameter values of the environmental parameters corresponding to the elderly and children in the family, and a smaller weight for those of able-bodied adults, so that the needs of the elderly and children regarding the scene state are given priority.
Specifically, for each environmental parameter, the parameter value and the weight corresponding to the target object are weighted to obtain a weighted environmental parameter value, and the target scene state parameter is determined based on the weighted environmental parameter value.
For example: the parameter value of the air-conditioner temperature corresponding to object A is C1 with weight c1; the parameter value corresponding to object B is C2 with weight c2; the sum c1 × C1 + c2 × C2 is calculated to obtain the weighted environmental parameter value.
The embodiment of the application provides another method for determining the state parameters of the target scene, which can consider the identity roles of different target objects in the family and control the scene state more intelligently.
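A minimal sketch of the weighted variant follows. The application does not specify how the weights are normalised; here they are assumed to sum to 1 per parameter, and the example weights (a larger one for an elderly member) are illustrative.

```python
def weighted_scene_params(weighted_sets):
    """Weighted combination: each entry is (weight, params).  Computes
    sum(c_i * C_i) per environmental parameter name, assuming the
    weights for each parameter sum to 1."""
    result = {}
    for weight, params in weighted_sets:
        for name, value in params.items():
            result[name] = result.get(name, 0) + weight * value
    return result

# c1 = 0.7 for an elderly member who wants 28 deg C, c2 = 0.3 for an
# able-bodied adult who wants 24 deg C: 0.7*28 + 0.3*24, roughly 26.8.
w = weighted_scene_params([
    (0.7, {"ac_temp_c": 28}),
    (0.3, {"ac_temp_c": 24}),
])
```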
(2) At least one of the objects entering the designated space is a non-target object
A non-target object is an object for which scene state parameters for different time periods in the designated space have not been recorded in advance. Specifically, after the identity information of the objects entering the designated space at the current moment is acquired, it is determined that a non-target object, i.e., a visitor, exists among the objects entering the designated space at the current moment; a preset scene state parameter corresponding to the current moment is acquired; and at least one smart device is controlled to operate according to the preset scene state parameter.
Whether a non-target object exists among the objects entering the designated space may be determined through face recognition. That is, when a visitor enters the designated space, the scene state in the designated space is automatically switched, according to the preset scene state parameters for the season to which the current moment belongs, to a state appropriate for that season; for reference, the most comfortable temperature for the human body is 26-28 °C and the most comfortable humidity is 45%-65%. In a concrete implementation, indoor human-body heat dissipation and carbon-dioxide emission may also be detected to further adjust the current scene state.
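Selecting the preset scene state parameters by season could be sketched as follows (the season boundaries, preset table, and function names are illustrative assumptions; only the 26-28 °C and 45-65% comfort ranges come from the text above):

```python
from datetime import date

# Hypothetical seasonal presets, chosen to stay within the comfort ranges
# cited above (26-28 C temperature, 45-65% relative humidity).
SEASON_PRESETS = {
    "spring": {"temperature_c": 26.0, "humidity_pct": 55},
    "summer": {"temperature_c": 26.0, "humidity_pct": 50},
    "autumn": {"temperature_c": 27.0, "humidity_pct": 55},
    "winter": {"temperature_c": 28.0, "humidity_pct": 45},
}

def season_of(d: date) -> str:
    """Map a date to a season by meteorological months (assumed convention)."""
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer",
            9: "autumn", 10: "autumn", 11: "autumn"}[d.month]

def preset_for_visitor(today: date) -> dict:
    """Return the preset scene state parameters for the current season."""
    return SEASON_PRESETS[season_of(today)]
```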
In the embodiment of the application, if a non-target object exists in the designated space, the scene state is controlled according to the preset scene state parameter so as to take care of the requirement of the visitor on the scene state.
The second case: before the objects enter the designated space, objects already exist in the designated space.
Depending on the identities of the objects already existing in the designated space before the new objects enter, the target scene state parameters are determined mainly by one of the following two methods.
(1) All objects existing in the designated space are target objects
In this case, since the current scene state parameters were determined by the target objects already existing in the designated space before the new objects entered, the target scene state parameters need to be determined based on both the current scene state parameters and the scene state parameters corresponding to the current moment and the entering target objects.
Specifically, according to the number of target objects entering the designated space at the current moment and the environmental parameters and parameter values corresponding to those target objects, an average environmental parameter value is calculated for each environmental parameter; the average of this average environmental parameter value and the target environmental parameter value in the current scene state parameters is then calculated to obtain an average target environmental parameter value, and the target scene state parameters are determined based on the average target environmental parameter value.
It should be noted that the current scene state parameters include at least one environmental parameter. When calculating the average, the target environmental parameter value is selected from the current scene state parameters according to the type of environmental parameter used when calculating the average environmental parameter value, so as to obtain the average target environmental parameter value and thereby determine the target scene state parameters.
For example, when the average environmental parameter value is calculated for the air-conditioner temperature, the parameter value of the air-conditioner temperature (i.e., the target environmental parameter value) is taken from the current scene state parameters, and the average of the average environmental parameter value and the target environmental parameter value is used as the average target environmental parameter value, based on which the target scene state parameters are determined.
As for the method for calculating the average environmental parameter value, reference may be made to the method for calculating the average environmental parameter value in the first case, which is not described herein again.
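The two-step averaging described above can be sketched as follows (an illustrative sketch; the function and variable names are assumptions):

```python
def average_target_value(entering_values, current_value):
    """Blend the entering target objects' preferences with the current state.

    entering_values: parameter values recorded for the target objects that
                     just entered (all for one environmental parameter).
    current_value:   the value of the same parameter in the current scene
                     state (set by the objects already in the space).
    """
    # Step 1: average over the entering target objects.
    average_env_value = sum(entering_values) / len(entering_values)
    # Step 2: average with the current scene state so that neither the
    # newcomers nor the occupants already present dominate.
    return (average_env_value + current_value) / 2

# Hypothetical example: two entering objects prefer 24 C and 26 C,
# and the room is currently regulated to 27 C.
blended = average_target_value([24.0, 26.0], 27.0)
```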
Similarly, when the elderly and children are present in the family, their requirements on the scene state need to be prioritized, and different weights may be assigned to different target objects. Specifically, for each environmental parameter, the parameter values and weights corresponding to the target objects are weighted to obtain a weighted environmental parameter value; the average of the weighted environmental parameter value and the target environmental parameter value in the current scene state parameters is calculated to obtain an average target environmental parameter value, and the target scene state parameters are determined based on the average target environmental parameter value; the target environmental parameter value and the parameter values are indicators of the same environmental parameter of the same smart device.
It should be noted here that the current scene state parameters include at least one environmental parameter. When calculating the average, the target environmental parameter value is selected from the current scene state parameters according to the type of environmental parameter used when calculating the weighted environmental parameter value, so as to obtain the average target environmental parameter value and thereby determine the target scene state parameters.
As for the method for calculating the weighted environment parameter value, reference may be made to the method for calculating the weighted environment parameter value in the first case, which is not described herein again.
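Combining the two mechanisms (weighting the entering objects' values, then blending with the current state), a sketch might look like this (function name and example values are hypothetical):

```python
def weighted_blend(entries, current_value):
    """Weight the entering objects' values, then average with the current state.

    entries: (parameter_value, weight) pairs for the entering target objects.
    current_value: the matching parameter value in the current scene state.
    All values must refer to the same environmental parameter of the same
    smart device, as required above.
    """
    total_weight = sum(w for _, w in entries)
    weighted_value = sum(v * w for v, w in entries) / total_weight
    return (weighted_value + current_value) / 2

# Hypothetical: a child prefers 27 C (weight 0.6), an adult 24 C (weight 0.4),
# and the room currently sits at 25 C.
result = weighted_blend([(27.0, 0.6), (24.0, 0.4)], 25.0)
```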
(2) A non-target object exists among the objects already in the designated space
Specifically, if it is determined that a non-target object already exists in the designated space before the object enters the designated space, acquiring a current scene state parameter of the designated space; and determining the target scene state parameters according to the current scene state parameters.
That is, if a visitor exists in the designated space, the acquired current scene state parameters are used as the target scene state parameters to control at least one smart device to operate, thereby taking care of the visitor's requirements on the scene state.
In the embodiments of the application, when determining the target scene state parameters, whether an object already exists in the designated space before the new objects enter needs to be considered. If not, the target scene state parameters are determined according to the scene state parameters of the target objects entering the designated space; if objects exist and all of them are target objects, the target scene state parameters are determined according to the current scene state parameters and the scene state parameters corresponding to the entering target objects. Different methods for determining the target scene state parameters are thus provided for different actual situations, adjustment of the scene state of the designated space is triggered immediately after objects enter, and the accuracy and timeliness of adjusting the scene state can be effectively improved.
Step 204: controlling at least one smart device to operate according to the target scene state parameters, so that the designated space enters the target scene state.
In a specific implementation, after the target scene state parameters are determined, the corresponding target scene state parameters may be sent to each smart device through the central control device, so that the designated space enters the target scene state.
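A minimal sketch of this dispatch step (the device names, the `apply` interface, and the parameter layout are all assumptions for illustration, not the patent's actual protocol):

```python
class SmartDevice:
    """Stand-in for a controllable appliance reachable from the central hub."""
    def __init__(self, name):
        self.name = name
        self.settings = {}

    def apply(self, params):
        # In a real system this would be a network command to the device;
        # here we just record the requested settings.
        self.settings.update(params)

def dispatch(devices, target_scene_state):
    """Send each smart device its slice of the target scene state parameters."""
    for name, params in target_scene_state.items():
        if name in devices:
            devices[name].apply(params)

devices = {"air_conditioner": SmartDevice("air_conditioner"),
           "humidifier": SmartDevice("humidifier")}
dispatch(devices, {"air_conditioner": {"temperature_c": 26.0},
                   "humidifier": {"humidity_pct": 55}})
```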
In the embodiments of the application, after objects enter the designated space, the target scene state parameters are determined according to the target objects among them and their scene state parameters, and the smart devices are controlled based on the target scene state parameters so that the designated space enters the target scene state. Because adjustment of the scene state is triggered after the objects enter the designated space, the timeliness of adjusting the scene state can be improved. In addition, the scene state is adjusted according to the scene state parameters of the target objects rather than relying on the user's experience and intuition, so the adjustment is more accurate, more intelligent and convenient, saves the user's time, and provides a good user experience.
Based on the same concept, an embodiment of the present application provides a control apparatus for a scene state; for the specific implementation of the apparatus, reference may be made to the description of the method embodiments, and details already described are not repeated here. As shown in fig. 3, the apparatus mainly includes:
an obtaining module 301, configured to obtain identity information of an object entering a specified space at a current time; wherein the designated space is a space to which the scene state applies;
a first determining module 302, configured to determine a target object from the objects based on the identity information; the target object is correspondingly bound with target scene state parameters corresponding to different time periods;
a second determining module 303, configured to determine a target scene state parameter according to the current time and a scene state parameter corresponding to the target object;
a control module 304, configured to control the at least one smart device to operate according to the target scene state parameter, so that the designated space enters the target scene state.
In the embodiments of the application, after objects enter the designated space, the target scene state parameters are determined according to the target objects among them and their scene state parameters, and the smart devices are controlled based on the target scene state parameters so that the designated space enters the target scene state. Because adjustment of the scene state is triggered after the objects enter the designated space, the timeliness of adjusting the scene state can be improved. In addition, the scene state is adjusted according to the scene state parameters of the target objects rather than relying on the user's experience and intuition, so the accuracy of adjusting the scene state is higher.
In a specific embodiment, the second determining module 303 is configured to determine a target scene state parameter based on a current time and a scene state parameter corresponding to a target object if it is determined that an object does not exist in a designated space before the object enters the designated space; if the object exists in the designated space before the object enters the designated space, and the objects existing in the designated space are all target objects, acquiring the current scene state parameters; and determining the target scene state parameters based on the current scene state parameters and the scene state parameters corresponding to the current time and the target object.
In an embodiment, the second determining module 303 is specifically configured to, when the scene state parameters include at least one environmental parameter and a parameter value corresponding to the environmental parameter, calculate an average environmental parameter value for each environmental parameter according to the number of target objects entering the designated space at the current moment and the environmental parameters and parameter values corresponding to the target objects, and determine the target scene state parameters based on the average environmental parameter value; or, for each environmental parameter, perform a weighted calculation on the parameter values and weights corresponding to the target objects to obtain a weighted environmental parameter value, and determine the target scene state parameters based on the weighted environmental parameter value.
In an embodiment, the second determining module 303 is specifically configured to, when the scene state parameters include at least one environmental parameter and a parameter value corresponding to the environmental parameter, calculate an average environmental parameter value for each environmental parameter according to the number of target objects entering the designated space at the current moment and the environmental parameters and parameter values corresponding to the target objects; calculate the average of the average environmental parameter value and the target environmental parameter value in the current scene state parameters to obtain an average target environmental parameter value, and determine the target scene state parameters based on the average target environmental parameter value; or, for each environmental parameter, perform a weighted calculation on the parameter values and weights corresponding to the target objects to obtain a weighted environmental parameter value; calculate the average of the weighted environmental parameter value and the target environmental parameter value in the current scene state parameters to obtain an average target environmental parameter value, and determine the target scene state parameters based on the average target environmental parameter value; the target environmental parameter value and the parameter values are indicators of the same environmental parameter of the same smart device.
In a specific embodiment, the second determining module 303 is configured to, if it is determined that a non-target object exists in the designated space before the object enters the designated space, obtain a current scene state parameter of the designated space; and determining the target scene state parameters according to the current scene state parameters.
In a specific embodiment, the second determining module 303 is configured to, when it is determined that a non-target object exists in objects entering the designated space at the current time, obtain a preset scene state parameter corresponding to the current time; and determining the target scene state parameters according to the preset scene state parameters.
Based on the same concept, an embodiment of the present application further provides an electronic device, as shown in fig. 4, the electronic device mainly includes: a processor 401, a memory 402 and a communication bus 403, wherein the processor 401 and the memory 402 communicate with each other via the communication bus 403. The memory 402 stores a program executable by the processor 401, and the processor 401 executes the program stored in the memory 402, so as to implement the following steps:
acquiring identity information of an object entering a specified space at the current moment; the designated space is a space applied by a scene state;
determining a target object from the objects based on the identity information; the target object is correspondingly bound with scene state parameters corresponding to different time periods;
determining a target scene state parameter according to the current time and a scene state parameter corresponding to the target object;
and controlling at least one intelligent device to operate according to the target scene state parameters so as to enable the designated space to enter the target scene state.
The communication bus 403 mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 403 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
The Memory 402 may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the aforementioned processor 401.
The Processor 401 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc., and may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic devices, discrete gates or transistor logic devices, and discrete hardware components.
In still another embodiment of the present application, there is also provided a computer-readable storage medium having stored therein a computer program which, when run on a computer, causes the computer to execute a control method of a scene state described in the above-described embodiment.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state drives), among others.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A method for controlling a scene state, comprising:
acquiring identity information of an object entering a specified space at the current moment; wherein the designated space is a space to which the scene state applies;
determining a target object from the objects based on the identity information; the target object is correspondingly bound with scene state parameters corresponding to different time periods;
determining a target scene state parameter according to the current moment and a scene state parameter corresponding to the target object;
controlling at least one intelligent device to operate according to the target scene state parameters so as to enable the designated space to enter a target scene state;
determining a target scene state parameter according to the current time and the scene state parameter corresponding to the target object, including:
if it is determined that the object does not exist in the designated space before entering the designated space, determining the target scene state parameter based on the current time and the scene state parameter corresponding to the target object;
if the object exists in the designated space before the object enters the designated space, and the objects existing in the designated space are all target objects, acquiring current scene state parameters; and determining the target scene state parameters based on the current scene state parameters and the scene state parameters corresponding to the current time and the target object.
2. The method for controlling scene state according to claim 1, wherein the scene state parameters include: at least one environmental parameter and a parameter value corresponding to the environmental parameter; the determining the target scene state parameter based on the current time and the scene state parameter corresponding to the target object includes:
calculating an average environmental parameter value corresponding to each environmental parameter according to the number of the target objects entering the designated space at the current moment, the environmental parameters corresponding to the target objects and the parameter values; determining the target scene state parameter based on the average environmental parameter value;
or,
and for each environmental parameter, performing weighted calculation on the parameter value and the weight corresponding to the target object to obtain a weighted environmental parameter value, and determining the target scene state parameter based on the weighted environmental parameter value.
3. The method for controlling scene state according to claim 1, wherein the scene state parameters include: at least one environmental parameter and a parameter value corresponding to the environmental parameter; the determining the target scene state parameter based on the current scene state parameter and the scene state parameter corresponding to the current time and the target object includes:
calculating an average environmental parameter value corresponding to each environmental parameter according to the number of the target objects entering the designated space at the current moment, the environmental parameters corresponding to the target objects and the parameter values; calculating the average value of the average environmental parameter value and a target environmental parameter value in the current scene state parameter to obtain an average target environmental parameter value, and determining the target scene state parameter based on the average target environmental parameter value;
or,
for each environmental parameter, performing weighted calculation on the parameter value and the weight corresponding to the target object to obtain a weighted environmental parameter value; calculating the average value of the weighted environment parameter value and a target environment parameter value in the current scene state parameter to obtain an average target environment parameter value, and determining the target scene state parameter based on the average target environment parameter value;
and the target environment parameter value and the parameter value belong to indexes of the same environment parameter of the same intelligent device.
4. The method for controlling the scene state according to claim 1, wherein the determining the target scene state parameter according to the current time and the scene state parameter corresponding to the target object includes:
if it is determined that a non-target object exists in the designated space before the object enters the designated space, acquiring a current scene state parameter of the designated space;
and determining the target scene state parameters according to the current scene state parameters.
5. The method for controlling the scene state according to claim 1, wherein the determining the target scene state parameter according to the current time and the scene state parameter corresponding to the target object comprises:
when it is determined that non-target objects exist in the objects entering the designated space at the current moment, acquiring preset scene state parameters corresponding to the current moment;
and determining the target scene state parameters according to the preset scene state parameters.
6. A system for controlling a state of a scene, comprising: the system comprises at least one intelligent device, a collection device and a central control device; the intelligent equipment and the acquisition equipment are in communication connection with the central control equipment;
the acquisition equipment is used for acquiring the identity information of the object entering the designated space at the current moment; wherein the designated space is a space to which the scene state applies;
the central control equipment is used for determining a target object from the objects based on the identity information; the target object is correspondingly bound with scene state parameters corresponding to different time periods; determining a target scene state parameter according to the current moment and a scene state parameter corresponding to the target object; controlling at least one intelligent device to operate according to the target scene state parameters so as to enable the designated space to enter a target scene state;
the central control device is configured to determine the target scene state parameter based on the current time and the scene state parameter corresponding to the target object if it is determined that no object exists in the designated space before the object enters the designated space; if the object exists in the designated space before the object enters the designated space, and the objects existing in the designated space are all target objects, acquiring current scene state parameters; and determining the target scene state parameters based on the current scene state parameters and the scene state parameters corresponding to the current time and the target object.
7. An apparatus for controlling a scene state, comprising:
the acquisition module is used for acquiring the identity information of an object entering the designated space at the current moment; wherein the designated space is a space to which the scene state applies;
a first determining module for determining a target object from the objects based on the identity information; the target object is correspondingly bound with target scene state parameters corresponding to different time periods;
the second determining module is used for determining a target scene state parameter according to the current moment and the scene state parameter corresponding to the target object;
the control module is used for controlling at least one intelligent device to operate according to the target scene state parameters so as to enable the designated space to enter a target scene state;
the second determining module is configured to determine the target scene state parameter based on the current time and the scene state parameter corresponding to the target object if it is determined that no object exists in the designated space before the object enters the designated space; if the object exists in the designated space before the object enters the designated space, and the objects existing in the designated space are all target objects, acquiring current scene state parameters; and determining the target scene state parameters based on the current scene state parameters and the scene state parameters corresponding to the current time and the target object.
8. An electronic device, comprising: the system comprises a processor, a memory and a communication bus, wherein the processor and the memory are communicated with each other through the communication bus;
the memory for storing a computer program;
the processor is configured to execute the program stored in the memory, and implement the method for controlling a scene state according to any one of claims 1 to 5.
9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for controlling a scene state according to any one of claims 1 to 5.
CN202110522235.7A 2021-05-13 2021-05-13 Scene state control method, device, system, equipment and storage medium Active CN113251557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110522235.7A CN113251557B (en) 2021-05-13 2021-05-13 Scene state control method, device, system, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113251557A CN113251557A (en) 2021-08-13
CN113251557B true CN113251557B (en) 2022-09-16

Family

ID=77181791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110522235.7A Active CN113251557B (en) 2021-05-13 2021-05-13 Scene state control method, device, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113251557B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114115027A (en) * 2021-11-24 2022-03-01 珠海格力电器股份有限公司 Method, system, device, equipment and storage medium for adjusting target environment parameters
CN114415558A (en) * 2021-12-15 2022-04-29 珠海格力电器股份有限公司 Equipment control method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103746A (en) * 2017-04-06 2017-08-29 绵阳美菱软件技术有限公司 A kind of air conditioning control device and method
CN107490154A (en) * 2017-09-19 2017-12-19 广东美的制冷设备有限公司 Air conditioner and its control method, device and computer-readable recording medium
CN111589139A (en) * 2020-05-11 2020-08-28 腾讯科技(深圳)有限公司 Virtual object display method and device, computer equipment and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5170935A (en) * 1991-11-27 1992-12-15 Massachusetts Institute Of Technology Adaptable control of HVAC systems
US20140365017A1 (en) * 2013-06-05 2014-12-11 Jason Hanna Methods and systems for optimized hvac operation
CN106196415B (en) * 2014-08-15 2019-08-27 台达电子工业股份有限公司 Intelligent air conditioner control system and its intelligent control method
CN109682030A (en) * 2018-12-24 2019-04-26 广东美的制冷设备有限公司 Airhandling equipment and its control method, device, computer readable storage medium
CN110529983B (en) * 2019-09-25 2020-09-25 珠海格力电器股份有限公司 Method and device for controlling air conditioner
CN110749051A (en) * 2019-09-27 2020-02-04 青岛海尔空调器有限总公司 Intelligent air conditioner control method and intelligent air conditioner
CN110726231B (en) * 2019-10-29 2020-10-02 珠海格力电器股份有限公司 Control method and device of air conditioner
CN111442420A (en) * 2020-04-02 2020-07-24 青岛海尔空调器有限总公司 Intelligent regulation and control method and environment regulation system for environment scene

Also Published As

Publication number Publication date
CN113251557A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
US10438127B2 (en) In-home-presence probability calculation method, server apparatus, and in-home-presence probability calculation system
US10571877B2 (en) Systems and methods for programming and controlling devices with sensor data and learning
US10601604B2 (en) Data processing systems and methods for smart hub devices
JP6776316B2 (en) Control of the HVAC system during a demand response event
US9491571B2 (en) Methods and apparatus for using smart environment devices via application program interfaces
US10102566B2 (en) Alert-driven dynamic sensor-data sub-contracting
CN113251557B (en) Scene state control method, device, system, equipment and storage medium
US9772116B2 (en) Enhanced automated control scheduling
CN103631202A (en) Hotel guest room intelligent monitoring system and method based on internet of things
US20180211339A1 (en) Systems, methods, and computer-readable media for generating property and tenant insights based on sensor devices
CN110333663B (en) Method and system for setting intelligent household management authority and computer storage medium
CN115481315B (en) Recommendation information determining method and device, storage medium and electronic device
WO2023165051A1 (en) Identity determination method, storage medium and electronic apparatus
CN111007735A (en) Intelligent household equipment linkage method and system based on community license plate recognition
CN111105298B (en) Purchasing recommendation method and system based on intelligent scene of Internet of things
CN110992626A (en) Security method based on air conditioner and security air conditioner
WO2016073312A1 (en) Enhanced automated control scheduling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant