CN111025922B - Target equipment control method and electronic equipment
- Publication number: CN111025922B (application CN201911150788.3A)
- Authority
- CN
- China
- Prior art keywords
- behavior
- data
- user
- target
- probability
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2642—Domotique, domestic, home control, automation, smart house
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Manufacturing & Machinery (AREA)
- Quality & Reliability (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Telephonic Communication Services (AREA)
Abstract
The embodiment of the invention provides a target device control method and electronic equipment. The method is applied to the electronic equipment, and comprises the following steps: acquiring activity data of a user; according to the activity data, carrying out state analysis on the user to obtain state information of the user; according to the state information, the activity data and the historical behavior data of the target behavior, predicting the behavior of the user to obtain the predicted behavior of the user; the predicted behavior is the behavior with the highest predicted probability in the target behaviors; and controlling the target equipment to set the operation mode to be the operation mode corresponding to the predicted behavior, wherein the target equipment is equipment in communication connection with the electronic equipment. The embodiment of the invention solves the problem that the operation mode of the intelligent home system needs frequent selection by a user in the prior art.
Description
Technical Field
The present invention relates to the field of mobile communications technologies, and in particular, to a target device control method and an electronic device.
Background
With the rapid development of artificial intelligence (AI) technology, smart home systems have begun to enter people's daily lives and are trending toward intelligence and automation. A smart home connects various devices in the home through the Internet of Things, such as audio and video equipment, lighting systems, curtain control, air conditioner control, security systems, digital cinema systems, audio and video servers, video cabinet systems and networked household appliances, and provides functions such as household appliance control, lighting control, telephone remote control, indoor and outdoor remote control, anti-theft alarm, environment monitoring, heating and ventilation control, infrared forwarding and programmable timing control over the network. Compared with an ordinary home, a smart home not only retains the traditional living functions but also integrates building, network communication, information appliances and device automation, provides an all-round information interaction function, and can even save money on various energy costs.
In the prior art, a smart home system is usually preset with a plurality of operation modes, and which operation mode to use is usually selected by the user. If the operation mode needs to be changed frequently during use, the process is cumbersome; as a result, the operation mode of the smart home system often does not best match the current state of the user, and the advantages of the smart home system cannot be used effectively.
Disclosure of Invention
The embodiment of the invention provides a target equipment control method and electronic equipment, which are used for solving the problem that in the prior art, the operation mode of an intelligent home system needs frequent selection by a user.
In order to solve the technical problems, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a target device control method, which is applied to an electronic device, where the method includes:
acquiring activity data of a user; the activity data includes: at least one of picture data collected by a camera, environment data, data collected by a sensor and use data of the electronic equipment;
according to the activity data, carrying out state analysis on the user to obtain state information of the user;
According to the state information, the activity data and the historical behavior data of the target behavior, predicting the behavior of the user to obtain the predicted behavior of the user; the predicted behavior is the behavior with the highest predicted probability in the target behaviors;
and controlling the target equipment to set the operation mode to be the operation mode corresponding to the predicted behavior, wherein the target equipment is equipment in communication connection with the electronic equipment.
Optionally, the step of performing a state analysis on the user according to the activity data to obtain state information of the user includes:
if the usage data indicates that the electronic equipment is in a standby state and the environmental data indicates that the current moment is a first preset time of the user, determining that the state information of the user is a first state;
if the picture data acquired by the cameras at each interval are the same, and the environment data indicate that the current moment is a second preset time of the user, determining that the state information of the user is a second state; and/or
If the usage data indicates that the electronic equipment is in a usage state and the change amplitude of the depth information of the picture acquired by the camera in a preset time period is in a preset change range, determining that the state information of the user is a third state;
The first preset time and the second preset time are obtained by machine learning according to historical behavior data of the user.
Optionally, the step of predicting the behavior of the user according to the state information, the activity data and the historical behavior data of the target behavior to obtain the predicted behavior of the user includes:
acquiring target behaviors of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user;
for each target behavior, determining the prediction probability of the target behavior at the current moment according to the state information, the activity data and the historical behavior data of the target behavior;
and taking the target behavior with the highest prediction probability as the predicted behavior of the user.
Optionally, the step of determining, for each target behavior, a predicted probability that the target behavior occurs at the current moment according to the state information, the activity data, and the historical behavior data of the target behavior includes:
for each target behavior, determining n reference probabilities of the first target behavior at the current moment according to the state information and the activity data; wherein n is a positive integer greater than or equal to 1;
Determining a preset posterior probability corresponding to each reference probability;
and determining the predicted probability of the target behavior at the current moment according to the reference probability and the preset posterior probability.
Optionally, the step of determining n reference probabilities of the first target behavior at the current moment according to the state information and the activity data includes:
determining a first reference probability of the state information under the first target behavior;
determining a second reference probability of the electronic device to generate a second target behavior associated with the first target behavior according to the usage data;
determining a third reference probability of the first target behavior at the current moment according to the time data in the environment data;
determining a fourth reference probability of occurrence of the first target behavior according to data acquired by a first sensor in the sensors; and/or
And determining a fifth reference probability of the first target behavior according to the data acquired by a second sensor in the sensors.
Optionally, the step of determining, according to the usage data, a second reference probability that the electronic device generates a second target behavior associated with the first target behavior includes:
According to the use data, determining relevant parameters of the second target behavior of the electronic equipment; the relevant parameters comprise whether the second target behavior has occurred, the interval between its occurrence time and the current time, and the type of the target object targeted by the target behavior;
and according to a preset weight, weighting and summing each related parameter to obtain a second reference probability of the second target behavior.
Optionally, the step of determining a fourth reference probability of occurrence of the first target behavior according to the data acquired by the first sensor of the sensors includes:
determining the temperature variation in the current environment according to the data acquired by the first sensor in the sensors;
and determining a fourth reference probability of the target behavior according to the temperature variation.
Optionally, the step of determining a fifth reference probability of occurrence of the second target behavior according to the data acquired by the second sensor of the sensors includes:
determining the gas variation in the current environment according to the data acquired by a second sensor in the sensors;
and determining a fifth reference probability of the first target behavior according to the gas variation.
Optionally, the step of determining the predicted probability of the target behavior occurring at the current moment according to the reference probability and the preset posterior probability includes:
determining the predicted probability of the target behavior occurring at the current moment according to the reference probabilities, the preset posterior probabilities and the following formula:
Ps = P0*p[P0/P] + P1*p[P1/P] + ... + P(n-1)*p[P(n-1)/P]
wherein Ps is the predicted probability, and P(i-1) represents the (i-1)-th reference probability (1 ≤ i ≤ n);
p[P(i-1)/P] represents the posterior probability corresponding to the (i-1)-th reference probability.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
the data acquisition module is used for acquiring activity data of a user; the activity data includes: at least one of picture data collected by a camera, environment data, data collected by a sensor and use data of the electronic equipment;
the state analysis module is used for carrying out state analysis on the user according to the activity data to obtain the state information of the user;
the behavior prediction module is used for predicting the behavior of the user according to the state information, the activity data and the historical behavior data of the target behavior to obtain the predicted behavior of the user; the predicted behavior is the behavior with the highest predicted probability in the target behaviors;
And the equipment control module is used for controlling the target equipment to set the operation mode to the operation mode corresponding to the predicted behavior, and the target equipment is equipment in communication connection with the electronic equipment.
Optionally, the state analysis module includes:
the first analysis sub-module is used for determining that the state information of the user is a first state if the usage data indicates that the electronic equipment is in a standby state and the environment data indicates that the current moment is a first preset time of the user;
the second analysis submodule is used for determining that the state information of the user is a second state if the picture data acquired by the camera at each interval are the same and the environment data indicate that the current moment is a second preset time of the user; and/or
A third analysis sub-module, configured to determine that the state information of the user is a third state if the usage data indicates that the electronic device is in a usage state and a variation range of depth information of a picture acquired by the camera in a preset time period is within a preset variation range;
the first preset time and the second preset time are obtained by machine learning according to historical behavior data of the user.
Optionally, the behavior prediction module includes:
the behavior acquisition sub-module is used for acquiring the target behavior of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user;
the probability determination submodule is used for determining the prediction probability of the target behavior at the current moment according to the state information, the activity data and the historical behavior data of the target behavior;
and the behavior determination submodule is used for taking the target behavior with the maximum prediction probability as the predicted behavior of the user.
Optionally, the probability determination submodule includes:
the first probability determining unit is used for determining n reference probabilities of the first target behaviors at the current moment according to the state information and the activity data aiming at each target behavior; wherein n is a positive integer greater than or equal to 1;
the second probability determining unit is used for determining a preset posterior probability corresponding to each reference probability;
and the third probability determining unit is used for determining the prediction probability of the target behavior at the current moment according to the reference probability and the preset posterior probability.
Optionally, the first probability determining unit is configured to:
Determining a first reference probability of the state information under the first target behavior;
determining a second reference probability of the electronic device to generate a second target behavior associated with the first target behavior according to the usage data;
determining a third reference probability of the first target behavior at the current moment according to the time data in the environment data;
determining a fourth reference probability of occurrence of the first target behavior according to data acquired by a first sensor in the sensors; and/or
And determining a fifth reference probability of the first target behavior according to the data acquired by a second sensor in the sensors.
Optionally, the determining, according to the usage data, a second reference probability that the electronic device generates a second target behavior associated with the first target behavior includes:
according to the use data, determining relevant parameters of the second target behavior of the electronic equipment; the relevant parameters comprise whether the second target behavior has occurred, the interval between its occurrence time and the current time, and the type of the target object targeted by the target behavior;
and according to a preset weight, weighting and summing each related parameter to obtain a second reference probability of the second target behavior.
Optionally, the determining the fourth reference probability of the first target behavior according to the data acquired by the first sensor includes:
determining the temperature variation in the current environment according to the data acquired by the first sensor in the sensors;
and determining a fourth reference probability of the target behavior according to the temperature variation.
Optionally, the determining the fifth reference probability of the second target behavior according to the data acquired by the second sensor includes:
determining the gas variation in the current environment according to the data acquired by a second sensor in the sensors;
and determining a fifth reference probability of the first target behavior according to the gas variation.
Optionally, the third probability determining unit is configured to:
determining the predicted probability of the target behavior occurring at the current moment according to the reference probabilities, the preset posterior probabilities and the following formula:
Ps = P0*p[P0/P] + P1*p[P1/P] + ... + P(n-1)*p[P(n-1)/P]
wherein Ps is the predicted probability, and P(i-1) represents the (i-1)-th reference probability (1 ≤ i ≤ n);
p[P(i-1)/P] represents the posterior probability corresponding to the (i-1)-th reference probability.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the target device control method as described above when executing the computer program.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the target device control method as described above.
In the embodiment of the invention, the state analysis is carried out on the user according to the activity data by acquiring the activity data of the user, so as to obtain the state information of the user; then, according to the state information, the activity data and the historical behavior data of the target behavior, predicting the behavior of the user to obtain the predicted behavior of the user; finally, the control target equipment sets the operation mode to be the operation mode corresponding to the predicted behavior, so that the automatic and intelligent control target equipment can select the operation mode without manual selection of a user; and the operation mode can be switched at any time along with the change of the predicted behavior of the user.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows one of flowcharts of a target device control method provided in an embodiment of the present invention;
FIG. 2 is a second flowchart of a target device control method according to an embodiment of the present invention;
FIG. 3 shows a block diagram of an exemplary electronic device of an embodiment of the invention;
FIG. 4 shows a flow chart of an example of an embodiment of the invention;
FIG. 5 shows one of the block diagrams of the electronic device provided by an embodiment of the invention;
fig. 6 shows a second block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a target device control method, which is applied to an electronic device, and includes:
step 101, acquiring activity data of a user; the activity data includes: at least one of picture data collected by a camera, environment data, data collected by a sensor and use data of the electronic equipment.
The electronic device acquires activity data of the user, where the activity data includes at least one of picture data, environment data, data collected by a sensor, and usage data of the electronic device. Specifically, the camera may be a front-facing or rear-facing camera of the electronic device. Picture data are collected through the camera and used to analyze the state of the user; if the images collected by the camera are identical at every preset interval, this indicates that the user is in a repetitive motion state, such as the rhythmic swing of the arms when walking. As another example, the pictures acquired by the front camera may be continuous pictures of the user's face while the user is using the terminal device.
The environment data may include time data, temperature data and the like.
The sensor may be a motion type sensor such as a speed sensor, an angle sensor, etc.; but also detection type sensors such as smoke sensors, temperature sensors, etc. The data detected by the sensors may also be analyzed for user behavior, for example, a smoke sensor may be used to detect whether the user is currently smoking a cigarette.
The usage data of the electronic device is used to indicate whether the user is currently using the electronic device, or which applications of the electronic device are used, etc.
And 102, carrying out state analysis on the user according to the activity data to obtain the state information of the user.
The state information is the current state of the user, such as a stationary state, a standing state, a lying state or a repetitive motion state. Most user behaviors are related to the state information; for example, if the user behavior is eating, the state is generally not a lying state, and if the user behavior is running, the state is typically a repetitive motion state.
Alternatively, the probability of each target behavior occurring in each state information may be preset, and then the target behavior may be predicted by the probability.
Step 103, predicting the behavior of the user according to the state information, the activity data and the historical behavior data of the target behavior to obtain the predicted behavior of the user; the predicted behavior is the behavior with the highest predicted probability in the target behaviors.
The electronic equipment can be used for carrying out machine learning according to historical behavior data of a user; the historical behavior data comprise the data range of the activity data and the state information under the target behavior; for example, the activity data and the state information are used as a group of data, and the historical behavior data comprises the prediction probability of the target behavior corresponding to each group of different data.
Specifically, for a group of data (A, B), the probabilities of three target behaviors M, N and O occurring under this group of data are determined from their historical behavior data as P1, P2 and P3 respectively, and the target behavior corresponding to the maximum of P1, P2 and P3 is taken as the predicted behavior of the user.
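For illustration only, the following Python sketch (not part of the patent) shows this argmax selection; the behavior names, the predict_probability stub and its lookup table are hypothetical placeholders standing in for the probabilities learned from historical behavior data.

```python
# A minimal sketch of the argmax selection described above.
# All names (TARGET_BEHAVIORS, predict_probability) are illustrative assumptions.

TARGET_BEHAVIORS = ["M", "N", "O"]  # e.g. learned behaviors such as eating, smoking, resting

def predict_probability(behavior: str, state_info: str, activity_data: dict) -> float:
    """Return the learned probability that `behavior` occurs for this data group.

    In the patent this comes from historical behavior data; here it is stubbed
    with a fixed table purely for illustration.
    """
    table = {
        ("M", "repetitive_motion"): 0.7,
        ("N", "repetitive_motion"): 0.2,
        ("O", "repetitive_motion"): 0.1,
    }
    return table.get((behavior, state_info), 0.0)

def predicted_behavior(state_info: str, activity_data: dict) -> str:
    # Take the target behavior with the highest predicted probability.
    return max(TARGET_BEHAVIORS, key=lambda b: predict_probability(b, state_info, activity_data))

print(predicted_behavior("repetitive_motion", {}))  # -> "M"
```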
And 104, controlling the target equipment to set the operation mode to be the operation mode corresponding to the predicted behavior, wherein the target equipment is equipment in communication connection with the electronic equipment.
After the predicted behavior of the user is obtained, the operation mode of the target device communicatively connected to the electronic device is controlled according to the predicted behavior. For example, if the target device is an air conditioner, the air conditioner is controlled to operate in the operation mode corresponding to the predicted behavior; further, if the predicted behavior is smoking, the air conditioner is controlled to start a ventilation mode to discharge the smoke, and if the predicted behavior is resting, the air conditioner is controlled to raise the temperature to a preset value so that the user does not catch a cold while resting.
Or if the target device is a music playing device, controlling the target device to operate according to a mode corresponding to the predicted behavior; for example, if the predicted behavior of the user is running, controlling the music playing device to play music with strong rhythm sense; if the predicted behavior of the user is browsing the electronic equipment, controlling the music playing equipment to play relaxed and leisure music; and if the predicted behavior of the user is resting, controlling the music playing equipment to stop playing the music.
Thus, after the predicted behavior is obtained, the electronic equipment controls the target equipment to operate according to the operation mode corresponding to the predicted behavior, so that the operation mode selection is automatically realized, and manual selection of a user is not needed; and the running mode can be automatically changed at any time according to the predicted behavior change of the user, so that the running mode of the target equipment is ensured to be consistent with the predicted behavior, the use requirements of the user in different states are met, and the intelligence of the target equipment is fully exerted.
In the embodiment of the invention, the state analysis is carried out on the user according to the activity data by acquiring the activity data of the user to obtain the state information of the user; then, according to the state information, the activity data and the historical behavior data of the target behavior, predicting the behavior of the user to obtain the predicted behavior of the user; finally, the control target equipment sets the operation mode to be the operation mode corresponding to the predicted behavior, so that the automatic and intelligent control target equipment can select the operation mode without manual selection of a user; and the operation mode can be switched at any time along with the change of the predicted behavior of the user. The embodiment of the invention solves the problem that the operation mode of the intelligent home system needs to be selected by a user in the prior art.
Optionally, in an embodiment of the present invention, step 102 includes:
if the usage data indicates that the electronic equipment is in a standby state and the environmental data indicates that the current moment is a first preset time of the user, determining that the state information of the user is a first state;
secondly, if the picture data acquired by the camera at each interval are the same, and the environment data indicate that the current moment is a second preset time of the user, determining that the state information of the user is a second state; and/or
Thirdly, if the usage data indicates that the electronic equipment is in a usage state and the change amplitude of the depth information of the picture acquired by the camera in a preset time period is within a preset change range, determining that the state information of the user is a third state;
the first preset time and the second preset time are obtained by machine learning according to historical behavior data of the user.
Specifically, in the first case, if the usage data indicates that the electronic device is in a standby state and the environment data indicates that the current moment is a first preset time of the user, the state information of the user is determined to be the first state; for example, if the first preset time is a rest time of the user determined through machine learning, such as midnight or noon, the first state is a stationary state of the user.
In the second case, if the picture data collected by the camera at every interval are identical, and the environment data indicates that the current moment is a second preset time of the user, for example a repetitive motion time (such as walking or running) learned for the user by machine learning, the state information of the user is determined to be the second state. For example, if the front camera detects that a limb of the user repeats a fixed action at a certain frequency within a fixed time window (for example, 10 seconds), such as the arm repeatedly entering and leaving the lens in the up-down or left-right direction, and the current time data from the network side shows that the current moment falls within the second preset time (for example, 6 a.m. to 12 p.m.), the user can be judged to be in a repetitive fixed motion state.
In the third case, if the usage data indicates that the electronic device is in use and the variation of the depth information of the pictures acquired by the camera over a preset time period stays within a preset variation range, the state information of the user is determined to be the third state. Depth information, i.e., image depth, refers to the number of bits used to store each pixel and also measures the color resolution of the image. If the depth of the pictures acquired by the camera varies little over the preset time period and remains within the preset variation range, it can be judged that the user is using the electronic device, and whether the user is lying down can be judged from the relative relation between the foreground and the background (such as the face and the background) in the front-camera image.
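The three cases above can be summarized, purely as an illustrative sketch under assumed field names and thresholds, as a small rule-based classifier:

```python
# A rough sketch of the three state-analysis rules above (standby + rest time,
# periodic identical frames + exercise time, device in use + stable depth).
# Thresholds and field names are illustrative assumptions, not patent values.

def analyze_state(usage, frames_identical, depth_change, env_time,
                  first_preset, second_preset):
    """Return the state per the rules above, or None if no rule matches."""
    if usage == "standby" and env_time in first_preset:
        return "first_state"       # e.g. stationary / resting
    if frames_identical and env_time in second_preset:
        return "second_state"      # e.g. repetitive fixed motion
    if usage == "in_use" and abs(depth_change) <= 0.05:  # within preset change range
        return "third_state"       # e.g. using the device, possibly lying down
    return None

# Example: device idle at 0:30 during a learned rest window.
print(analyze_state("standby", False, 0.0, 0.5, {0.5, 12.0}, {6, 7, 8}))  # -> "first_state"
```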
Optionally, in an embodiment of the present invention, step 103 includes:
acquiring target behaviors of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user;
for each target behavior, determining the prediction probability of the target behavior at the current moment according to the state information, the activity data and the historical behavior data of the target behavior;
And taking the target behavior with the highest prediction probability as the predicted behavior of the user.
The historical behavior data may include activity data and state information of the user collected during a learning period; machine learning is performed on these data to obtain the target behaviors of the user.
The historical behavior data comprises occurrence probabilities (i.e. prediction probabilities) corresponding to different data ranges of the state information of the user under each target behavior; for example, the activity data and the state information are used as a group of data, each group of data range corresponding to each prediction probability is included in the historical behavior data, and each group of data range can include the respective data range of the activity data and the state information.
For each target behavior, the predicted probability of the target behavior occurring at the current moment under the state information and the activity data is determined according to the historical behavior data of the target behavior, and the target behavior with the maximum predicted probability is taken as the predicted behavior of the user. Specifically, for a group of data (A, B), the probabilities of three target behaviors M, N and O occurring under this group of data are determined from their historical behavior data as P1, P2 and P3 respectively, and the target behavior corresponding to the maximum of P1, P2 and P3 is taken as the predicted behavior of the user.
Referring to fig. 2, an embodiment of the present invention provides a target device control method, which is applied to an electronic device, and includes:
step 201, acquiring activity data of a user; the activity data includes: at least one of picture data collected by a camera, environment data, data collected by a sensor and use data of the electronic equipment.
And 202, carrying out state analysis on the user according to the activity data to obtain the state information of the user.
Step 203, obtaining a target behavior of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user.
Step 204, for each target behavior, determining n reference probabilities of the first target behavior at the current moment according to the state information and the activity data; wherein n is a positive integer greater than or equal to 1.
Each reference probability corresponds to one prediction direction; for example, the reference probability corresponding to the state information represents the probability associated with that state information. For each target behavior, n prediction directions can be preset, and the reference probability of each prediction direction is determined according to the state information and the activity data.
Step 205, determining a preset posterior probability corresponding to each reference probability.
A posterior probability is the probability that, given an event has occurred, it was caused by a particular factor; here the factor is the reference probability of a prediction direction. For a target behavior, the posterior probability is the probability that the target behavior occurs when the reference probability of the corresponding prediction direction takes a certain value. For each target behavior, the posterior probability corresponding to each reference probability is preset and can be obtained through machine learning, and the correspondence between each posterior probability and its reference probability is then recorded.
And 206, determining the predicted probability of the target behavior at the current moment according to the reference probability and the preset posterior probability.
That is, to determine the predicted probability of the target behavior at the current moment, each reference probability is multiplied by its corresponding posterior probability and the products are summed to obtain the total predicted probability.
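As a rough Python sketch of steps 204 to 206 (illustrative only; the nearest-key posterior lookup and all numbers are assumptions, not patent values):

```python
# Each reference probability is looked up against its preset posterior
# probability, and the weighted terms are summed into the total predicted
# probability Ps (see the formula later in the text).

def posterior(reference_prob: float, table: dict) -> float:
    # Preset posterior probability for a reference probability value,
    # e.g. obtained beforehand by machine learning and recorded as a mapping.
    # Nearest-key lookup is an illustrative simplification.
    key = min(table, key=lambda k: abs(k - reference_prob))
    return table[key]

def total_predicted_probability(reference_probs, posterior_tables):
    return sum(p * posterior(p, table)
               for p, table in zip(reference_probs, posterior_tables))

tables = [{0.2: 0.1, 0.8: 0.6}] * 3          # one preset table per prediction direction
print(total_predicted_probability([0.8, 0.7, 0.2], tables))
```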
And step 207, taking the target behavior with the maximum prediction probability as the predicted behavior of the user.
And step 208, controlling the target equipment to set the operation mode to be the operation mode corresponding to the predicted behavior, wherein the target equipment is equipment in communication connection with the electronic equipment.
Optionally, in the embodiment of the present invention, the step of determining n reference probabilities of the first target behavior at the current moment according to the state information and the activity data includes:
determining a first reference probability of the state information under the first target behavior;
determining a second reference probability of the electronic device to generate a second target behavior associated with the first target behavior according to the usage data;
determining a third reference probability of the first target behavior at the current moment according to the time data in the environment data;
determining a fourth reference probability of occurrence of the first target behavior according to data acquired by a first sensor in the sensors; and/or
And determining a fifth reference probability of the first target behavior according to the data acquired by a second sensor in the sensors.
For each target behavior, a first reference probability corresponding to each state information is preset, and after the state information is determined, the first reference probability is determined. Similarly, the third reference probability corresponding to each time data is preset.
Further, the step of determining, according to the usage data, a second reference probability that the electronic device generates a second target behavior associated with the first target behavior includes:
According to the use data, determining relevant parameters of the second target behavior of the electronic equipment; the relevant parameters comprise whether the second target behavior has occurred, the interval between its occurrence time and the current time, and the type of the target object targeted by the target behavior;
and according to a preset weight, weighting and summing each related parameter to obtain a second reference probability of the second target behavior.
For the second reference probability, a plurality of relevant parameters are involved, including whether the second target behavior has occurred, how long ago it occurred relative to the current time, and the type of the target object involved. After the relevant parameters are determined from the usage data, each relevant parameter is weighted and summed to obtain the second reference probability of the second target behavior. For example, if the first target behavior is eating, the second target behavior is buying take-out food; the relevant parameters then include whether take-out food was purchased, how long ago the purchase occurred, and the type of item purchased.
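A minimal sketch of this weighted sum, assuming illustrative weights and scoring rules that are not specified in the patent:

```python
# Weighted sum for the second reference probability: whether the associated
# behavior (e.g. ordering take-out) occurred, how long ago it occurred, and
# what kind of item was involved. Weights and scoring are assumptions.

def second_reference_probability(occurred: bool, hours_since: float, item_type: str) -> float:
    w_occurred, w_recency, w_type = 0.5, 0.3, 0.2          # preset weights
    recency_score = max(0.0, 1.0 - hours_since / 1.0)      # within ~1 hour counts most
    type_score = 1.0 if item_type in ("cigarettes", "hot_food", "drink") else 0.0
    return (w_occurred * (1.0 if occurred else 0.0)
            + w_recency * recency_score
            + w_type * type_score)

print(second_reference_probability(True, 0.25, "hot_food"))  # ~0.925
```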
Optionally, in an embodiment of the present invention, the step of determining, according to data collected by a first sensor in the sensors, a fourth reference probability that the first target behavior occurs includes:
Determining the temperature variation in the current environment according to the data acquired by the first sensor in the sensors;
and determining a fourth reference probability of the target behavior according to the temperature variation.
The first sensor can be an infrared sensor or a temperature sensor and is used for obtaining indoor heat and temperature variation; and determining a fourth reference probability of the target behavior according to the temperature change, wherein if the target behavior is eating, eating can cause the temperature change around the electronic device. Optionally, the temperature change amount may be set by a change percentage, and each change percentage is preset with a corresponding reference probability, and after the temperature change amount is obtained, a fourth reference probability of the target behavior is determined according to the temperature change amount.
Optionally, in an embodiment of the present invention, the step of determining, according to data collected by a second sensor in the sensors, a fifth reference probability that the second target behavior occurs includes:
determining the gas variation in the current environment according to the data acquired by a second sensor in the sensors;
and determining a fifth reference probability of the first target behavior according to the gas variation.
Optionally, the second sensor may be a smoke sensor or a gas sensor, and the gas change amount in the current environment is determined according to the acquired data; if the target behavior is, for example, smoking, a fifth reference probability of occurrence of the first target behavior may be determined based on the amount of gas change. Alternatively, the gas variation may be set by a variation percentage, where each variation percentage is preset with a corresponding reference probability, and after the gas variation is obtained, a fifth reference probability of the target behavior is determined according to the gas variation.
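Both the temperature-based (fourth) and the gas-based (fifth) reference probabilities follow the same percentage-change bucketing idea; the sketch below illustrates it with assumed bucket boundaries and probabilities:

```python
# The sensor variation is bucketed by percentage change and each bucket has a
# preset reference probability. Bucket boundaries and probabilities below are
# illustrative assumptions, not patent values.

BUCKETS = [          # (change-percentage upper bound, preset reference probability)
    (0.02, 0.05),
    (0.05, 0.30),
    (0.10, 0.60),
    (float("inf"), 0.90),
]

def sensor_reference_probability(previous: float, current: float) -> float:
    change = abs(current - previous) / max(abs(previous), 1e-9)
    for upper, prob in BUCKETS:
        if change <= upper:
            return prob
    return 0.0

print(sensor_reference_probability(25.0, 26.5))   # 6% temperature change -> 0.6
```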
Optionally, in an embodiment of the present invention, step 206 includes:
determining the predicted probability of the target behavior occurring at the current moment according to the reference probabilities, the preset posterior probabilities and the following formula 1:
Formula 1: Ps = Σ P(i-1) * p[P(i-1)/P], summed over i = 1 to n;
in addition, formula 1 may be written in the expanded form of formula 2:
Formula 2: Ps = P0*p[P0/P] + P1*p[P1/P] + ... + P(n-1)*p[P(n-1)/P]
wherein 1 ≤ i ≤ n; Ps is the predicted probability; P(i-1) represents the (i-1)-th reference probability;
p[P(i-1)/P] represents the posterior probability corresponding to the (i-1)-th reference probability, i.e., the probability that the target behavior occurs when the reference probability is P(i-1); Ps is the sum of the weighted terms over all reference probabilities, i.e., the total predicted probability that the target behavior occurs.
Referring to fig. 3, taking a target device as an air conditioner and a target behavior as a smoking or eating behavior as an example, the electronic device mainly includes the following units:
an activity data obtaining unit 301, configured to obtain current activity data of a user, where the current time, indoor/outdoor temperature, user behavior status, etc. are obtained through a camera, a network side, and a mobile device side;
a state analysis unit 302, which combines the current environment data and the user behavior data brought by the camera, and analyzes the state types of the user activity data, such as static state, repeated movement, lying down and the like;
the behavior prediction unit 303 uses machine learning and a big data model to aggregate a large amount of the user's online data, combines it with the user's original data and the user state, and then infers and predicts the current behavior of the user, namely sleeping, eating, exercising, bathing, entertainment and the like;
the scheme generating unit 304 provides an intelligent air-conditioning control scheme for each user according to the behavior prediction result in combination with a large amount of user data, and continuously updates and learns the user-specific air-conditioning scheme model.
As shown in fig. 4, the method for controlling a target device according to the embodiment of the present invention mainly includes the following steps:
step 401, user activity data acquisition;
Air conditioning equipment generally only provides functions such as room-temperature acquisition, so the data it obtains come from a single source. Electronic devices such as mobile phones and tablet computers, by contrast, are owned by most users and are used frequently indoors, so the mobile device can be used to acquire many kinds of user data.
The active data acquisition unit 301 acquires RGB and depth data through the picture data acquired by the camera; providing current weather, position and other environmental data through a network terminal; and acquiring use data of the electronic equipment, wherein the use data comprise electronic equipment end app operation data, user daily behavior data and the like.
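Purely as an illustration of how such multi-source activity data might be grouped on the device side (field names are assumptions, not from the patent):

```python
# A sketch of one possible grouping of the activity data described above.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ActivityData:
    rgb_frame: Optional[bytes] = None        # picture data from the camera
    depth_frame: Optional[bytes] = None      # depth data from the camera
    weather: Optional[str] = None            # environment data from the network side
    location: Optional[str] = None
    timestamp: Optional[float] = None        # time data
    sensor_readings: dict = field(default_factory=dict)   # e.g. {"temperature": 25.0, "smoke": 0.01}
    app_usage: dict = field(default_factory=dict)          # usage data of the electronic device

sample = ActivityData(weather="sunny", timestamp=1_574_300_000.0,
                      sensor_readings={"temperature": 25.0},
                      app_usage={"takeout_app": "last_used_25_min_ago"})
```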
Step 402, user state analysis;
after the current activity data of the user is acquired by the activity data acquisition unit 301, a correspondence between the activity data and the user state needs to be established; that is, the current state of the user (standing, sitting, lying down, etc.) is obtained from the various data sources (camera, network, mobile terminal data, etc.). This matters because most people will not smoke or eat while lying down. Considering the complexity of the data and the diversity of states, the user state is analyzed by integrating multiple data sources (camera images, network information, mobile device usage information).
When the electronic device detects that the mobile phone has been continuously in standby for a certain period of time, and the current time data acquired from the network (for example, midnight or noon) indicates a rest time, the user can be judged with high probability to be in a stationary state;
when the image data collected by the front-facing camera shows that a limb of the user repeats a fixed action at a certain frequency within a fixed time window (for example, 10 seconds), such as the arm repeatedly entering and leaving the lens in the up-down or left-right direction, and the current time data from the network side shows that the current moment falls within the second preset time (for example, 6 a.m. to 12 p.m.), the user can be judged to be in a repetitive fixed motion state.
When the device side continuously detects that the electronic device is running an app, and the front lens detects the user's face while the corresponding depth information does not change greatly during this period, it can be judged that the user is using the mobile device; whether the user is lying down can further be judged from the relative relation between the foreground and the background (the face and the background) in the front-lens image.
Step 403, predicting the current behavior of the user.
After the current state information of the user is obtained through the state analysis unit 302, the specific current behavior of the user can be predicted according to the different states. By combining the network time data and the mobile-device usage data with the user's living habits in the database, the behavior of the user in the current state at the current time, such as eating, smoking or reading, can be judged more accurately. This scheme adopts a machine learning method, predicts in combination with big user data, and establishes a large network database of the user's possible behaviors, such as eating, smoking, resting and reading;
Smoking or eating behavior detection is described in detail below:
specifically, when the state analysis module detects that the user is in a state of repeatedly fixing movement facing the mobile device, a first reference probability P0 is generated;
the electronic device detects that the user has recently (within 1 hour) used a take-out app on the mobile device; if higher privacy permission is granted, the items purchased by the user (cigarettes, drinks, hot food, etc.) can be accurately obtained, and a second reference probability P1 is generated according to when the take-out purchase occurred and whether the purchased items include cigarettes, food and the like;
acquiring current time information and life habits of users in a database, and determining a third reference probability P2 according to the current user information and the information in a large database;
acquiring indoor heat and temperature change through an infrared sensor and a temperature sensor end, and determining a fourth reference probability P3 according to the current temperature change;
acquiring the current indoor gas variation through the smoke and smell detection ports, and determining a fifth reference probability P4;
determining the predicted probability of the target behavior at the current moment according to the reference probabilities, the preset posterior probabilities and the formula given above, namely:
Ps = P0*p[P0/P] + P1*p[P1/P] + P2*p[P2/P] + P3*p[P3/P] + P4*p[P4/P]
Therefore, according to the user's current multi-source information, combined with a large amount of user data collected and counted over the network, eating or smoking behavior (that is, smoking or eating while using the mobile device) can be discriminated and predicted from the user state: the larger Ps is, the more likely the user is currently smoking or eating, and the dedicated air-conditioner setting scheme is then started.
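A worked sketch of this example, with made-up values for the five reference probabilities P0 to P4, their preset posterior probabilities and the decision threshold:

```python
# The five reference probabilities described above are combined with their
# preset posterior probabilities into Ps, and a larger Ps triggers the
# dedicated air-conditioner scheme. Numbers and threshold are illustrative only.

reference_probs = [0.8, 0.9, 0.6, 0.7, 0.85]      # P0..P4: state, take-out app, time, temperature, gas
posteriors      = [0.20, 0.25, 0.15, 0.20, 0.20]  # preset p[P(i)/P] for each prediction direction

Ps = sum(p * q for p, q in zip(reference_probs, posteriors))
print(round(Ps, 3))                                # -> 0.785

THRESHOLD = 0.6                                    # assumed decision threshold
if Ps > THRESHOLD:
    print("start dedicated air-conditioner scheme (e.g. ventilation mode)")
```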
Step 404, controlling the air conditioner to operate according to an operation mode corresponding to the predicted behavior.
After the indoor user behavior prediction is obtained, the optimal air-conditioner setting scheme is provided automatically, without manual adjustment or user interaction; the scheme can be imported from an online large database or preset manually by the user.
Specifically, each group of people in daily life corresponds to a target group with similar living habits, and the living habits (behaviors) of different groups differ; that is, a scheme dedicated to each target group is generated based on the different groups (different living habits). The air conditioning scheme itself has several attributes, including on/off time, temperature, humidity, intensity, silence, and so on. Users of different ages have different air-conditioner usage habits: for example, elderly people turn on the air conditioner less, middle-aged people mostly use it at a moderate temperature, and young people tend to leave it on overnight. Different users also have different preferences; for example, male users tend to feel hot more easily while female users are more afraid of the cold;
and for each specific user, each person has own air conditioner using habit, including using time, using duration, using season/weather, using mode and the like.
Considering the diversity of schemes and user targets, this example is implemented with the Spark distributed processing engine, which can execute the classification tasks for user schemes in parallel in a distributed environment. The air-conditioner usage information of each user is first modeled as objects and stored in a distributed (key, value) collection, namely a PairRDD; different key ranges belong to different machines, and for the user information the key is a user identifier and the value is an aggregated table of the attributes of that user's air-conditioner usage.
When a new cluster is obtained, the original PairRDD is transformed into a new PairRDD and the cluster attributes are determined; as the task continues to execute, the user information is continuously updated, which in turn drives iteration of the user's scheme.
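The patent does not specify an implementation language; as one possible illustration, a minimal PySpark sketch of the PairRDD modeling described above (user identifier as key, aggregated air-conditioner attributes as value) could look like this:

```python
# Illustrative only: field names and the aggregation logic are assumptions.

from pyspark import SparkContext

sc = SparkContext("local[*]", "ac-scheme-demo")

records = [  # (user_id, {attribute: value}) usage events
    ("user_1", {"on_hours": 6, "temperature": 26}),
    ("user_1", {"on_hours": 8, "temperature": 25}),
    ("user_2", {"on_hours": 2, "temperature": 28}),
]

pair_rdd = sc.parallelize(records)  # PairRDD of (key, value)

def merge(a, b):
    # Aggregate a user's attribute tables (simplified: sum hours, average temperature).
    return {"on_hours": a["on_hours"] + b["on_hours"],
            "temperature": (a["temperature"] + b["temperature"]) / 2}

per_user = pair_rdd.reduceByKey(merge)              # new PairRDD keyed by user
print(per_user.collect())
sc.stop()
```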
Specifically, when the behavior prediction unit 303 predicts that the current behavior of the user is sleeping, the current season and weather information is obtained from the network side; according to the user's age, sex, physiological characteristics and habits in the network database and the optimal air-conditioner configuration attributes for such users in the big data, the system sends a command to the air conditioner to start the sleep mode and mute. If the current outdoor temperature is cool, the heating mode is started and the set temperature is adjusted by one degree every hour; otherwise, the set temperature is adjusted by one degree every hour in the opposite direction;
Or when the behavior prediction unit 303 predicts that the current behavior of the user is eating, the system will send a command to the air conditioner to switch to the ventilation mode so that the food smell is dissipated;
for another example, when the behavior prediction unit 303 predicts that the current behavior of the user is smoking, and obtains the current season and weather information according to the network side, the system will send a command to the air conditioner to start ventilation/standby, and prompt the user to open the window;
when the behavior prediction unit 303 predicts that the user is currently in a leisure and entertainment state, the optimal air conditioning scheme combination is obtained according to different entertainment modes in the network database, if the user is detected to be reading, playing a game and watching a video, a command is sent to the air conditioner to start mute, and meanwhile, if the user is detected to be in a lying state, the command is sent to the air conditioner to appropriately raise the temperature to avoid catching a cold.
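The examples above amount to a rule table from predicted behavior to air-conditioner commands; the sketch below encodes them in Python, with the command names chosen for illustration rather than taken from the original disclosure.

```python
# Illustrative behavior-to-command mapping; command names are assumptions.
def commands_for_behavior(behavior, outdoor_is_cool=False, lying_down=False):
    if behavior == "sleep":
        cmds = ["sleep_mode", "mute"]
        # Adjust the set temperature by one degree every hour; start heating
        # first when the outdoor temperature is cool.
        cmds.append("heat_then_step_temperature_hourly" if outdoor_is_cool
                    else "step_temperature_hourly")
        return cmds
    if behavior == "eating":
        return ["ventilation_mode"]            # disperse food odors
    if behavior == "smoking":
        return ["ventilation_or_standby", "prompt_open_window"]
    if behavior == "leisure":                  # reading, gaming, video
        cmds = ["mute"]
        if lying_down:
            cmds.append("raise_temperature")   # avoid catching a cold
        return cmds
    return []

print(commands_for_behavior("sleep", outdoor_is_cool=True))
```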
In the embodiment of the invention, the activity data of the user is acquired, and state analysis is performed on the user according to the activity data to obtain the state information of the user; then, the behavior of the user is predicted according to the state information, the activity data and the historical behavior data of the target behavior, so as to obtain the predicted behavior of the user; finally, the target device is controlled to set its operation mode to the operation mode corresponding to the predicted behavior, so that the target device is controlled automatically and intelligently to select an operation mode without manual selection by the user, and the operation mode can be switched at any time as the predicted behavior of the user changes.
Having described the method for controlling the target device according to the embodiment of the present invention, an electronic device according to the embodiment of the present invention will be described below with reference to the accompanying drawings.
Referring to fig. 5, an embodiment of the present invention further provides an electronic device 500, where the electronic device 500 includes:
a data acquisition module 501, configured to acquire activity data of a user; the activity data includes: at least one of picture data collected by a camera, environment data, data collected by a sensor and use data of the electronic device 500;
the state analysis module 502 is configured to perform state analysis on the user according to the activity data, so as to obtain state information of the user;
a behavior prediction module 503, configured to predict the behavior of the user according to the state information, the activity data, and the historical behavior data of the target behavior, so as to obtain a predicted behavior of the user; the predicted behavior is the behavior with the highest predicted probability in the target behaviors;
and a device control module 504, configured to control a target device to set an operation mode to an operation mode corresponding to the predicted behavior, where the target device is a device communicatively connected to the electronic device 500.
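As a hedged illustration of how the modules 501 to 504 might cooperate, the following sketch wires them into one pipeline; the method names and data shapes are assumptions and not part of the original disclosure.

```python
# Minimal sketch of the 501-504 module pipeline; interfaces are assumptions.
class ElectronicDevice500:
    def __init__(self, data_acquisition, state_analysis,
                 behavior_prediction, device_control):
        self.data_acquisition = data_acquisition        # module 501
        self.state_analysis = state_analysis            # module 502
        self.behavior_prediction = behavior_prediction  # module 503
        self.device_control = device_control            # module 504

    def run_once(self, historical_behavior_data):
        activity_data = self.data_acquisition()                     # 501
        state_info = self.state_analysis(activity_data)             # 502
        predicted = self.behavior_prediction(state_info,            # 503
                                             activity_data,
                                             historical_behavior_data)
        self.device_control(predicted)                              # 504
        return predicted
```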
Optionally, in an embodiment of the present invention, the state analysis module 502 includes:
a first analysis sub-module, configured to determine that the state information of the user is a first state if the usage data indicates that the electronic device 500 is in a standby state and the environmental data indicates that the current time is a first preset time of the user;
the second analysis submodule is used for determining that the state information of the user is a second state if the picture data acquired by the camera at each interval are the same and the environment data indicate that the current moment is a second preset time of the user; and/or
A third analysis sub-module, configured to determine that the state information of the user is a third state if the usage data indicates that the electronic device 500 is in a usage state and a variation range of depth information of a picture acquired by the camera in a preset time period is within a preset variation range;
the first preset time and the second preset time are obtained by machine learning according to historical behavior data of the user.
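A hedged sketch of the three state-analysis rules above; the state labels, thresholds and time windows are illustrative assumptions (in practice the preset times would come from machine learning over the user's historical behavior data).

```python
# Illustrative encoding of the first/second/third state rules; thresholds
# and preset time windows are assumptions.
def analyse_state(usage, frames, current_hour, depth_change,
                  preset1=(23, 24), preset2=(12, 14), max_depth_change=0.2):
    # First state: device on standby and the current time falls inside the
    # user's first preset time.
    if usage.get("standby") and preset1[0] <= current_hour <= preset1[1]:
        return "first_state"
    # Second state: pictures captured at each interval are identical and the
    # current time falls inside the user's second preset time.
    if frames and all(f == frames[0] for f in frames) \
            and preset2[0] <= current_hour <= preset2[1]:
        return "second_state"
    # Third state: device in use and the depth information of the captured
    # pictures varies only within a preset range over the preset period.
    if usage.get("in_use") and depth_change <= max_depth_change:
        return "third_state"
    return "unknown"

print(analyse_state({"in_use": True}, [], 20, 0.1))  # "third_state"
```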
Optionally, in an embodiment of the present invention, the behavior prediction module 503 includes:
the behavior acquisition sub-module is used for acquiring the target behavior of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user;
The probability determination submodule is used for determining the prediction probability of the target behavior at the current moment according to the state information, the activity data and the historical behavior data of the target behavior;
and the behavior determination submodule is used for taking the target behavior with the maximum prediction probability as the predicted behavior of the user.
Optionally, in an embodiment of the present invention, the probability determination submodule includes:
a first probability determining unit, configured to determine, for each target behavior, n reference probabilities of the first target behavior occurring at the current moment according to the state information and the activity data; wherein n is a positive integer greater than or equal to 1;
the second probability determining unit is used for determining a preset posterior probability corresponding to each reference probability;
and the third probability determining unit is used for determining the prediction probability of the target behavior at the current moment according to the reference probability and the preset posterior probability.
Optionally, in an embodiment of the present invention, the first probability determining unit is configured to:
determining a first reference probability of the state information under the first target behavior;
determining, according to the usage data, a second reference probability that a second target behavior associated with the first target behavior occurs on the electronic device 500;
Determining a third reference probability of the first target behavior at the current moment according to the time data in the environment data;
determining a fourth reference probability of occurrence of the first target behavior according to data acquired by a first sensor in the sensors; and/or
And determining a fifth reference probability of the first target behavior according to the data acquired by a second sensor in the sensors.
Optionally, in an embodiment of the present invention, determining, according to the usage data, a second reference probability that the electronic device 500 generates a second target behavior associated with the first target behavior includes:
determining, according to the usage data, related parameters of the second target behavior occurring on the electronic device 500; the related parameters include whether the second target behavior has occurred, the time interval between its occurrence time and the current time, and the type of target object at which the target behavior is directed;
and according to a preset weight, weighting and summing each related parameter to obtain a second reference probability of the second target behavior.
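A minimal sketch of the weighted summation described above; how each related parameter is encoded numerically and which weights are preset are not specified in the original text, so the values below are assumptions.

```python
# Illustrative weighted sum yielding the second reference probability;
# parameter encodings and weights are assumptions.
def second_reference_probability(occurred, minutes_since_occurrence,
                                 target_object_type,
                                 weights=(0.5, 0.3, 0.2)):
    w_occurred, w_recency, w_type = weights
    occurred_score = 1.0 if occurred else 0.0
    # More recent occurrences score higher (clamped to [0, 1]).
    recency_score = max(0.0, 1.0 - minutes_since_occurrence / 60.0)
    # Object types associated with the first target behavior score 1.
    type_score = 1.0 if target_object_type in {"food", "cigarette"} else 0.0
    return (w_occurred * occurred_score
            + w_recency * recency_score
            + w_type * type_score)

print(second_reference_probability(True, 10, "food"))  # ≈ 0.95
```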
Optionally, in an embodiment of the present invention, the determining, according to the data collected by the first sensor in the sensors, a fourth reference probability that the first target behavior occurs includes:
Determining the temperature variation in the current environment according to the data acquired by the first sensor in the sensors;
and determining a fourth reference probability of the first target behavior according to the temperature variation.
Optionally, in an embodiment of the present invention, the determining, according to data collected by a second sensor in the sensors, a fifth reference probability that the first target behavior occurs includes:
determining the gas variation in the current environment according to the data acquired by a second sensor in the sensors;
and determining a fifth reference probability of the first target behavior according to the gas variation.
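A hedged sketch of how the sensed temperature and gas variations might be mapped to the fourth and fifth reference probabilities; the saturation thresholds and the linear mapping are assumptions.

```python
# Illustrative mappings from sensor variations to reference probabilities;
# thresholds and linear scaling are assumptions.
def fourth_reference_probability(temp_delta_c, saturation_c=2.0):
    # e.g. hot food or a lit cigarette slightly raises the local temperature.
    return min(1.0, max(0.0, temp_delta_c / saturation_c))

def fifth_reference_probability(gas_delta_ppm, saturation_ppm=50.0):
    # e.g. a rise in smoke or gas concentration suggests smoking.
    return min(1.0, max(0.0, gas_delta_ppm / saturation_ppm))

print(fourth_reference_probability(1.0),   # 0.5
      fifth_reference_probability(25.0))   # 0.5
```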
Optionally, in an embodiment of the present invention, the third probability determining unit is configured to:
determining the predicted probability of the target behavior at the current moment according to the reference probability, the preset posterior probability and the following formula:
Ps = Σ P(i-1) × P[P(i-1)/P], summed over i = 1 to n,
wherein Ps is the prediction probability, P(i-1) represents the (i-1)-th reference probability, and P[P(i-1)/P] represents the posterior probability of the (i-1)-th reference probability.
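A minimal sketch, assuming the prediction probability Ps is the sum of each reference probability weighted by its preset posterior probability, consistent with the definitions above; the sample values are purely illustrative.

```python
# Sketch of combining n reference probabilities with their preset posterior
# probabilities; the weighted-sum form and the sample values are assumptions.
def prediction_probability(reference_probs, posterior_probs):
    assert len(reference_probs) == len(posterior_probs)
    return sum(p * post for p, post in zip(reference_probs, posterior_probs))

reference_probs = [0.7, 0.9, 0.6, 0.5, 0.8]    # first to fifth reference
posterior_probs = [0.3, 0.2, 0.2, 0.15, 0.15]  # preset posteriors
Ps = prediction_probability(reference_probs, posterior_probs)
print(round(Ps, 3))  # 0.705 with these illustrative numbers
```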
The electronic device 500 provided in the embodiment of the present invention can implement each process implemented by the electronic device 500 in the method embodiment of fig. 1 to 4, and in order to avoid repetition, a description is omitted here.
In the embodiment of the invention, the data acquisition module 501 acquires the activity data of the user, and the state analysis module 502 performs state analysis on the user according to the activity data to obtain the state information of the user; the behavior prediction module 503 predicts the behavior of the user according to the state information, the activity data and the historical behavior data of the target behavior to obtain the predicted behavior of the user; and the device control module 504 controls the target device to set its operation mode to the operation mode corresponding to the predicted behavior, so that the target device is controlled automatically and intelligently to select an operation mode without manual selection by the user, and the operation mode can be switched at any time as the predicted behavior of the user changes.
FIG. 6 is a schematic diagram of a hardware architecture of an electronic device implementing various embodiments of the present invention;
the electronic device 600 includes, but is not limited to: radio frequency unit 601, network module 602, audio output unit 603, input unit 604, sensor 605, display unit 606, user input unit 607, interface unit 608, memory 609, processor 610, and power supply 611. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 6 is not limiting of the electronic device and that the electronic device may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In the embodiment of the invention, the electronic equipment comprises, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
Wherein, the processor 610 is configured to obtain activity data of a user; the activity data includes: at least one of picture data collected by a camera, environment data, data collected by a sensor and use data of the electronic equipment;
according to the activity data, carrying out state analysis on the user to obtain state information of the user;
according to the state information, the activity data and the historical behavior data of the target behavior, predicting the behavior of the user to obtain the predicted behavior of the user; the predicted behavior is the behavior with the highest predicted probability in the target behaviors;
and controlling the target equipment to set the operation mode to be the operation mode corresponding to the predicted behavior, wherein the target equipment is equipment in communication connection with the electronic equipment.
In the embodiment of the invention, the activity data of the user is acquired, and state analysis is performed on the user according to the activity data to obtain the state information of the user; then, the behavior of the user is predicted according to the state information, the activity data and the historical behavior data of the target behavior to obtain the predicted behavior of the user; finally, the target device is controlled to set its operation mode to the operation mode corresponding to the predicted behavior, so that the target device is controlled automatically and intelligently to select an operation mode without manual selection by the user, and the operation mode can be switched at any time as the predicted behavior of the user changes.
It should be noted that, in this embodiment, the above-mentioned electronic device 600 may implement each process in the method embodiment of the present invention and achieve the same beneficial effects, and in order to avoid repetition, the description is omitted here.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used to receive and send signals during the process of receiving and sending information or during a call; specifically, downlink data received from a base station is forwarded to the processor 610 for processing, and uplink data is sent to the base station. Typically, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 601 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 602, such as helping the user to send and receive e-mail, browse web pages, and access streaming media, etc.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 600. The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used for receiving audio or video signals. The input unit 604 may include a graphics processor (Graphics Processing Unit, GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 may receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601 and then output.
The electronic device 600 also includes at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 6061 and/or the backlight when the electronic device 600 moves to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for recognizing the posture of the electronic device (such as horizontal/vertical screen switching, related games, magnetometer posture calibration) and vibration-recognition related functions (such as pedometer and tapping); the sensor 605 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 606 is used to display information input by a user or information provided to the user. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, may collect touch operations performed on or near it by a user (for example, operations performed by the user on or near the touch panel 6071 using any suitable object or accessory such as a finger or a stylus). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands sent by the processor 610. In addition, the touch panel 6071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein.
Further, the touch panel 6071 may be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 610 to determine a type of a touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in fig. 6, the touch panel 6071 and the display panel 6061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 608 is an interface to which an external device is connected to the electronic apparatus 600. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 600 or may be used to transmit data between the electronic apparatus 600 and an external device.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a storage program area that may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 609, and calling data stored in the memory 609, thereby performing overall monitoring of the electronic device. The processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The electronic device 600 may also include a power supply 611 (e.g., a battery) for powering the various components, and preferably the power supply 611 may be logically coupled to the processor 610 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 600 includes some functional modules, which are not shown, and will not be described herein.
Preferably, the embodiment of the present invention further provides an electronic device, including a processor 610, a memory 609, and a computer program stored in the memory 609 and capable of running on the processor 610, where the computer program when executed by the processor 610 implements each process of the above embodiment of the target device control method, and the same technical effects can be achieved, and for avoiding repetition, a detailed description is omitted herein.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the respective processes of the above-mentioned target device control method embodiment, and can achieve the same technical effects, so that repetition is avoided, and no further description is given here. Wherein the computer readable storage medium is selected from Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.
Claims (9)
1. A target device control method, applied to an electronic device, the electronic device being a mobile terminal, characterized by comprising the following steps:
acquiring activity data of a user through the electronic equipment; the activity data includes: at least one of picture data collected by a camera, environment data, data collected by a sensor and use data of the electronic equipment;
according to the activity data, carrying out state analysis on the user to obtain state information of the user;
according to the state information, the activity data and the historical behavior data of the target behavior, predicting the behavior of the user to obtain the predicted behavior of the user; the predicted behavior is the behavior with the highest predicted probability in the target behaviors;
controlling a target device to set an operation mode to an operation mode corresponding to the predicted behavior, the target device being a device in communication connection with the electronic device;
the step of performing state analysis on the user according to the activity data to obtain state information of the user comprises the following steps:
and if the use data indicates that the electronic equipment is in a use state and the change amplitude of the depth information of the picture acquired by the camera in the preset time period is in a preset change range, determining that the state information of the user is a third state.
2. The method according to claim 1, wherein the step of predicting the behavior of the user based on the state information, the activity data, and the historical behavior data of the target behavior to obtain the predicted behavior of the user comprises:
acquiring target behaviors of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user;
for each target behavior, determining the prediction probability of the target behavior at the current moment according to the state information, the activity data and the historical behavior data of the target behavior;
And taking the target behavior with the highest prediction probability as the predicted behavior of the user.
3. The target device control method according to claim 2, wherein the step of determining, for each target behavior, a predicted probability that the target behavior occurs at a current time based on the state information, the activity data, and the historical behavior data of the target behavior, comprises:
for each target behavior, determining n reference probabilities of the first target behavior at the current moment according to the state information and the activity data; wherein n is a positive integer greater than or equal to 1;
determining a preset posterior probability corresponding to each reference probability;
and determining the predicted probability of the target behavior at the current moment according to the reference probability and the preset posterior probability.
4. The method according to claim 3, wherein the step of determining n reference probabilities of occurrence of the first target behavior at the current time based on the state information and the activity data includes:
determining a first reference probability of the state information under the first target behavior;
determining a second reference probability of the electronic device to generate a second target behavior associated with the first target behavior according to the usage data; and/or
Determining a third reference probability of the first target behavior at the current moment according to the time data in the environment data; and/or
Determining a fourth reference probability of occurrence of the first target behavior according to data acquired by a first sensor in the sensors; and/or
And determining a fifth reference probability of the first target behavior according to the data acquired by a second sensor in the sensors.
5. The method of claim 4, wherein the step of determining a second reference probability that the electronic device will exhibit a second target behavior associated with the first target behavior based on the usage data comprises:
determining, according to the use data, related parameters of the second target behavior occurring on the electronic device; the related parameters include whether the second target behavior has occurred, the time interval between its occurrence time and the current time, and the type of target object at which the target behavior is directed;
and according to a preset weight, weighting and summing each related parameter to obtain a second reference probability of the second target behavior.
6. The method of claim 4, wherein the step of determining a fourth reference probability of the first target behavior occurring based on the data collected by the first one of the sensors comprises:
Determining the temperature variation in the current environment according to the data acquired by the first sensor in the sensors;
and determining a fourth reference probability of the first target behavior according to the temperature variation.
7. The method of claim 4, wherein the step of determining a fifth reference probability of the first target behavior occurring based on the data collected by the second one of the sensors comprises:
determining the gas variation in the current environment according to the data acquired by a second sensor in the sensors;
and determining a fifth reference probability of the first target behavior according to the gas variation.
8. The target device control method according to claim 3, wherein the step of determining a predicted probability that the target behavior occurs at the current time based on the reference probability and the preset posterior probability includes:
determining the predicted probability of the target behavior at the current moment according to the reference probability, the preset posterior probability and the following formula:
Ps = Σ P(i-1) × P[P(i-1)/P], summed over i = 1 to n,
wherein Ps is the prediction probability, P(i-1) represents the (i-1)-th reference probability, and P[P(i-1)/P] represents the posterior probability of the (i-1)-th reference probability.
9. An electronic device, wherein the electronic device is a mobile terminal, the electronic device comprising:
the data acquisition module is used for acquiring activity data of a user; the activity data includes: at least one of picture data collected by a camera, environment data, data collected by a sensor and use data of the electronic equipment;
the state analysis module is used for carrying out state analysis on the user according to the activity data to obtain the state information of the user;
the behavior prediction module is used for predicting the behavior of the user according to the state information, the activity data and the historical behavior data of the target behavior to obtain the predicted behavior of the user; the predicted behavior is the behavior with the highest predicted probability in the target behaviors;
the device control module is used for controlling a target device to set an operation mode to be an operation mode corresponding to the predicted behavior, wherein the target device is a device in communication connection with the electronic device;
the state analysis module comprises:
and the third analysis sub-module is used for determining that the state information of the user is a third state if the use data indicates that the electronic equipment is in a use state and the change amplitude of the depth information of the picture acquired by the camera in a preset time period is within a preset change range.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911150788.3A CN111025922B (en) | 2019-11-21 | 2019-11-21 | Target equipment control method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911150788.3A CN111025922B (en) | 2019-11-21 | 2019-11-21 | Target equipment control method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111025922A CN111025922A (en) | 2020-04-17 |
CN111025922B true CN111025922B (en) | 2023-09-12 |
Family
ID=70201902
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911150788.3A Active CN111025922B (en) | 2019-11-21 | 2019-11-21 | Target equipment control method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111025922B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112255925A (en) * | 2020-10-19 | 2021-01-22 | 珠海格力电器股份有限公司 | Method and device for controlling intelligent household equipment and computer equipment |
CN112099374A (en) * | 2020-11-11 | 2020-12-18 | 广东恩胜科技有限公司 | Indoor environment comfort control method and system, electronic equipment and storage medium |
CN112782988A (en) * | 2020-12-30 | 2021-05-11 | 深圳市微网力合信息技术有限公司 | Control method of intelligent household curtain based on Internet of things |
CN114675551A (en) * | 2022-02-23 | 2022-06-28 | 青岛海尔科技有限公司 | Method and device for determining operation behavior, storage medium and electronic device |
CN115051354B (en) * | 2022-05-19 | 2024-09-10 | 深圳市创诺新电子科技有限公司 | Household power utilization system management method |
CN115268269B (en) * | 2022-07-29 | 2023-06-02 | 无锡市低碳研究院有限公司 | Household energy consumption optimization system and method based on new energy low carbon |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107065796A (en) * | 2017-03-30 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | A kind of electric control method based on user profile, device and system |
KR20170099721A (en) * | 2016-02-24 | 2017-09-01 | 삼성전자주식회사 | Server and controlling user environment method of electronic device using electronic device and at least one smart device |
CN107272433A (en) * | 2017-07-26 | 2017-10-20 | 深圳贯和通物联科技有限公司 | A kind of intelligent home furnishing control method and device |
CN107479393A (en) * | 2017-08-17 | 2017-12-15 | 北京天平检验行有限公司 | A kind of intelligent domestic system based on big data |
CN107666540A (en) * | 2017-10-17 | 2018-02-06 | 北京小米移动软件有限公司 | Terminal control method, device and storage medium |
CN107992003A (en) * | 2017-11-27 | 2018-05-04 | 武汉博虎科技有限公司 | User's behavior prediction method and device |
CN108427310A (en) * | 2018-05-17 | 2018-08-21 | 深圳市零度智控科技有限公司 | Intelligent home furnishing control method, device and computer readable storage medium |
CN109870919A (en) * | 2019-03-08 | 2019-06-11 | 佛山市云米电器科技有限公司 | A kind of intelligent home furnishing control method and system |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20170099721A (en) * | 2016-02-24 | 2017-09-01 | 삼성전자주식회사 | Server and controlling user environment method of electronic device using electronic device and at least one smart device |
CN107065796A (en) * | 2017-03-30 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | A kind of electric control method based on user profile, device and system |
CN107272433A (en) * | 2017-07-26 | 2017-10-20 | 深圳贯和通物联科技有限公司 | A kind of intelligent home furnishing control method and device |
CN107479393A (en) * | 2017-08-17 | 2017-12-15 | 北京天平检验行有限公司 | A kind of intelligent domestic system based on big data |
CN107666540A (en) * | 2017-10-17 | 2018-02-06 | 北京小米移动软件有限公司 | Terminal control method, device and storage medium |
CN107992003A (en) * | 2017-11-27 | 2018-05-04 | 武汉博虎科技有限公司 | User's behavior prediction method and device |
CN108427310A (en) * | 2018-05-17 | 2018-08-21 | 深圳市零度智控科技有限公司 | Intelligent home furnishing control method, device and computer readable storage medium |
CN109870919A (en) * | 2019-03-08 | 2019-06-11 | 佛山市云米电器科技有限公司 | A kind of intelligent home furnishing control method and system |
Also Published As
Publication number | Publication date |
---|---|
CN111025922A (en) | 2020-04-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111025922B (en) | Target equipment control method and electronic equipment | |
CN109946986B (en) | Household appliance control method, wearable device and computer readable storage medium | |
CN110032156B (en) | Control and adjustment method of household equipment, terminal and household equipment | |
CN108628217B (en) | Wearable device power consumption control method, wearable device and computer-readable storage medium | |
CN110013260B (en) | Emotion theme regulation and control method, equipment and computer-readable storage medium | |
CN110022401A (en) | A kind of control parameter setting method, terminal and computer readable storage medium | |
CN111415722B (en) | Screen control method and electronic equipment | |
CN108628515A (en) | A kind of operating method and mobile terminal of multimedia content | |
CN110519512A (en) | A kind of object processing method and terminal | |
CN109558046A (en) | A kind of information display method and terminal device | |
CN109167914A (en) | A kind of image processing method and mobile terminal | |
CN107832032A (en) | Screen locking display methods and mobile terminal | |
CN108600544A (en) | A kind of Single-hand control method and terminal | |
CN109521684A (en) | A kind of home equipment control method and terminal device | |
CN107967418B (en) | Face recognition method and mobile terminal | |
CN109785815A (en) | A kind of screen luminance adjustment method and mobile terminal | |
CN109862172A (en) | A kind of adjusting method and terminal of screen parameter | |
CN114077227A (en) | Page switching method and device, scene control panel, equipment and storage medium | |
CN109164908B (en) | Interface control method and mobile terminal | |
CN110058837A (en) | A kind of audio-frequency inputting method and terminal | |
CN114327332A (en) | Internet of things equipment setting method and device, electronic equipment and storage medium | |
CN109711282A (en) | Light adjusting method and device | |
CN109379503A (en) | A kind of income prompting method and mobile terminal | |
CN111294510A (en) | Monitoring method and electronic equipment | |
CN109324514A (en) | A kind of environment adjustment method and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |