CN111025922A - Target equipment control method and electronic equipment - Google Patents
- Publication number
- CN111025922A CN111025922A CN201911150788.3A CN201911150788A CN111025922A CN 111025922 A CN111025922 A CN 111025922A CN 201911150788 A CN201911150788 A CN 201911150788A CN 111025922 A CN111025922 A CN 111025922A
- Authority
- CN
- China
- Prior art keywords
- behavior
- data
- user
- target
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2642—Domotique, domestic, home control, automation, smart house
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
An embodiment of the invention provides a target device control method and an electronic device. The method is applied to the electronic device and comprises the following steps: acquiring activity data of a user; performing state analysis on the user according to the activity data to obtain state information of the user; performing behavior prediction on the user according to the state information, the activity data, and historical behavior data of target behaviors to obtain a predicted behavior of the user, wherein the predicted behavior is the target behavior with the highest predicted probability; and controlling a target device to set its operation mode to the operation mode corresponding to the predicted behavior, wherein the target device is a device in communication connection with the electronic device. The embodiment of the invention solves the prior-art problem that the operation mode of a smart home system must be frequently selected by the user.
Description
Technical Field
The present invention relates to the field of mobile communications technologies, and in particular, to a target device control method and an electronic device.
Background
With the rapid development of Artificial Intelligence (AI) technology, smart home systems are entering people's daily lives and trending toward "intellectualization" and "automation". A smart home connects the various devices in a home through Internet of Things technology, such as audio and video equipment, lighting systems, curtain control, air-conditioning control, security systems, digital cinema systems, audio/video servers, video cabinet systems, and network home appliances, and provides functions such as home-appliance control, lighting control, remote telephone control, indoor and outdoor remote control, burglar alarms, environment monitoring, heating and ventilation control, infrared forwarding, and programmable timed control over a network. Compared with an ordinary home, a smart home retains the traditional living functions while integrating building, network communication, information appliance, and equipment automation capabilities; it provides all-around information interaction and can even reduce energy expenses.
In the prior art, multiple operation modes are usually preset for a smart home system, and the user must select which mode to use. If the mode needs to be changed frequently during use, the process becomes cumbersome; as a result, the operation mode of the smart home system is often not the one best suited to the user's current state, and the advantages of the system cannot be used effectively.
Disclosure of Invention
An embodiment of the invention provides a target device control method and an electronic device, aiming to solve the prior-art problem that the operation mode of a smart home system must be frequently selected by the user.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a target device control method, which is applied to an electronic device, and the method includes:
acquiring activity data of a user; the activity data includes at least one of: picture data collected by a camera, environment data, data collected by a sensor, and usage data of the electronic device;
according to the activity data, performing state analysis on the user to obtain state information of the user;
performing behavior prediction on the user according to the state information, the activity data, and the historical behavior data of the target behaviors to obtain the predicted behavior of the user; wherein the predicted behavior is the target behavior with the highest predicted probability;
and controlling the target device to set an operation mode to an operation mode corresponding to the predicted behavior, wherein the target device is a device which is in communication connection with the electronic device.
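For illustration, the four claimed steps can be sketched as a simple control loop. This is a minimal sketch, not the patented implementation: the class, function, and mode names, the toy state rule, and the example probabilities are all assumptions introduced here.

```python
from dataclasses import dataclass, field

@dataclass
class ActivityData:
    frames: list = field(default_factory=list)       # picture data from the camera
    environment: dict = field(default_factory=dict)  # e.g. {"hour": 22}
    sensors: dict = field(default_factory=dict)      # e.g. {"smoke_ppm": 0.0}
    usage: dict = field(default_factory=dict)        # e.g. {"standby": True}

def analyze_state(a: ActivityData) -> str:
    # Step 2: derive state information from the activity data (toy rule).
    return "resting" if a.usage.get("standby") else "active"

def predict_behavior(state: str, a: ActivityData, history: dict) -> str:
    # Step 3: pick the target behavior with the highest predicted
    # probability given the state (toy lookup into historical data).
    probs = history.get(state, {})
    return max(probs, key=probs.get) if probs else "unknown"

def control_target_device(a: ActivityData, history: dict, modes: dict) -> str:
    # Steps 2-4 chained: state -> predicted behavior -> operation mode.
    behavior = predict_behavior(analyze_state(a), a, history)
    return modes.get(behavior, "default")

history = {"resting": {"sleeping": 0.7, "reading": 0.3}}
modes = {"sleeping": "quiet_warm", "reading": "soft_light"}
mode = control_target_device(ActivityData(usage={"standby": True}), history, modes)
# mode == "quiet_warm"
```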
Optionally, the step of analyzing the state of the user according to the activity data to obtain the state information of the user includes:
if the usage data indicates that the electronic device is in a standby state and the environment data indicates that the current time falls within a first preset time of the user, determining that the user's state information is a first state;
if the picture data acquired by the camera at successive intervals is identical and the environment data indicates that the current time falls within a second preset time of the user, determining that the user's state information is a second state; and/or
if the usage data indicates that the electronic device is in use and the variation of the depth information of the pictures acquired by the camera within a preset time period is within a preset range, determining that the user's state information is a third state;
wherein the first preset time and the second preset time are obtained by machine learning on the historical behavior data of the user.
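A toy version of the three cases above might look like the following. The time windows, depth threshold, and state names are illustrative assumptions, since the patent leaves them to machine learning and configuration.

```python
def analyze_state(standby: bool, in_use: bool, hour: int,
                  frames_identical: bool, depth_change: float,
                  first_slot=(22, 24), second_slot=(6, 8),
                  depth_range=0.05) -> str:
    # Case one: device on standby during the user's first preset time.
    if standby and first_slot[0] <= hour < first_slot[1]:
        return "first_state"
    # Case two: identical frames at each interval during the second preset time.
    if frames_identical and second_slot[0] <= hour < second_slot[1]:
        return "second_state"
    # Case three: device in use and picture depth nearly unchanged.
    if in_use and depth_change <= depth_range:
        return "third_state"
    return "unknown"

state = analyze_state(standby=True, in_use=False, hour=23,
                      frames_identical=False, depth_change=1.0)
# state == "first_state"
```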
Optionally, the step of performing behavior prediction on the user according to the state information, the activity data, and the historical behavior data of the target behavior to obtain the predicted behavior of the user includes:
acquiring a target behavior of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user;
for each target behavior, determining the prediction probability of the target behavior at the current moment according to the state information, the activity data and the historical behavior data of the target behavior;
and taking the target behavior with the maximum prediction probability as the predicted behavior of the user.
Optionally, the step of determining, for each target behavior, a predicted probability of the target behavior occurring at the current time according to the state information, the activity data, and the historical behavior data of the target behavior includes:
for each target behavior, determining n reference probabilities of the first target behavior occurring at the current moment according to the state information and the activity data; wherein n is a positive integer greater than or equal to 1;
determining a preset posterior probability corresponding to each reference probability;
and determining the prediction probability of the target behavior at the current moment according to the reference probability and the preset posterior probability.
Optionally, the step of determining n reference probabilities of the occurrence of the first target behavior at the current time according to the state information and the activity data includes:
determining a first reference probability of the state information under the first target behavior;
determining a second reference probability of the electronic device for a second target behavior associated with the first target behavior according to the usage data;
determining a third reference probability of the first target behavior occurring at the current moment according to the time data in the environment data;
determining a fourth reference probability of occurrence of the first target behavior according to data collected by a first sensor of the sensors; and/or
And determining a fifth reference probability of the first target behavior according to data collected by a second sensor of the sensors.
Optionally, the step of determining, according to the usage data, a second reference probability that the electronic device has a second target behavior associated with the first target behavior includes:
determining relevant parameters of the electronic device's second target behavior according to the usage data; the relevant parameters include whether the second target behavior has occurred, the time interval between its occurrence moment and the current moment, and the type of the target object at which the behavior is aimed;
and according to a preset weight value, weighting and summing each relevant parameter to obtain a second reference probability of the second target behavior.
Optionally, the step of determining a fourth reference probability of occurrence of the first target behavior according to data collected by a first sensor of the sensors includes:
determining the temperature variation in the current environment according to data collected by a first sensor of the sensors;
and determining a fourth reference probability of the target behavior according to the temperature variation.
Optionally, the step of determining a fifth reference probability of occurrence of the second target behavior according to data collected by a second sensor of the sensors includes:
determining the gas variation in the current environment according to data acquired by a second sensor of the sensors;
determining a fifth reference probability of the first target behavior occurring according to the gas variation.
Optionally, the step of determining a prediction probability of the target behavior occurring at the current time according to the reference probability and the preset posterior probability includes:
determining the prediction probability of the target behavior at the current moment according to the reference probability, the preset posterior probability and the following formula:
where Ps is the prediction probability and P(i-1) denotes the (i-1)-th reference probability;
p[P(i-1)/P] denotes the posterior probability corresponding to the (i-1)-th reference probability.
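The formula referenced above is reproduced in the original only as an image. Given the variable definitions, one plausible reading is a posterior-weighted sum over the n reference probabilities; this reconstruction is an assumption, not the patent's verbatim formula:

```latex
P_s = \sum_{i=1}^{n} P(i-1)\, p\big[P(i-1)/P\big]
```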
In a second aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
the data acquisition module is used for acquiring activity data of a user; the activity data includes at least one of: picture data collected by a camera, environment data, data collected by a sensor, and usage data of the electronic device;
the state analysis module is used for carrying out state analysis on the user according to the activity data to obtain state information of the user;
the behavior prediction module is used for predicting the behavior of the user according to the state information, the activity data and the historical behavior data of the target behavior to obtain the predicted behavior of the user; wherein the predicted behavior is a behavior with the highest predicted probability in the target behaviors;
and the equipment control module is used for controlling target equipment to set an operation mode to an operation mode corresponding to the predicted behavior, wherein the target equipment is equipment in communication connection with the electronic equipment.
Optionally, the state analysis module comprises:
the first analysis submodule is used for determining that the user's state information is a first state if the usage data indicates that the electronic device is in a standby state and the environment data indicates that the current time falls within a first preset time of the user;
the second analysis submodule is used for determining that the user's state information is a second state if the picture data acquired by the camera at successive intervals is identical and the environment data indicates that the current time falls within a second preset time of the user; and/or
the third analysis submodule is used for determining that the user's state information is a third state if the usage data indicates that the electronic device is in use and the variation of the depth information of the pictures acquired by the camera within a preset time period is within a preset range;
wherein the first preset time and the second preset time are obtained by machine learning on the historical behavior data of the user.
Optionally, the behavior prediction module comprises:
the behavior acquisition submodule is used for acquiring the target behavior of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user;
the probability determination submodule is used for determining the prediction probability of the target behavior at the current moment according to the state information, the activity data and the historical behavior data of the target behavior aiming at each target behavior;
and the behavior determination submodule is used for taking the target behavior with the maximum prediction probability as the predicted behavior of the user.
Optionally, the probability determination submodule includes:
the first probability determining unit is used for determining, for each target behavior, n reference probabilities of the first target behavior occurring at the current moment according to the state information and the activity data, wherein n is a positive integer greater than or equal to 1;
the second probability determining unit is used for determining a preset posterior probability corresponding to each reference probability;
and the third probability determining unit is used for determining the prediction probability of the target behavior at the current moment according to the reference probability and the preset posterior probability.
Optionally, the first probability determination unit is configured to:
determining a first reference probability of the state information under the first target behavior;
determining a second reference probability of the electronic device for a second target behavior associated with the first target behavior according to the usage data;
determining a third reference probability of the first target behavior occurring at the current moment according to the time data in the environment data;
determining a fourth reference probability of occurrence of the first target behavior according to data collected by a first sensor of the sensors; and/or
And determining a fifth reference probability of the first target behavior according to data collected by a second sensor of the sensors.
Optionally, the determining, according to the usage data, a second reference probability that the electronic device has a second target behavior associated with the first target behavior includes:
determining relevant parameters of the electronic device's second target behavior according to the usage data; the relevant parameters include whether the second target behavior has occurred, the time interval between its occurrence moment and the current moment, and the type of the target object at which the behavior is aimed;
and according to a preset weight value, weighting and summing each relevant parameter to obtain a second reference probability of the second target behavior.
Optionally, the determining, according to data collected by a first sensor of the sensors, a fourth reference probability of the first target behavior occurring includes:
determining the temperature variation in the current environment according to data collected by a first sensor of the sensors;
and determining a fourth reference probability of the target behavior according to the temperature variation.
Optionally, the determining, according to data collected by a second sensor of the sensors, a fifth reference probability of occurrence of the second target behavior includes:
determining the gas variation in the current environment according to data acquired by a second sensor of the sensors;
determining a fifth reference probability of the first target behavior occurring according to the gas variation.
Optionally, the third probability determination unit is configured to:
determining the prediction probability of the target behavior at the current moment according to the reference probability, the preset posterior probability and the following formula:
where Ps is the prediction probability and P(i-1) denotes the (i-1)-th reference probability;
p[P(i-1)/P] denotes the posterior probability corresponding to the (i-1)-th reference probability.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor executes the computer program to implement the steps in the target device control method described above.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps in the target device control method as described above.
In the embodiment of the invention, activity data of the user is acquired, and state analysis is performed on the user according to the activity data to obtain the user's state information; behavior prediction is then performed on the user according to the state information, the activity data, and the historical behavior data of the target behaviors to obtain the predicted behavior of the user; finally, the target device is controlled to set its operation mode to the operation mode corresponding to the predicted behavior. Operation-mode selection is thus automatic and intelligent, requiring no manual selection by the user, and the operation mode can be switched at any time as the user's predicted behavior changes.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a flowchart illustrating a target device control method according to an embodiment of the present invention;
fig. 2 is a second flowchart of a target device control method according to an embodiment of the present invention;
FIG. 3 shows a block diagram of an exemplary electronic device of an embodiment of the invention;
FIG. 4 shows a flow chart of an example of an embodiment of the present invention;
FIG. 5 shows one of the block diagrams of an electronic device provided by an embodiment of the invention;
fig. 6 shows a second block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a target device control method, applied to an electronic device, where the method includes:
Step 101, the electronic device acquires activity data of a user, wherein the activity data includes at least one of picture data, environment data, data collected by a sensor, and usage data of the electronic device. Specifically, the camera may be a front camera or a rear camera of the electronic device. Picture data is acquired through the camera, and the user's state can be analyzed from it; for example, if the images acquired by the camera at preset intervals are the same, the user is in a repetitive motion state, such as the rhythmic swing of the arms while walking. As another example, while the user is using the terminal device, the pictures acquired by the front camera may be continuous pictures of the user's face.
The environment data includes time data, temperature data, and the like.
The sensor may be a motion-type sensor, such as a speed sensor or an angle sensor, or a detection-type sensor, such as a smoke sensor or a temperature sensor. The data detected by the sensors may also be used to analyze user behavior; for example, a smoke sensor may be used to detect whether the user is currently smoking.
The usage data of the electronic device is used to indicate whether the user is currently using the electronic device, which applications of the electronic device are used, and the like.
Step 102, performing state analysis on the user according to the activity data to obtain the state information of the user.
The state information describes the current state of the user, such as a still state, a standing state, a lying state, or a repetitive motion state. Most user behaviors are correlated with the state information; for example, if the user behavior is eating, the state information will not be a lying state; as another example, if the user behavior is running, the state information is typically a repetitive motion state.
Alternatively, the probability of occurrence of each target behavior under each state information may be set in advance, and then the target behavior may be predicted by the probability.
Step 103, performing behavior prediction on the user according to the state information, the activity data, and the historical behavior data of the target behaviors to obtain the predicted behavior of the user, wherein the predicted behavior is the target behavior with the highest predicted probability.
A plurality of target behaviors, such as eating, smoking, and resting, can be preset; the target behaviors can also be obtained by the electronic device through machine learning on the user's historical behavior data. The historical behavior data includes the activity data and the data range of the state information under each target behavior; for example, if the activity data and the state information are treated as a group of data, the historical behavior data includes the predicted probability of each target behavior for every different group of data.
Specifically, for a group of data (A, B), suppose the probabilities of the three target behaviors M, N, and O occurring under this group of data are P1, P2, and P3 respectively, according to their historical behavior data; the target behavior corresponding to the largest of P1, P2, and P3 is taken as the predicted behavior of the user.
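The selection step in this example is a direct argmax; the numeric values below are illustrative placeholders for P1, P2, and P3.

```python
# Predicted probabilities of target behaviors M, N, O for data group (A, B);
# the values are placeholders, not from the patent.
probs = {"M": 0.2, "N": 0.7, "O": 0.1}  # P1, P2, P3
predicted = max(probs, key=probs.get)   # behavior with the largest probability
# predicted == "N"
```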
Step 104, controlling a target device to set its operation mode to the operation mode corresponding to the predicted behavior, wherein the target device is a device in communication connection with the electronic device.
After the predicted behavior of the user is obtained, the operation mode of the target device in communication connection with the electronic device is controlled according to the predicted behavior. For example, if the target device is an air conditioner, it is controlled to run in the operation mode corresponding to the predicted behavior: if the predicted behavior is smoking, the air conditioner is controlled to start a ventilation mode to expel the smoke; if the predicted behavior is resting, the air conditioner is controlled to raise the temperature to a preset value so that the user does not catch a cold while resting.
Alternatively, if the target device is a music playing device, it is controlled to run in the mode corresponding to the predicted behavior: if the predicted behavior is running, the music playing device is controlled to play strongly rhythmic music; if the predicted behavior is browsing the electronic device, it is controlled to play relaxed, leisurely music; and if the predicted behavior is resting, it is controlled to stop playing music.
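The two device examples above amount to a lookup table from (device, predicted behavior) to an operation mode. The mode names below are illustrative assumptions; the patent does not enumerate concrete mode identifiers.

```python
# Mapping mirroring the air-conditioner and music-player examples
# in the description; mode names are assumptions.
MODE_TABLE = {
    "air_conditioner": {
        "smoking": "ventilation",
        "resting": "raise_temperature",
    },
    "music_player": {
        "running": "upbeat_playlist",
        "browsing": "relaxed_playlist",
        "resting": "stop_playback",
    },
}

def mode_for(device: str, predicted_behavior: str,
             default: str = "keep_current") -> str:
    # Step 4: look up the operation mode for the predicted behavior.
    return MODE_TABLE.get(device, {}).get(predicted_behavior, default)

m = mode_for("air_conditioner", "smoking")
# m == "ventilation"
```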
Therefore, after the predicted behavior is obtained, the electronic equipment controls the target device to operate in the operation mode corresponding to the predicted behavior, so that operation mode selection is realized automatically without manual selection by the user. Moreover, the operation mode can be changed at any time as the predicted behavior of the user changes, ensuring that the operation mode of the target device stays consistent with the predicted behavior, meeting the use requirements of the user in different states and fully exploiting the intelligence of the target device.
In the embodiment of the invention, the activity data of the user are acquired, and state analysis is performed on the user according to the activity data to obtain the state information of the user; then, behavior prediction is performed on the user according to the state information, the activity data and the historical behavior data of the target behaviors to obtain the predicted behavior of the user; finally, the target device is controlled to set its operation mode to the operation mode corresponding to the predicted behavior, so that the target device selects its operation mode automatically and intelligently without manual selection by the user, and the operation mode can be switched at any time as the predicted behavior of the user changes. The embodiment of the invention thus solves the prior-art problem that the operation mode of a smart home system must be selected by the user.
Optionally, in this embodiment of the present invention, step 102 includes:
in case one, if the usage data indicates that the electronic device is in a standby state and the environmental data indicates that the current time is a first preset time of the user, determining that the state information of the user is in a first state;
in case two, if the image data acquired by the camera at each interval time are the same and the environmental data indicate that the current time is the second preset time of the user, determining that the state information of the user is in a second state; and/or
If the usage data indicates that the electronic equipment is in a usage state and the change range of the depth information of the picture acquired by the camera in a preset time period is within a preset change range, determining that the state information of the user is in a third state;
the first preset time and the second preset time are obtained by machine learning according to the historical behavior data of the user.
Specifically, in the first case, if the usage data indicates that the electronic device is in a standby state and the environmental data indicates that the current time is the first preset time of the user, the state information of the user is determined to be the first state. The first preset time is the rest time of the user determined through machine learning, for example, midnight or midday, and the first state is the stationary state of the user.
In the second case, if the picture data acquired by the camera at each interval are the same, and the environmental data indicate that the current time is the second preset time of the user, the state information of the user is determined to be the second state. The second preset time is the repetitive-movement time of the user determined through machine learning, and the repetitive movement may be walking, running, or the like. For example, when the front camera detects that, within a fixed time window (for example, within 10 s), the user's limb repeats a fixed action at a certain frequency, such as the arm repeatedly entering and exiting the lens in the up-down or left-right direction, and the current time data from the network side indicates that the current time is the second preset time (for example, 6am to 12pm), it can be determined that the user is in a state of repeated fixed motion.
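One minimal way to judge "a fixed action repeated at a certain frequency" is to count how often the limb enters and exits the front-camera frame within the fixed window; the frame rate and frequency band below are illustrative assumptions, not values from the embodiment.

```python
def is_repetitive_motion(presence_flags, fps=10, min_hz=0.5, max_hz=3.0):
    """Judge repeated fixed actions from per-frame flags indicating whether
    the user's limb is inside the front-camera frame.

    Each enter/exit of the frame is one transition; a limb swinging in and
    out at a steady pace (e.g. an arm during walking or running) yields a
    cycle rate inside [min_hz, max_hz]. Thresholds are illustrative.
    """
    transitions = sum(
        1 for a, b in zip(presence_flags, presence_flags[1:]) if a != b
    )
    seconds = len(presence_flags) / fps
    cycle_rate = transitions / (2 * seconds)  # two transitions per full cycle
    return min_hz <= cycle_rate <= max_hz
```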
In the third case, if the usage data indicates that the electronic device is in a usage state and the variation amplitude of the depth information of the picture acquired by the camera within a preset time period is within a preset variation range, the state information of the user is determined to be the third state. The depth information here is the scene depth of the picture, i.e. the distance of the objects in the picture from the camera. If the depth of the picture acquired by the camera changes little within the preset time period, i.e. stays within the preset variation range, it can be judged that the user is using the electronic equipment; further, whether the user is lying down can be judged from the relative relation between the foreground and the background (such as the face and the background) in the front-lens image.
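The three cases can be combined into one state classifier; the boolean inputs summarize the usage data, the environmental time data, the camera frame comparison and the depth-change check, and all names are illustrative.

```python
def analyze_state(standby, in_use, at_first_preset_time,
                  at_second_preset_time, frames_identical,
                  depth_change_in_range):
    """Map the activity-data summaries to one of the three states above."""
    if standby and at_first_preset_time:
        return "first_state"   # stationary, e.g. resting
    if frames_identical and at_second_preset_time:
        return "second_state"  # repetitive movement, e.g. walking or running
    if in_use and depth_change_in_range:
        return "third_state"   # using the device, possibly lying down
    return "unknown"
```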
Optionally, in an embodiment of the present invention, step 103 includes:
acquiring a target behavior of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user;
for each target behavior, determining the prediction probability of the target behavior at the current moment according to the state information, the activity data and the historical behavior data of the target behavior;
and taking the target behavior with the maximum prediction probability as the predicted behavior of the user.
The historical behavior data can include the activity data and state information of the user during a learning period; during the learning period, the activity data and state information of the user are collected and used for machine learning to obtain the target behaviors of the user.
The historical behavior data comprises the occurrence probability (namely the prediction probability) corresponding to the different data ranges of the user's state information under each target behavior; for example, if the activity data and the state information are taken as a group of data, the historical behavior data includes the prediction probability corresponding to each group of data ranges, where each group of data ranges may include the respective data ranges of the activity data and the state information.
For each target behavior, the prediction probability of the target behavior at the current moment under the state information and the activity data is determined according to the historical behavior data of the target behavior, and the target behavior with the maximum prediction probability is taken as the predicted behavior of the user. Specifically, for a group of data (A, B), the probabilities of occurrence of the three target behaviors M, N and O under the group of data are determined from their respective historical behavior data as P1, P2 and P3, and the target behavior corresponding to the maximum value among P1, P2 and P3 is taken as the predicted behavior of the user.
Referring to fig. 2, an embodiment of the present invention provides a target device control method, applied to an electronic device, where the method includes:
Each reference probability corresponds to one prediction direction; for example, the reference probability corresponding to the state information represents the probability contributed by the state information. n prediction directions can be preset for each target behavior, and the reference probability in each prediction direction is determined according to the state information and the activity data.
The posterior probability represents, given that an event has occurred, the likelihood that it was caused by a certain factor; here the factor is the reference probability in a prediction direction, and for a target behavior the posterior probability is the probability that the target behavior occurs when the reference probability in that prediction direction takes a certain value. For each target behavior, the posterior probability corresponding to each reference probability is preset and can be obtained by machine learning; the correspondence between each posterior probability and its reference probability is then recorded.
And step 206, determining the prediction probability of the target behavior at the current moment according to the reference probability and the preset posterior probability.
That is, each reference probability is multiplied by its corresponding posterior probability, and the products are summed to obtain the total prediction probability of the target behavior at the current moment.
And step 207, taking the target behavior with the maximum prediction probability as the predicted behavior of the user.
And 208, controlling a target device to set an operation mode to an operation mode corresponding to the predicted behavior, wherein the target device is a device in communication connection with the electronic device.
Optionally, in this embodiment of the present invention, the step of determining, according to the state information and the activity data, n reference probabilities of the first target behavior occurring at the current time includes:
determining a first reference probability of the state information under the first target behavior;
determining a second reference probability of the electronic device for a second target behavior associated with the first target behavior according to the usage data;
determining a third reference probability of the first target behavior occurring at the current moment according to the time data in the environment data;
determining a fourth reference probability of occurrence of the first target behavior according to data collected by a first sensor of the sensors; and/or
And determining a fifth reference probability of the first target behavior according to data collected by a second sensor of the sensors.
For each target behavior, the first reference probability corresponding to each piece of state information is preset, so the first reference probability is determined once the state information is determined. Similarly, the third reference probability corresponding to each piece of time data is also preset.
Further, the step of determining a second reference probability that the electronic device will generate a second target behavior associated with the first target behavior according to the usage data includes:
determining relevant parameters of the electronic equipment generating the second target behavior according to the usage data; the relevant parameters comprise whether the second target behavior occurs, the time interval between its occurrence moment and the current moment, and the type of the target object at which the second target behavior is directed;
and according to a preset weight value, weighting and summing each relevant parameter to obtain a second reference probability of the second target behavior.
The second reference probability involves a plurality of relevant parameters: whether the second target behavior occurs, the time interval between its occurrence moment and the current moment, and the type of the target object at which it is directed. After the relevant parameters are determined from the usage data, each relevant parameter is weighted and the results are summed to obtain the second reference probability of the second target behavior. For example, if the first target behavior is eating, the second target behavior is ordering takeout food; the relevant parameters then include whether takeout food was ordered, the time interval between the ordering moment and the current moment, and the type of target object purchased.
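The weighted summation of the relevant parameters can be sketched as below; the weights, the normalization of each parameter and the type scores are illustrative assumptions, not values given by the embodiment.

```python
def second_reference_probability(occurred, hours_since, object_type,
                                 weights=(0.5, 0.3, 0.2)):
    """Weighted sum of the relevant parameters of the associated behavior.

    Each parameter is first normalized to [0, 1]: whether the behavior
    occurred, how recently it occurred (within the last hour), and a score
    for the type of target object purchased.
    """
    type_scores = {"cigarettes": 1.0, "hot_food": 0.8, "drinks": 0.4}
    occurred_score = 1.0 if occurred else 0.0
    recency_score = max(0.0, 1.0 - hours_since)   # decays to 0 after 1 hour
    type_score = type_scores.get(object_type, 0.0)
    w1, w2, w3 = weights
    return w1 * occurred_score + w2 * recency_score + w3 * type_score
```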
Optionally, in an embodiment of the present invention, the step of determining a fourth reference probability of occurrence of the first target behavior according to data collected by a first sensor of the sensors includes:
determining the temperature variation in the current environment according to data collected by a first sensor of the sensors;
and determining a fourth reference probability of the target behavior according to the temperature variation.
The first sensor can be an infrared sensor or a temperature sensor, used to obtain indoor heat and temperature variation. The fourth reference probability of the target behavior is determined according to the temperature variation; for example, if the target behavior is eating, eating may cause a temperature variation around the electronic device. Optionally, the temperature variation may be represented as a variation percentage, with a corresponding reference probability preset for each variation percentage, so that once the temperature variation is obtained, the fourth reference probability of the target behavior can be determined from it.
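The preset correspondence between variation percentages and reference probabilities amounts to a bucket lookup; the bucket boundaries and probabilities below are illustrative assumptions.

```python
def fourth_reference_probability(temp_change_percent):
    """Look up the reference probability preset for a temperature-variation
    percentage; a larger rise around the device makes behaviors such as
    eating more likely. Bucket values are illustrative."""
    buckets = [(1.0, 0.1), (3.0, 0.4), (5.0, 0.7)]  # (upper bound %, prob)
    for upper_bound, probability in buckets:
        if temp_change_percent < upper_bound:
            return probability
    return 0.9  # variation of 5% or more
```

The fifth reference probability from the gas sensor can use the same percentage-bucket pattern with its own preset table.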
Optionally, in an embodiment of the present invention, the step of determining a fifth reference probability of occurrence of the first target behavior according to data collected by a second sensor of the sensors includes:
determining the gas variation in the current environment according to data acquired by a second sensor of the sensors;
determining a fifth reference probability of the first target behavior occurring according to the gas variation.
Optionally, the second sensor may be a smoke sensor or a gas sensor, and the gas variation in the current environment is determined according to the data collected by the second sensor; for example, if the target behavior is smoking, the fifth reference probability of occurrence of the first target behavior may be determined according to the gas variation. Optionally, the gas variation may be represented as a variation percentage, with a corresponding reference probability preset for each variation percentage, so that once the gas variation is obtained, the fifth reference probability of the target behavior can be determined from it.
Optionally, in an embodiment of the present invention, step 206 includes:
determining the prediction probability of the target behavior at the current moment according to the reference probabilities, the preset posterior probabilities and the following Equation 1:

Equation 1: Ps = Σ (i = 1 to n) P(i-1) * p[P(i-1)/P]

Further, the above Equation 1 may also be written in the expanded form of the following Equation 2:

Equation 2: Ps = P0*p(P0/P) + P1*p(P1/P) + … + P(n-1)*p[P(n-1)/P]

wherein 1 ≤ i ≤ n; Ps is the prediction probability, and P(i-1) represents the (i-1)th reference probability; p[P(i-1)/P] represents the posterior probability corresponding to the (i-1)th reference probability, that is, the predicted probability that the target behavior occurs when the reference probability is P(i-1); Ps is the sum of the contributions of all the reference probabilities, i.e. the total prediction probability that the target behavior occurs.
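Equation 2 can be evaluated directly as the sum of each reference probability weighted by its preset posterior probability; the numeric values in the check below are illustrative.

```python
def prediction_probability(reference_probs, posterior_probs):
    """Compute Ps = P0*p(P0/P) + ... + P(n-1)*p[P(n-1)/P]: the sum, over
    all n prediction directions, of the reference probability times its
    preset posterior probability."""
    if len(reference_probs) != len(posterior_probs):
        raise ValueError("one posterior probability per reference probability")
    return sum(r * p for r, p in zip(reference_probs, posterior_probs))
```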
Referring to fig. 3, as an example, taking the target device as an air conditioner and the target behavior as a smoking or eating behavior, the electronic device mainly includes the following units:
an activity data obtaining unit 301, configured to obtain current activity data of a user, and obtain current time, indoor/outdoor temperature, user behavior conditions, and the like through a camera, a network side, and a mobile device side;
a state analysis unit 302, which analyzes the state types of the user from the user activity data, such as still, repeated movement, lying down, etc., by combining the current environment data and the user behavior data brought by the camera;
the behavior prediction unit 303 is configured to collect statistics on a large amount of online user data by combining the raw user data and the user state, and, using machine learning together with a big-data model, regress and predict the current behavior of the user, such as sleeping, eating, exercising, bathing and entertainment;
and the scheme generation unit 304 is used for providing an intelligent air-conditioning control scheme for each user according to the behavior prediction result and combining a large amount of user data, and continuously updating and learning the user-specific air-conditioning scheme model.
As shown in fig. 4, the target device control method provided in the embodiment of the present invention mainly includes the following steps:
the air conditioning equipment generally only comprises functions of room temperature acquisition and the like, the acquired data source is single, electronic equipment such as a mobile phone and a tablet personal computer is owned by most users, the indoor use frequency is high, and multi-user data can be acquired through mobile equipment.
The activity data obtaining unit 301 acquires RGB and depth data from the picture data collected by the camera; obtains current environment data such as weather and position through the network side; and acquires usage data of the electronic equipment, including app running data on the electronic device and daily behavior data of the user.
After the current activity data of the user are obtained through the activity data obtaining unit 301, the correspondence between the activity data and the user state needs to be established, that is, the current state of the user (standing, sitting, lying down, etc.) is obtained from the various data (camera, network, mobile-end data, etc.); it is reasonable to believe, for instance, that most people do not smoke or eat while lying down. Considering the complexity of the data and the diversity of possible states, the user state is analyzed by integrating the metadata (camera images, network information, mobile device usage information).
When the electronic device detects that the mobile phone has been continuously in standby for a certain time, and this is combined with the current time data acquired from the network side (for example, midnight or midday), it can be judged with high probability that the user is in a stationary state;
when the picture data collected by the front camera detects that the picture data is in a fixed time (for example, within 10 s), the limbs of the user repeat fixed actions at a certain frequency, the arms of the user repeat the fixed actions back and forth, for example, the user enters or exits the lens in the up-down direction, the left-right direction and the like, the current time is determined to be the second preset time (for example, 6am to 12pm) by combining the current time data of the network end, and the state of the user in the repeated fixed motion can be judged.
When the system continuously detects an app running on the electronic device, the face information of the user is detected through the front lens, and the corresponding depth information changes little over that period, it can be judged that the user is using the mobile device; further, whether the user is lying down can be judged from the relative relation between the foreground and the background (the face and the background) in the front-lens image.
And step 403, predicting the current behavior of the user.
After the current state information of the user is acquired through the state analysis unit 302, the current behavior of the user can be predicted for a specific user according to the different states. The behaviors of the user at the current time and in the current state, such as eating, smoking and reading, can be judged more accurately by further combining the time data from the network side, the usage data from the mobile device and the user's living habits stored in the database. This scheme performs prediction using machine learning combined with the user's big data, and establishes a large network database of possible user behaviors such as eating, smoking, resting and reading.
The detection of smoking or eating behavior is taken as an example and developed in detail as follows:
specifically, when the state analysis module detects that the user is in a repeated fixed motion state facing the mobile device, a first reference probability P0 is generated;
the electronic device detects that the user has recently (within 1 hour) used a takeout APP on the mobile device; if higher privacy authority has been obtained, the products purchased by the user (cigarettes, drinks, hot food, etc.) can be obtained accurately, and a second reference probability P1 is generated according to the time at which the takeout purchase occurred and whether the product types include cigarettes, food and the like;
acquiring current time information and user living habits in a database, and determining a third reference probability P2 according to the current user information and information in a big database;
indoor heat and temperature changes are obtained through the infrared sensor and the temperature sensor, and a fourth reference probability P3 is determined according to the current temperature change;
acquiring the current indoor gas variation through a smoke and smell detection port, and determining a fifth reference probability P4;
determining the prediction probability of the target behavior at the current moment according to the reference probabilities, the preset posterior probabilities and Equation 2 above, namely:

Ps = P0*p(P0/P) + P1*p(P1/P) + P2*p(P2/P) + P3*p(P3/P) + P4*p(P4/P)
Therefore, by collecting and counting a large amount of user data over the network based on the user's current multi-source information, the judgment and prediction of eating or smoking behavior in the current user state can be realized, i.e. smoking or eating while using the mobile device; the larger Ps is, the more likely the current smoking or eating behavior of the user is, and the exclusive air-conditioner setting scheme is then started.
And step 404, controlling the air conditioner to operate according to the operation mode corresponding to the predicted behavior.
After the indoor user behavior prediction is obtained, an optimal air-conditioner setting scheme is provided automatically, without user interaction or manual adjustment; the scheme can be imported from an online large database or set manually by the user in advance.
Specifically, each group of people corresponds to one class of targets in daily life and has similar living habits, while the living habits (behaviors) of different groups differ; that is, an exclusive scheme for a target group is generated based on the different groups (different living habits). The air-conditioning scheme itself has several attributes, including on/off time, temperature, humidity, intensity, silence and so on. Users in different age groups have different air-conditioner usage habits: for example, the elderly turn on the air conditioner less, the middle-aged turn it on more and are used to a moderate temperature, and the young turn it on more and leave it on all night. Users of different genders also differ: for example, male users are more susceptible to heat and female users more susceptible to cold.
for each specific user, each person has own air conditioner use habit, including use time, use duration, use season \ weather, use mode, and the like.
In view of the diversity of the schemes and user targets, the implementation in this example employs the Spark distributed processing engine. First, the air-conditioner usage information of each user is modeled as an object and placed in a distributed (key, value) collection, i.e. a PairRDD; classification tasks then allow the user schemes to be executed in parallel in the distributed environment. Different key fields may reside on different machines; for the user information, the key is the user identifier and the value is the aggregated attribute table of the user's air-conditioner usage.
Secondly, when a new cluster is obtained, the original PairRDD is transformed into a new PairRDD and the cluster attributes are determined; as tasks continue to execute, the user information is continuously updated, which in turn drives the iteration of the user scheme.
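The (key, value) layout described above can be mimicked without a Spark dependency; in an actual deployment the same grouping would be expressed with PairRDD operations such as `groupByKey`, so the sketch below only illustrates the data model.

```python
from collections import defaultdict

def build_user_pairs(records):
    """Group air-conditioner usage records into (key, value) pairs keyed by
    user identifier, mirroring the PairRDD layout: the key is the user
    identifier, the value aggregates that user's usage attributes."""
    pairs = defaultdict(list)
    for user_id, attributes in records:
        pairs[user_id].append(attributes)
    return dict(pairs)
```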
Specifically, when the behavior prediction unit 303 predicts that the current behavior of the user is sleep, the system sends a command to turn on the "sleep mode" of the air conditioner and mutes it, based on the current season and weather information obtained from the network side, information such as the user's age, gender and physiological habits in the network database, and the optimal air-conditioner configuration attributes for the user in the big data. If the current outdoor temperature is cool, the heating mode is turned on and the temperature is lowered once every hour; otherwise, the temperature is raised once every hour;
or when the behavior prediction unit 303 predicts that the current behavior of the user is eating, the system sends a command to the air conditioner to switch to the ventilation mode, so that the food smell is dissipated;
if the behavior prediction unit 303 predicts that the current behavior of the user is smoking, and acquires the current season and weather information according to the network, the system sends a command to the air conditioner to start ventilation/start standby, and prompts the user to open a window;
when the behavior prediction unit 303 predicts that the current behavior of the user is leisure and entertainment, the optimal air-conditioner scheme combination is obtained according to the different entertainment modes in the network database: if the user is detected to be reading, playing a game or watching a video, a command is sent to the air conditioner to turn it on or mute it; and if the user is detected to be lying down, a command is sent to the air conditioner to raise the temperature appropriately so that the user does not catch a cold.
In the embodiment of the invention, the activity data of the user are acquired, and state analysis is performed on the user according to the activity data to obtain the state information of the user; then, behavior prediction is performed on the user according to the state information, the activity data and the historical behavior data of the target behaviors to obtain the predicted behavior of the user; finally, the target device is controlled to set its operation mode to the operation mode corresponding to the predicted behavior, so that the target device selects its operation mode automatically and intelligently without manual selection by the user, and the operation mode can be switched at any time as the predicted behavior of the user changes.
With the above description of the target device control method according to the embodiment of the present invention, the electronic device according to the embodiment of the present invention will be described with reference to the accompanying drawings.
Referring to fig. 5, an embodiment of the present invention further provides an electronic device 500, where the electronic device 500 includes:
a data obtaining module 501, configured to obtain activity data of a user; the activity data includes: at least one of picture data collected by a camera, environment data, data collected by a sensor, and usage data of the electronic device 500;
a state analysis module 502, configured to perform state analysis on the user according to the activity data to obtain state information of the user;
a behavior prediction module 503, configured to perform behavior prediction on the user according to the state information, the activity data, and historical behavior data of the target behavior, so as to obtain a predicted behavior of the user; wherein the predicted behavior is a behavior with the highest predicted probability in the target behaviors;
a device control module 504, configured to control a target device to set an operation mode to an operation mode corresponding to the predicted behavior, where the target device is a device communicatively connected to the electronic device 500.
Optionally, in this embodiment of the present invention, the state analysis module 502 includes:
a first analysis sub-module, configured to determine that the state information of the user is in a first state if the usage data indicates that the electronic device 500 is in a standby state and the environmental data indicates that the current time is a first preset time of the user;
the second analysis submodule is used for determining that the state information of the user is in a second state if the image data acquired by the camera at each interval time are the same and the environmental data indicate that the current time is second preset time of the user; and/or
A third analysis sub-module, configured to determine that the state information of the user is a third state if the usage data indicates that the electronic device 500 is in a usage state and a change range of depth information of a picture acquired by the camera within a preset time period is within a preset change range;
the first preset time and the second preset time are obtained by machine learning according to the historical behavior data of the user.
Optionally, in this embodiment of the present invention, the behavior prediction module 503 includes:
the behavior acquisition submodule is used for acquiring the target behavior of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user;
the probability determination submodule is used for determining the prediction probability of the target behavior at the current moment according to the state information, the activity data and the historical behavior data of the target behavior aiming at each target behavior;
and the behavior determination submodule is used for taking the target behavior with the maximum prediction probability as the predicted behavior of the user.
Optionally, in this embodiment of the present invention, the probability determining sub-module includes:
the first probability determining unit is used for determining n reference probabilities of the first target behaviors occurring at the current moment according to the state information and the activity data aiming at each target behavior; wherein n is a positive integer greater than or equal to 1;
the second probability determining unit is used for determining a preset posterior probability corresponding to each reference probability;
and the third probability determining unit is used for determining the prediction probability of the target behavior at the current moment according to the reference probability and the preset posterior probability.
Optionally, in an embodiment of the present invention, the first probability determining unit is configured to:
determining a first reference probability of the state information under the first target behavior;
determining a second reference probability that the electronic device 500 has a second target behavior associated with the first target behavior according to the usage data;
determining a third reference probability of the first target behavior occurring at the current moment according to the time data in the environment data;
determining a fourth reference probability of occurrence of the first target behavior according to data collected by a first sensor of the sensors; and/or
And determining a fifth reference probability of the first target behavior according to data collected by a second sensor of the sensors.
Optionally, in this embodiment of the present invention, the determining, according to the usage data, a second reference probability that a second target behavior associated with the first target behavior occurs on the electronic device 500 includes:
determining, according to the usage data, relevant parameters of the second target behavior occurring on the electronic device 500; the relevant parameters include whether the second target behavior has occurred, the time interval between its occurrence moment and the current moment, and the type of the target object at which the second target behavior is directed;
and performing a weighted summation of the relevant parameters according to preset weight values to obtain the second reference probability of the second target behavior.
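The weighted summation just described can be sketched as follows. The parameter encoding, the decay applied to the time interval, and the weight values are all illustrative assumptions; the specification fixes no concrete values:

```python
# Illustrative sketch of the weighted summation of the relevant
# parameters. Encodings and weights are assumptions for illustration.

def second_reference_probability(occurred, minutes_since, object_type_score,
                                 weights=(0.5, 0.3, 0.2)):
    """Weighted sum of the relevant parameters of the second target behavior.

    occurred          -- 1.0 if the second target behavior has occurred, else 0.0
    minutes_since     -- interval between its occurrence moment and the current moment
    object_type_score -- score in [0, 1] encoding the type of target object
    weights           -- preset weight values for the three parameters
    """
    # Recent occurrences should contribute more, so map the interval
    # into (0, 1] with a simple decay (an assumed encoding).
    recency = 1.0 / (1.0 + minutes_since)
    w1, w2, w3 = weights
    p = w1 * occurred + w2 * recency + w3 * object_type_score
    return min(max(p, 0.0), 1.0)  # clamp to a valid probability

print(second_reference_probability(1.0, 5.0, 0.8))
```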
Optionally, in this embodiment of the present invention, the determining, according to data collected by a first sensor of the sensors, a fourth reference probability of occurrence of the first target behavior includes:
determining the temperature variation in the current environment according to data collected by a first sensor of the sensors;
and determining a fourth reference probability of the first target behavior occurring according to the temperature variation.
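One plausible reading of this step is a saturating mapping from temperature change to probability; the sensitivity constant and the linear shape are assumptions, since the specification only says the probability is determined from the temperature variation:

```python
# Assumed mapping from temperature variation to a reference probability.

def fourth_reference_probability(temp_change_celsius, scale=2.0):
    """Map the temperature variation in the current environment to a
    reference probability of the first target behavior occurring.

    `scale` is an assumed sensitivity constant: the larger the rise
    (e.g. from hot food or a lit cigarette near the device), the higher
    the probability, saturating at 1.0.
    """
    if temp_change_celsius <= 0:
        return 0.0
    return min(temp_change_celsius / scale, 1.0)
```

The fifth reference probability (from the gas variation measured by the second sensor) could follow the same pattern with its own scale.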
Optionally, in an embodiment of the present invention, the determining, according to data collected by a second sensor of the sensors, a fifth reference probability of occurrence of the first target behavior includes:
determining the gas variation in the current environment according to data acquired by a second sensor of the sensors;
determining a fifth reference probability of the first target behavior occurring according to the gas variation.
Optionally, in this embodiment of the present invention, the third probability determining unit is configured to:
determining the prediction probability of the target behavior at the current moment according to the reference probabilities, the preset posterior probabilities and the following formula:
where Ps is the prediction probability, P(i-1) represents the (i-1)-th reference probability, and P[P(i-1)/P] represents the preset posterior probability corresponding to the (i-1)-th reference probability.
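The formula image itself does not survive in this text, so the exact combination rule cannot be recovered from the symbol definitions alone. One plausible reconstruction, offered purely as an assumption, weights each reference probability by its preset posterior probability:

```python
# Assumed reconstruction of the combination step; NOT the patent's
# verbatim formula, whose image is not reproduced in this text.

def prediction_probability(reference_probs, posterior_probs):
    """Combine the n reference probabilities with their preset posterior
    probabilities into the prediction probability Ps.

    Assumed rule: a posterior-weighted average, so Ps stays in [0, 1]
    whenever every input probability does.
    """
    assert len(reference_probs) == len(posterior_probs)
    total = sum(p * q for p, q in zip(reference_probs, posterior_probs))
    norm = sum(posterior_probs)
    return total / norm if norm > 0 else 0.0
```

The behavior determination submodule would then take the target behavior whose Ps is largest as the predicted behavior.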
The electronic device 500 provided in the embodiment of the present invention can implement each process implemented by the electronic device 500 in the method embodiments of fig. 1 to fig. 4, and for avoiding repetition, details are not described here again.
In the embodiment of the present invention, the data obtaining module 501 obtains activity data of a user, and the state analyzing module 502 performs state analysis on the user according to the activity data to obtain state information of the user; the behavior prediction module 503 performs behavior prediction on the user according to the state information, the activity data and the historical behavior data of the target behaviors to obtain the predicted behavior of the user; and the device control module 504 controls the target device to set its operation mode to the operation mode corresponding to the predicted behavior. The target device thus selects its operation mode automatically and intelligently, without manual selection by the user, and the operation mode can be switched at any time as the user's predicted behavior changes.
FIG. 6 is a diagram illustrating a hardware configuration of an electronic device implementing various embodiments of the present invention.
the electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 610 is configured to obtain activity data of a user; the activity data includes at least one of: picture data collected by a camera, environment data, data collected by a sensor, and usage data of the electronic device;
according to the activity data, performing state analysis on the user to obtain state information of the user;
according to the state information, the activity data and historical behavior data of the target behavior, performing behavior prediction on the user to obtain the predicted behavior of the user; wherein the predicted behavior is a behavior with the highest predicted probability in the target behaviors;
and controlling the target device to set an operation mode to an operation mode corresponding to the predicted behavior, wherein the target device is a device which is in communication connection with the electronic device.
In the embodiment of the invention, activity data of the user is obtained, and state analysis is performed on the user according to the activity data to obtain state information of the user; then, behavior prediction is performed on the user according to the state information, the activity data and the historical behavior data of the target behaviors to obtain the predicted behavior of the user; finally, the target device is controlled to set its operation mode to the operation mode corresponding to the predicted behavior. The target device thus selects its operation mode automatically and intelligently, without manual selection by the user, and the operation mode can be switched at any time as the user's predicted behavior changes.
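The end-to-end flow summarized above can be sketched as one control cycle. The module boundaries mirror the data obtaining, state analysis, behavior prediction, and device control steps; all concrete behavior and mode names below are illustrative assumptions, not taken from the patent:

```python
# One cycle of the target-device control method. Names such as
# "smoking" and "ventilation" are illustrative assumptions.

class TargetDevice:
    """Stand-in for a device communicatively connected to the electronic device."""
    def __init__(self):
        self.mode = None

    def set_operation_mode(self, mode):
        self.mode = mode

def control_cycle(get_activity_data, analyze_state, predict_behavior,
                  mode_for_behavior, device):
    """Acquire activity data -> derive state information -> predict the
    highest-probability behavior -> set the matching operation mode."""
    activity = get_activity_data()
    state = analyze_state(activity)
    behavior = predict_behavior(state, activity)
    device.set_operation_mode(mode_for_behavior[behavior])
    return behavior

device = TargetDevice()
behavior = control_cycle(
    lambda: {"temp_change": 3.0},        # assumed sensor reading
    lambda activity: "working",          # assumed state analysis result
    lambda state, activity: "smoking",   # assumed highest-probability behavior
    {"smoking": "ventilation"},          # assumed behavior-to-mode table
    device,
)
print(behavior, device.mode)
```

Because the cycle re-runs as new activity data arrives, the operation mode follows changes in the predicted behavior without user intervention.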
It should be noted that, in this embodiment, the electronic device 600 may implement each process in the method embodiment of the present invention and achieve the same beneficial effects, and for avoiding repetition, details are not described here.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during a message sending/receiving process or a call; specifically, it receives downlink data from a base station and forwards it to the processor 610 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 602, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output related to a specific function performed by the electronic apparatus 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601.
The electronic device 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the electronic apparatus 600 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, may collect touch operations by the user on or near it (e.g., operations using a finger, a stylus, or any suitable object or accessory). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands from the processor 610. In addition, the touch panel 6071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 6071, the user input unit 607 may include other input devices 6072. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although the touch panel 6071 and the display panel 6061 are shown in fig. 6 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the electronic device, and this is not limited here.
The interface unit 608 is an interface for connecting an external device to the electronic apparatus 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic device 600 or may be used to transmit data between the electronic device 600 and external devices.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 609, and calling data stored in the memory 609, thereby performing overall monitoring of the electronic device. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The electronic device 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 may be logically connected to the processor 610 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
In addition, the electronic device 600 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 610, a memory 609, and a computer program stored in the memory 609 and capable of running on the processor 610, where the computer program, when executed by the processor 610, implements each process of the above target device control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned target device control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. A target device control method is applied to an electronic device, and is characterized by comprising the following steps:
acquiring activity data of a user; the activity data includes at least one of: picture data collected by a camera, environment data, data collected by a sensor, and usage data of the electronic device;
according to the activity data, performing state analysis on the user to obtain state information of the user;
according to the state information, the activity data and historical behavior data of the target behavior, performing behavior prediction on the user to obtain the predicted behavior of the user; wherein the predicted behavior is a behavior with the highest predicted probability in the target behaviors;
and controlling the target device to set an operation mode to an operation mode corresponding to the predicted behavior, wherein the target device is a device which is in communication connection with the electronic device.
2. The method for controlling a target device according to claim 1, wherein the step of analyzing the state of the user according to the activity data to obtain the state information of the user comprises:
if the usage data indicate that the electronic equipment is in a standby state and the environmental data indicate that the current time is first preset time of the user, determining that the state information of the user is in a first state;
if the picture data acquired by the camera at successive preset intervals are identical and the environmental data indicate that the current time is the second preset time of the user, determining that the state information of the user is in a second state; and/or
If the use data indicate that the electronic equipment is in a use state and the change amplitude of the depth information of the picture acquired by the camera in a preset time period is within a preset change range, determining that the state information of the user is in a third state;
the first preset time and the second preset time are obtained by machine learning according to the historical behavior data of the user.
3. The method for controlling a target device according to claim 1, wherein the step of predicting the behavior of the user according to the state information, the activity data, and the historical behavior data of the target behavior to obtain the predicted behavior of the user comprises:
acquiring a target behavior of the user; the target behavior is obtained by machine learning according to the historical behavior data of the user;
for each target behavior, determining the prediction probability of the target behavior at the current moment according to the state information, the activity data and the historical behavior data of the target behavior;
and taking the target behavior with the maximum prediction probability as the predicted behavior of the user.
4. The target device control method according to claim 3, wherein the step of determining, for each target behavior, a predicted probability of the target behavior occurring at a current time based on the state information, activity data, and historical behavior data of the target behavior comprises:
for each target behavior, determining n reference probabilities of the first target behavior occurring at the current moment according to the state information and the activity data; wherein n is a positive integer greater than or equal to 1;
determining a preset posterior probability corresponding to each reference probability;
and determining the prediction probability of the target behavior at the current moment according to the reference probability and the preset posterior probability.
5. The method for controlling target equipment according to claim 4, wherein the step of determining n reference probabilities of the occurrence of the first target behavior at the current time according to the state information and the activity data comprises:
determining a first reference probability of the state information under the first target behavior;
determining, according to the usage data, a second reference probability that a second target behavior associated with the first target behavior occurs on the electronic device;
determining a third reference probability of the first target behavior occurring at the current moment according to the time data in the environment data;
determining a fourth reference probability of occurrence of the first target behavior according to data collected by a first sensor of the sensors; and/or
And determining a fifth reference probability of the first target behavior according to data collected by a second sensor of the sensors.
6. The method of claim 5, wherein the step of determining a second reference probability of the electronic device occurring a second target behavior associated with the first target behavior based on the usage data comprises:
determining, according to the usage data, relevant parameters of the second target behavior occurring on the electronic device; the relevant parameters include whether the second target behavior has occurred, the time interval between its occurrence moment and the current moment, and the type of the target object at which the second target behavior is directed;
and performing a weighted summation of the relevant parameters according to preset weight values to obtain the second reference probability of the second target behavior.
7. The target apparatus control method according to claim 5, wherein the step of determining a fourth reference probability of occurrence of the first target behavior from data collected by a first sensor of the sensors includes:
determining the temperature variation in the current environment according to data collected by a first sensor of the sensors;
and determining the fourth reference probability of the first target behavior occurring according to the temperature variation.
8. The method of claim 5, wherein the step of determining a fifth reference probability of the first target behavior occurring based on data collected by a second one of the sensors comprises:
determining the gas variation in the current environment according to data acquired by a second sensor of the sensors;
determining a fifth reference probability of the first target behavior occurring according to the gas variation.
9. The target device control method according to claim 4, wherein the step of determining the predicted probability that the target behavior occurs at the current time based on the reference probability and the preset a posteriori probability comprises:
determining the prediction probability of the target behavior at the current moment according to the reference probabilities, the preset posterior probabilities and the following formula:
where Ps is the prediction probability, P(i-1) represents the (i-1)-th reference probability, and P[P(i-1)/P] represents the preset posterior probability corresponding to the (i-1)-th reference probability.
10. An electronic device, characterized in that the electronic device comprises:
the data acquisition module is used for acquiring activity data of a user; the activity data includes at least one of: picture data collected by a camera, environment data, data collected by a sensor, and usage data of the electronic device;
the state analysis module is used for carrying out state analysis on the user according to the activity data to obtain state information of the user;
the behavior prediction module is used for predicting the behavior of the user according to the state information, the activity data and the historical behavior data of the target behavior to obtain the predicted behavior of the user; wherein the predicted behavior is a behavior with the highest predicted probability in the target behaviors;
and the equipment control module is used for controlling target equipment to set an operation mode to an operation mode corresponding to the predicted behavior, wherein the target equipment is equipment in communication connection with the electronic equipment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911150788.3A CN111025922B (en) | 2019-11-21 | 2019-11-21 | Target equipment control method and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911150788.3A CN111025922B (en) | 2019-11-21 | 2019-11-21 | Target equipment control method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111025922A true CN111025922A (en) | 2020-04-17 |
CN111025922B CN111025922B (en) | 2023-09-12 |
Family
ID=70201902
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911150788.3A Active CN111025922B (en) | 2019-11-21 | 2019-11-21 | Target equipment control method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111025922B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112099374A (en) * | 2020-11-11 | 2020-12-18 | 广东恩胜科技有限公司 | Indoor environment comfort control method and system, electronic equipment and storage medium |
CN112255925A (en) * | 2020-10-19 | 2021-01-22 | 珠海格力电器股份有限公司 | Method and device for controlling intelligent household equipment and computer equipment |
CN112782988A (en) * | 2020-12-30 | 2021-05-11 | 深圳市微网力合信息技术有限公司 | Control method of intelligent household curtain based on Internet of things |
CN114675551A (en) * | 2022-02-23 | 2022-06-28 | 青岛海尔科技有限公司 | Method and device for determining operation behavior, storage medium and electronic device |
CN115051354A (en) * | 2022-05-19 | 2022-09-13 | 深圳市创诺新电子科技有限公司 | Household power utilization system management method |
CN115268269A (en) * | 2022-07-29 | 2022-11-01 | 无锡市低碳研究院有限公司 | Household energy consumption optimization system and method based on new energy low carbon |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107065796A (en) * | 2017-03-30 | 2017-08-18 | 上海斐讯数据通信技术有限公司 | A kind of electric control method based on user profile, device and system |
KR20170099721A (en) * | 2016-02-24 | 2017-09-01 | 삼성전자주식회사 | Server and controlling user environment method of electronic device using electronic device and at least one smart device |
CN107272433A (en) * | 2017-07-26 | 2017-10-20 | 深圳贯和通物联科技有限公司 | A kind of intelligent home furnishing control method and device |
CN107479393A (en) * | 2017-08-17 | 2017-12-15 | 北京天平检验行有限公司 | A kind of intelligent domestic system based on big data |
CN107666540A (en) * | 2017-10-17 | 2018-02-06 | 北京小米移动软件有限公司 | Terminal control method, device and storage medium |
CN107992003A (en) * | 2017-11-27 | 2018-05-04 | 武汉博虎科技有限公司 | User's behavior prediction method and device |
CN108427310A (en) * | 2018-05-17 | 2018-08-21 | 深圳市零度智控科技有限公司 | Intelligent home furnishing control method, device and computer readable storage medium |
CN109870919A (en) * | 2019-03-08 | 2019-06-11 | 佛山市云米电器科技有限公司 | A kind of intelligent home furnishing control method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111025922B (en) | Target equipment control method and electronic equipment | |
CN107580184B (en) | A kind of image pickup method and mobile terminal | |
CN109461117A (en) | A kind of image processing method and mobile terminal | |
CN110032156B (en) | Control and adjustment method of household equipment, terminal and household equipment | |
CN108628515A (en) | A kind of operating method and mobile terminal of multimedia content | |
CN109409244A (en) | A kind of object puts the output method and mobile terminal of scheme | |
CN110519512A (en) | A kind of object processing method and terminal | |
CN109167914A (en) | A kind of image processing method and mobile terminal | |
CN108600544A (en) | A kind of Single-hand control method and terminal | |
CN108681483A (en) | A kind of task processing method and device | |
CN109922294A (en) | A kind of method for processing video frequency and mobile terminal | |
CN108174109A (en) | A kind of photographic method and mobile terminal | |
CN109558046A (en) | A kind of information display method and terminal device | |
CN109413264A (en) | A kind of background picture method of adjustment and terminal device | |
CN109862172A (en) | A kind of adjusting method and terminal of screen parameter | |
CN109448069A (en) | A kind of template generation method and mobile terminal | |
CN114077227A (en) | Page switching method and device, scene control panel, equipment and storage medium | |
CN113495617A (en) | Method and device for controlling equipment, terminal equipment and storage medium | |
CN110058837A (en) | A kind of audio-frequency inputting method and terminal | |
CN108646966B (en) | Screen-off time adjusting method and device | |
CN107908348B (en) | The method and mobile terminal of display | |
CN109711282A (en) | Light adjusting method and device | |
CN109379503A (en) | A kind of income prompting method and mobile terminal | |
CN109508401A (en) | A kind of reminding method and mobile terminal | |
CN109164908A (en) | A kind of interface control method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |