CN118151547A - Method and device for automatically controlling equipment based on human body characteristics


Info

Publication number: CN118151547A
Application number: CN202211564552.6A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 陈小平, 伍房林
Current Assignee: Guangdong Lizi Technology Co Ltd
Original Assignee: Guangdong Lizi Technology Co Ltd
Application filed by Guangdong Lizi Technology Co Ltd
Priority: CN202211564552.6A
Prior art keywords: user, human body, body characteristics, parameters, detection area
Legal status: Pending

Classifications

    • G06V 40/172: Recognition of human faces in image or video data; classification, e.g. identification
    • G05B 15/02: Systems controlled by a computer; electric
    • G05B 19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G06V 40/174: Facial expression recognition
    • G06V 40/20: Recognition of movements or behaviour, e.g. gesture recognition
    • G05B 2219/2642: Domotique, domestic, home control, automation, smart house

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Quality & Reliability (AREA)
  • Manufacturing & Machinery (AREA)
  • Selective Calling Equipment (AREA)

Abstract

The invention discloses a method and a device for automatically controlling equipment based on human body characteristics. The method comprises the following steps: collecting human body characteristics of a user in a detection area; determining a control mode matched with the human body characteristics of the user according to those characteristics; generating scene control parameters of the detection area according to the control mode; and controlling at least one target device to execute an operation matched with the human body characteristics of the user according to the scene control parameters. In this way, the human body characteristics of a user in the detection area can be acquired and corresponding scene control parameters generated, so that multiple target devices in the detection area are controlled to execute corresponding operations. Devices can thus be controlled automatically according to the user's human body characteristics, which raises the intelligence of device control, improves control flexibility, simplifies control steps, better meets the user's device-use needs, and improves the user experience.

Description

Method and device for automatically controlling equipment based on human body characteristics
Technical Field
The invention relates to the technical field of automatic equipment control, in particular to a method and a device for automatically controlling equipment based on human body characteristics.
Background
With the rapid development of technology, more and more smart home devices (such as robot vacuums, smart curtains, etc.) are being adopted by ordinary households.
In practical applications, existing smart homes link home devices through Internet of Things technology and control different devices to perform different operations according to the user's own wishes. However, because these devices still need to be controlled manually, considerable inconvenience can arise during use: for example, a user cannot conveniently control a device after its remote control is lost, or has to adjust the outlet temperature of an air conditioner again and again as his or her body temperature changes. This reduces the control flexibility of home devices and leaves them unable to meet users' needs.
Therefore, it is important to provide a method that improves the control flexibility of home devices so as to meet users' usage needs.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method and a device for automatically controlling equipment based on human body characteristics, which can improve the control flexibility of household equipment and thus better meet the usage needs of household equipment users.
To solve the above technical problem, a first aspect of the present invention discloses a method for automatically controlling a device based on human body characteristics, the method comprising:
collecting human body characteristics of a user in a detection area;
determining a control mode matched with the human body characteristics of the user according to the human body characteristics of the user;
generating scene control parameters of the detection area according to the control mode; and
controlling at least one target device to execute an operation matched with the human body characteristics of the user according to the scene control parameters.
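For orientation only, a minimal Python sketch of these four steps follows. Every name in it (collect_features, match_control_mode, and so on) is a hypothetical illustration, not part of the claimed method, and the feature values and presets are invented placeholders.

```python
# Hypothetical end-to-end sketch of the claimed four-step flow.
# All names and values here are illustrative assumptions.

def collect_features(detection_area):
    # Step 1: collect the user's human body features in the detection area.
    return {"facial_expression": "smile", "limb_action": "nod"}

def match_control_mode(features):
    # Step 2: determine a control mode matched with the features.
    return "happy_mode" if features["facial_expression"] == "smile" else "neutral_mode"

def generate_scene_parameters(control_mode):
    # Step 3: generate scene control parameters for the detection area.
    presets = {
        "happy_mode": {"light": "warm", "curtain": "open", "speaker": "upbeat"},
        "neutral_mode": {"light": "neutral", "curtain": "unchanged", "speaker": "off"},
    }
    return presets[control_mode]

def control_devices(scene_params):
    # Step 4: control each target device to execute the matching operation.
    for device, setting in scene_params.items():
        print(f"{device} -> {setting}")

if __name__ == "__main__":
    features = collect_features("living_room")
    control_devices(generate_scene_parameters(match_control_mode(features)))
```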
As an optional implementation manner, in the first aspect of the present invention, the acquiring human body features of the user in the detection area includes:
monitoring whether a user exists in the detection area or not in real time through a monitoring and identifying device;
When the existence of the user in the detection area is monitored, identifying characteristic acquisition points of the user, wherein the characteristic acquisition points comprise face recognition acquisition points and/or limb action recognition acquisition points of the user;
and acquiring human body characteristics of the user according to the characteristic acquisition points, wherein the human body characteristics comprise facial expression characteristics and/or limb action characteristics of the user.
As an alternative embodiment, in the first aspect of the present invention, the method further includes:
Acquiring acquisition point information of the characteristic acquisition points, wherein the acquisition point information comprises the point number and the point position of the characteristic acquisition points;
judging whether the number of the points of the characteristic acquisition points is equal to the number of preset characteristic acquisition points or not;
When the number of the points of the characteristic acquisition points is judged to be not equal to the number of the preset characteristic acquisition points, generating characteristic acquisition point correction parameters of the user according to the acquisition point information, wherein the characteristic acquisition point correction parameters comprise interference correction parameters and/or disability correction parameters, the interference correction parameters are used for indicating and correcting the phenomenon of missing or overflowing of the characteristic acquisition points caused by the influence of environmental factors, and the disability correction parameters are used for indicating and correcting the phenomenon of missing or overflowing of the characteristic acquisition points caused by the individual defects of the user;
correcting the characteristic acquisition points of the user according to the characteristic acquisition point correction parameters to obtain corrected characteristic acquisition points of the user;
and collecting human body characteristics of the user according to the characteristic collection points, including:
and acquiring human body characteristics of the user according to the corrected characteristic acquisition points of the user.
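The following sketch illustrates this correction branch under stated assumptions: the preset point count of 68, the CorrectionParams fields, and the padding/dropping logic are invented for illustration and are not the claimed implementation.

```python
# Illustrative sketch of acquisition-point correction; the template
# size and correction heuristics are assumptions, not the patent's.
from dataclasses import dataclass

PRESET_POINT_COUNT = 68  # assumed fixed keypoint-template size

@dataclass
class CorrectionParams:
    interference: bool = False  # points missing/overflowing due to environment
    disability: bool = False    # points missing/overflowing due to the user's body

def build_correction_params(points, env_interference, user_is_disabled):
    # Correction parameters are generated only when the observed point
    # count differs from the preset count, as the method specifies.
    if len(points) == PRESET_POINT_COUNT:
        return None
    return CorrectionParams(interference=env_interference,
                            disability=user_is_disabled)

def apply_correction(points, params):
    if params is None:
        return list(points)
    corrected = list(points)
    if params.interference:
        # pad points lost to occlusion or strong light, trim overflow
        corrected += [None] * max(0, PRESET_POINT_COUNT - len(corrected))
        corrected = corrected[:PRESET_POINT_COUNT]
    if params.disability:
        # drop template slots the user cannot physically present
        corrected = [p for p in corrected if p is not None]
    return corrected

raw = [(i, i) for i in range(60)]  # 8 points occluded
fixed = apply_correction(raw, build_correction_params(raw, True, False))
print(len(fixed))  # -> 68
```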
As an optional implementation manner, in the first aspect of the present invention, the generating, according to the acquisition point information, a characteristic acquisition point correction parameter of the user includes:
Acquiring environmental parameters of the detection area, wherein the environmental parameters comprise at least one of temperature and humidity parameters, environmental atmosphere parameters and object parameters of the detection area, the environmental atmosphere parameters comprise object colors and/or environment lights of the detection area, and the object parameters comprise the space position and corresponding space occupation ratio of at least one object existing in the detection area;
Generating an environmental interference parameter existing when performing an operation of identifying the feature acquisition point of the user according to the environmental parameter;
and generating characteristic acquisition point correction parameters of the user according to the environment interference parameters and the acquisition point information, wherein the characteristic acquisition point correction parameters specifically comprise interference correction parameters.
As an optional implementation manner, in the first aspect of the present invention, the generating, according to the acquisition point information, a characteristic acquisition point correction parameter of the user includes:
judging whether the user is a disabled user or not according to the environment interference parameters of the detection area and the acquisition point information which are determined in advance;
when the user is judged to be the disabled user, acquiring the disabled state of the user, wherein the disabled state comprises a disabled level and/or a disabled position;
And generating characteristic acquisition point correction parameters of the user according to the disability state and the acquisition point information, wherein the characteristic acquisition point correction parameters specifically comprise disability correction parameters.
As an alternative embodiment, in the first aspect of the present invention, the method further includes:
After the presence of the user in the detection area is monitored, acquiring the voice sent by the user in real time, and analyzing the voice content of the voice, wherein the voice content comprises the tone, loudness and tone color of the voice sent by the user;
determining the user type of the user according to the voice content;
Determining a conventional voice state corresponding to the user according to the user type of the user, wherein the conventional voice state comprises conventional tones and/or conventional loudness;
determining a state matching degree of the voice sent by the user according to the voice content, the user type and the conventional voice state, wherein the state matching degree comprises a tone matching degree and/or a loudness matching degree of the voice sent by the user;
and determining human body characteristics of the user according to the state matching degree, wherein the human body characteristics further comprise voice characteristics of the user.
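A hedged sketch of this voice branch follows; the user-type table, the reference values, the 0.8 threshold and the calm/excited labels are all illustrative assumptions.

```python
# Illustrative voice-state matching; reference values and thresholds
# are invented, not taken from the patent.
CONVENTIONAL_VOICE = {
    # user type -> (conventional pitch in Hz, conventional loudness in dB)
    "young_male": (120.0, 60.0),
    "young_female": (210.0, 60.0),
    "teenager": (250.0, 65.0),
}

def state_matching_degree(measured, conventional):
    # 1.0 means identical to the user's conventional voice state
    pitch, loudness = measured
    ref_pitch, ref_loudness = conventional
    pitch_match = 1.0 - min(abs(pitch - ref_pitch) / ref_pitch, 1.0)
    loudness_match = 1.0 - min(abs(loudness - ref_loudness) / ref_loudness, 1.0)
    return (pitch_match + loudness_match) / 2

def voice_feature(user_type, pitch, loudness):
    degree = state_matching_degree((pitch, loudness), CONVENTIONAL_VOICE[user_type])
    # high matching degree -> calm voice; low degree -> excited voice
    return "calm" if degree > 0.8 else "excited"

print(voice_feature("young_male", 180.0, 75.0))  # -> excited
```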
As an optional implementation manner, in the first aspect of the present invention, after the acquiring the human body characteristics of the user in the detection area, the method further includes:
Determining an expected emotion influence index of each content in the environment parameters on the user according to the environment parameters of the detection area and the user type of the user, wherein the environment parameters are determined in advance;
Predicting the emotion influence degree of the environmental parameter on the user according to all the expected emotion influence indexes;
And determining a control mode matched with the human body characteristics of the user according to the human body characteristics of the user, wherein the control mode comprises the following steps:
And determining the emotional state information of the user according to the emotional influence degree and the human body characteristics, and determining a control mode matched with the emotional state information of the user according to the emotional state information of the user.
The second aspect of the invention discloses a device for automatically controlling equipment based on human body characteristics, which comprises:
The acquisition module is used for acquiring human body characteristics of a user in the detection area;
the determining module is used for determining a control mode matched with the human body characteristics of the user according to the human body characteristics of the user;
the generation module is used for generating scene control parameters of the detection area according to the control mode;
And the control execution module is used for controlling at least one target device to execute the operation matched with the human body characteristics of the user according to the scene control parameters.
In a second aspect of the present invention, the method for acquiring the human body characteristics of the user in the detection area by the acquisition module specifically includes:
monitoring whether a user exists in the detection area or not in real time through a monitoring and identifying device;
When the existence of the user in the detection area is monitored, identifying characteristic acquisition points of the user, wherein the characteristic acquisition points comprise face recognition acquisition points and/or limb action recognition acquisition points of the user;
and acquiring human body characteristics of the user according to the characteristic acquisition points, wherein the human body characteristics comprise facial expression characteristics and/or limb action characteristics of the user.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further includes:
the acquisition module is used for acquiring acquisition point information of the characteristic acquisition points, wherein the acquisition point information comprises the point number and the point position of the characteristic acquisition points;
the judging module is used for judging whether the number of the points of the characteristic acquisition points is equal to the number of the preset characteristic acquisition points;
The generating module is further configured to generate, according to the collection point information, a feature collection point correction parameter of the user when the judging module judges that the number of points of the feature collection points is not equal to the preset number of feature collection points, where the feature collection point correction parameter includes an interference correction parameter and/or a disability correction parameter, where the interference correction parameter is used to indicate and correct a phenomenon that the feature collection points are missing or overflowed due to an environmental factor, and the disability correction parameter is used to indicate and correct a phenomenon that the feature collection points are missing or overflowed due to an individual defect of the user;
the correction module is used for correcting the characteristic acquisition points of the user according to the characteristic acquisition point correction parameters to obtain corrected characteristic acquisition points of the user;
And the acquisition module acquires the human body characteristics of the user according to the characteristic acquisition points, wherein the mode specifically comprises the following steps:
and acquiring human body characteristics of the user according to the corrected characteristic acquisition points of the user.
In a second aspect of the present invention, the generating module generates the feature collection point correction parameter of the user according to the collection point information specifically includes:
Acquiring environmental parameters of the detection area, wherein the environmental parameters comprise at least one of temperature and humidity parameters, environmental atmosphere parameters and object parameters of the detection area, the environmental atmosphere parameters comprise object colors and/or environment lights of the detection area, and the object parameters comprise the space position and corresponding space occupation ratio of at least one object existing in the detection area;
Generating an environmental interference parameter existing when performing an operation of identifying the feature acquisition point of the user according to the environmental parameter;
and generating characteristic acquisition point correction parameters of the user according to the environment interference parameters and the acquisition point information, wherein the characteristic acquisition point correction parameters specifically comprise interference correction parameters.
In a second aspect of the present invention, the generating module generates the feature collection point correction parameter of the user according to the collection point information specifically includes:
judging whether the user is a disabled user or not according to the environment interference parameters of the detection area and the acquisition point information which are determined in advance;
when the user is judged to be the disabled user, acquiring the disabled state of the user, wherein the disabled state comprises a disabled level and/or a disabled position;
And generating characteristic acquisition point correction parameters of the user according to the disability state and the acquisition point information, wherein the characteristic acquisition point correction parameters specifically comprise disability correction parameters.
As an optional implementation manner, in the second aspect of the present invention, the obtaining module is further configured to:
After the presence of the user in the detection area is monitored, acquiring the voice of the user in real time;
The apparatus further comprises:
The analysis module is used for analyzing the voice content of the voice sent by the user, wherein the voice content comprises the tone, loudness and tone color of the voice sent by the user;
The determining module is further used for determining the user type of the user according to the voice content; determining a conventional voice state corresponding to the user according to the user type of the user, wherein the conventional voice state comprises conventional tones and/or conventional loudness; determining a state matching degree of the voice sent by the user according to the voice content, the user type and the conventional voice state, wherein the state matching degree comprises a tone matching degree and/or a loudness matching degree of the voice sent by the user; and determining human body characteristics of the user according to the state matching degree, wherein the human body characteristics further comprise voice characteristics of the user.
As an optional implementation manner, in the second aspect of the present invention, the determining module is further configured to:
After the acquisition module acquires human body characteristics of a user in a detection area, determining an expected emotion influence index of each content in the environment parameters on the user according to the environment parameters of the detection area and the user type of the user, which are determined in advance;
The apparatus further comprises:
the prediction module is used for predicting the emotion influence degree of the environment parameter on the user according to all the expected emotion influence indexes;
And the determining module determines a control mode matched with the human body characteristics of the user according to the human body characteristics of the user, wherein the method specifically comprises the following steps:
And determining the emotional state information of the user according to the emotional influence degree and the human body characteristics, and determining a control mode matched with the emotional state information of the user according to the emotional state information of the user.
In a third aspect, the present invention discloses another apparatus for automatically controlling a device based on a human body feature, the apparatus comprising:
a memory storing executable program code;
a processor coupled to the memory;
The processor invokes the executable program code stored in the memory to perform the method for automatically controlling a device based on human features disclosed in the first aspect of the present invention.
A fourth aspect of the present invention discloses a computer storage medium storing computer instructions which, when called, are used to perform the method of controlling an apparatus based on human body characteristics disclosed in the first aspect of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
In the embodiment of the invention, human body characteristics of a user in a detection area are collected; a control mode matched with the human body characteristics of the user is determined according to those characteristics; scene control parameters of the detection area are generated according to the control mode; and at least one target device is controlled to execute an operation matched with the human body characteristics of the user according to the scene control parameters. In this way, the human body characteristics of a user in the detection area can be acquired and corresponding scene control parameters generated, so that multiple target devices in the detection area are controlled to execute corresponding operations. Devices can thus be controlled automatically according to the user's human body characteristics, which raises the intelligence of device control, improves control flexibility, simplifies control steps, better meets the user's device-use needs, and improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of a scene of an automatic control device based on human body characteristics according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for automatically controlling a device based on human body characteristics according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of an electronic control component module of a monitoring and identifying device according to an embodiment of the present invention;
FIG. 4 is a flow chart of another method for automatically controlling a device based on human body characteristics according to an embodiment of the present invention;
Fig. 5 is a schematic structural view of an apparatus for automatically controlling a device based on human body characteristics according to an embodiment of the present invention;
FIG. 6 is a schematic view of another apparatus for automatically controlling a device based on human body characteristics according to an embodiment of the present invention;
fig. 7 is a schematic structural view of an apparatus for automatically controlling a device based on human body characteristics according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort are intended to be within the scope of the invention.
The terms "first", "second" and the like in the description, the claims and the above drawings are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus or article that comprises a list of steps or elements is not limited to the steps or elements listed, but may optionally include other steps or elements not listed or inherent to such a process, method, apparatus or article.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The invention discloses a method and a device for automatically controlling equipment based on human body characteristics, which can collect human body characteristics of a user in a detection area and generate corresponding scene control parameters so as to control a plurality of target equipment in the detection area to execute corresponding operations, and can automatically control the equipment to execute the corresponding operations according to the human body characteristics of the user, thereby being beneficial to improving the use flexibility of the equipment and further meeting the use requirements of the equipment of the user; the automatic equipment service can be provided according to the human body characteristics of the user, the intelligent level of equipment control is improved, and further the user experience is improved. The following will describe in detail.
In order to better understand the method and apparatus for automatically controlling a device based on human body features according to the present invention, a scene to which the method applies is described first; the schematic diagram of this scene is shown in fig. 1, which is a scene schematic diagram according to an embodiment of the present invention. As shown in fig. 1, the scene includes a detection area, a user, a monitoring and identifying device, and target devices. The target devices shown in fig. 1 are intelligent home devices, i.e., home devices that can receive control parameters and automatically perform corresponding operations (e.g., smart air conditioners, robot vacuums, smart televisions, etc.); these target devices may be located in the detection area or in other areas where the control parameters can be received, such as a smart curtain on an outside balcony. Optionally, the monitoring and identifying device shown in fig. 1 may be installed in a target device (for example, the built-in camera of a smart television), or may be an independent monitoring and identifying device (for example, a stand-alone monitoring camera). Optionally, the detection area shown in fig. 1 is the working coverage area of the monitoring and identifying device, that is, the area in which the device can monitor and identify a user; the detection area may be a living room, a kitchen, a balcony, etc. Specifically, during operation the monitoring and identifying device monitors the detection area in real time, identifies the human body characteristics (facial expression characteristics and/or limb action characteristics) of any user present in the detection area, determines corresponding control parameters according to the identified characteristics, and then generates scene control parameters for the detection area, where the scene control parameters are used to control a plurality of target devices to perform corresponding operations so as to create different scene atmospheres in the detection area (such as a happy scene atmosphere, a sad scene atmosphere, etc.).
It should be noted that the schematic view of the scene shown in fig. 1 is only for illustrating a scene to which the method for automatically controlling the device based on the human body features is applied, and the detection area, the user, the monitoring and identifying device, the target device, etc. are also only schematically illustrated, which is not limited to the schematic view of the scene shown in fig. 1. The application scenario to which the method for automatically controlling the device based on the human body characteristics is applicable is described above, and the method and the device for automatically controlling the device based on the human body characteristics are described in detail below.
Example 1
Referring to fig. 2, fig. 2 is a flow chart of a method for automatically controlling a device based on human body characteristics according to an embodiment of the present invention. The method described in fig. 2 can be applied to equipment carrying a monitoring and identifying device, which may include a home device (for example, the built-in camera of a smart television), an independently existing monitoring and identifying device (for example, a monitoring camera), or a server or control platform for controlling a smart home, where the server may be a local server or a cloud server; the embodiments of the invention are not limited in this respect. As shown in fig. 2, the method of automatically controlling a device based on human body features may include the following operations:
101. Human body characteristics of a user in a detection area are collected.
In the embodiment of the invention, the detection area may be an area selected by a user (such as a living room, a kitchen, a bedroom, etc.), or may be a detection area automatically determined by the monitoring and identifying device, such as an intelligent camera, and the detection area is selected by changing through automatic rotation. Further optionally, the schematic diagram of the electronic control component module of the monitoring and identifying device is shown in fig. 3, and the 3D face identifying module and the human body gesture identifying module shown in fig. 3 identify the user in the detection area to obtain corresponding face and/or limb data, so as to determine the human body characteristics of the user.
In the embodiment of the invention, optionally, the human body features of the user refer to facial expression features, limb action features and the like of the user. For example, facial expression features may include smiling faces, crying faces, sad faces, and the like; limb action features may include a hands-on-hips (akimbo) movement, a nodding movement, a trembling movement, a tilting movement, etc.
102. And determining a control mode matched with the human body characteristics of the user according to the human body characteristics of the user.
In this embodiment of the present invention, optionally, after the 3D face recognition module and the human body gesture recognition module of the monitoring and recognition device collect the human body characteristics of the user, a control mode matched with the human body characteristics is determined by a controller (such as an MCU controller shown in fig. 3) of the monitoring and recognition device. Optionally, the control mode may be generated by a built-in controller of the monitoring and identifying device, or may be generated by uploading to a corresponding server or cloud control platform, which is not limited herein.
103. And generating scene control parameters of the detection area according to the control mode.
In this embodiment of the present invention, optionally, the generated control mode is sent to the corresponding terminal device through a preset transfer module (such as a WiFi module shown in fig. 3) in the monitoring and identifying device, and the terminal device generates the corresponding scene control parameter, for example, the corresponding control parameter is sent to the user mobile phone (i.e., the terminal device) through the WiFi module of the monitoring and identifying device, and the scene control parameter of the detection area is generated through the relevant APP on the user mobile phone. Further alternatively, the categories of the scene control parameters may include a happy scene control parameter, a fear scene control parameter, an anger scene control parameter, a sad scene control parameter, and the like.
104. And controlling at least one target device to execute the operation matched with the human body characteristics of the user according to the scene control parameters.
In the embodiment of the invention, optionally, the scene control parameters can control a plurality of home devices in the detection area to automatically execute different device operations, so as to build different scene atmospheres in the area. For example, when the scene control parameter is a happy scene control parameter, it controls the smart speaker to play cheerful music, pulls open the smart curtain, adjusts the smart lamps to warm-toned light, and so on.
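As a toy illustration of this step, the dispatch sketch below maps a scene control parameter onto per-device operations; the device names, commands and the send() transport are invented, and a real system would use an actual home-automation protocol.

```python
# Hypothetical scene-parameter dispatch for step 104.
SCENE_PRESETS = {
    "happy": [
        ("smart_speaker", "play", "cheerful_playlist"),
        ("smart_curtain", "open", None),
        ("smart_light", "set_tone", "warm"),
    ],
    "sad": [
        ("smart_speaker", "play", "soothing_playlist"),
        ("smart_light", "dim", 30),
    ],
}

def send(device, command, argument):
    # stand-in for a real transport (e.g., the Wi-Fi link in fig. 3)
    print(f"{device}: {command}({argument})")

def execute_scene(scene_control_parameter):
    for device, command, argument in SCENE_PRESETS[scene_control_parameter]:
        send(device, command, argument)

execute_scene("happy")
```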
Therefore, the embodiment of the invention can collect the human body characteristics of a user in the detection area, generate corresponding scene control parameters, and then control a plurality of target devices in the detection area to execute corresponding operations. Devices can thus be controlled automatically according to the user's human body characteristics, which raises the intelligence of device control, improves control flexibility, simplifies control steps, better meets the user's device-use needs, and improves the user experience.
In an alternative embodiment, the step 101 collects the human body characteristics of the user in the detection area, including:
monitoring whether a user exists in the detection area or not in real time through a monitoring and identifying device;
When the existence of the user in the detection area is monitored, identifying a characteristic acquisition point of the user;
and collecting human body characteristics of the user according to the characteristic collection points.
In this alternative embodiment, specifically, whether the user is within the monitoring identification range of the device is detected by a human body sensing module (for example, a thermal sensing module, or an infrared sensing module as shown in fig. 2) corresponding to the monitoring identification device.
In this alternative embodiment, the feature collection points may be preset key points of the human body image for facilitating the recognition and analysis of the monitoring and recognition device, and the key points may be located at different parts and/or different joints of the human body. Further optionally, the feature collection points include face recognition collection points and/or limb motion recognition collection points of the user, through which the face and the facial contours of the user can be constructed; the limb movement of the user can be used for identifying the acquisition point, so that the limb and the joint of the user can be constructed. Still further optionally, the human body features of the user are determined by the positions and distances of all feature acquisition points and the constructed different human body parts, and the human body features include facial expression features and/or limb action features of the user.
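To make the geometric idea concrete, here is a deliberately simplified sketch that derives a facial expression feature from the positions and distances of a few named acquisition points; the two-distance smile heuristic and the point layout are assumptions for illustration only.

```python
# Simplified expression classification from keypoint geometry.
import math

def facial_expression(face_points):
    # face_points: named 2D acquisition points (assumed layout)
    mouth_width = math.dist(face_points["mouth_left"], face_points["mouth_right"])
    eye_span = math.dist(face_points["eye_left"], face_points["eye_right"])
    # a mouth noticeably wide relative to the inter-eye span suggests a smile
    return "smile" if mouth_width > 0.9 * eye_span else "neutral"

points = {
    "mouth_left": (40, 80), "mouth_right": (88, 80),
    "eye_left": (42, 40), "eye_right": (86, 40),
}
print(facial_expression(points))  # -> smile
```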
Therefore, when a user is present in the detection area, this optional embodiment can identify the user's feature acquisition points and collect the user's facial expression features and/or limb action features in real time from those points. The feature identification function of the monitoring and identifying device is thus awakened only when a user is present, which reduces how often the device is woken up and saves energy; collecting human body characteristics from the feature acquisition points corresponding to the user also improves the accuracy of the collected characteristics.
In another alternative embodiment, the method further comprises:
After monitoring that the user exists in the detection area, acquiring voice sent by the user in real time, and analyzing voice content of the voice;
determining the user type of the user according to the voice content;
determining a conventional voice state corresponding to the user according to the user type of the user, wherein the conventional voice state comprises conventional tones and/or conventional loudness;
according to the voice content, the user type and the conventional voice state, determining the state matching degree of the voice sent by the user, wherein the state matching degree comprises the tone matching degree and/or the loudness matching degree of the voice sent by the user;
and determining the human body characteristics of the user according to the state matching degree, wherein the human body characteristics also comprise the voice characteristics of the user.
In this optional embodiment, optionally, when the monitoring and identifying device monitors that a user exists in the detection area, a voice wake-up state is automatically triggered, that is, when the user speaks, the voice is acquired in real time and its voice content is obtained through analysis, where the voice content includes the pitch, loudness and timbre of the voice. Further optionally, because the timbres of different individuals differ, the user identity and user category can be confirmed from the analyzed timbre; the user category can be looked up in a pre-registered category library according to the user identity, or determined autonomously by the device. User categories can be divided according to factors such as age bracket, gender, ethnicity, physical state and physiological state; for example, a user category may be a young Asian male, a middle-aged white female, an elderly disabled male, and so on.
In this alternative embodiment, optionally, since the voice pitch and loudness of users differ across user categories, the conventional pitch and/or conventional loudness of the user's voice can be determined from the determined user type; for example, the pitch and loudness of a teenager's voice are higher, while a young male's voice is lower after voice change, etc. Further optionally, the determined conventional pitch and/or conventional loudness are compared with the analyzed voice content of the user, so as to determine the state matching degree of the voice, and the voice feature of the user is then determined from the state matching degree. For example, when the state matching degree is high, it is determined that the user's voice is close to the conventional pitch and/or conventional loudness, and the voice feature of the user is a calm voice; when the state matching degree is low and the user's voice is higher than the conventional pitch and/or conventional loudness, the voice feature of the user is an excited voice.
Therefore, this optional embodiment can analyze the content of the voice uttered by the user, determine the user's type and the corresponding conventional pitch and/or loudness, and then determine the state matching degree of the user's voice and, from it, the user's voice features (human body characteristics). Collecting the user's human body characteristics in this multi-dimensional way improves the comprehensiveness and accuracy of the control mode matched with those characteristics, and further improves the fault tolerance and reliability of the control method.
Example two
Referring to fig. 4, fig. 4 is a flowchart of another method for automatically controlling a device based on human body characteristics according to an embodiment of the present invention. The method described in fig. 4 can be applied to equipment carrying a monitoring and identifying device, which may include a home device (for example, the built-in camera of a smart television), an independently existing monitoring and identifying device (for example, a monitoring camera), or a server or control platform for controlling a smart home, where the server may be a local server or a cloud server; the embodiments of the invention are not limited in this respect. As shown in fig. 4, the method of automatically controlling a device based on human body features may include the following operations:
201. human body characteristics of a user in a detection area are collected.
202. And determining the expected emotion influence index of each content in the environment parameters on the user according to the environment parameters of the predetermined detection area and the user type of the user.
In an embodiment of the present invention, optionally, the environmental parameter includes at least one of a temperature and humidity parameter of the detection area, an environmental atmosphere parameter, and an item parameter, where the environmental atmosphere parameter includes an item color of the detection area and/or an environmental light, and the item parameter includes a spatial position of at least one item existing in the detection area and a corresponding spatial duty ratio.
In the embodiment of the present invention, specifically, the method for determining the expected emotion influence index of each content in the environment parameters on the user is as follows: an emotion index rule table for different environment contents is preset, the table containing expected emotion influence indexes corresponding to a number of conventional environment contents, and each content in the determined environment parameters is matched against the table one by one. For example, an entry in the table may specify that warm-toned light (such as red-orange light) raises the expected happiness index by a certain value. Determining the expected emotion influence index corresponding to each content in the environment parameters makes it possible to distinguish the expected influence of different factors in a complex environment on the user's emotion, improving the fault tolerance of the emotion judgment.
203. And predicting the emotion influence degree of the environmental parameters on the user according to all the expected emotion influence indexes.
In the embodiment of the invention, specifically, all of the determined expected emotion influence indexes are combined to predict the comprehensive emotion influence degree of the environment parameters on the user. For example, when the expected emotion influence index of one content raises the happiness index by five percent and that of another content raises the sadness index by ten percent, the combined expected emotional influence on the user is a net five-percent rise in the sadness index.
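The worked example above can be reproduced with a small sketch; the linear happiness-versus-sadness offset is an illustrative simplification of whatever combination rule an implementation would actually use.

```python
# Combining expected emotion influence indexes (illustrative rule).
def combined_influence(expected_indexes):
    # expected_indexes: (emotion, percent_rise) pairs from the rule table
    signed = 0.0  # positive -> net happiness, negative -> net sadness
    for emotion, rise in expected_indexes:
        signed += rise if emotion == "happiness" else -rise
    return ("happiness", signed) if signed >= 0 else ("sadness", -signed)

# one content raises happiness by 5%, another raises sadness by 10%:
print(combined_influence([("happiness", 5.0), ("sadness", 10.0)]))
# -> ('sadness', 5.0): a net five-percent rise in the sadness index
```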
204. According to the emotion influence degree and the human body characteristics, determining the emotion state information of the user, and according to the emotion state information of the user, determining a control mode matched with the emotion state information of the user.
In the embodiment of the present invention, optionally, the method for determining the emotional state information of the user according to the emotion influence degree and the human body features is as follows: according to the user's emotion influence degree and human body features of all kinds, the corresponding emotional state information is looked up in a corresponding emotion matching library, where the emotional state information may be a happy, sad, angry, fearful or annoyed state, etc. For example, when the user is identified as grinning with a hands-on-hips limb movement, then in combination with the corresponding emotion influence degree, the user's current emotional state information can be matched in the emotion matching library as a happy state.
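A minimal stand-in for the emotion matching library lookup is sketched below; the library keys, the states and the mode table are invented placeholders.

```python
# Hypothetical emotion-matching lookup for step 204.
EMOTION_LIBRARY = {
    # (facial feature, limb feature, net influence) -> emotional state
    ("grin", "akimbo", "happiness"): "happy",
    ("crying", "trembling", "sadness"): "sad",
}

CONTROL_MODES = {"happy": "happy_mode", "sad": "comfort_mode"}

def control_mode(facial, limb, net_emotion):
    state = EMOTION_LIBRARY.get((facial, limb, net_emotion), "neutral")
    return CONTROL_MODES.get(state, "default_mode")

print(control_mode("grin", "akimbo", "happiness"))  # -> happy_mode
```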
205. And generating scene control parameters of the detection area according to the control mode.
206. And controlling at least one target device to execute the operation matched with the human body characteristics of the user according to the scene control parameters.
In the embodiment of the present invention, for other descriptions of step 201, step 205 and step 206, please refer to the detailed descriptions of step 101, step 103 and step 104 in the first embodiment, and the detailed descriptions of the embodiment of the present invention are omitted.
Therefore, the embodiment of the invention can collect the human body characteristics of a user in the detection area, generate corresponding scene control parameters, and then control a plurality of target devices in the detection area to execute corresponding operations, automatically controlling devices according to the user's human body characteristics; this raises the intelligence of device control, improves control flexibility, simplifies control steps, better meets the user's device-use needs, and improves the user experience. In addition, the environment parameters and the user's human body characteristics can be combined to determine the user's emotional state information, and devices can then be automatically controlled to execute corresponding operations according to scene control parameters matched with that emotional state. Humanized home services can thus be provided according to the user's emotional state, helping to maintain a good emotional state and improving the user's comfort and happiness in the smart home experience.
In an alternative embodiment, the method further comprises:
acquiring acquisition point information of the feature acquisition points, wherein the acquisition point information comprises the number of the feature acquisition points and the point position;
judging whether the number of points of the feature acquisition points is equal to the number of preset feature acquisition points;
When the number of points of the feature acquisition points is judged to be not equal to the number of preset feature acquisition points, generating feature acquisition point correction parameters of a user according to the acquisition point information;
Correcting the characteristic acquisition points of the user according to the characteristic acquisition point correction parameters to obtain corrected characteristic acquisition points of the user;
and collecting human body characteristics of the user according to the characteristic collection points, comprising:
and acquiring human body characteristics of the user according to the corrected characteristic acquisition points of the user.
In this optional embodiment, optionally, the feature acquisition point correction parameter includes an interference correction parameter and/or a disability correction parameter, where the interference correction parameter is used to indicate correction of a phenomenon of missing or overflowing of the feature acquisition point caused by an influence of an environmental factor, and the disability correction parameter is used to indicate correction of a phenomenon of missing or overflowing of the feature acquisition point caused by an individual defect of the user;
In this alternative embodiment, the number of preset feature collection points may be optionally determined by a technician through a manikin test, or may be automatically generated by the monitoring and identifying device according to different users, which is not limited herein.
Therefore, when it is judged that the number of the user's feature acquisition points is not equal to the preset number, this optional embodiment can generate corresponding interference correction parameters and/or disability correction parameters and correct the user's feature acquisition points, supplementing or deleting points as required. This reduces external interference in the operation of identifying the user's feature acquisition points and improves identification accuracy; it also broadens the applicability of the invention to disabled users, improves its practicability, and reflects humanistic care for disabled people.
In another optional embodiment, generating the characteristic acquisition point correction parameter of the user according to the acquisition point information includes:
Acquiring environmental parameters of a detection area;
Generating an environmental interference parameter existing when performing an operation of identifying a feature acquisition point of a user according to the environmental parameter;
And generating characteristic acquisition point correction parameters of the user according to the environment interference parameters and the acquisition point information, wherein the characteristic acquisition point correction parameters specifically comprise the interference correction parameters.
In this alternative embodiment, optionally, the environmental parameter includes at least one of a temperature and humidity parameter of the detection area, an environmental atmosphere parameter, and an item parameter. Further optionally, the ambient atmosphere parameter comprises an item hue and/or ambient light of the detection area, the item parameter comprises a spatial position of at least one item present in the detection area and a corresponding spatial duty cycle;
In this optional embodiment, optionally, the environmental interference parameter is used to indicate the degree to which the feature acquisition points are disturbed by environmental factors. For example, the user's feature acquisition points are locked and tracked from the moment the user appears in the detection area; when the user moves to a position blocked by an object, some identified feature acquisition points are occluded, and an environmental interference parameter indicating the missing points is generated according to the corresponding object position and/or spatial duty cycle. Similarly, when the user moves into strong light, the strong light may cause the device to identify extra feature acquisition points, and an environmental interference parameter indicating the overflowing points is generated according to the corresponding environment parameters.
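The occlusion and strong-light cases described here can be sketched as below; the 0.2 occlusion ratio and 10,000 lux threshold are invented illustrative values.

```python
# Illustrative derivation of an environmental interference parameter.
def environmental_interference(ambient_light_lux, occluding_ratio):
    # occluding_ratio: spatial duty cycle of an object overlapping the user
    if occluding_ratio > 0.2:
        # part of the user is behind an object -> points go missing
        return {"kind": "missing", "expected_loss_ratio": occluding_ratio}
    if ambient_light_lux > 10_000:
        # strong light -> spurious extra (overflow) acquisition points
        return {"kind": "overflow", "cause": "strong_light"}
    return None  # no environment-induced correction needed

print(environmental_interference(ambient_light_lux=12_000, occluding_ratio=0.05))
# -> {'kind': 'overflow', 'cause': 'strong_light'}
```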
Therefore, the optional embodiment can combine the influence of environmental factors to correct the characteristic acquisition points of the identified user in real time, can reduce the interference influence caused by the environmental factors on the monitoring and identifying device when the monitoring and identifying device executes the identifying work, improves the acquisition accuracy and the fault tolerance of the human body characteristics of the user, and further improves the practicability of the invention.
In yet another optional embodiment, generating the characteristic acquisition point correction parameter of the user according to the acquisition point information includes:
Judging whether the user is a disabled user or not according to the environment interference parameters and the acquisition point information of the predetermined detection area;
when the user is judged to be a disabled user, acquiring the disabled state of the user, wherein the disabled state comprises a disabled grade and/or a disabled part;
And generating characteristic acquisition point correction parameters of the user according to the disability state and the acquisition point information, wherein the characteristic acquisition point correction parameters specifically comprise the disability correction parameters.
In this optional embodiment, optionally, when the user's feature acquisition points are missing or overflowed and the deviation is not caused by environmental factors, the user is determined to be a disabled user. For example, when the user has a missing body part or an extra body part, the feature acquisition points identified by the monitoring and identifying device are missing or overflowed; if no environmental factor accounts for this, the user is judged to be a disabled user.
In this optional embodiment, optionally, by acquiring the disability grade and/or the disabled part of a disabled user, the user's degree of disability can be determined, so that the corresponding feature acquisition point correction parameters can be generated for the disabled user more accurately and more specifically.
Therefore, when the user is judged to be a disabled user, this optional embodiment can generate the corresponding disability correction parameters for the feature acquisition points according to the user's disability state, improving the applicability of the invention to disabled users, further enhancing its practicability, and reflecting its attention to the personal care of the disabled.
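As a minimal sketch of this decision logic, assuming the interference-parameter format from the previous sketch and a hypothetical mapping from body parts to acquisition points:

```python
def classify_user(expected: int, identified: int, interference_kind: str) -> str:
    # Attribute a point-count deviation to the environment first; only a
    # deviation with no environmental cause marks the user as disabled.
    if identified == expected:
        return "typical"
    if interference_kind != "none":
        return "environment-affected"
    return "disabled"

def apply_disability_correction(points: dict, missing_parts: set) -> dict:
    # Delete acquisition points mapped to absent body parts so that the
    # preset template matches the disabled user's actual feature set.
    return {part: p for part, p in points.items() if part not in missing_parts}
```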
Example III
Referring to fig. 5, fig. 5 is a schematic structural diagram of an apparatus for automatically controlling a device based on human body characteristics according to an embodiment of the present invention. The apparatus described in fig. 5 can be applied to equipment with a monitoring and identifying device, where the equipment may comprise household equipment (such as the built-in camera of a smart television), stand-alone monitoring and identifying equipment (such as a surveillance camera), and a server or control platform for controlling a smart home, the server comprising a local server, a cloud server, or the like; the embodiment of the invention is not limited in this regard. As shown in fig. 5, the apparatus for automatically controlling a device based on human body characteristics may include:
The acquisition module 301 is configured to collect the human body characteristics of a user in the detection area.
The determining module 302 is configured to determine, according to the human body characteristics of the user, a control mode matched with those characteristics.
The generating module 303 is configured to generate scene control parameters of the detection area according to the control mode.
The control execution module 304 is configured to control, according to the scene control parameters, at least one target device to perform an operation matched with the human body characteristics of the user.
Therefore, the embodiment of the invention can collect the human body characteristics of the user in the detection area, generate corresponding scene control parameters, and then control a plurality of target devices in the detection area to perform the corresponding operations. Equipment is thus controlled automatically according to the user's human body characteristics, which raises the intelligence of device control, improves control flexibility, simplifies control steps, better satisfies the user's equipment needs, and improves the user experience.
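The module structure of fig. 5 could be wired together as in the following hypothetical sketch; the module interfaces are illustrative assumptions, not the claimed implementation.

```python
class HumanFeatureControlApparatus:
    """Hypothetical wiring of modules 301-304 from fig. 5."""

    def __init__(self, acquisition, determining, generating, control_execution):
        self.acquisition = acquisition                # module 301
        self.determining = determining                # module 302
        self.generating = generating                  # module 303
        self.control_execution = control_execution    # module 304

    def run_once(self, detection_area, target_devices):
        features = self.acquisition.collect(detection_area)         # 301
        mode = self.determining.match_control_mode(features)        # 302
        scene_params = self.generating.scene_parameters(mode)       # 303
        self.control_execution.apply(target_devices, scene_params)  # 304
```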
In an optional embodiment, the manner in which the acquisition module 301 collects the human body characteristics of the user in the detection area specifically includes:
monitoring in real time, through a monitoring and identifying device, whether a user is present in the detection area;
when a user is monitored to be present in the detection area, identifying the feature acquisition points of the user, the feature acquisition points comprising face recognition acquisition points and/or limb action recognition acquisition points of the user;
and collecting the human body characteristics of the user according to the feature acquisition points, the human body characteristics comprising facial expression features and/or limb action features of the user.
Therefore, when a user is present in the detection area, this optional embodiment can identify the user's feature acquisition points and collect the user's facial expression features and/or limb action features in real time according to those points. The feature identification function of the monitoring and identifying device is awakened only when a user is present, which reduces frequent wake-ups and saves energy; and collecting human body characteristics from the acquisition points corresponding to the user improves the accuracy of those characteristics.
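A minimal sketch of this wake-on-presence acquisition loop, with the monitor and recognizer interfaces assumed for illustration:

```python
import time

def acquisition_loop(monitor, recognizer, poll_interval: float = 0.5):
    """Yield human body characteristics only while a user is present."""
    while True:
        if monitor.user_present():
            # Recognition is awakened only now, avoiding frequent wake-ups.
            points = recognizer.identify_points()      # face and/or limb points
            yield recognizer.collect_features(points)  # expression/limb features
        time.sleep(poll_interval)
```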
In another optional embodiment, as shown in fig. 6, the apparatus further comprises:
An acquiring module 305, configured to acquire the acquisition point information of the feature acquisition points, the acquisition point information comprising the number of points and the point positions of the feature acquisition points;
a judging module 306, configured to judge whether the number of points of the feature acquisition points is equal to the preset number of feature acquisition points;
the generating module 303 is further configured to generate the feature acquisition point correction parameters of the user according to the acquisition point information when the judging module 306 judges that the number of points is not equal to the preset number, the feature acquisition point correction parameters comprising interference correction parameters and/or disability correction parameters, where the interference correction parameters indicate and correct missing or overflowed feature acquisition points caused by environmental factors, and the disability correction parameters indicate and correct missing or overflowed feature acquisition points caused by the user's individual defects;
a correction module 307, configured to correct the feature acquisition points of the user according to the feature acquisition point correction parameters, obtaining the corrected feature acquisition points of the user;
and the manner in which the acquisition module 301 collects the human body characteristics of the user according to the feature acquisition points specifically includes:
collecting the human body characteristics of the user according to the corrected feature acquisition points of the user.
As can be seen, when the apparatus for automatically controlling a device based on human body characteristics described in fig. 6 judges that the number of the user's feature acquisition points is not equal to the preset number, it generates corresponding interference correction parameters and/or disability correction parameters and corrects the user's feature acquisition points, complementing or deleting points as required. This helps reduce interference from external factors during identification of the user's feature acquisition points and improves identification accuracy; it also improves the applicability of the invention to disabled users, further enhances its practicability, and reflects its attention to the personal care of the disabled.
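One way the count check and two-stage correction could be realized is sketched below; the callback names and their ordering are assumptions for illustration:

```python
def correct_acquisition_points(points: list, expected_count: int,
                               interference_fix, disability_fix) -> list:
    """Complement or delete points until they match the preset template."""
    if len(points) == expected_count:
        return points                          # no correction needed
    corrected = interference_fix(points)       # environmental missing/overflow
    if len(corrected) != expected_count:
        corrected = disability_fix(corrected)  # individual missing/overflow
    return corrected
```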
In yet another optional embodiment, the manner in which the generating module 303 generates the feature acquisition point correction parameters of the user according to the acquisition point information specifically includes:
acquiring environmental parameters of the detection area, the environmental parameters comprising at least one of a temperature and humidity parameter of the detection area, an environmental atmosphere parameter, and an object parameter, where the environmental atmosphere parameter comprises object hues and/or ambient light of the detection area, and the object parameter comprises the spatial position and corresponding spatial occupation ratio of at least one object present in the detection area;
generating, according to the environmental parameters, an environmental interference parameter present when the operation of identifying the user's feature acquisition points is performed;
and generating the feature acquisition point correction parameters of the user according to the environmental interference parameter and the acquisition point information, the feature acquisition point correction parameters specifically comprising the interference correction parameters.
Therefore, this optional embodiment can correct the identified feature acquisition points of the user in real time in light of environmental factors, reducing the interference those factors cause while the monitoring and identifying device performs its identification work, improving the acquisition accuracy and fault tolerance of the user's human body characteristics, and further improving the practicability of the invention.
In yet another optional embodiment, the manner in which the generating module 303 generates the feature acquisition point correction parameters of the user according to the acquisition point information specifically includes:
judging whether the user is a disabled user according to the predetermined environmental interference parameters of the detection area and the acquisition point information;
when the user is judged to be a disabled user, acquiring the disability state of the user, the disability state comprising a disability grade and/or a disabled part;
and generating the feature acquisition point correction parameters of the user according to the disability state and the acquisition point information, the feature acquisition point correction parameters specifically comprising the disability correction parameters.
Therefore, when the user is judged to be a disabled user, this optional embodiment can generate the corresponding disability correction parameters for the feature acquisition points according to the user's disability state, improving the applicability of the invention to disabled users, further enhancing its practicability, and reflecting its attention to the personal care of the disabled.
In yet another optional embodiment, the acquiring module 305 is further configured to:
after a user is monitored to be present in the detection area, acquire the voice uttered by the user in real time;
And, as shown in fig. 6, the apparatus further includes:
an analysis module 308, configured to analyze the voice content of the voice uttered by the user, the voice content comprising the pitch, loudness, and timbre of that voice;
The determining module 302 is further configured to: determine the user type of the user according to the voice content; determine the conventional voice state corresponding to the user according to the user type, the conventional voice state comprising a conventional pitch and/or a conventional loudness; determine, according to the voice content, the user type, and the conventional voice state, the state matching degree of the voice uttered by the user, the state matching degree comprising the pitch matching degree and/or loudness matching degree of that voice; and determine the human body characteristics of the user according to the state matching degree, the human body characteristics further comprising the voice features of the user.
Therefore, the apparatus for automatically controlling a device based on human body characteristics described in fig. 6 can analyze the voice content of the voice uttered by the user, determine the user's type and the corresponding conventional pitch and/or conventional loudness, then determine the state matching degree of the user's voice and, from it, the user's voice features (human body characteristics). Collecting the user's human body characteristics in multiple dimensions in this way improves the comprehensiveness and accuracy of the matched control mode, and further improves the fault tolerance and reliability of the invention.
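A sketch of how the state matching degree could be computed, assuming a per-user-type profile holding the conventional pitch and loudness (the profile format and example values are illustrative):

```python
def state_matching_degree(pitch: float, loudness: float, profile: dict) -> dict:
    """Compare an utterance against the user type's conventional voice state."""
    def match(value: float, conventional: float) -> float:
        # 1.0 means the utterance equals the conventional state exactly.
        return max(0.0, 1.0 - abs(value - conventional) / conventional)

    return {
        "pitch_match": match(pitch, profile["conventional_pitch"]),
        "loudness_match": match(loudness, profile["conventional_loudness"]),
    }

# Example with an assumed child-type profile (300 Hz, 60 dB conventional).
degree = state_matching_degree(310.0, 62.0,
                               {"conventional_pitch": 300.0,
                                "conventional_loudness": 60.0})
```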
In yet another optional embodiment, the determining module 302 is further configured to:
after the acquisition module 301 collects the human body characteristics of the user in the detection area, determine, according to the predetermined environmental parameters of the detection area and the user type of the user, the expected emotion influence index of each item of content in the environmental parameters on the user;
And, as shown in fig. 6, the apparatus further includes:
a prediction module 309, configured to predict the degree of emotional influence of the environmental parameters on the user according to all the expected emotion influence indexes;
And the manner in which the determining module 302 determines, according to the human body characteristics of the user, the control mode matched with those characteristics specifically includes:
determining the emotional state information of the user according to the degree of emotional influence and the human body characteristics, and determining the control mode matched with that emotional state information.
Therefore, the apparatus for automatically controlling a device based on human body characteristics described in fig. 6 can combine the environmental parameters with the user's human body characteristics to determine the user's emotional state information, and then control the equipment according to the scene control parameters matched with that emotional state. Controlling equipment to perform operations matched with the user's emotional state helps provide humanized home service, maintain the user's good emotional state, and improve the user's comfort and happiness when experiencing the smart home.
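The aggregation of expected emotion influence indexes and the mode selection could look like the following sketch; the index table, mood scores, thresholds, and mode names are all assumptions:

```python
def emotion_influence_degree(env_params: dict, influence_indexes: dict) -> float:
    """Aggregate the expected emotion influence index of each environmental item."""
    return sum(influence_indexes.get(name, 0.0) * value
               for name, value in env_params.items())

def select_control_mode(influence: float, feature_mood_score: float) -> str:
    # Combine environmental influence with the mood score derived from the
    # user's human body characteristics; thresholds are illustrative.
    mood = feature_mood_score + influence
    if mood < -0.3:
        return "soothing"      # e.g. dim lights, soft music
    if mood > 0.3:
        return "energizing"
    return "neutral"
```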
Example IV
Referring to fig. 7, fig. 7 is a schematic structural diagram of an apparatus for automatically controlling a device based on human body characteristics according to an embodiment of the present invention. As shown in fig. 7, the apparatus for automatically controlling a device based on human body characteristics may include:
a memory 401 storing executable program code;
a processor 402 coupled with the memory 401;
The processor 402 invokes the executable program code stored in the memory 401 to perform the steps of the method for automatically controlling a device based on human body characteristics described in the first or second embodiment of the present invention.
Example V
The embodiment of the invention discloses a computer storage medium storing computer instructions which, when invoked, perform the steps of the method for automatically controlling a device based on human body characteristics described in the first or second embodiment of the present invention.
Example VI
The embodiment of the present invention discloses a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform the steps of the method for automatically controlling a device based on human body characteristics described in the first or second embodiment.
The apparatus embodiments described above are merely illustrative; the modules illustrated as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, that is, they may be located in one place or distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without undue effort.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied, essentially or in part, in the form of a software product that may be stored in a computer-readable storage medium, including read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data.
Finally, it should be noted that the method and device for automatically controlling equipment based on human body characteristics disclosed in the embodiments of the invention are only preferred embodiments, used merely to illustrate the technical solutions of the invention rather than to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the various embodiments can still be modified, or some of their technical features can be replaced equivalently; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A method of automatically controlling a device based on human body characteristics, the method comprising:
collecting human body characteristics of a user in a detection area;
determining a control mode matched with the human body characteristics of the user according to the human body characteristics of the user;
generating scene control parameters of the detection area according to the control mode;
and controlling at least one target device to perform an operation matched with the human body characteristics of the user according to the scene control parameters.
2. The method of automatically controlling a device based on human body characteristics according to claim 1, wherein the collecting human body characteristics of the user in the detection area comprises:
monitoring in real time, through a monitoring and identifying device, whether a user is present in the detection area;
when a user is monitored to be present in the detection area, identifying feature acquisition points of the user, the feature acquisition points comprising face recognition acquisition points and/or limb action recognition acquisition points of the user;
and collecting human body characteristics of the user according to the feature acquisition points, the human body characteristics comprising facial expression features and/or limb action features of the user.
3. The method of automatically controlling a device based on human body characteristics according to claim 2, further comprising:
acquiring acquisition point information of the feature acquisition points, the acquisition point information comprising the number of points and the point positions of the feature acquisition points;
judging whether the number of points of the feature acquisition points is equal to a preset number of feature acquisition points;
when the number of points of the feature acquisition points is judged to be not equal to the preset number of feature acquisition points, generating feature acquisition point correction parameters of the user according to the acquisition point information, the feature acquisition point correction parameters comprising interference correction parameters and/or disability correction parameters, wherein the interference correction parameters are used to indicate and correct missing or overflowed feature acquisition points caused by environmental factors, and the disability correction parameters are used to indicate and correct missing or overflowed feature acquisition points caused by individual defects of the user;
correcting the feature acquisition points of the user according to the feature acquisition point correction parameters to obtain corrected feature acquisition points of the user;
and the collecting human body characteristics of the user according to the feature acquisition points comprises:
collecting human body characteristics of the user according to the corrected feature acquisition points of the user.
4. The method of automatically controlling a device based on human body characteristics according to claim 3, wherein generating feature acquisition point correction parameters of the user according to the acquisition point information comprises:
acquiring environmental parameters of the detection area, the environmental parameters comprising at least one of a temperature and humidity parameter of the detection area, an environmental atmosphere parameter, and an object parameter, wherein the environmental atmosphere parameter comprises object hues and/or ambient light of the detection area, and the object parameter comprises the spatial position and corresponding spatial occupation ratio of at least one object present in the detection area;
generating, according to the environmental parameters, an environmental interference parameter present when the operation of identifying the feature acquisition points of the user is performed;
and generating feature acquisition point correction parameters of the user according to the environmental interference parameter and the acquisition point information, the feature acquisition point correction parameters specifically comprising the interference correction parameters.
5. The method of automatically controlling a device based on human body characteristics according to claim 3, wherein generating feature acquisition point correction parameters of the user according to the acquisition point information comprises:
judging whether the user is a disabled user according to predetermined environmental interference parameters of the detection area and the acquisition point information;
when the user is judged to be a disabled user, acquiring a disability state of the user, the disability state comprising a disability grade and/or a disabled part;
and generating feature acquisition point correction parameters of the user according to the disability state and the acquisition point information, the feature acquisition point correction parameters specifically comprising the disability correction parameters.
6. The method of automatically controlling a device based on human body characteristics according to claim 2, further comprising:
after a user is monitored to be present in the detection area, acquiring the voice uttered by the user in real time and analyzing the voice content of that voice, the voice content comprising the pitch, loudness, and timbre of the voice uttered by the user;
determining the user type of the user according to the voice content;
determining a conventional voice state corresponding to the user according to the user type of the user, the conventional voice state comprising a conventional pitch and/or a conventional loudness;
determining a state matching degree of the voice uttered by the user according to the voice content, the user type, and the conventional voice state, the state matching degree comprising a pitch matching degree and/or a loudness matching degree of the voice uttered by the user;
and determining human body characteristics of the user according to the state matching degree, the human body characteristics further comprising voice features of the user.
7. The method of automatically controlling a device based on human body characteristics according to claim 1, wherein after the collecting human body characteristics of the user in the detection area, the method further comprises:
determining, according to predetermined environmental parameters of the detection area and the user type of the user, an expected emotion influence index of each item of content in the environmental parameters on the user;
predicting the degree of emotional influence of the environmental parameters on the user according to all the expected emotion influence indexes;
and the determining a control mode matched with the human body characteristics of the user according to the human body characteristics of the user comprises:
determining emotional state information of the user according to the degree of emotional influence and the human body characteristics, and determining a control mode matched with the emotional state information of the user according to that emotional state information.
8. An apparatus for automatically controlling a device based on human body characteristics, the apparatus comprising:
an acquisition module, configured to collect human body characteristics of a user in a detection area;
a determining module, configured to determine, according to the human body characteristics of the user, a control mode matched with those characteristics;
a generating module, configured to generate scene control parameters of the detection area according to the control mode;
and a control execution module, configured to control, according to the scene control parameters, at least one target device to perform an operation matched with the human body characteristics of the user.
9. An apparatus for automatically controlling a device based on human body characteristics, the apparatus comprising:
a memory storing executable program code;
a processor coupled to the memory;
wherein the processor invokes the executable program code stored in the memory to perform the method of automatically controlling a device based on human body characteristics of any one of claims 1 to 7.
10. A computer storage medium storing computer instructions which, when invoked, perform the method of automatically controlling a device based on human body characteristics of any one of claims 1 to 7.