CN116300491B - Control method, device, equipment and medium based on intelligent wearable equipment


Info

Publication number
CN116300491B
CN116300491B (application CN202310124977.3A)
Authority
CN
China
Prior art keywords
target
room
equipment
data
target user
Prior art date
Legal status
Active
Application number
CN202310124977.3A
Other languages
Chinese (zh)
Other versions
CN116300491A (en)
Inventor
蔡曦瑶
何建强
Current Assignee
Shenzhen Pomp Technology Co ltd
Original Assignee
Shenzhen Pomp Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Pomp Technology Co ltd
Priority to CN202310124977.3A
Publication of CN116300491A
Application granted
Publication of CN116300491B
Legal status: Active


Classifications

    • A61B5/02055: Simultaneously evaluating both cardiovascular condition and temperature
    • A61B5/021: Measuring pressure in heart or blood vessels
    • A61B5/14542: Measuring characteristics of blood in vivo for measuring blood gases
    • A61B5/14551: Measuring blood gases using optical sensors, e.g. spectral photometrical oximeters
    • A61B5/6802: Sensor mounted on worn items
    • A61B5/681: Wristwatch-type devices
    • A61B5/6822: Sensor specially adapted to be attached to the neck
    • A61B5/6824: Sensor specially adapted to be attached to the arm or wrist
    • A61B5/6829: Sensor specially adapted to be attached to the foot or ankle
    • A61M21/02: Devices for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
    • G05B15/02: Systems controlled by a computer, electric
    • G05B19/418: Total factory control, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G06V20/36: Indoor scenes
    • G06V40/161: Human faces; detection, localisation, normalisation
    • A61M2021/0027: Change in state of consciousness by the use of the hearing sense
    • A61M2021/0044: Change in state of consciousness by the use of the sight sense
    • G05B2219/2642: Domotique, domestic, home control, automation, smart house
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Cardiology (AREA)
  • General Physics & Mathematics (AREA)
  • Physiology (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Anesthesiology (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Hematology (AREA)
  • Vascular Medicine (AREA)
  • Pulmonology (AREA)
  • Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Pain & Pain Management (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The embodiment of the invention discloses a control method, device, equipment and medium based on an intelligent wearable device. The method comprises the following steps: judging whether a target user is in an inactive state according to first physiological characteristic data and first action data acquired by the intelligent wearable device for the target user; if so, acquiring an environment image captured by the intelligent wearable device, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a house three-dimensional map, a room standard image library and the environment image; acquiring images of the target user through the image acquisition devices corresponding to the target room identifier according to the first device position, so as to obtain a user image set; determining a target sleep stage of the target user according to the user image set, the first physiological characteristic data and the first action data; and generating home sleep-aiding control instructions according to the target sleep stage. The user's environmental conditions and action data are thereby taken into account, and the sleep state of the target user is determined accurately.

Description

Control method, device, equipment and medium based on intelligent wearable equipment
Technical Field
The invention relates to the technical field of intelligent wearable devices, and in particular to a control method, device, equipment and medium based on an intelligent wearable device.
Background
At present, intelligent wearable devices are widely used. They can conveniently detect changes in a human body's physiological characteristic data, but they cannot by themselves provide a targeted sleep-aiding service based on those changes, so user experience is poor. With the development of smart home devices, combining intelligent wearable devices with smart home devices to provide targeted sleep-aiding services according to changes in physiological characteristic data solves this problem to a certain extent. The prior art provides a control method and device based on physiological characteristic data: the physiological condition of a user is judged from the physiological characteristic data detected by a wearable electronic device (i.e., an intelligent wearable device), and an external device (i.e., a smart home device) is controlled according to that physiological condition. However, the inventors found that the existing scheme controls the smart home devices only according to the physiological condition determined from the physiological characteristic data detected by the intelligent wearable device, which cannot accurately determine the user's sleep state; as a result, the user cannot be effectively helped to improve sleep quality, and the effect of providing a targeted service to the user is poor.
Disclosure of Invention
Based on the above, it is necessary to address the technical problem that the prior art controls smart home devices only according to the physiological condition determined from the physiological characteristic data detected by the intelligent wearable device, which cannot accurately determine the user's sleep state, so that the user cannot be effectively helped to improve sleep quality and the effect of providing a targeted service to the user is poor.
The application provides a control method based on intelligent wearable equipment, which comprises the following steps:
acquiring first physiological characteristic data and first action data acquired by intelligent wearable equipment for a target user;
judging whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
if the target user is in an inactive state, acquiring an environment image acquired by the intelligent wearable device, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
acquiring images of the target user according to the first equipment position and each image acquisition equipment corresponding to the target room identifier to obtain a user image set;
determining a target sleep stage of the target user according to the user image set, the first physiological characteristic data and the first action data;
and generating each household sleep-aiding control instruction according to the target sleep stage.
Further, the step of generating each household sleep-aiding control instruction according to the target sleep stage includes:
acquiring first environment data of a room corresponding to the target room identifier;
according to the target room identifier, acquiring each home device identifier from a preset home equipment library to be used as a controllable home device identifier set;
if the room type corresponding to the target room identifier is a living room, generating each household sleep-aiding control instruction according to the target sleep stage and each home device identifier in the controllable home device identifier set whose device type is not video playback; if a video playback device in an on state exists among the video playback devices corresponding to the controllable home device identifier set, and it is determined according to the first environment data that a human body other than the target user is present, controlling each video playback device in the on state according to a preset sleep volume threshold and generating a prompt information display instruction for each video playback device in the on state; and if a video playback device in an on state exists among the video playback devices corresponding to the controllable home device identifier set, and it is determined according to the first environment data that no human body other than the target user is present, controlling each video playback device in the on state to switch off;
and if the room type corresponding to the target room identifier is a bedroom or a study, generating each household sleep-aiding control instruction according to the target sleep stage and the controllable home device identifier set.
Further, after the step of generating each household sleep-aiding control instruction according to the target sleep stage, the method includes:
judging the type of sensitive interference according to the first environmental data and the sensitive interference configuration corresponding to the target user;
and generating each household anti-interference control instruction according to the sensitive interference configuration corresponding to the target user, the target sleep stage, the sensitive interference type and the first environment data.
Further, if the sensitive interference type includes noise interference, the step of generating each home anti-interference control instruction according to the sensitive interference configuration corresponding to the target user, the target sleep stage, the sensitive interference type and the first environmental data includes:
determining noise data from the first environmental data;
and generating an active noise reduction control instruction according to the inverted sound wave corresponding to the noise data, wherein the active noise reduction control instruction is used for controlling active noise reduction equipment in a room corresponding to the target room identifier so as to neutralize noise.
Further, before the step of determining the type of the sensitive interference according to the first environmental data and the sensitive interference configuration corresponding to the target user, the method further includes:
taking any one sleep stage as a sleep stage to be analyzed;
when the target user enters the sleep stage to be analyzed, acquiring environment data corresponding to the target user as second environment data;
when the target user is in the sleep stage to be analyzed, respectively adopting each preset adjusting scheme to carry out environmental adjustment, acquiring the adjusted environmental data as third environmental data and acquiring physiological characteristic data as second physiological characteristic data;
determining single-stage sensitive interference information corresponding to the to-be-analyzed sleep stage according to the second environmental data, the third environmental data and the second physiological characteristic data;
and taking all the single-stage sensitive interference information corresponding to all the sleeping stages corresponding to the target user as the sensitive interference configuration.
Further, the step of acquiring the image of the target user according to the first device position and each image acquisition device corresponding to the target room identifier to obtain a user image set includes:
According to the first equipment position, controlling any one of the image acquisition equipment corresponding to the target room identifier to acquire a second image for the target user;
according to the second image, the image acquisition equipment facing the face of the target user is found out from the image acquisition equipment corresponding to the target room identifier and used as first hit equipment;
according to the position of the first device, the first hit device is controlled to shoot the face of the target user, and a face image set is obtained;
according to the first device position, finding out, from the image acquisition devices corresponding to the target room identifier, the image acquisition device closest to the first device position as the second hit device;
according to the position of the first equipment, the second hit equipment is controlled to shoot the target user, and a human body image set is obtained;
and taking the human face image set and the human body image set as the user image set.
Further, before the step of determining the target room identifier and the first device position corresponding to the intelligent wearable device according to the preset three-dimensional map of the house, the preset room standard image library and the environment image, the method further includes:
Acquiring a map construction signal;
responding to the map construction signal, and acquiring each room depth image shot by the intelligent wearable equipment, and a room number and a room type corresponding to each room depth image;
constructing a three-dimensional map according to each room depth image, the room number and the room type corresponding to each room depth image, and obtaining the house three-dimensional map;
taking each room depth image as the room standard image library;
and determining the position information of each intelligent home device in the three-dimensional map.
The application also provides a control device based on the intelligent wearable equipment, which comprises:
the data acquisition module is used for acquiring first physiological characteristic data and first action data acquired by the intelligent wearable device for the target user;
the judging module is used for judging whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
the position information determining module is used for acquiring an environment image acquired by the intelligent wearable device if the target user is in an inactive state, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
the user image set determining module is used for acquiring images of the target user according to the first equipment position and each image acquisition equipment corresponding to the target room identifier to obtain a user image set;
the target sleeping stage determining module is used for determining a target sleeping stage of the target user according to the user image set, the first physiological characteristic data and the first action data;
and the instruction generation module is used for generating each household sleep-aiding control instruction according to the target sleep stage.
The application also provides an intelligent wearable device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
acquiring first physiological characteristic data and first action data acquired by intelligent wearable equipment for a target user;
judging whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
if the target user is in an inactive state, acquiring an environment image acquired by the intelligent wearable device, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
acquiring images of the target user according to the first equipment position and each image acquisition equipment corresponding to the target room identifier to obtain a user image set;
determining a target sleep stage of the target user according to the user image set, the first physiological characteristic data and the first action data;
and generating each household sleep-aiding control instruction according to the target sleep stage.
The present application also provides a computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring first physiological characteristic data and first action data acquired by intelligent wearable equipment for a target user;
judging whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
if the target user is in an inactive state, acquiring an environment image acquired by the intelligent wearable device, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
acquiring images of the target user according to the first equipment position and each image acquisition equipment corresponding to the target room identifier to obtain a user image set;
determining a target sleep stage of the target user according to the user image set, the first physiological characteristic data and the first action data;
and generating each household sleep-aiding control instruction according to the target sleep stage.
According to the control method based on the intelligent wearable device, when the target user is judged to be in an inactive state based on the first physiological characteristic data and the first action data, a target sleep stage of the target user is determined according to the user image set determined from the environment image, the first physiological characteristic data and the first action data, and finally each household sleep-aiding control instruction is generated according to the target sleep stage. The smart home devices are controlled by combining the physiological characteristic data, the action data and the user image set determined from the environment image; the user's environmental conditions and action data are taken into account and the sleep state of the target user is determined accurately, which helps the user improve sleep quality, improves the effect of providing a targeted service to the user, and improves user experience.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Wherein:
FIG. 1 is a flow chart of a control method based on a smart wearable device in one embodiment;
FIG. 2 is a block diagram of a control device based on a smart wearable device in one embodiment;
FIG. 3 is a block diagram of a computer device in one embodiment.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It can be understood that the program implementing the control method based on the intelligent wearable device can run on the intelligent wearable device or on a central console. The intelligent wearable device communicates with the smart home devices through the central console, which may be a small server or a computer. Smart home devices include, but are not limited to: video playback devices, image acquisition devices, stereos, smart curtains, electric lights, desk lamps, phones, doorbells, humidifiers and air purifiers. Video playback devices include, but are not limited to: televisions, desktop computers, laptop computers and tablet computers. Image acquisition devices include, but are not limited to: televisions with cameras, desktop computers with cameras, laptop computers, tablet computers, cameras, laser scanners and infrared scanners.
As shown in fig. 1, in one embodiment, a control method based on an intelligent wearable device is provided. The method specifically comprises the following steps:
S1: acquiring first physiological characteristic data and first action data acquired by intelligent wearable equipment for a target user;
Specifically, the intelligent wearable device collects physiological characteristic data of the target user at a preset time interval, and the collected physiological characteristic data is used as the first physiological characteristic data; while collecting the first physiological characteristic data, the intelligent wearable device also collects action data of the target user, and the collected action data is used as the first action data.
The first physiological characteristic data includes, but is not limited to: heart rate, blood oxygen content, blood pressure and body temperature.
The first action data includes, but is not limited to: position change data of the intelligent wearable device and ambient temperature change data of the intelligent wearable device.
Smart wearable devices include, but are not limited to: smart watches, smart bracelets, smart necklaces and smart anklets.
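For illustration only, the collected data could be represented as simple records. A minimal sketch in Python; the field names and units are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class PhysiologicalData:
    """First physiological characteristic data sampled by the wearable (hypothetical fields)."""
    heart_rate_bpm: float
    blood_oxygen_pct: float
    blood_pressure_mmhg: tuple[float, float]  # (systolic, diastolic)
    body_temp_c: float

@dataclass
class ActionData:
    """First action data: displacement of the wearable and the change in its ambient temperature."""
    position_delta_m: float      # magnitude of the wearable's position change
    ambient_temp_delta_c: float  # change of the temperature around the wearable
```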
S2: judging whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
Specifically, the first physiological characteristic data and the first action data are input into a preset active state classification model for classification prediction; the vector element with the largest value is found in the predicted vector, and the classification label corresponding to that element is used as the hit label. If the hit label is inactive, the target user is determined to be in an inactive state; if the hit label is active, the target user is determined to be in an active state.
If the target user is in an active state, the target user is neither preparing to fall asleep nor already asleep, and steps S3 to S6 need not be executed.
For example, the first physiological characteristic data may indicate that the target user is drowsy while the target user does not want to sleep; hand movement then occurs, and the hand movement drives the intelligent wearable device to move, so the change in the position change data of the intelligent wearable device in the first action data is large while the change in the ambient temperature change data is small; in this case it is determined that the target user is in an active state.
As another example, the first physiological characteristic data may indicate that the target user is drowsy and wants to sleep; the hand then barely moves, so the change in the position change data of the intelligent wearable device in the first action data is small while the change in the ambient temperature change data may be large; in this case it is determined that the target user is in an inactive state.
The active state classification model is a classification model that includes both inactive and active classification labels. The model structure and model training method of the active state classification model may be selected from the prior art, and will not be described in detail herein.
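As one possible reading of step S2, the following sketch concatenates the two kinds of data into a feature vector, runs a binary classifier, and takes the label of the largest output element as the hit label. The model object and its predict method are placeholders; the patent does not fix a model structure:

```python
import numpy as np

LABELS = ["inactive", "active"]  # the two classification labels of the active-state model

def predict_activity(model, features: np.ndarray) -> str:
    """features: first physiological characteristic data and first action data,
    concatenated into one vector; returns the hit label."""
    scores = model.predict(features)       # assumed: one score per classification label
    return LABELS[int(np.argmax(scores))]  # hit label = vector element with the largest value
```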
S3: if the target user is in an inactive state, acquiring an environment image acquired by the intelligent wearable device, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
the room standard image library includes: room identification and room standard images.
Because the positioning system of the intelligent wearable device cannot provide accurate positioning inside a house, determining the target room identifier and the first device position corresponding to the intelligent wearable device according to the preset house three-dimensional map, the preset room standard image library and the environment image alleviates this problem to a certain extent.
Specifically, if the target user is in an inactive state, the target user is preparing to fall asleep or has already fallen asleep. The intelligent wearable device is therefore controlled to capture an image, and the captured image is used as the environment image. The objects in the environment image are compared with the objects in the room standard images of the preset room standard image library so as to find the room standard image depicting the same environment, and the room identifier corresponding to the found room standard image is used as the target room identifier. The position of the intelligent wearable device in the house three-dimensional map is then calculated from the position, in the house three-dimensional map, of the room corresponding to the target room identifier, the image depth corresponding to the room standard image, and the house three-dimensional map, and the calculated position is used as the first device position.
It can be understood that, the specific implementation method for finding the room standard image with the same environment from the room standard image library by comparing the object in the environment image with the object in the room standard image in the preset room standard image library can be selected from the prior art, and will not be described herein.
The specific implementation method for calculating the position of the intelligent wearable device in the three-dimensional map of the house according to the position of the room corresponding to the target room identifier in the three-dimensional map of the house, the image depth corresponding to the room standard image and the three-dimensional map of the house can be selected from the prior art, and will not be described in detail herein.
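One conceivable realization of the image comparison is keypoint matching with an off-the-shelf feature detector. The sketch below uses OpenCV ORB descriptors and picks the room standard image with the most matches; this is an assumption for illustration, since the patent leaves the matching method open:

```python
import cv2

def identify_room(env_image, room_library: dict) -> str | None:
    """room_library maps room identifier -> standard image (grayscale numpy arrays).
    Returns the identifier of the best-matching room standard image."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, env_desc = orb.detectAndCompute(env_image, None)
    best_room, best_count = None, 0
    for room_id, std_image in room_library.items():
        _, std_desc = orb.detectAndCompute(std_image, None)
        if env_desc is None or std_desc is None:
            continue
        matches = matcher.match(env_desc, std_desc)
        if len(matches) > best_count:
            best_room, best_count = room_id, len(matches)
    return best_room  # the target room identifier
```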
S4: acquiring images of the target user according to the first equipment position and each image acquisition equipment corresponding to the target room identifier to obtain a user image set;
specifically, based on a preset screening rule, screening proper image acquisition equipment from all image acquisition equipment corresponding to the target room identifier according to the first equipment position, and controlling the screened image acquisition equipment to acquire images of the target user according to the first equipment position, wherein all acquired images are used as a user image set.
S5: determining a target sleep stage of the target user according to the user image set, the first physiological characteristic data and the first action data;
Specifically, whether the target user is sitting or lying down is judged according to the posture of the target user in the user image set. If the target user is sitting or lying down, the target user meets the sleeping-posture requirement; therefore, according to the first action data, it is judged whether the target user is in a state of not intending to fall asleep, such as looking at a mobile phone or reading. If the target user is in such a state, the remainder of step S5 and step S6 need not be executed; if not, the first physiological characteristic data is input into a preset sleep stage classification model to determine a sleep stage, and the determined sleep stage is used as the target sleep stage. Compared with determining the sleep stage from physiological characteristic data alone, determining the target sleep stage of the target user according to the user image set, the first physiological characteristic data and the first action data is more accurate, which improves the suitability of the constructed sleep environment.
The sleep stage classification model is a multi-classification model whose classification labels comprise: a falling-asleep preparation stage, a light sleep stage, a deep sleep stage and a deepest sleep stage. The model structure and model training method of the sleep stage classification model can be selected from the prior art, and will not be described here.
S6: and generating each household sleep-aiding control instruction according to the target sleep stage.
Specifically, each home sleep-aiding control instruction is generated according to the sleep-aiding configuration corresponding to the target sleep stage and the smart home devices in the room corresponding to the target room identifier; that is, a home sleep-aiding control instruction is generated for each device in the intersection of the devices listed in the sleep-aiding configuration and the smart home devices in the room corresponding to the target room identifier. The smart home devices in that room are then controlled to work according to the home sleep-aiding control instructions, providing a sleep environment for the target user.
The sleep-aiding configuration describes which smart home devices need to be turned on.
Through the home sleep-aiding control instructions, the smart home devices can be controlled to, for example, adjust the smart curtains to improve the room lighting, play music suitable for the target user to fall asleep to, turn on the humidifier, adjust the lights, adjust the phone ringtone, and adjust the doorbell ringtone.
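A minimal sketch of the intersection logic described above; the configuration format (a mapping from device identifier to desired settings) and the instruction shape are assumptions:

```python
def sleep_aid_instructions(stage_config: dict, room_devices: set) -> list:
    """stage_config: device identifier -> desired settings for the target sleep stage.
    room_devices: smart home devices in the room corresponding to the target room identifier.
    One instruction per device in the intersection of the two sets."""
    controllable = set(stage_config) & room_devices
    return [{"device": dev, "settings": stage_config[dev]} for dev in controllable]
```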
In this embodiment, the smart home devices are controlled by combining the physiological characteristic data, the action data and the user image set determined from the environment image; the user's environmental conditions and action data are taken into account and the sleep state of the target user is determined accurately, which helps the user improve sleep quality, improves the effect of providing a targeted service to the user, and improves user experience.
In one embodiment, the step of generating each home sleep-aiding control instruction according to the target sleep stage includes:
S61: acquiring first environment data of the room corresponding to the target room identifier;
Specifically, the intelligent wearable device and the smart home devices collect the environmental data of the room corresponding to the target room identifier, and the collected environmental data is used as the first environmental data.
The environment data of the room corresponding to the target room identification includes, but is not limited to: image, video, audio, ambient temperature, ambient humidity, ambient dust, light.
S62: according to the target room identifier, acquiring each home device identifier from a preset home equipment library to be used as the controllable home device identifier set;
The home equipment library includes, but is not limited to: room identifier, device type, home device identifier and room type. The room identifier may be a room name or other data uniquely identifying a room. The home device identifier may be data uniquely identifying a smart home device, such as a device name or a device ID.
Specifically, each home device identifier whose room identifier matches the target room identifier is obtained from the preset home equipment library, and the obtained home device identifiers are used as the controllable home device identifier set.
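If, for example, the home equipment library is held as a list of records, the controllable set is a simple filter on the room identifier (the record layout is assumed):

```python
def controllable_device_ids(device_library: list, target_room_id: str) -> set:
    """Each record is assumed to carry 'room_id', 'device_type' and 'device_id' fields."""
    return {rec["device_id"] for rec in device_library
            if rec["room_id"] == target_room_id}

# e.g. controllable_device_ids(library, "bedroom_1") might yield {"lamp_3", "curtain_1"}
```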
S63: if the room type corresponding to the target room identifier is a living room, generating each home sleep-aiding control instruction according to the target sleep stage and each home device identifier in the controllable home device identifier set whose device type is not video playback; if a video playback device in an on state exists among the video playback devices corresponding to the controllable home device identifier set, and it is determined according to the first environment data that a human body other than the target user is present, controlling each video playback device in the on state according to a preset sleep volume threshold and generating a prompt information display instruction for each video playback device in the on state; and if a video playback device in an on state exists among the video playback devices corresponding to the controllable home device identifier set, and it is determined according to the first environment data that no human body other than the target user is present, controlling each video playback device in the on state to switch off;
Specifically, if the room type corresponding to the target room identifier is a living room, the target user wearing the intelligent wearable device is in the living room. Each home sleep-aiding control instruction is therefore generated according to the living-room sleep-aiding configuration corresponding to the target sleep stage and each home device identifier in the controllable home device identifier set whose device type is not video playback; that is, the home sleep-aiding control instructions are generated for the intersection of the devices in the living-room sleep-aiding configuration corresponding to the target sleep stage and the devices in the controllable home device identifier set whose device type is not video playback.
If a video playback device in an on state exists among the video playback devices corresponding to the controllable home device identifier set, and it is determined according to the first environment data that a human body other than the target user is present, then someone other than the target user is in the living room and that device is in use. Taking into account both the target user's need to sleep and the other occupants' interest in watching, each video playback device in the on state is controlled according to the preset sleep volume threshold to reduce its volume, and a prompt information display instruction is generated for each video playback device in the on state, so that under the control of the prompt information display instruction a prompt that someone is sleeping is displayed on the screen of each video playback device in the on state, letting the other occupants of the living room quickly understand that someone is sleeping there.
If a video playback device in an on state exists among the video playback devices corresponding to the controllable home device identifier set, and it is determined according to the first environment data that no human body other than the target user is present, then only the target user is in the living room and no one is watching the video playback devices in the on state. Each video playback device in the on state is therefore controlled to switch off, which improves the target user's sleep environment while reducing power consumption and prolonging device service life.
Whether a human body other than the target user exists in the living room can be determined from the images in the first environment data. The specific steps for doing so are not described herein.
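Putting step S63 together, a hedged sketch of the living-room branch; the volume threshold, the instruction shape and the person-detection result are all assumptions standing in for details the patent leaves open:

```python
SLEEP_VOLUME_THRESHOLD = 10  # assumed preset volume ceiling while someone is asleep

def living_room_instructions(stage_config: dict, device_types: dict,
                             playing: list, other_person_present: bool) -> list:
    """stage_config: device id -> settings from the living-room sleep-aiding configuration.
    device_types: device id -> type for the controllable home device identifier set.
    playing: video playback devices currently in the on state."""
    non_video = {d for d, t in device_types.items() if t != "video_playback"}
    instructions = [{"device": d, "settings": stage_config[d]}
                    for d in non_video & set(stage_config)]
    if playing and other_person_present:   # someone else is watching: lower volume, show prompt
        for dev in playing:
            instructions.append({"device": dev, "volume": SLEEP_VOLUME_THRESHOLD})
            instructions.append({"device": dev, "show_prompt": "someone is sleeping"})
    elif playing:                          # only the target user is present: switch devices off
        instructions.extend({"device": dev, "power": "off"} for dev in playing)
    return instructions
```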
S64: if the room type corresponding to the target room identifier is a bedroom or a study, generating each home sleep-aiding control instruction according to the target sleep stage and the controllable home device identifier set.
Specifically, if the room type corresponding to the target room identifier is a bedroom or a study, each home sleep-aiding control instruction is generated according to the intersection of the devices in the sleep-aiding configuration corresponding to the target sleep stage and the devices in the controllable home device identifier set.
In this embodiment, different strategies are used to generate the home sleep-aiding control instructions depending on whether the target user is in the living room or in a bedroom or study, so that a suitable sleep environment is constructed for the target user while the interests of cohabitants are taken into account, further improving user experience.
In one embodiment, after the step of generating each home sleep-aiding control instruction according to the target sleep stage, the method further includes:
S71: judging the type of sensitive interference according to the first environmental data and the sensitive interference configuration corresponding to the target user;
specifically, the first environmental data and the sensitive interference configuration corresponding to the target user are input into a preset interference type classification model to carry out classification prediction, each vector element with a value larger than a preset probability is found out from vectors obtained through prediction, and the classification label corresponding to the found vector element is used as the sensitive interference type.
The interference type classification model is a multi-classification model, and the value range of the classification label of the interference type classification model comprises: noise interference, light interference, air humidity interference, and air dust interference. The model structure and model training method of the interference type classification model may be selected from the prior art, and will not be described in detail herein.
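Unlike the binary active-state model of step S2, this model is multi-label: every classification label whose predicted probability exceeds the preset probability is kept. A sketch, with the model object again a placeholder:

```python
INTERFERENCE_LABELS = ["noise", "light", "air_humidity", "air_dust"]

def classify_interference(model, env_data, sensitive_config, threshold: float = 0.5) -> list:
    """Return every interference type whose predicted probability exceeds the preset probability."""
    probs = model.predict(env_data, sensitive_config)  # assumed: one probability per label
    return [label for label, p in zip(INTERFERENCE_LABELS, probs) if p > threshold]
```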
S72: and generating each household anti-interference control instruction according to the sensitive interference configuration corresponding to the target user, the target sleep stage, the sensitive interference type and the first environment data.
In a sleeping environment, possible disturbances include: noise interference, light interference, air humidity interference, and air dust interference.
The sensitive interference configuration includes, but is not limited to: user identifier, sleep stage, interference identifier and interference threshold description. The interference identifier may be an interference name, an interference ID or other data uniquely identifying an interference. The interference threshold description describes the threshold at which the interference takes effect; for example, for an interference identified as noise interference, the interference threshold description may be "greater than a certain number of decibels".
Specifically, each interference identifier and each interference threshold description corresponding to the target sleep stage are found in the sensitive interference configuration corresponding to the target user. According to the found interference identifiers, the data corresponding to each of the sensitive interference types is extracted from the first environment data as the data to be analyzed, and the home anti-interference control instructions are generated from the data to be analyzed and the found interference threshold descriptions, so that the smart home devices are controlled through the home anti-interference control instructions. The generated home anti-interference control instructions thus satisfy the found interference threshold descriptions, meeting the target user's personalized anti-interference requirements in the target sleep stage.
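The threshold comparison could look like the following; the layout of the sensitive interference configuration records is an assumption consistent with the fields listed above:

```python
def anti_interference_instructions(sensitive_config: list, stage: str,
                                   interference_types: list, env_data: dict) -> list:
    """sensitive_config entries are assumed to carry 'sleep_stage', 'interference_id'
    and 'threshold' fields; env_data maps interference id -> measured value."""
    instructions = []
    for entry in sensitive_config:
        if entry["sleep_stage"] != stage or entry["interference_id"] not in interference_types:
            continue
        measured = env_data[entry["interference_id"]]  # e.g. measured decibel level for noise
        if measured > entry["threshold"]:              # interference threshold description exceeded
            instructions.append({"interference": entry["interference_id"],
                                 "target_level": entry["threshold"]})
    return instructions
```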
According to the embodiment, each household anti-interference control instruction is generated according to the sensitive interference configuration corresponding to the target user, the target sleep stage, the sensitive interference type and the first environment data, so that a sleep environment is built aiming at sensitive interference of the target user, the effect of providing targeted service for the user is further improved, and the user experience is further improved.
In one embodiment, if the sensitive interference type includes noise interference, the step of generating each home anti-interference control instruction according to the sensitive interference configuration corresponding to the target user, the target sleep stage, the sensitive interference type, and the first environmental data includes:
S711: determining noise data from the first environmental data;
Specifically, the noise data is determined from the audio in the first environmental data.
S712: and generating an active noise reduction control instruction according to the inverted sound wave corresponding to the noise data, wherein the active noise reduction control instruction is used for controlling active noise reduction equipment in a room corresponding to the target room identifier so as to neutralize noise.
Specifically, the active noise reduction control instruction is generated according to the noise data, so that through the active noise reduction control instruction the smart home device is controlled to emit the inverted sound wave corresponding to the noise data, achieving noise reduction by neutralization.
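Active noise reduction emits a waveform in antiphase with the noise, and for a digital sample buffer antiphase is simply sign inversion. A toy sketch; a real system would also have to compensate for latency and the acoustic path:

```python
import numpy as np

def inverted_wave(noise_samples: np.ndarray) -> np.ndarray:
    """Antiphase copy of the noise: played back in sync, the two waves cancel."""
    return -noise_samples

# e.g. a 50 Hz hum sampled at 8 kHz, and its cancelling counterpart
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
hum = np.sin(2 * np.pi * 50 * t)
assert np.allclose(hum + inverted_wave(hum), 0.0)  # ideal cancellation
```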
It will be appreciated that the walls, ceiling and floor of a room are noise-reflecting surfaces arranged as parallel pairs; after noise enters the room, it can be reflected multiple times by these surfaces, prolonging its effect. Of the floor-ceiling pair, for example, an active noise reduction device may be mounted on the ceiling.
It will be appreciated that the sound insulation of glass is lower than that of a wall; therefore, active noise reduction devices may also be mounted on the side of a glass door or window away from the room, reducing the noise entering the room.
In this embodiment, the active noise reduction devices in the room corresponding to the target room identifier are controlled to neutralize noise, reducing the influence of noise interference on the target user and further improving user experience.
In an embodiment, before the step of determining the type of sensitive interference according to the first environmental data and the sensitive interference configuration corresponding to the target user, the method further includes:
S701: taking any one sleep stage as a sleep stage to be analyzed;
Specifically, any one of the sleep stages is taken as the sleep stage to be analyzed.
The sleep stages include: a falling-asleep preparation stage, a light sleep stage, a deep sleep stage and a deepest sleep stage.
S702: when the target user enters the sleep stage to be analyzed, acquiring environment data corresponding to the target user as second environment data;
Specifically, when the target user enters the sleep stage to be analyzed, the environmental data corresponding to the target user is acquired as the second environmental data, so as to obtain environmental data before any adjustment.
S703: when the target user is in the sleep stage to be analyzed, respectively adopting each preset adjusting scheme to carry out environmental adjustment, acquiring the adjusted environmental data as third environmental data and acquiring physiological characteristic data as second physiological characteristic data;
modulation schemes include, but are not limited to: generating analog noise, adjusting light, and adjusting air humidity.
Specifically, when the target user is in the sleep stage to be analyzed, each preset adjustment scheme is applied in turn to adjust the environment; after each adjustment scheme is applied, the adjusted environmental data is taken as the third environmental data and the physiological characteristic data is taken as the second physiological characteristic data, so that the environmental data and physiological characteristic data after each environmental adjustment are obtained.
S704: determining single-stage sensitive interference information corresponding to the to-be-analyzed sleep stage according to the second environmental data, the third environmental data and the second physiological characteristic data;
Specifically, a pre-trained deep learning model is used to analyze the second environmental data, the third environmental data and the second physiological characteristic data, so as to determine the single-stage sensitive interference information corresponding to the sleep stage to be analyzed.
It can be appreciated that, by repeatedly executing steps S701 to S704, the single-stage sensitive interference information corresponding to each sleep stage of the target user can be determined.
Single-stage sensitive interference information includes, but is not limited to: sleep stage, interference identification, and interference threshold description.
S705: and taking all the single-stage sensitive interference information corresponding to all the sleeping stages corresponding to the target user as the sensitive interference configuration.
This embodiment, by actively adjusting the environment, finds out the sensitive interference information that disturbs the target user's sleep in each sleep stage, and provides a basis for constructing a sleep environment better suited to the target user.
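The calibration flow of steps S701 to S705 can be pictured with the following minimal sketch. The stage labels, scheme names and the probe_threshold helper (standing in for the pre-trained deep learning model of step S704) are assumptions invented for the example; the record fields mirror the single-stage sensitive interference information named above.

```python
from dataclasses import dataclass


@dataclass
class StageInterference:
    sleep_stage: str      # which sleep stage the record applies to
    interference_id: str  # e.g. "noise", "light", "humidity"
    threshold: float      # level above which sleep is disturbed


SLEEP_STAGES = ["preparation", "shallow", "deep"]            # assumed labels
ADJUSTMENT_SCHEMES = ["simulated_noise", "light", "humidity"]


def probe_threshold(stage: str, scheme: str) -> float:
    """Placeholder for step S704: compare the second/third environment
    data and second physiological data with the pre-trained model and
    return the interference threshold found for this stage and scheme."""
    return 0.5  # dummy value for the sketch


def build_sensitive_interference_config() -> list[StageInterference]:
    config = []
    for stage in SLEEP_STAGES:                # S701: pick a stage to analyze
        for scheme in ADJUSTMENT_SCHEMES:     # S703: apply each scheme
            threshold = probe_threshold(stage, scheme)   # S704
            config.append(StageInterference(stage, scheme, threshold))
    return config                             # S705: the full configuration


print(build_sensitive_interference_config()[0])
```

Re-running the outer loop for every stage reproduces the effect of repeatedly executing steps S701 to S704 described above.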
In one embodiment, the step of acquiring an image of the target user according to the first device location and the respective image acquisition devices corresponding to the target room identifier to obtain a user image set includes:
S41: according to the first equipment position, controlling any one of the image acquisition equipment corresponding to the target room identifier to acquire a second image for the target user;
Specifically, any one of the image acquisition devices corresponding to the target room identifier is controlled to acquire an image of the first device position, and the acquired image is taken as the second image.
S42: according to the second image, the image acquisition equipment facing the face of the target user is found out from the image acquisition equipment corresponding to the target room identifier and used as first hit equipment;
Specifically, the posture of the target user can be determined from the image region occupied by the target user in the second image, and the facing direction of the target user's face is determined from the posture; according to this facing direction, the image acquisition devices facing the target user's face are found among the image acquisition devices corresponding to the target room identifier, and the devices so found are taken as first hit devices.
S43: according to the position of the first device, the first hit device is controlled to shoot the face of the target user, and a face image set is obtained;
Specifically, the shooting parameters of the first hit device are determined according to the distance between the first device position and the installation position of the first hit device; the first hit device is then controlled with the determined shooting parameters to shoot face images of the target user's face at the first device position, and all captured face images are taken as the face image set. Face images are thus shot specifically of the target user's face, providing a basis for analyzing the target user's state from the face images.
S44: according to the first device position, finding out, from the image acquisition devices corresponding to the target room identifier, the image acquisition device closest to the first device position, as a second hit device;
Specifically, the image acquisition device closest to the first device position is found among the image acquisition devices corresponding to the target room identifier, and each device so found is taken as a second hit device.
It can be understood that other screening rules may also be used to screen the image capturing devices from the image capturing devices corresponding to the target room identifier according to the first device position, so as to serve as second hit devices.
S45: according to the position of the first equipment, the second hit equipment is controlled to shoot the target user, and a human body image set is obtained;
Specifically, the second hit device is controlled to shoot the whole body of the target user at the first device position, and all captured human body images are taken as the human body image set.
S46: and taking the human face image set and the human body image set as the user image set.
Specifically, the face image set and the human body image set are combined, and the combined data is taken as the user image set.
It can be understood that if none of the image acquisition devices corresponding to the target room identifier faces the target user's face, the face image set is empty.
In this embodiment, the position information within the house three-dimensional map is determined through the wearable device, so that the position of the user relative to each image acquisition device is determined; the image acquisition devices best placed to shoot the user are then selected to capture the user's image information. Automatic shooting of the face image set and the human body image set is thereby realized, the accuracy of the captured target information is improved, a basis is provided for accurately determining the target sleep stage of the target user based on the user image set, and the degree of automation of the method is increased.
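The two screening rules of steps S42 and S44 can be sketched as follows; the camera positions, viewing directions and the dot-product facing test are illustrative assumptions (the embodiment notes above that other screening rules may equally be used).

```python
import numpy as np

# Hypothetical room setup: each image acquisition device has a position
# and a unit viewing direction in the house three-dimensional map frame.
cameras = {
    "cam_ceiling": {"pos": np.array([2.0, 2.0, 2.6]),
                    "dir": np.array([0.0, 0.0, -1.0])},
    "cam_wall":    {"pos": np.array([0.0, 2.0, 1.5]),
                    "dir": np.array([1.0, 0.0, 0.0])},
}


def face_facing_cameras(face_dir: np.ndarray, threshold: float = -0.5) -> list[str]:
    """S42 sketch: keep cameras whose viewing direction opposes the face
    direction (sufficiently negative dot product), i.e. first hit devices."""
    face_dir = face_dir / np.linalg.norm(face_dir)
    return [name for name, cam in cameras.items()
            if float(np.dot(cam["dir"], face_dir)) < threshold]


def nearest_camera(device_pos: np.ndarray) -> str:
    """S44 sketch: the device closest to the first device position
    becomes the second hit device."""
    return min(cameras,
               key=lambda n: np.linalg.norm(cameras[n]["pos"] - device_pos))


print(face_facing_cameras(np.array([-1.0, 0.0, 0.0])))  # user faces the wall camera
print(nearest_camera(np.array([0.5, 2.0, 1.0])))        # -> "cam_wall"
```

Either rule yields a deterministic choice of hit device from the first device position alone, which is what allows the capture to run without user intervention.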
In an embodiment, before the step of determining the target room identifier and the first device position corresponding to the smart wearable device according to the preset three-dimensional map of the house, the preset room standard image library, and the environment image, the method further includes:
S11: acquiring a map construction signal;
Specifically, the map construction signal may be obtained by scanning with the depth camera of the intelligent wearable device, may be input by the user, or may be sent by a third-party application.
The map construction signal is a signal for constructing a three-dimensional map of a house.
S12: responding to the map construction signal, and acquiring each room depth image shot by the intelligent wearable equipment, and a room number and a room type corresponding to each room depth image;
Specifically, when the map construction signal is received, each room depth image shot by the intelligent wearable device, and the room number and room type corresponding to each room depth image, may be acquired.
Each pixel point in the room depth image includes color information and depth information.
S13: constructing a three-dimensional map according to each room depth image, the room number and the room type corresponding to each room depth image, and obtaining the house three-dimensional map;
Specifically, a single-room three-dimensional map is constructed from the room depth images corresponding to the same room number; the single-room three-dimensional maps are then connected, according to each single-room three-dimensional map and the room type corresponding to it, to form the house three-dimensional map.
It will be appreciated that the method for constructing a three-dimensional map (i.e., constructing a three-dimensional model) based on the depth image may be selected from the prior art, and will not be described herein.
Optionally, the three-dimensional map of the house further carries a geographic location, and the last stored positioning location before the intelligent wearable device acquires the map building signal can be used as the geographic location carried by the three-dimensional map of the house.
It will be appreciated that the geographic location entered by the user may also be obtained as the geographic location carried by the three-dimensional map of the house.
When the user lives and works in a plurality of houses, the target user corresponds to a plurality of house three-dimensional maps; with the aid of the geographic locations carried by the house three-dimensional maps, the house three-dimensional map of the house in which the target user is currently located can be determined automatically and accurately.
S14: taking each room depth image as the room standard image library;
It will be appreciated that in another embodiment of the application, other methods may be used to construct a three-dimensional map of a house and determine a library of room standard images, such as constructing a three-dimensional map of a house and determining a library of room standard images based on three-dimensional images captured by a three-dimensional camera.
S15: and determining the position information of each intelligent home device in the three-dimensional map.
After the three-dimensional map is constructed, it can be annotated according to the position information of each intelligent home device. Specifically, the position information of each intelligent home device can be obtained from the intelligent Internet-of-Things service center (i.e. the central console), and the intelligent wearable device then maps the position information of each intelligent home device into the three-dimensional map, so that the intelligent wearable device can subsequently find and determine the other available controllable intelligent home devices quickly according to its own position information.
The embodiment realizes building of the house three-dimensional map and determination of the room standard image library based on the intelligent wearable equipment, and simplifies operation.
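Since step S13 defers to the prior art for three-dimensional reconstruction from depth images, the following minimal sketch shows one such prior-art building block: back-projecting a single room depth image into a point cloud with a pinhole camera model. The camera intrinsics and depth format are assumptions for the example; a complete pipeline would additionally register and fuse the per-image clouds into the single-room and house three-dimensional maps.

```python
import numpy as np

# Assumed pinhole intrinsics of the wearable's depth camera.
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point


def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """Back-project a depth image (meters, HxW) to an (N, 3) array of
    3D points in the camera frame: x = (u-cx)*z/fx, y = (v-cy)*z/fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop pixels with no depth reading


# Hypothetical usage: a flat wall two meters from the camera.
depth = np.full((480, 640), 2.0)
cloud = depth_to_points(depth)
print(cloud.shape)              # (307200, 3)
```

Each resulting cloud would then be tagged with its room number and room type before the single-room maps are connected into the house three-dimensional map.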
As shown in FIG. 2, the present application further provides a control device based on the intelligent wearable device, where the device includes:
the data acquisition module 801 is configured to acquire first physiological characteristic data and first action data acquired by the intelligent wearable device for a target user;
A judging module 802, configured to judge whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
the position information determining module 803 is configured to: if the target user is in an inactive state, obtain an environment image acquired by the intelligent wearable device, and determine a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
a user image set determining module 804, configured to acquire an image of the target user according to the first device position and each image acquisition device corresponding to the target room identifier, so as to obtain a user image set;
a target sleep stage determining module 805 configured to determine a target sleep stage of the target user according to the user image set, the first physiological characteristic data, and the first action data;
the instruction generating module 806 is configured to generate each household sleep-aiding control instruction according to the target sleep stage.
In this embodiment, the intelligent home devices are controlled on the basis of the physiological characteristic data, the action data, and the user image set determined from the environment image; both the environmental conditions and the user's actions are taken into account, so that the sleep state of the target user is determined accurately. This helps the user improve sleep quality, improves the effect of providing targeted services to the user, and enhances the user experience.
FIG. 3 illustrates an internal block diagram of a computer device in one embodiment. The computer device may specifically be a terminal or a server. As shown in FIG. 3, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by the processor, causes the processor to implement the control method based on the intelligent wearable device. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the control method based on the intelligent wearable device. It will be appreciated by those skilled in the art that the structure shown in FIG. 3 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is presented comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
Acquiring first physiological characteristic data and first action data acquired by intelligent wearable equipment for a target user;
judging whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
if the target user is in an inactive state, acquiring an environment image acquired by the intelligent wearable device, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
acquiring images of the target user according to the first equipment position and each image acquisition equipment corresponding to the target room identifier to obtain a user image set;
determining a target sleep stage of the target user according to the user image set, the first physiological characteristic data and the first action data;
and generating each household sleep-aiding control instruction according to the target sleep stage.
In this embodiment, the intelligent home devices are controlled on the basis of the physiological characteristic data, the action data, and the user image set determined from the environment image; both the environmental conditions and the user's actions are taken into account, so that the sleep state of the target user is determined accurately. This helps the user improve sleep quality, improves the effect of providing targeted services to the user, and enhances the user experience.
In one embodiment, a smart wearable device is presented, comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring first physiological characteristic data and first action data acquired by intelligent wearable equipment for a target user;
judging whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
if the target user is in an inactive state, acquiring an environment image acquired by the intelligent wearable device, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
acquiring images of the target user according to the first equipment position and each image acquisition equipment corresponding to the target room identifier to obtain a user image set;
determining a target sleep stage of the target user according to the user image set, the first physiological characteristic data and the first action data;
and generating each household sleep-aiding control instruction according to the target sleep stage.
In this embodiment, the intelligent home devices are controlled on the basis of the physiological characteristic data, the action data, and the user image set determined from the environment image; both the environmental conditions and the user's actions are taken into account, so that the sleep state of the target user is determined accurately. This helps the user improve sleep quality, improves the effect of providing targeted services to the user, and enhances the user experience.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
acquiring first physiological characteristic data and first action data acquired by intelligent wearable equipment for a target user;
judging whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
if the target user is in an inactive state, acquiring an environment image acquired by the intelligent wearable device, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
acquiring images of the target user according to the first equipment position and each image acquisition equipment corresponding to the target room identifier to obtain a user image set;
Determining a target sleep stage of the target user according to the user image set, the first physiological characteristic data and the first action data;
and generating each household sleep-aiding control instruction according to the target sleep stage.
In this embodiment, the intelligent home devices are controlled on the basis of the physiological characteristic data, the action data, and the user image set determined from the environment image; both the environmental conditions and the user's actions are taken into account, so that the sleep state of the target user is determined accurately. This helps the user improve sleep quality, improves the effect of providing targeted services to the user, and enhances the user experience.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, where the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in detail herein without thereby limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (6)

1. A control method based on an intelligent wearable device, the method comprising:
acquiring first physiological characteristic data and first action data acquired by intelligent wearable equipment for a target user;
judging whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
if the target user is in an inactive state, acquiring an environment image acquired by the intelligent wearable device, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
Acquiring images of the target user according to the first equipment position and each image acquisition equipment corresponding to the target room identifier to obtain a user image set;
determining a target sleep stage of the target user according to the user image set, the first physiological characteristic data and the first action data;
generating each household sleep-aiding control instruction according to the target sleep stage: acquiring first environment data of a room corresponding to the target room identifier;
according to the target room identification, acquiring each household equipment identification from a preset household equipment library to be used as a controllable household equipment identification set;
if the room type corresponding to the target room identifier is a living room, generating each household sleep-aiding control instruction according to the target sleep stage and each household equipment identification, in the controllable household equipment identification set, whose equipment type is not video playing; if a video playing device in an on state exists among the video playing devices corresponding to the controllable household equipment identification set, and it is determined from the first environment data that a human body other than the target user is present, controlling each video playing device in the on state according to a preset sleep volume threshold, and generating a prompt information display instruction for each video playing device in the on state; and if a video playing device in an on state exists among the video playing devices corresponding to the controllable household equipment identification set, and it is determined from the first environment data that no human body other than the target user is present, controlling each video playing device in the on state to be turned off;
if the room type corresponding to the target room identifier is a bedroom or a study, generating each household sleep-aiding control instruction according to the target sleep stage and the controllable household equipment identification set;
taking any one sleep stage as a sleep stage to be analyzed;
when the target user enters the sleep stage to be analyzed, acquiring environment data corresponding to the target user as second environment data;
when the target user is in the sleep stage to be analyzed, respectively adopting each preset adjusting scheme to carry out environmental adjustment, acquiring the adjusted environmental data as third environmental data and acquiring physiological characteristic data as second physiological characteristic data;
determining single-stage sensitive interference information corresponding to the to-be-analyzed sleep stage according to the second environmental data, the third environmental data and the second physiological characteristic data;
taking all the single-stage sensitive interference information corresponding to all the sleeping stages corresponding to the target user as the sensitive interference configuration;
judging the type of sensitive interference according to the first environmental data and the sensitive interference configuration corresponding to the target user;
if the sensitive interference type includes noise interference, the step of generating each home anti-interference control instruction according to the sensitive interference configuration corresponding to the target user, the target sleep stage, the sensitive interference type and the first environment data includes:
Determining noise data from the first environmental data;
and generating an active noise reduction control instruction according to the inverted sound wave corresponding to the noise data, wherein the active noise reduction control instruction is used for controlling active noise reduction equipment in a room corresponding to the target room identifier so as to neutralize noise.
2. The control method based on an intelligent wearable device according to claim 1, wherein the step of acquiring an image of the target user according to the first device position and each image acquisition device corresponding to the target room identifier to obtain a user image set includes:
according to the first equipment position, controlling any one of the image acquisition equipment corresponding to the target room identifier to acquire a second image for the target user;
according to the second image, the image acquisition equipment facing the face of the target user is found out from the image acquisition equipment corresponding to the target room identifier and used as first hit equipment;
according to the position of the first device, the first hit device is controlled to shoot the face of the target user, and a face image set is obtained;
according to the first device position, finding out, from the image acquisition devices corresponding to the target room identifier, the image acquisition device closest to the first device position, as a second hit device;
according to the position of the first equipment, the second hit equipment is controlled to shoot the target user, and a human body image set is obtained;
and taking the human face image set and the human body image set as the user image set.
3. The method for controlling an intelligent wearable device according to claim 1, wherein before the step of determining the target room identifier and the first device position corresponding to the intelligent wearable device according to the preset three-dimensional map of the house, the preset standard image library of the room, and the environment image, the method further comprises:
acquiring a map construction signal;
responding to the map construction signal, and acquiring each room depth image shot by the intelligent wearable equipment, and a room number and a room type corresponding to each room depth image;
constructing a three-dimensional map according to each room depth image, the room number and the room type corresponding to each room depth image, and obtaining the house three-dimensional map;
Taking each room depth image as the room standard image library;
and determining the position information of each intelligent home device in the three-dimensional map.
4. A control device based on an intelligent wearable device, characterized in that the device comprises:
the data acquisition module is used for acquiring first physiological characteristic data and first action data acquired by the intelligent wearable device for the target user;
the judging module is used for judging whether the target user is in an inactive state according to the first physiological characteristic data and the first action data;
the position information determining module is used for: if the target user is in an inactive state, acquiring an environment image acquired by the intelligent wearable device, and determining a target room identifier and a first device position corresponding to the intelligent wearable device according to a preset house three-dimensional map, a preset room standard image library and the environment image;
the user image set determining module is used for acquiring images of the target user according to the first equipment position and each image acquisition equipment corresponding to the target room identifier to obtain a user image set;
the target sleeping stage determining module is used for determining a target sleeping stage of the target user according to the user image set, the first physiological characteristic data and the first action data;
The instruction generation module is used for generating each household sleep-aiding control instruction according to the target sleep stage: acquiring first environment data of a room corresponding to the target room identifier;
according to the target room identification, acquiring each household equipment identification from a preset household equipment library to be used as a controllable household equipment identification set;
if the room type corresponding to the target room identifier is a living room, generating each household sleep-aiding control instruction according to the target sleep stage and each household equipment identification, in the controllable household equipment identification set, whose equipment type is not video playing; if a video playing device in an on state exists among the video playing devices corresponding to the controllable household equipment identification set, and it is determined from the first environment data that a human body other than the target user is present, controlling each video playing device in the on state according to a preset sleep volume threshold, and generating a prompt information display instruction for each video playing device in the on state; and if a video playing device in an on state exists among the video playing devices corresponding to the controllable household equipment identification set, and it is determined from the first environment data that no human body other than the target user is present, controlling each video playing device in the on state to be turned off;
if the room type corresponding to the target room identifier is a bedroom or a study, generating each household sleep-aiding control instruction according to the target sleep stage and the controllable household equipment identification set;
taking any one sleep stage as a sleep stage to be analyzed;
when the target user enters the sleep stage to be analyzed, acquiring environment data corresponding to the target user as second environment data;
when the target user is in the sleep stage to be analyzed, respectively adopting each preset adjusting scheme to carry out environmental adjustment, acquiring the adjusted environmental data as third environmental data and acquiring physiological characteristic data as second physiological characteristic data;
determining single-stage sensitive interference information corresponding to the to-be-analyzed sleep stage according to the second environmental data, the third environmental data and the second physiological characteristic data;
taking all the single-stage sensitive interference information corresponding to all the sleeping stages corresponding to the target user as the sensitive interference configuration;
judging the type of sensitive interference according to the first environmental data and the sensitive interference configuration corresponding to the target user;
if the sensitive interference type includes noise interference, the step of generating each home anti-interference control instruction according to the sensitive interference configuration corresponding to the target user, the target sleep stage, the sensitive interference type and the first environment data includes:
Determining noise data from the first environmental data;
and generating an active noise reduction control instruction according to the inverted sound wave corresponding to the noise data, wherein the active noise reduction control instruction is used for controlling active noise reduction equipment in a room corresponding to the target room identifier so as to neutralize noise.
5. A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method of any one of claims 1 to 3.
6. A smart wearable device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1-3.
CN202310124977.3A 2023-02-01 2023-02-01 Control method, device, equipment and medium based on intelligent wearable equipment Active CN116300491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310124977.3A CN116300491B (en) 2023-02-01 2023-02-01 Control method, device, equipment and medium based on intelligent wearable equipment

Publications (2)

Publication Number Publication Date
CN116300491A CN116300491A (en) 2023-06-23
CN116300491B true CN116300491B (en) 2023-11-17

Family

ID=86795232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310124977.3A Active CN116300491B (en) 2023-02-01 2023-02-01 Control method, device, equipment and medium based on intelligent wearable equipment

Country Status (1)

Country Link
CN (1) CN116300491B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8193941B2 (en) * 2009-05-06 2012-06-05 Empire Technology Development Llc Snoring treatment
US10991355B2 (en) * 2019-02-18 2021-04-27 Bose Corporation Dynamic sound masking based on monitoring biosignals and environmental noises

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103228203A (en) * 2010-12-03 2013-07-31 皇家飞利浦电子股份有限公司 Sleep disturbance monitoring apparatus
WO2016180163A1 (en) * 2015-11-02 2016-11-17 中兴通讯股份有限公司 Household control and adjustment method and system
CN106017509A (en) * 2016-05-30 2016-10-12 北京航空航天大学 Method for determining anti-interference attitude in multi-source interference environment, and test platform
CN110247717A (en) * 2018-03-07 2019-09-17 索尼公司 For the electronic equipment of wireless communication, method and computer readable storage medium
CN110559537A (en) * 2019-09-12 2019-12-13 杭州趣安科技有限公司 Relaxation auxiliary sleep-aiding system and equipment for intervening sleep based on physiological indexes
WO2021169877A1 (en) * 2020-02-27 2021-09-02 青岛海尔空调器有限总公司 Method for intelligently adjusting sleep environment, and sleep environment adjustment system
CN113310194A (en) * 2020-02-27 2021-08-27 青岛海尔空调器有限总公司 Intelligent adjustment method and sleep environment adjustment system for sleep environment
CN112213951A (en) * 2020-09-11 2021-01-12 深圳数联天下智能科技有限公司 Linkage control method and device for mattress
CN113080897A (en) * 2021-04-02 2021-07-09 北京正气和健康科技有限公司 System and method for evaluating sleep onset time based on physiological and environmental data analysis
WO2022218038A1 (en) * 2021-04-15 2022-10-20 海尔(深圳)研发有限责任公司 Method, apparatus, and system for environment control in smart home system, and device
CN114296356A (en) * 2021-11-16 2022-04-08 宁波小匠物联网科技有限公司 Intelligent temperature control device and control method
CN114569863A (en) * 2022-05-07 2022-06-03 深圳市心流科技有限公司 Sleep-assisted awakening method and system, electronic equipment and storage medium
CN115137310A (en) * 2022-07-26 2022-10-04 池瑾璟 Sleep data management method and system based on music identification of Internet of things
CN115407675A (en) * 2022-09-05 2022-11-29 深圳创维-Rgb电子有限公司 Sleep-aiding control method and device based on Internet of things, electronic equipment and storage medium
CN115137315A (en) * 2022-09-06 2022-10-04 深圳市心流科技有限公司 Sleep environment scoring method, device, terminal and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Smart home energy optimization based on wearable-device sensing; Chen Siyun; Liu Ting; Shen Chao; Su Man; Gao Feng; Xu Zhanbo; Shi Jiayue; Jia Zhanpei; Journal of Computer Research and Development (03); pp. 204-215 *

Also Published As

Publication number Publication date
CN116300491A (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN110291489B (en) Computationally efficient human identification intelligent assistant computer
US11010601B2 (en) Intelligent assistant device communicating non-verbal cues
US20180349708A1 (en) Methods and Systems for Presenting Image Data for Detected Regions of Interest
US20180330169A1 (en) Methods and Systems for Presenting Image Data for Detected Regions of Interest
CN107883541B (en) Air conditioner control method and device
CN112166350B (en) System and method for ultrasonic sensing in smart devices
JP6698544B2 (en) System and method for output display generation based on ambient conditions
US20150302725A1 (en) Monitoring & security systems and methods with learning capabilities
US10559172B1 (en) Customized notifications based on device characteristics
US11968412B1 (en) Bandwidth estimation for video streams
CN111465983A (en) System and method for determining occupancy
WO2018208365A1 (en) Methods and systems for presenting image data for detected regions of interest
US10791607B1 (en) Configuring and controlling light emitters
CN114859749B (en) Intelligent home management method and system based on Internet of things
CN111147935A (en) Control method of television, intelligent household control equipment and storage medium
KR20190104798A (en) Electronic device and method for controlling external electronic device based on use pattern information corresponding to user
CN116300491B (en) Control method, device, equipment and medium based on intelligent wearable equipment
US11586410B2 (en) Information processing device, information processing terminal, information processing method, and program
CN114500590A (en) Intelligent device voice broadcasting method and device, computer device and storage medium
JP2005199373A (en) Communication device and communication method
CN111919250B (en) Intelligent assistant device for conveying non-language prompt
CN114636231A (en) Control method and device of air conditioner, terminal and medium
CN114928623A (en) Apparatus and method for controlling information communication
CN113542689A (en) Image processing method based on wireless Internet of things and related equipment
Lu Dynamic HVAC operations based on occupancy patterns with real-time vision-based system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant