CN113488078B - Health state monitoring method and equipment - Google Patents

Health state monitoring method and equipment

Info

Publication number
CN113488078B
Authority
CN
China
Prior art keywords
sound
monitoring object
information
target monitoring
intelligent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010332434.7A
Other languages
Chinese (zh)
Other versions
CN113488078A (en)
Inventor
刘波
高雪松
孟卫明
王月岭
王彦芳
唐至威
刘帅帅
蒋鹏民
田羽慧
陈维强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Co Ltd filed Critical Hisense Co Ltd
Priority to CN202010332434.7A priority Critical patent/CN113488078B/en
Publication of CN113488078A publication Critical patent/CN113488078A/en
Application granted granted Critical
Publication of CN113488078B publication Critical patent/CN113488078B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/66: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01J: MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J5/00: Radiation pyrometry, e.g. infrared or optical thermometry
    • G01J5/0022: Radiation pyrometry, e.g. infrared or optical thermometry for sensing the radiation of moving bodies
    • G01J5/0025: Living bodies
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • G01S5/20: Position of source determined by a plurality of spaced direction-finders
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Abstract

The application discloses a health state monitoring method and equipment. In the application, an intelligent housekeeping device receives sound data and obtains sound characteristic information from it; if the sound characteristic information matches the characteristic information of sounds emitted when the health state is abnormal, the device determines the sound source position of the sound data and sends a first control instruction that triggers a camera to acquire an infrared thermal imaging image and a visible light image of the area corresponding to the sound source position. The device then obtains the face of the target monitoring object in the visible light image and, according to the face position, obtains the body temperature of the target monitoring object from the infrared thermal imaging image. It further sends a second control instruction to the intelligent wearable device corresponding to the target monitoring object so as to obtain the health state data detected by the intelligent wearable device.

Description

Health state monitoring method and equipment
Technical Field
The application relates to the technical field of smart homes, and in particular to a health state monitoring method and equipment.
Background
With the increasing popularity of smart home devices, the need for health status monitoring is becoming increasingly urgent.
At present, more and more health care and monitoring approaches are based on health data acquisition devices, for example measuring body temperature with a smart thermometer gun or monitoring heart rate with a smart wearable device.
In these approaches, health data can be measured and recorded only when the user actively uses the corresponding smart device; such monitoring is passive and offers a low level of intelligence.
Disclosure of Invention
The embodiment of the application provides a health state monitoring method and equipment, so as to improve the intelligence of health state monitoring.
According to an aspect in an exemplary embodiment, there is provided a health status monitoring method, the method comprising:
receiving sound data and obtaining sound characteristic information of the sound data;
if the sound characteristic information matches the characteristic information of sounds emitted when the health state is abnormal, acquiring the sound source position of the sound data;
sending a first control instruction, where the first control instruction is used to trigger a camera to acquire an infrared thermal imaging image and a visible light image of an area corresponding to the sound source position;
receiving infrared thermal imaging image data and visible light image data, obtaining the face of a target monitoring object in the visible light image, and obtaining the body temperature of the target monitoring object in the infrared thermal imaging image according to the face position of the target monitoring object.
In some embodiments of the present application, the method further comprises: sending the information of the target monitoring object and the health state information of the target monitoring object to a server, or sending the health state information of the target monitoring object to a user terminal requesting a query; wherein the health state information includes at least the body temperature of the target monitoring object.
In some embodiments of the present application, after obtaining the body temperature of the target monitoring object in the infrared thermal imaging image, the method further includes: judging whether the body temperature of the target monitoring object is in a set normal body temperature range, and if the body temperature of the target monitoring object is not in the normal body temperature range, sending information of the target monitoring object and health state information of the target monitoring object to a server; wherein the health status information includes at least a body temperature of the target monitored subject.
In some embodiments of the present application, if the extracted sound feature information matches the feature information of the sound emitted when the health status is abnormal, the method further includes:
sending a second control instruction to intelligent wearable equipment corresponding to the target monitoring object;
receiving health state detection data sent by the intelligent wearable equipment according to the second control instruction;
and sending the information of the target monitoring object and the health state information of the target monitoring object to a server, wherein the health state information at least comprises the body temperature of the target monitoring object and the health state detection data of the intelligent wearable device.
In some embodiments of the present application, sending the first control instruction includes: judging, according to a preset control strategy, whether the sound generated when the health state is abnormal meets the monitoring control condition specified by the control strategy, and if so, sending the first control instruction.
In some embodiments of the present application, the sound generated when the health state is abnormal includes: at least one of coughing sounds and sneezing sounds.
In some embodiments of the present application, the acoustic sensor comprises a microphone array, and the target camera comprises an infrared camera module and a visible light camera module.
According to an aspect of exemplary embodiments, there is provided a health status monitoring method, comprising:
receiving sound data and obtaining sound characteristic information of the sound data;
if the sound characteristic information matches the characteristic information of sounds emitted when the health state is abnormal, sending, according to the target monitoring object corresponding to the sound characteristic information, a control instruction to the intelligent terminal device corresponding to that target monitoring object, where the control instruction is used to trigger the intelligent terminal device to acquire the health state information of the target monitoring object;
and receiving the health state information acquired and sent by the intelligent terminal device.
In some embodiments of the present application, the method further comprises: sending the information of the target monitoring object and the health state information of the target monitoring object to a server, or sending the health state information of the target monitoring object to a user terminal requesting a query.
According to an aspect in an exemplary embodiment, there is provided an intelligent housekeeping device configured to:
receiving sound data and acquiring sound characteristic information of the sound data;
if the sound characteristic information matches the characteristic information of sounds emitted when the health state is abnormal, acquiring the sound source position of the sound data;
sending a first control instruction, where the first control instruction is used to trigger a camera to acquire an infrared thermal imaging image and a visible light image of an area corresponding to the sound source position;
receiving infrared thermal imaging image data and visible light image data, obtaining the face of a target monitoring object in the visible light image, and obtaining the body temperature of the target monitoring object in the infrared thermal imaging image according to the face position of the target monitoring object.
In some embodiments of the present application, the intelligent housekeeping device is further configured to: send the information of the target monitoring object and the health state information of the target monitoring object to a server, or send the health state information of the target monitoring object to a user terminal requesting a query; wherein the health state information includes at least the body temperature of the target monitoring object.
In some embodiments of the present application, the intelligent housekeeping device is further configured to:
if the extracted sound characteristic information matches the characteristic information of sounds emitted when the health state is abnormal, sending a second control instruction to the intelligent wearable device corresponding to the target monitoring object;
receiving health state detection data sent by the intelligent wearable device according to the second control instruction;
and sending the information of the target monitoring object and the health state information of the target monitoring object to a server, wherein the health state information at least comprises the body temperature of the target monitoring object and the health state detection data of the intelligent wearable device.
According to an aspect in an exemplary embodiment, there is provided an intelligent housekeeping device configured to:
receiving sound data and obtaining sound characteristic information of the sound data;
if the sound characteristic information matches the characteristic information of sounds emitted when the health state is abnormal, sending, according to the target monitoring object corresponding to the sound characteristic information, a control instruction to the intelligent wearable device corresponding to that target monitoring object, where the control instruction is used to trigger the intelligent wearable device to acquire the health state information of the target monitoring object;
and receiving the health state information acquired and sent by the intelligent wearable device.
In some embodiments of the present application, the intelligent housekeeping device is further configured to: send the information of the target monitoring object and the health state information of the target monitoring object to a server, or send the health state information of the target monitoring object to a user terminal requesting a query.
According to an aspect in an exemplary embodiment, there is provided an intelligent terminal device, including an acoustic sensor, a camera and a controller; the camera comprises a visible light camera module and an infrared thermal imaging camera module; the controller is configured to:
acquiring sound data detected by the acoustic sensor and obtaining sound characteristic information of the sound data;
if the sound characteristic information matches the characteristic information of sounds emitted when the health state is abnormal, acquiring the sound source position of the sound data;
triggering the camera to acquire an infrared thermal imaging image and a visible light image of the area corresponding to the sound source position;
according to the infrared thermal imaging image data and the visible light image data acquired by the camera, the face of the target monitoring object in the visible light image is obtained, and the body temperature of the target monitoring object in the infrared thermal imaging image is obtained according to the face position of the target monitoring object.
In some embodiments of the present application, the controller is further configured to: send the information of the target monitoring object and the health state information of the target monitoring object to a server, or send the health state information of the target monitoring object to a user terminal requesting a query; wherein the health state information includes at least the body temperature of the target monitoring object.
In some embodiments of the present application, when a sound (such as a cough) generated by a user whose health state is abnormal is detected, the intelligent housekeeping device may wake up the intelligent terminal provided with the camera to collect an infrared thermal imaging image and a visible light image of the area corresponding to the sound source position, and perform face recognition on the data in the visible light image; if the face of the target monitoring object is recognized, the body temperature of the target monitoring object is detected from the infrared thermal imaging image, thereby realizing intelligent monitoring of the user's health state and collection of health state data, and improving the user experience.
Provided they conform to common knowledge in the field, the above preferred features can be combined arbitrarily to obtain preferred embodiments of the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application, and that a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 schematically illustrates a structural diagram of a health state monitoring system in an embodiment of the present application;
FIG. 2 schematically illustrates a structural diagram of a camera device in an embodiment of the present application;
FIG. 3 schematically illustrates a structural diagram of an intelligent housekeeping device in an embodiment of the present application;
FIG. 4 schematically illustrates a health state monitoring flow provided by an embodiment of the present application;
FIG. 5 schematically illustrates a structural diagram of an intelligent terminal device provided in an embodiment of the present application.
Detailed Description
The following describes in detail the technical solutions in the embodiments of the present application with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. The term "and/or" merely describes an association relation between associated objects, meaning that three relations may exist; for example, "A and/or B" may represent the three cases where A exists alone, both A and B exist, and B exists alone. In addition, in the description of the embodiments of the present application, "plural" means two or more.
The terms "first", "second" are used in the following for descriptive purposes only and are not to be construed as implying or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", or the like may include one or more such features, either explicitly or implicitly, and in the description of embodiments of the present application, the meaning of "a plurality" is two or more, unless otherwise indicated.
Fig. 1 schematically illustrates a structure of a health monitoring system provided in an embodiment of the present application.
As shown, various types of intelligent monitoring devices (the acoustic sensor 101, the camera 102 and the intelligent wearable device 103 in the figure) are connected to the intelligent housekeeping device 104, and the intelligent housekeeping device 104 is connected to the server 105 via a network (not shown in the figure). The server 105 may also be communicatively connected with a mobile terminal 107 of a user via a mobile communication network 106. In some application scenarios, the intelligent monitoring devices and the intelligent housekeeping device may be connected through a local area network, and the intelligent housekeeping device may be connected with the server through the Internet.
The devices connected with the intelligent housekeeping device, such as the acoustic sensor 101, the camera 102 and the intelligent wearable device 103, may be collectively referred to as intelligent terminal devices. Those intelligent terminal devices that have a monitoring function (such as an audio/video data acquisition function) may be referred to as intelligent monitoring devices (for example, the acoustic sensor 101 and the camera 102). In some embodiments, the sound collection and image collection functions may be integrated in one intelligent terminal device, such as a "smart sensor" that provides both an audio collection function (through built-in acoustic sensors) and an image or video collection function (through built-in cameras). In the embodiments of the present application, the classification and naming of the devices are merely examples and do not limit the present application.
There may be one type of intelligent monitoring device or several types (two or more); for example, the intelligent monitoring devices may include an acoustic sensor with a communication function, a camera, an intelligent wearable device, and the like. The number of intelligent monitoring devices may likewise be one or more. Fig. 1 shows only the acoustic sensor 101, the camera 102 and the intelligent wearable device 103 by way of example. The acoustic sensor and the camera can be fixedly installed in the areas to be monitored, such as the living room, bedroom and kitchen of a residence, and the intelligent wearable device can be worn by the monitored user.
The intelligent monitoring device has a data acquisition function and a communication function, and can send the acquired monitoring data to the intelligent housekeeping device. In some examples, the intelligent monitoring device includes a data collector operable to collect monitoring data and a communicator connected to it operable to send the monitoring data to the intelligent housekeeping device. In other examples, the intelligent monitoring device comprises a data collector, a communicator and a processor connected with both; the data collector collects monitoring data, the processor processes the collected data, and the communicator sends the monitoring data collected by the data collector and/or the information obtained by the processor to the intelligent housekeeping device.
The acoustic sensor in the intelligent monitoring device may be a single microphone or a microphone array, for example an array of 6 microphones arranged in a surrounding structure. A microphone array offers beam forming, noise suppression and voice enhancement in a specific beam direction, can meet the requirements of long-distance, high-quality voice data acquisition, and ensures a higher speech recognition success rate and accuracy. At the same time, because the microphone array supports beam forming, the position of the sound source can be located from the differences between the sound data collected by the individual microphones in the array.
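To make the sound source localization step concrete, the following is a minimal sketch of a time-difference-of-arrival (TDOA) approach for a small microphone array; the array geometry, the grid search over an assumed 6 m x 6 m room and all function names are illustrative assumptions, not the algorithm specified in the application.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second, at room temperature

def estimate_tdoa(sig, ref, sample_rate):
    """Lag (in seconds) by which `sig` trails the reference channel `ref`."""
    corr = np.correlate(sig, ref, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(ref) - 1)
    return lag_samples / sample_rate

def localize_source(signals, mic_positions, sample_rate):
    """Grid-search the 2-D position whose predicted delays best match the measured TDOAs.

    signals: list of 1-D numpy arrays, one per microphone (same capture start time).
    mic_positions: (N, 2) array of microphone coordinates in metres.
    """
    measured = np.array([estimate_tdoa(s, signals[0], sample_rate) for s in signals[1:]])
    best_pos, best_err = None, np.inf
    for x in np.linspace(0.0, 6.0, 61):          # assumed 6 m x 6 m monitored room
        for y in np.linspace(0.0, 6.0, 61):
            dists = np.linalg.norm(mic_positions - np.array([x, y]), axis=1)
            predicted = (dists[1:] - dists[0]) / SPEED_OF_SOUND
            err = float(np.sum((measured - predicted) ** 2))
            if err < best_err:
                best_pos, best_err = (x, y), err
    return best_pos  # estimated (x, y) of the sound source in room coordinates
```

In practice a finer grid or a least-squares solver would replace the coarse search, but the principle of matching measured against predicted inter-microphone delays is the same.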
The camera in the intelligent monitoring device comprises at least an infrared thermal imaging camera module and a visible light camera module. In some embodiments, the infrared thermal imaging camera module and the visible light camera module may be deployed independently to allow independent control; when they capture the same area (such as the living room), their mounting positions are preferably as close as possible, so that the body temperature of an object in the image acquired by the infrared thermal imaging camera module can be determined from the object identified in the image acquired by the visible light camera module. In other embodiments, the infrared thermal imaging camera module and the visible light camera module may be disposed in the same camera device; for example, the two modules may share a pan-tilt, so that their shooting angles can be controlled together. The pan-tilt is a supporting component of the camera modules and can be driven by a control motor to rotate in different dimensions or through different angles, thereby driving the infrared thermal imaging camera module and the visible light camera module to adjust their shooting angles simultaneously.
Fig. 2 exemplarily illustrates the structure of a camera device in an embodiment of the present application. As shown in the figure, the camera device 200 includes an infrared thermal imaging camera module 201 and a visible light camera module 202; a control motor 203 is connected with the pan-tilt of the infrared thermal imaging camera module 201 and the visible light camera module 202 and controls the pan-tilt to rotate so that the infrared thermal imaging camera module 201 and the visible light camera module 202 capture images of a designated area. The communicator 204 may send the infrared thermal imaging image data collected by the infrared thermal imaging camera module 201 and the visible light image data collected by the visible light camera module 202 to the intelligent housekeeping device.
In some embodiments, the intelligent housekeeping device may send a control instruction to the camera device 200. The control instruction may trigger (wake up) the infrared thermal imaging camera module 201 and the visible light camera module 202 to capture images, and may also control the rotation angle of the pan-tilt of the camera device so that the infrared thermal imaging camera module 201 and the visible light camera module 202 face the area indicated by the control instruction.
The smart wearable device 103 may include a smart bracelet or the like, which may be used to monitor human health status data such as heart rate, blood pressure, etc.
The intelligent housekeeping device 104 may be a stand-alone device, may be integrated with other devices, or may be implemented by adding the functionality provided by the embodiments of the present application to another device (for example, a home gateway or a set-top box).
The intelligent housekeeping device 104 may have functions such as sound data processing, image data processing and monitoring control. The sound data processing function is mainly used for processing and analyzing the sound data detected by the acoustic sensor, such as audio noise reduction and echo cancellation, and may also be used for speech recognition, sound source localization, and the like. The speech recognition and sound source localization functions may be completed by the intelligent housekeeping device itself, or may be delegated to the server on request. The image data processing function is mainly used for processing and analyzing the thermal imaging image data acquired by the thermal imaging camera module to obtain human body temperature information, and for processing and analyzing the visible light image data acquired by the visible light camera module to perform face recognition. The image data processing function may likewise be completed by the intelligent housekeeping device itself or delegated to the server on request. The monitoring control function is mainly used for sending control instructions to the camera (the infrared thermal imaging camera module and the visible light camera module) so as to wake up the camera to collect image data and to control its shooting angle. The monitoring control function may also be used to send control instructions to the intelligent wearable device so as to trigger it to detect and report data.
The intelligent housekeeping device 104 may send the health status information of the target monitoring object (for example, the body temperature detected by the infrared thermal imaging camera module and the heart rate detected by the intelligent wearable device) to the server, which forwards it via the mobile communication network to the mobile terminal used by the user associated with the target monitoring object. The intelligent housekeeping device 104 may also provide a health status query function: for example, a user may send a query request to the intelligent housekeeping device 104 through an application (APP) on the terminal (such as a mobile phone) that the user uses, and the intelligent housekeeping device 104 may respond to the request by sending the health status information of the queried target monitoring object to the user terminal, so that the user can view the health status of the target monitoring object through the APP.
Fig. 3 illustrates the structure of an intelligent housekeeping device in an embodiment of the present application. As shown in the figure, the intelligent housekeeping device 300 includes a sound data processing module 301, a monitoring control module 302 and an image data processing module 303, and may further include a query response module 304.
The sound data processing module 301 is configured to receive sound data (for example, sound data detected and sent by an acoustic sensor or sound data sent by a device with an acoustic sensor), perform sound data processing (for example, noise reduction, etc.) and extract sound feature information, and determine whether the extracted sound feature information matches feature information of sound emitted when the health state is abnormal. In other embodiments, the sound data processing module 301 may perform only sound data processing (such as noise reduction, etc.), and send the processed sound data to the server, so as to request the server to perform the extraction and comparison of the sound feature information, and receive the processing result returned by the server.
The monitoring control module 302 is configured to send a control instruction to the target camera if, according to the comparison result obtained by the sound data processing module 301, it is determined that the currently monitored sound is a sound (such as a cough) generated when the health status is abnormal, where the control instruction is used to trigger the target camera to collect an infrared thermal imaging image and a visible light image of the area corresponding to the sound source position of that sound.
The image data processing module 303 is configured to process and analyze image data, for example, perform face recognition on a visible light image collected by the visible light camera module, and determine whether a target monitoring object exists in the visible light image; and analyzing the infrared thermal imaging image acquired by the infrared thermal imaging camera to obtain the body temperature of the target monitoring object. In other embodiments, the image data processing module 303 may also send image data to the server to request the server to perform the image analysis operation (such as face recognition and body temperature acquisition), and receive the processing result returned by the server.
After the image data processing module 303 determines that the face of the target monitored object exists in the visible light image, the monitoring control module 302 may instruct the image data processing module 303 to obtain the body temperature of the target monitored object from the infrared thermal imaging image.
In some embodiments, the sound data processing module 301 may also obtain the sound source position of the sound data and provide it to the monitoring control module 302; the monitoring control module 302 may determine a shooting angle according to a sound source position of sound data, so that the camera can shoot an image of an area where the position is located, and send the shooting angle information to the target camera in a control instruction.
In some embodiments, the monitoring control module 302 may also send a control instruction to the smart wearable device of the target monitoring object to trigger the smart wearable device to detect and report the health status data of the target monitoring object.
In some embodiments, the monitoring control module 302 may also send information of the target monitoring object and health status information of the target monitoring object to the server. The health status information of the target monitoring object may include a body temperature obtained from the thermal imaging image, and further may further include health status data (such as a heart rate) reported by the smart wearable device.
The query response module 304 is configured to receive a health status information query request sent by a user terminal, and in response, send health status information of an object requested to be queried to the user terminal.
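To summarise how these modules cooperate, here is a short wiring sketch; the class, the method names and the exact call order are invented for illustration and simply follow the interaction the text describes (sound matching, localization, camera triggering, face recognition, temperature read-out, reporting).

```python
class IntelligentHousekeeper:
    """Illustrative wiring of the modules in Fig. 3 (hypothetical names only)."""

    def __init__(self, sound_module, control_module, image_module, query_module):
        self.sound = sound_module      # noise reduction, feature matching, localization
        self.control = control_module  # builds and sends control instructions
        self.image = image_module      # face recognition, temperature read-out
        self.query = query_module      # answers health-status queries from user terminals

    def on_sound_data(self, raw_audio, sample_rate):
        # Only react to sounds that match the "abnormal health state" templates.
        if not self.sound.matches_abnormal_health_sound(raw_audio, sample_rate):
            return None
        source_pos = self.sound.locate_source(raw_audio, sample_rate)
        ir_img, vis_img = self.control.trigger_camera(source_pos)
        face = self.image.find_target_face(vis_img)
        if face is None:
            return None
        temperature = self.image.temperature_at(face, ir_img)
        self.control.report_health_status(temperature)
        return temperature
```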
The server 105 may analyze the health status information of the target monitoring object sent by the intelligent housekeeping device and may send it to the mobile terminals of the relevant users. In other embodiments, if the server 105 determines from the analysis result that an alarm condition is met, it may raise an alarm, for example by sending alarm information through the mobile communication network 106 to the mobile terminal 107 used by the alarm target user corresponding to the target monitoring object (such as a family member of the target monitoring object).
The server 105 may be a server that is deployed independently, may be a distributed server, or may be a server cluster. The server 105 may employ cloud technology to provide powerful processing capabilities.
Based on the above architecture, in an actual application scenario, the intelligent monitoring devices in a household (residence) can be connected to the intelligent housekeeping device, which in turn is connected to a server. If the intelligent housekeeping device judges, from the sound collected by the microphone array in the intelligent monitoring devices, that the sound is a cough, it triggers the infrared thermal imaging camera module and the visible light camera module to capture images of the area where the cough occurred; if a target monitoring object is identified from the image captured by the visible light camera module, the body temperature of the target monitoring object is determined from the image captured by the thermal imaging camera, and the intelligent wearable device worn by the target monitoring object is instructed to detect and report heart rate data. In this way, when the intelligent housekeeping device detects that someone is coughing, health state information such as the body temperature and heart rate of the target monitoring object can be obtained in a timely manner, so that other members of the family can query it through the APP, or the intelligent housekeeping device can send the information to the server, which forwards it to the mobile terminals used by the other members of the target monitoring object's family.
Taking the system architecture shown in Fig. 1 as an example, in the embodiment of the present application the system may first be set up, which specifically includes the following configuration operations:
(1) Registering the intelligent terminal equipment.
One or more intelligent terminal devices within a geographic area are connected with the intelligent housekeeping device and registered in the intelligent housekeeping device and the server. The registered intelligent terminal devices may include intelligent terminal devices with a monitoring function (such as an acoustic sensor or a camera), intelligent wearable devices, and the like. A geographic area may be, for example, a residence, a suite within a residence, a production facility or a corporate office. Besides the intelligent monitoring devices fixedly installed within the geographic area, the intelligent monitoring devices in the area may also include wearable devices that move within it; for example, the intelligent monitoring devices installed at various locations in a home and the wearable devices of the family members living there may be connected to the home's intelligent housekeeping device and registered in the intelligent housekeeping device and the server. The intelligent monitoring devices of one household may thus be registered to form a list of intelligent terminal devices associated with that household.
The list of intelligent terminal equipment can include relevant information of the equipment, for example, the list can include: the ID of the device, the address of the device (e.g., IP address, MAC address, etc.), the type of device (e.g., acoustic sensor, camera), the location of the device, etc.
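As an illustration only, a registered device list with these fields might be represented as follows; the field names, IDs and addresses are placeholders, not values from the application.

```python
from dataclasses import dataclass

@dataclass
class RegisteredDevice:
    device_id: str     # unique ID assigned at registration
    address: str       # network address, e.g. an IP or MAC address
    device_type: str   # e.g. "acoustic_sensor", "camera", "wearable"
    location: str      # installation position inside the residence

# Example registry for one household, keyed by device ID (placeholder values).
device_registry = {
    "mic-array-01": RegisteredDevice("mic-array-01", "192.168.1.20",
                                     "acoustic_sensor", "living room"),
    "camera-01": RegisteredDevice("camera-01", "192.168.1.21",
                                  "camera", "living room"),
    "band-mother": RegisteredDevice("band-mother", "AA:BB:CC:DD:EE:01",
                                    "wearable", "worn by the monitored user"),
}
```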
(2) Information of the target monitoring object is registered.
Information of the target monitoring object may be registered in the intelligent housekeeping device and/or the server. The registered information of the target monitoring object may include a user identifier, a user identity (such as the role of a family member), face feature information, and the like, and may further include information of the intelligent wearable device associated with the target monitoring object, so that the intelligent housekeeping device can send a control instruction to the intelligent wearable device corresponding to the target monitoring object to trigger it to detect the health state of the target monitoring object.
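Similarly, a registered target monitoring object record might look like the sketch below; the field names, the truncated face embedding and the wearable ID are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MonitoredSubject:
    user_id: str
    role: str                   # family role, e.g. "mother"
    face_features: List[float]  # embedding produced by the face recognition model
    wearable_id: str = ""       # ID of the associated intelligent wearable device

subjects = {
    "user-001": MonitoredSubject(
        user_id="user-001",
        role="mother",
        face_features=[0.12, -0.45, 0.88],   # truncated placeholder embedding
        wearable_id="band-mother",
    ),
}
```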
(3) The call number of the terminal is registered.
Register in the server the call number of the terminal of each user who needs to receive the health state information or alarm information of the target monitoring object. The terminal may be a mobile terminal, such as a smartphone, a tablet computer or a smart bracelet, or a fixed terminal, such as a landline phone.
The operation of registering the number of the terminal is an optional operation.
The registration may be accomplished by a health management application in the user's mobile terminal. The health management application may provide the user with an associated settings interface such that the user may complete the registration operations described above based on the settings interface.
In an actual application scenario, the call number of the terminal used by the family member in the role of "administrator" may be registered, so that when the health state of a target monitoring object among the family members is abnormal, the server can notify that "administrator" of the target monitoring object's health state information, allowing the family "administrator" to keep a comprehensive view of the family members' health.
(4) Characteristic information of sound made when the health state is abnormal is configured.
Feature information of sounds emitted when the health status is abnormal may be configured in the smart manager device and/or the server so as to compare the feature information of the detected sounds with the configured sound feature information, thereby determining whether the detected sounds are sounds emitted when the health status is abnormal. For example, the characteristic information of the sound generated when the health state is abnormal may include characteristic information of cough sound, characteristic information of sneeze sound, and the like.
In this embodiment of the present application, an association may be established between the registered items of information, such as the intelligent monitoring device list, the target monitoring object and the call number of the terminal, and identification information, such as a home address, may be used to identify the association. For example, a microphone array and a camera installed in the living room of a residence at "No. 100 Fifth Street", together with the smart bracelet worn by the family member in the role of "mother", are registered as intelligent monitoring devices in the intelligent housekeeping device and the server; the family member in the role of "mother" is registered as a target monitoring object; the mobile phone number of the family member in the role of "family administrator" is registered; and all of this registration information is associated with the family address "No. 100 Fifth Street".
The various items of configuration information may be default settings in the intelligent housekeeping device and/or the server, may be set by the user, or may be a combination of both.
Fig. 4 schematically illustrates the flow of a health status monitoring method provided in an embodiment of the present application; the flow may be executed by an intelligent housekeeping device. As shown in the figure, the flow may include the following steps:
S401: and receiving sound data and obtaining sound characteristic information of the sound data.
In this step, sound data collected and transmitted by a device having a sound collection function (e.g., an acoustic sensor) may be received. Taking an acoustic sensor as an example, the acoustic sensor (such as a microphone array) sends sound data to the intelligent housekeeping device after collecting the sound data, and the intelligent housekeeping device can perform simple processing such as noise reduction and the like on the sound data to obtain clean sound.
S402: if the obtained sound characteristic information is matched with the characteristic information of the sound emitted when the health state is abnormal, the sound source position of the sound data is obtained.
In this step, the intelligent housekeeping device may extract the feature information of the sound data and compare it with the feature information of sounds (such as coughs and sneezes) emitted when the health status is abnormal; if the two match, it confirms that the received sound data belongs to the sounds emitted when the health status is abnormal. In other embodiments, the intelligent housekeeping device may also send the sound data to the server and request the server to perform the recognition; in response, the server compares the feature information with the feature information of sounds (such as coughs and sneezes) emitted when the health status is abnormal, confirms on a match that the received sound data belongs to such sounds, and returns the recognition result to the intelligent housekeeping device.
In the embodiment of the application, a sound recognition model can be trained in advance on a large amount of collected sound data, so that a model capable of accurately recognizing sounds such as sneezes and coughs is obtained; the sound data collected and sent by the acoustic sensor can then be recognized using this model.
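As one possible realisation of such a sound recognition model, the sketch below extracts MFCC features with librosa and scores them with a classifier trained offline and loaded via joblib; the choice of MFCC features, the libraries, the label convention and the model file name are assumptions for illustration, since the application does not specify a particular feature type or model.

```python
import numpy as np
import librosa           # assumed feature-extraction library
from joblib import load  # assumed: classifier trained offline and saved to disk

def extract_features(audio, sample_rate, n_mfcc=20):
    """Summarise a short audio clip as its mean MFCC vector."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def is_abnormal_health_sound(audio, sample_rate, model_path="cough_sneeze_clf.joblib"):
    """Return True if the clip is classified as a cough/sneeze-type sound."""
    clf = load(model_path)                      # e.g. a scikit-learn classifier
    features = extract_features(audio, sample_rate).reshape(1, -1)
    return bool(clf.predict(features)[0] == 1)  # label 1 = abnormal-health sound, by convention
```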
After confirming that the received sound data belongs to the sounds made when the health state is abnormal, the intelligent housekeeping device can determine the sound source position of the sound data. In this embodiment of the present application, since the acoustic sensor may be a microphone array, the intelligent housekeeping device can locate the sound source with a suitable algorithm based on the sound data collected by each microphone in the array and sent by the microphone array.
S403: and sending a first control instruction to intelligent terminal equipment provided with a camera, wherein the first control instruction is used for triggering the intelligent terminal equipment to acquire an infrared thermal imaging image and a visible light image of a region corresponding to the sound source position.
In this step, the intelligent housekeeping device may determine a shooting angle according to the sound source position determined in S402, so that at this angle the camera can photograph the area where the sound source is located.
The intelligent housekeeping device may obtain the address (e.g., an IP address or a MAC address) of the camera from the registered intelligent monitoring device list and send the control instruction to the camera at that address, where the control instruction carries the shooting angle determined from the sound source position.
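A sketch of how the estimated sound source position might be turned into a pan angle and packaged into the first control instruction sent to the camera's registered address; the JSON fields, the /control endpoint and the HTTP transport are assumptions for illustration, not details given in the application.

```python
import json
import math
import urllib.request

def pan_angle_to_source(camera_pos, source_pos):
    """Horizontal angle (degrees) from the camera to the sound source, both in room coordinates."""
    dx = source_pos[0] - camera_pos[0]
    dy = source_pos[1] - camera_pos[1]
    return math.degrees(math.atan2(dy, dx))

def send_first_control_instruction(camera_address, camera_pos, source_pos):
    """Wake the IR and visible-light modules and point them toward the sound source."""
    instruction = {
        "command": "capture",
        "modules": ["infrared_thermal", "visible_light"],
        "pan_angle_deg": pan_angle_to_source(camera_pos, source_pos),
    }
    req = urllib.request.Request(
        f"http://{camera_address}/control",            # assumed endpoint on the camera
        data=json.dumps(instruction).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status == 200
```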
In some embodiments, a control policy or condition may be preset. After the intelligent housekeeping device determines that the sound monitored by the acoustic sensor is a sound generated when the health state is abnormal, it may judge according to the control policy or condition whether the monitoring control condition is met; if so, it sends the first control instruction to the target camera, otherwise it does not send the control instruction.
The control policy or condition may be set according to actual needs. For example, when the number of cough sounds monitored within a set period of time (for example, one minute) exceeds a threshold (for example, 5 times), indicating that the user may be unwell, the control condition is deemed satisfied and the first control instruction is sent. This avoids unnecessary control operations triggered by an occasional cough that does not indicate a physical abnormality.
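A minimal sketch of that kind of control condition, using a sliding window; the one-minute window and the threshold of five events come from the example above, while the class and method names are hypothetical.

```python
import time
from collections import deque

class CoughRateCondition:
    """Trigger only when enough cough events fall inside the sliding time window."""

    def __init__(self, window_seconds=60, threshold=5):
        self.window_seconds = window_seconds
        self.threshold = threshold
        self._events = deque()

    def record_event(self, timestamp=None):
        """Register one detected cough and report whether the control condition is met."""
        now = timestamp if timestamp is not None else time.time()
        self._events.append(now)
        # Drop events that have fallen out of the window.
        while self._events and now - self._events[0] > self.window_seconds:
            self._events.popleft()
        return len(self._events) >= self.threshold
```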
S404: receiving infrared thermal imaging image data and visible light image data which are acquired and transmitted by intelligent terminal equipment (such as a camera), obtaining the face of a target monitoring object in the visible light image, and obtaining the body temperature of the target monitoring object in the infrared thermal imaging image according to the face position of the target monitoring object.
In this step, the infrared thermal imaging camera module and the visible light camera module are awakened by the control instruction, and the control motor in the camera adjusts the pan-tilt to the corresponding angle according to the shooting angle information carried in the control instruction, so that the lenses of the infrared thermal imaging camera module and the visible light camera module face the corresponding area. After being awakened, the infrared thermal imaging camera module and the visible light camera module capture infrared thermal imaging image data and visible light image data and send them to the intelligent housekeeping device.
After receiving the infrared thermal imaging image data and the visible light image data, the intelligent housekeeping device performs face recognition on the visible light image data and compares the feature information of the recognized face with the pre-configured face feature information of the target monitoring object. If the face of the target monitoring object is recognized, the device uses the position of that face in the visible light image to detect the body temperature of the object at the corresponding position in the infrared thermal imaging image (for example, the head in the area where the face is located), calculating the temperature of the corresponding area from the infrared temperature matrix according to the principle of infrared thermal imaging, and thereby obtains the body temperature of the target monitoring object. In some embodiments of the present application, because the infrared thermal imaging camera module and the visible light camera module are mounted close together and their shooting angles are substantially the same, the position of a given target is substantially the same in both captured images; the position of the target monitoring object in the infrared thermal imaging image can therefore be determined from the face position of the target monitoring object in the visible light image, and its body temperature can then be detected.
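To illustrate the temperature read-out, the sketch below reuses the face bounding box found in the visible-light image as a region of the thermal image (possible because, as noted above, the two modules are mounted close together and share essentially the same shooting angle) and averages the hottest pixels in that region; the proportional coordinate scaling and the top-10%-of-pixels heuristic are assumptions for illustration.

```python
import numpy as np

def body_temperature_from_face(face_box, thermal_image, visible_shape):
    """Estimate body temperature from the thermal pixels behind the detected face.

    face_box: (x, y, w, h) in visible-light image coordinates.
    thermal_image: 2-D array of temperatures in degrees Celsius, one value per IR pixel.
    visible_shape: (height, width) of the visible-light image.
    """
    # Map the face box into thermal-image coordinates; because the two modules are
    # mounted close together and share a pan-tilt, a simple proportional scaling is used.
    scale_y = thermal_image.shape[0] / visible_shape[0]
    scale_x = thermal_image.shape[1] / visible_shape[1]
    x, y, w, h = face_box
    region = thermal_image[int(y * scale_y):int((y + h) * scale_y),
                           int(x * scale_x):int((x + w) * scale_x)]
    if region.size == 0:
        return None
    # Average the hottest 10% of pixels to reduce the influence of hair and background.
    hottest = np.sort(region.ravel())[-max(1, region.size // 10):]
    return float(hottest.mean())
```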
S405: and sending the information of the target monitoring object and the health state information of the target monitoring object to a server, or sending the health state information of the target monitoring object to a user terminal requesting inquiry. Wherein the health status information includes at least a body temperature of the target monitored subject. This step is an optional step.
In some embodiments, the intelligent housekeeping device may send the information of the target monitoring object and its health status information to the server, which forwards it to the mobile terminal of the associated family member, for example by short message or to a health management application on that family member's mobile terminal; alternatively, the information may be kept on the server side so that the associated family member can query it by sending a query request to the server through the health management application.
In some embodiments, after the intelligent housekeeping device obtains the body temperature of the target monitoring object detected in the infrared thermal imaging image data, it may be determined whether the body temperature of the target monitoring object is within a set normal body temperature range, and if the body temperature of the target monitoring object is not within the normal body temperature range, the information of the target monitoring object and the health status information of the target monitoring object are sent to the server.
In other embodiments, the intelligent housekeeping device may also store the information itself and provide it to the associated family member for viewing when that family member sends a query request to the intelligent housekeeping device via the health management application.
In some embodiments, if the extracted sound feature information matches the feature information of sounds emitted when the health status is abnormal, the intelligent housekeeping device may further send a second control instruction to the target intelligent wearable device corresponding to the target monitoring object, so as to trigger or instruct it to detect the health status data (such as heart rate) of the target monitoring object and send the detected data to the intelligent housekeeping device. After receiving the health state detection data sent by the intelligent wearable device in response to the second control instruction, the intelligent housekeeping device also includes this detection data in the health state information sent to the server.
In some embodiments, if the intelligent housekeeping device confirms that the sound data detected by the acoustic sensor does not belong to the sounds emitted when the health status is abnormal but matches a preset voice command, the voice command is responded to. That is, the text obtained through speech recognition is subjected to semantic understanding and analysis, and a decision is made according to the result. For example, when the user speaks "health monitoring", the speech is analyzed and converted to text through speech recognition, and the user's intention is determined through semantic understanding; when the intelligent housekeeping device judges that the instruction spoken by the user is "health monitoring", it triggers the intelligent monitoring devices preset by the system, for example querying the heart rate and blood pressure of the user from the connected smart bracelet, activating the infrared thermal imaging camera module for AI (artificial intelligence) body temperature monitoring, and activating the visible light camera module for person role recognition.
In some embodiments, the operations of sound collection, recognition, and image/video collection and recognition described above may also be implemented in a smart terminal device.
Fig. 5 exemplarily shows the structure of an intelligent terminal device 500. As shown in the figure, the intelligent terminal device is provided with an acoustic sensor 501, a camera 502, and a controller 503 coupled to the acoustic sensor 501 and the camera 502. The camera comprises a visible light camera module and an infrared thermal imaging camera module. The controller 503 is configured to:
acquiring sound data detected by the acoustic sensor and obtaining sound characteristic information of the sound data; if the sound characteristic information matches the characteristic information of sounds emitted when the health state is abnormal, acquiring the sound source position of the sound data; triggering the camera to acquire an infrared thermal imaging image and a visible light image of the area corresponding to the sound source position; and, according to the infrared thermal imaging image data and the visible light image data acquired by the camera, obtaining the face of the target monitoring object in the visible light image and obtaining the body temperature of the target monitoring object in the infrared thermal imaging image according to the face position of the target monitoring object.
In some embodiments, the controller 503 is further configured to: the information of the target monitoring object and the health state information of the target monitoring object are sent to a server, or the health state information of the target monitoring object is sent to a user terminal requesting inquiry; wherein the health status information includes at least a body temperature of the target monitored subject.
It should be noted that the intelligent terminal device provided in the embodiment of the present application can carry out the method steps of the foregoing method embodiments and achieve the same technical effects; parts and beneficial effects that are the same as those of the method embodiments are not described again here.
In some embodiments of the present application, the intelligent housekeeping device receives sound data and obtains sound characteristic information of the sound data; if the sound characteristic information matches the characteristic information of sounds emitted when the health state is abnormal, it sends, according to the target monitoring object corresponding to the sound characteristic information, a control instruction to the intelligent terminal device (such as an intelligent wearable device) corresponding to that object so as to trigger the intelligent terminal device to acquire the health state information of the target monitoring object, and then receives the health state information acquired and sent by the intelligent terminal device. Further, the intelligent housekeeping device may send the information of the target monitoring object and the health state information of the target monitoring object to the server, or send the health state information of the target monitoring object to the user terminal requesting the query.
For the specific implementation of some steps in the above flow, reference may be made to the foregoing embodiments; parts and beneficial effects that are the same as those of the method embodiments are not described again here.
The combination of one or more embodiments of the present application can support the following three intelligent monitoring modes, allowing the user to choose according to the usage scenario or convenience; when any mode triggers a monitoring event, monitoring can be carried out by the associated intelligent devices according to the actions preset by the user. Specifically, the following monitoring modes may be included:
(1) Active monitoring mode
Abnormal health behaviors of the user are actively monitored; for example, sounds such as coughs and sneezes are monitored through the microphone array. When a cough or a sneeze is detected, the intelligent housekeeping device can autonomously wake up the intelligent monitoring devices (such as a camera and an intelligent bracelet) and automatically collect health data, realizing intelligent control of the perception of, and decision-making on, the user's health information.
(2) Awakening monitoring mode
When the user speaks a voice command such as "health monitoring", the speech is recognized through speech recognition, and semantic understanding is applied to the recognized text to interpret the user's intention. When the intelligent housekeeping device determines that the spoken command is "health monitoring", it triggers the preset intelligent monitoring devices to collect data; for example, it queries the heart rate and blood pressure of the user from a connected intelligent bracelet, activates the infrared thermal imaging camera module to perform AI body temperature monitoring based on the captured images, and activates the visible light camera module to recognize the person's role based on the captured images.
(3) Timing monitoring mode
To facilitate care of the elderly, the family manager can have the user's health information actively monitored through a timing mechanism configured according to the elderly person's living and behavior habits. When the timing mechanism is triggered, the intelligent housekeeping device can autonomously wake up the intelligent monitoring devices and automatically collect health data, realizing intelligent control of the perception of, and decision-making on, the user's health information.
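The sketch referenced above shows one way the three modes could share a single trigger decision; the MonitoringMode names and the event keys are assumptions made for the example.

```python
from enum import Enum, auto

class MonitoringMode(Enum):
    ACTIVE = auto()   # triggered by detected abnormal-health sounds
    WAKE_UP = auto()  # triggered by a spoken "health monitoring" command
    TIMED = auto()    # triggered by a schedule set by the family manager

def should_start_monitoring(mode, event):
    """Decide whether an incoming event triggers health-data collection.

    `event` is a small dict describing what was observed; its keys
    ("abnormal_sound", "voice_command", "scheduled") are illustrative.
    """
    if mode is MonitoringMode.ACTIVE:
        return bool(event.get("abnormal_sound"))          # cough or sneeze detected
    if mode is MonitoringMode.WAKE_UP:
        return event.get("voice_command") == "health monitoring"
    if mode is MonitoringMode.TIMED:
        return bool(event.get("scheduled"))               # timer fired
    return False

print(should_start_monitoring(MonitoringMode.ACTIVE, {"abnormal_sound": "cough"}))
print(should_start_monitoring(MonitoringMode.WAKE_UP, {"voice_command": "health monitoring"}))
```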
As can be seen from the above description, in the embodiments of the present application, when the acoustic sensor detects a sound (such as a cough) emitted when the user's health state is abnormal, the intelligent housekeeping device can wake up the camera to collect an infrared thermal imaging image and a visible light image of the area corresponding to the sound source position, perform face recognition on the visible light image, and, if the face of the target monitoring object is recognized, detect the body temperature of the target monitoring object from the infrared thermal imaging image, thereby realizing intelligent monitoring of the user's health state and collection of health status data and improving the user experience.
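Locating the area to image depends on differences between the microphones' signals; the sketch below estimates the inter-microphone time delay by cross-correlation and converts it to an arrival angle for a two-microphone pair. The sampling rate, microphone spacing, and signal model are assumptions for the example, not figures from the patent.

```python
import numpy as np

def estimate_arrival_angle(sig_a, sig_b, fs=16000, mic_spacing_m=0.1, c=343.0):
    """Estimate the direction of a sound for a two-microphone pair.

    Cross-correlates the two channels to find the delay (in samples) of mic B
    relative to mic A, converts it to seconds, and maps it to an arrival angle
    measured from the broadside of the pair.
    """
    corr = np.correlate(sig_b, sig_a, mode="full")
    delay_samples = np.argmax(corr) - (len(sig_a) - 1)
    delay_s = delay_samples / fs
    # Clamp to the physically possible range before taking arcsin.
    sin_theta = np.clip(delay_s * c / mic_spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Synthetic check: a noise burst reaching mic B three samples later than mic A.
rng = np.random.default_rng(0)
burst = rng.standard_normal(256)
sig_a = np.concatenate([burst, np.zeros(16)])
sig_b = np.concatenate([np.zeros(3), burst, np.zeros(13)])
print(round(estimate_arrival_angle(sig_a, sig_b), 1))  # about 40 degrees here
```

A full microphone array would combine several such pairwise estimates (or use a beamforming method) to obtain a position rather than a single angle.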
In combination, one or more embodiments of the present application use intelligent means such as sound feature recognition, intelligent control, and face recognition to actively monitor abnormal health behaviors of the user, autonomously wake up the intelligent monitoring devices (such as a camera), and automatically collect health data, realizing intelligent control of the perception of, and decision-making on, the user's health information. This addresses the problems of traditional intelligent devices, which monitor passively, depend on the user's behavior habits (that is, operate only when triggered by user instructions), and lack health data management.
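To illustrate the sound-feature side, the sketch below computes two simple frame-level features (short-time energy and zero-crossing rate) that a cough/sneeze recognizer could be trained on; practical systems typically use richer features such as MFCCs, and the frame sizes here are arbitrary choices for the example.

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=160):
    """Return per-frame (short-time energy, zero-crossing rate) for a mono signal."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return np.array(feats)

# Synthetic example: quiet background followed by a short, loud burst.
rng = np.random.default_rng(1)
audio = np.concatenate([0.01 * rng.standard_normal(4000),
                        0.5 * rng.standard_normal(800),
                        0.01 * rng.standard_normal(3200)])
features = frame_features(audio)
print(features.shape)           # (number_of_frames, 2)
print(features[:, 0].argmax())  # frame index with the highest energy (the burst)
```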
The intelligent housekeeping device in the embodiments of the present application has functions such as device interconnection, data sharing, and cloud management, which provide the necessary foundation for the intelligent monitoring devices that access it. As more intelligent devices covering various types of health data are connected, the system can cover more of the user's health data, making the health assessment indicators finer and more comprehensive.
Based on Internet technologies such as 5G and the Internet of Things (IoT), the embodiments of the present application can be applied in scenarios with high speed, low latency, massive connections, and the like, so that device access, real-time performance, system compatibility, power consumption, and other aspects can meet the service requirements, improving the user experience.
The embodiments of the present application realize intelligent health data monitoring that combines active monitoring and passive response, which is convenient for users and, in particular, enables health data management for user groups such as the elderly, meeting the need to care for elderly people. For example, a family member whose role is "mother" usually moves about within the residence, belongs to the elderly who need care, and may have her facial feature information stored in the intelligent housekeeping device and/or the server. When the intelligent housekeeping device receives sound data monitored by the microphone array installed in the living room and determines that the sound is a cough, it wakes up the camera installed in the living room to capture a thermal imaging image and a visible light image of the area where the cough occurred. After the face of the family member whose role is "mother" is recognized from the visible light image, her body temperature is obtained from the thermal imaging image, and the intelligent wearable device she wears is triggered to detect and report health data, so that her health status information is obtained. The family member whose role is family manager can then obtain the health status information of the monitored object and determine the response measures to be taken subsequently, realizing intelligent elderly care.
According to yet another aspect of the exemplary embodiments, the present application further provides a computer storage medium having stored therein computer program instructions which, when executed on a computer, cause the computer to perform the processing method as described above.
Since the intelligent housekeeping device and the computer storage medium in the embodiments of the present application may be applied to the above-mentioned processing method, the technical effects that they can obtain may also refer to the above-mentioned method embodiments and are not described herein again.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the method embodiments described above may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer readable storage medium. The program, when executed, performs steps including the method embodiments described above; and the aforementioned storage medium includes: various media that can store program code, such as ROM, RAM, magnetic or optical disks.
While specific embodiments of the present application have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and the scope of the present application is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of the application, but such changes and modifications fall within the scope of the application.

Claims (6)

1. A method of health status monitoring, the method comprising:
connecting with one or more intelligent terminal devices within a geographic range for registration, and recording related information of the intelligent terminal devices to form a registered intelligent monitoring device list; the registered intelligent terminal devices comprise intelligent terminal devices with a monitoring function and intelligent wearable devices; the related information of an intelligent terminal device comprises: a device identity (ID), a device address, a device type, and a device location;
registering information of a target monitoring object; the registered information of the target monitoring object comprises user identification, user identity, face characteristic information and information of intelligent wearable equipment associated with the target monitoring object;
registering a calling number of a terminal of a user needing to obtain health state information or alarm information of a target monitoring object;
configuring characteristic information of sound generated when the health state is abnormal;
establishing an association among the registered intelligent monitoring device list, the target monitoring object, and the calling number of the terminal;
receiving sound data and obtaining sound characteristic information of the sound data;
if it is determined, through the sound recognition model, that the sound characteristic information matches the characteristic information of the sound emitted when the health state is abnormal, locating the sound source according to differences among the sound data collected by the microphones in the microphone array, to obtain the sound source position of the sound data; the sound emitted when the health state is abnormal comprises at least one of a coughing sound and a sneezing sound;
when the number of sounds emitted when the health state is abnormal that are monitored within a set period of time exceeds a threshold value, sending a first control instruction, wherein the first control instruction is used for triggering a camera to acquire an infrared thermal imaging image and a visible light image of the area corresponding to the sound source position;
receiving infrared thermal imaging image data and visible light image data, obtaining a face of a target monitoring object in the visible light image, and obtaining the body temperature of the target monitoring object in the infrared thermal imaging image according to the face position of the target monitoring object;
if it is determined, through the sound recognition model, that the sound characteristic information does not match the characteristic information of the sound emitted when the health state is abnormal, and it is then determined, through speech recognition, that the sound characteristic information matches a preset voice command, responding to the voice command by performing semantic understanding and semantic analysis on the recognized text and making a decision according to the semantic analysis result;
wherein the sound recognition model is obtained by training on preset sound data and is used for recognizing the sound emitted when the health state is abnormal.
2. The method as recited in claim 1, further comprising:
sending the information of the target monitoring object and the health status information of the target monitoring object to a server, or sending the health status information of the target monitoring object to a user terminal requesting the query; wherein the health status information includes at least the body temperature of the target monitoring object.
3. The method of claim 1, further comprising, after obtaining the body temperature of the target monitored subject in the infrared thermographic image:
judging whether the body temperature of the target monitoring object is within a set normal body temperature range;
if the body temperature of the target monitoring object is not in the normal body temperature range, sending the information of the target monitoring object and the health state information of the target monitoring object to a server; wherein the health status information includes at least a body temperature of the target monitored subject.
4. The method of claim 1, further comprising, if the extracted sound characteristic information matches the characteristic information of the sound emitted when the health state is abnormal:
sending a second control instruction to intelligent wearable equipment corresponding to the target monitoring object;
receiving health state detection data sent by the intelligent wearable equipment according to the second control instruction;
and sending the information of the target monitoring object and the health state information of the target monitoring object to a server, wherein the health state information at least comprises the body temperature of the target monitoring object and the health state detection data of the intelligent wearable device.
5. A smart housekeeping device, characterized in that it is configured to perform the method of any one of claims 1-4.
6. An intelligent terminal device, characterized by comprising a camera and a controller, wherein the camera comprises a visible light camera module and an infrared thermal imaging camera module;
the controller is configured to:
connecting with one or more intelligent terminal devices within a geographic range for registration, and recording related information of the intelligent terminal devices to form a registered intelligent monitoring device list; the registered intelligent terminal devices comprise intelligent terminal devices with a monitoring function and intelligent wearable devices; the related information of an intelligent terminal device comprises: a device identity (ID), a device address, a device type, and a device location;
registering information of a target monitoring object; the registered information of the target monitoring object comprises user identification, user identity, face characteristic information and information of intelligent wearable equipment associated with the target monitoring object;
registering a calling number of a terminal of a user needing to obtain health state information or alarm information of a target monitoring object;
configuring characteristic information of sound generated when the health state is abnormal;
establishing an association among the registered intelligent monitoring device list, the target monitoring object, and the calling number of the terminal;
acquiring sound data detected by a microphone array, and obtaining sound characteristic information of the sound data;
if it is determined, through the sound recognition model, that the sound characteristic information matches the characteristic information of the sound emitted when the health state is abnormal, locating the sound source according to differences among the sound data collected by the microphones in the microphone array, to obtain the sound source position of the sound data; the sound emitted when the health state is abnormal comprises at least one of a coughing sound and a sneezing sound;
when the number of sounds emitted when the health state is abnormal that are monitored within a set period of time exceeds a threshold value, triggering the camera to acquire an infrared thermal imaging image and a visible light image of the area corresponding to the sound source position;
acquiring a face of a target monitoring object in the visible light image according to the infrared thermal imaging image data and the visible light image data acquired by the camera, and acquiring the body temperature of the target monitoring object in the infrared thermal imaging image according to the face position of the target monitoring object;
if it is determined, through the sound recognition model, that the sound characteristic information does not match the characteristic information of the sound emitted when the health state is abnormal, and it is then determined, through speech recognition, that the sound characteristic information matches a preset voice command, responding to the voice command by performing semantic understanding and semantic analysis on the recognized text and making a decision according to the semantic analysis result;
wherein the sound recognition model is obtained by training on preset sound data and is used for recognizing the sound emitted when the health state is abnormal.
CN202010332434.7A 2020-04-24 2020-04-24 Health state monitoring method and equipment Active CN113488078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010332434.7A CN113488078B (en) 2020-04-24 2020-04-24 Health state monitoring method and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010332434.7A CN113488078B (en) 2020-04-24 2020-04-24 Health state monitoring method and equipment

Publications (2)

Publication Number Publication Date
CN113488078A CN113488078A (en) 2021-10-08
CN113488078B true CN113488078B (en) 2024-03-29

Family

ID=77932531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010332434.7A Active CN113488078B (en) 2020-04-24 2020-04-24 Health state monitoring method and equipment

Country Status (1)

Country Link
CN (1) CN113488078B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113990037A (en) * 2021-11-01 2022-01-28 珠海华发人居生活研究院有限公司 Intelligent monitoring system for health and accidents of old people

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104382653A (en) * 2014-12-10 2015-03-04 广西大学 Wearable equipment system for protecting life safety of old persons
CN105898219A (en) * 2016-04-22 2016-08-24 北京小米移动软件有限公司 Method and apparatus for monitoring object
CN109595757A (en) * 2018-11-30 2019-04-09 广东美的制冷设备有限公司 Control method, device and the air conditioner with it of air conditioner
CN109998496A (en) * 2019-01-31 2019-07-12 中国人民解放军海军工程大学 A kind of autonomous type body temperature automatic collection and respiratory monitoring system and method

Also Published As

Publication number Publication date
CN113488078A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
US11785146B2 (en) Doorbell call center
CN110291489B (en) Computationally efficient human identification intelligent assistant computer
CN102196251B (en) Smart-city intelligent monitoring method and system
CN108073577A (en) A kind of alarm method and system based on recognition of face
CA2748061A1 (en) Video analytics as a trigger for video communications
CN102176746A (en) Intelligent monitoring system used for safe access of local cell region and realization method thereof
CN109151393A (en) A kind of sound fixation and recognition method for detecting
CN113467258A (en) Intelligent monitoring method and equipment thereof
US20120033031A1 (en) Method and system for locating an individual
WO2018098448A1 (en) Neighborhood security cameras
CN113488078B (en) Health state monitoring method and equipment
US11170899B2 (en) Biometric data capturing and analysis using a hybrid sensing systems
CN211842015U (en) Household dialogue robot based on multi-microphone fusion
CN102625084B (en) Intelligent television monitoring system
CN115842901A (en) Joint triggering type alarm method and device based on Internet of things technology
EP4026046A1 (en) Biometric data capturing and analysis using a hybrid sensing system
CN112804492A (en) Communication prompting method and device for electronic cat eye
CN110659603A (en) Data processing method and device
US11936981B1 (en) Autonomous camera sensitivity adjustment based on threat level
CN210038916U (en) Access control terminal and system with face recognition function
CN219715857U (en) Intelligent glasses and intelligent control system
EP4152761A1 (en) Image convergence in a smart security camera system with a secondary processor
CN115379169A (en) Inspection monitoring method and system
WO2018194671A1 (en) Assistance notifications in response to assistance events
CN114176026A (en) Pet health monitoring system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant