CN115965993A - Tracking, identifying and monitoring system and method based on thermal image - Google Patents

Tracking, identifying and monitoring system and method based on thermal image

Info

Publication number
CN115965993A
CN115965993A (application CN202211613936.2A)
Authority
CN
China
Prior art keywords
monitoring
human
range
thermal image
detection area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211613936.2A
Other languages
Chinese (zh)
Inventor
钟金峯
魏家博
王忠祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huayan Intelligent Co ltd
Original Assignee
Huayan Intelligent Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huayan Intelligent Co ltd filed Critical Huayan Intelligent Co ltd
Priority to CN202211613936.2A priority Critical patent/CN115965993A/en
Publication of CN115965993A publication Critical patent/CN115965993A/en
Pending legal-status Critical Current

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Emergency Alarm Devices (AREA)

Abstract

The invention relates to a thermal-image-based tracking, identification and monitoring system and method. The system comprises at least one monitoring host and a monitoring backend. A monitoring host is installed in a field that requires monitoring, such as a ward, a bathroom/toilet, or a specific office or work place. The host captures thermal images of the attended persons through an infrared lens and analyzes each frame with a pre-trained AI humanoid detection model to judge the humanoid actions in the images. When a humanoid action meets the conditions for issuing a warning, the host sends a warning message to the monitoring backend, so that managers at the backend can immediately discover whether an attended person has suffered an accident, such as falling from a bed, falling down, or remaining still for a long time, and handle it.

Description

Tracking, identifying and monitoring system and method based on thermal image
Technical Field
The present invention relates to a monitoring system and method, and more particularly to a system and method that uses infrared thermal images to determine whether a monitored person has fallen out of bed, fallen down, or remained still for a long time.
Background
With the arrival of an aging society, the demand for technology-assisted care will only grow. Newly developed technologies can not only meet the needs of care institutions but are also expected to be applied in ordinary households to safeguard the daily safety of family members.
Caregiving is highly labor-intensive work. According to public statistics, advanced countries face a long-term care staffing shortage of roughly 20-50%, meaning one caregiver may have to shoulder up to 1.5 persons' worth of care work. Heavy workloads and long hours make caregivers prone to leaving the profession, which directly degrades care quality and forms a vicious circle. If technology can reduce caregivers' workload, each caregiver can be expected to safely attend to more people while the safety of those in care improves.
Conventional image recognition technology mostly takes full-color or black-and-white images captured by ordinary cameras as input for recognition and judgment; common techniques include face recognition, pupil (iris) recognition, and human skeleton recognition. However, in places requiring high privacy, such as hospital wards, resident rooms of long-term care institutions, and certain toilets, such technology is unsuitable for care applications: the image data clearly shows the appearance of the photographed person and, due to regulatory restrictions and human-rights concerns, invades personal privacy. These places therefore still rely on substantial manpower for care.
Disclosure of Invention
In view of the fact that a cared-for person in current care or medical institutions may unexpectedly fall at the bedside or in the bathroom, and that no good technical aid exists for this, the present invention provides a thermal-image-based tracking, identification and monitoring system and method that detects whether the cared-for person is exhibiting abnormal, urgent, or distressed behavior and, when necessary, automatically issues an emergency warning or rescue signal.
To achieve the above objects, the present invention provides a thermal image-based tracking and identification monitoring system, comprising:
at least one monitoring host, installed at an environmental location to monitor the state of persons at that location, each monitoring host comprising:
a control unit, connected to at least one infrared lens, which continuously captures images of the environmental location to obtain multiple frames of thermal images;
an operation unit, connected to the control unit, which receives the multi-frame thermal images through the control unit, analyzes the continuously received frames with a trained AI humanoid detection model to judge whether humanoids are present in an effective detection area of the thermal images and what actions the humanoids perform within a monitoring range, and issues a warning message when a humanoid action meets the conditions for doing so, the warning message indicating at least one of: preparing to get out of bed, fallen, sedentary, or in danger;
a storage unit, connected to the control unit and the operation unit, for storing data and programs;
an input/output unit, connected to the control unit and the operation unit, comprising at least one transmission interface for establishing connections and data transmission between the monitoring host and other external devices;
a monitoring backend communicatively connected to each monitoring host, wherein the monitoring backend comprises:
a cloud host communicatively connected to each monitoring host to receive the thermal image pictures and warning messages captured and sent by each monitoring host;
a fixed-point host, connected to the cloud host, which displays the warning messages;
when the AI humanoid detection model identifies each frame of thermal image, the following procedure is executed:
judging whether each humanoid in the thermal image is located in the effective detection area, and discarding the humanoid if not;
assigning an identification code (ID) to each humanoid in the effective detection area, and removing the ID when the humanoid leaves the effective detection area;
identifying the humanoid's action and incrementing the count value corresponding to that action;
when the count value of a humanoid action accumulates to a threshold value, the operation unit issues a warning message.
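The claimed per-frame procedure can be sketched as a small loop. Everything concrete below is an assumption for illustration — the region format, the action labels, the threshold values, and the `detections` input (which stands in for the output of the AI humanoid detection model) are not specified by the claims:

```python
from collections import defaultdict

# Illustrative per-action thresholds; the patent only says "ready to get
# out of bed" gets a larger threshold than "out of bed" or "fallen".
THRESHOLDS = {"ready_to_leave_bed": 15, "out_of_bed": 6, "fallen": 3}

def in_region(box, region):
    """True if the centre of a humanoid bounding box lies inside the region."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    x0, y0, x1, y1 = region
    return x0 <= cx <= x1 and y0 <= cy <= y1

class Monitor:
    def __init__(self, valid_region):
        self.valid_region = valid_region       # effective detection area
        self.counts = defaultdict(int)         # (person_id, action) -> count
        self.alerts = []

    def process_frame(self, detections):
        """detections: list of (person_id, box, action) from the AI model."""
        for pid, box, action in detections:
            if not in_region(box, self.valid_region):
                continue                       # outside the effective area: discard
            if action not in THRESHOLDS:
                continue                       # action not pre-trained: not counted
            self.counts[(pid, action)] += 1
            if self.counts[(pid, action)] >= THRESHOLDS[action]:
                self.alerts.append((pid, action))
                self.counts[(pid, action)] = 0  # reset the count after warning

m = Monitor(valid_region=(0, 0, 512, 480))
for _ in range(15):                             # 15 consecutive "sitting up" frames
    m.process_frame([(0, (100, 100, 200, 200), "ready_to_leave_bed")])
print(m.alerts)                                 # -> [(0, 'ready_to_leave_bed')]
```

The count resets to zero once a warning is issued, matching the reset behavior described in the embodiment (step S46).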
The invention constructs an Artificial Intelligence (AI) humanoid detection model (a neural network model) by deep learning, uses the trained model to perform multi-person tracking and action recognition on humanoids in the thermal images, and automatically issues a warning message for a caregiver to confirm when an observed humanoid's motion matches a preset rule for emergency reporting, thereby ensuring the safety of the observed person. Detectable behavior patterns of the present invention include, but are not limited to: sitting up in bed to get out, getting out of bed, falling at the bedside, sitting on a toilet for a long time, falling in a toilet, and remaining still in a specific office or work place — behaviors in which safety incidents frequently occur.
Furthermore, the invention performs humanoid action recognition on infrared thermal image data. Because a thermal image cannot clearly show faces or detailed limb movements, the personal privacy and human rights of the party are protected while safe care monitoring is provided.
Drawings
FIG. 1: flow chart of constructing the AI humanoid detection model according to the invention.
FIG. 2: system block diagram of the thermal-image-based tracking, identification and monitoring system of the invention.
FIG. 3A: schematic diagram of the invention applied in the single-person monitoring mode.
FIG. 3B: schematic diagram of the invention applied in the multi-person monitoring mode.
FIG. 4: flow chart of the tracking, identification and monitoring method of the invention.
FIGS. 5A to 5D: thermal image pictures of single-person monitoring according to the invention.
FIGS. 6A to 6D: thermal image pictures of multi-person monitoring according to the invention.
FIG. 7: another flow chart of the tracking, identification and monitoring method of the invention.
FIGS. 8A to 8D: thermal image pictures of bathroom/toilet monitoring according to the invention.
Detailed Description
The technical means adopted by the invention to achieve the predetermined object of the invention are further described below with reference to the drawings and the preferred embodiments of the invention.
The invention constructs an Artificial Intelligence (AI) humanoid detection model (a neural network model) by deep learning to detect humanoids and recognize their actions in real time. The model's advantage is that every frame of thermal image immediately yields the humanoid's "real-time state". For example, if the real-time state is sitting on a bed, there are two possibilities: the person is about to stand up and leave the bed, or has just sat down on the bed from outside it. By counting the real-time states across consecutive frames before judging the action, the invention can quickly and effectively issue a warning when necessary.
Please refer to fig. 1, a flow chart of the method for establishing the AI humanoid detection model according to the present invention, which mainly comprises the following steps:
collecting and labeling image and picture data S01:
the invention takes the thermal image shot by the infrared thermal imager as the data source, and the data source comprises the image specially demonstrated by people, or the action image/picture data of the person to be watched in the actual field (such as medical institution, long-shot institution), or the person to be monitored in the specific office or workplace. The thermal infrared imager can capture thermal images of a plurality of different parties, including twenty-four hours or in different time ranges at different intervals, obtain thermal image pictures of various different actions as far as possible according to the persons to be irradiated, the time ranges and maximization and diversity of action mode differences, classify and label the obtained thermal image pictures in a manual mode, and assign appointed labels for the different actions. For example, the classification items include, but are not limited to, "toilet sedentary, toilet fall", "getting up/out of bed/fall", "other", and the like, wherein the "other" items mainly refer to situations where the subject sits in a wheelchair, uses a walker, bends over, scans a clothes handler, and assists in bathing.
Initial model building and training S02:
After classification and labeling are completed, about 700 pictures are extracted from the thermal image pictures for each action label, with 90% of each action used for training (train) and 10% for testing or validation (test/validation). During testing, each action is tested ten times, and nine or more correct identifications count as passing the threshold, yielding an initial AI humanoid detection model. For transition-period sample states between actions and the "other" items, additional image data of key action behaviors are collected, labeled, and used for retraining. For the neural network model (machine learning method), object detection approaches such as Fast R-CNN, YOLO, and RetinaNet can be used, which employ convolutional neural networks to capture image features. Taking YOLOv3 as an example, the input layer is a 640x480 thermal image, the middle layers adopt the Darknet-53 architecture with 53 convolution layers, and the output layer predicts 7 action types. During training, Binary Cross Entropy is used as the classification loss function and Mean Squared Error as the bounding-box regression loss function. The training data comprised approximately 5000 labeled thermal images, and preprocessing included Gaussian blur, horizontal flipping, and rotations of less than 15 degrees. Experimental results show that the AI humanoid detection model Tiny YOLOv3 (a reduced version of YOLOv3) successfully identifies the 7 action types established by the invention, with mean Average Precision (mAP) reaching 95% and a detection speed of 3-4 FPS (frames per second) on a Raspberry Pi 4.
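The 90%/10% split of the roughly 700 labeled pictures per action can be sketched as follows; the filenames, seed, and shuffling are illustrative assumptions, as the patent does not describe the splitting procedure:

```python
import random

def split_dataset(items, train_ratio=0.9, seed=0):
    """Shuffle labeled thermal image pictures and split them into a training
    set (~90%) and a test/validation set (~10%), as the text describes."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    items = list(items)
    rng.shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

# Example: ~700 labeled pictures for one action class (hypothetical names)
pictures = [f"frame_{i:04d}.png" for i in range(700)]
train, test = split_dataset(pictures)
print(len(train), len(test))           # 630 70
```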
The thermal image data set built by the invention, applied to humanoid detection and action classification, facilitates assisted care and monitoring.
Field actual test S03:
To obtain an AI humanoid detection model with higher prediction accuracy, the initial model that passed testing can first be deployed in target fields for demonstration and final testing. In several different fields (e.g., 3-5), multiple sets of equipment (e.g., 5-10 sets per field) are installed for on-site demonstration and final testing. The behavior of each set is observed over a period of time and adjusted as needed, for example the hardware mounting angle, the visible area range, and software parameters. If abnormalities occur, the abnormal data are collected and labeled as picture data of key action behaviors for retraining and test verification, optimizing the retraining data source to obtain the final usable AI humanoid detection model.
Referring to fig. 2, a system block diagram of the present invention includes one or more monitoring hosts 10 and a monitoring backend 20, where the different monitoring hosts 10 are respectively installed at a plurality of different predetermined locations, such as a room in a ward to monitor images of a human body on a bed, or a bathroom to monitor images of a human body near a toilet, and each monitoring host 10 is communicatively connected to the monitoring backend 20 to report the identification result to the monitoring backend 20.
Each monitoring host 10 includes: a control unit 11, an arithmetic unit 12, a storage unit 13, and an input/output unit 14. The control unit 11 may be a control circuit board, such as a Printed Circuit Board Assembly (PCBA) developed based on Raspberry Pi or Arduino kits, or a mass-produced version thereof, and may be connected to an infrared lens 15, a sensor, an expansion board, or other components, wherein the infrared lens 15 captures a thermal image of the installation location.
The operation unit 12 is connected to the control unit 11 and includes a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or another microprocessor such as a Movidius™ vision processing unit. The computing unit 12 receives the thermal image captured by the infrared lens 15 through the control unit 11, performs data calculation and database operations, and executes the AI humanoid detection model for thermal image recognition.
The storage unit 13 is connected to the control unit 11 and the operation unit 12, and may include a built-in memory on the control circuit board or an external expansion memory card, etc. for storing an operating system, programs and data.
The i/o unit 14 is connected to the control unit 11 and the operation unit 12, and includes one or more i/o interfaces with different specifications, including at least one transmission interface, such as an HDMI interface, a USB interface, a wired network transmission interface, a wireless network transmission interface, or a connector with other standard specifications, for establishing connection and data transmission between the monitoring host 10 and other external devices, for example, the monitoring host 10 may be connected to the monitoring backend through the wired or wireless i/o unit 14.
When the monitoring host 10 is installed in a specific monitoring place, taking a ward as an example, the infrared lens 15 can be mounted on the ceiling at the bedside, above a walkway, or on the facing wall, with the angle between the lens's viewing direction and the horizontal between 15 and 60 degrees, so that the lens covers the areas where the observed person moves and stays. The "effective detection area" that the monitoring host 10 can monitor includes the complete or partial bed area, the walkways around the bed, and other areas. As shown in fig. 3A, each infrared lens 15 can monitor one bed independently to execute the single-person monitoring mode; alternatively, as shown in fig. 3B, each infrared lens 15 is mounted over the middle aisle at the head and/or foot of two beds, bringing both beds and the aisle into the effective detection area and bed range to execute the multi-person monitoring mode.
When the monitoring host 10 is installed in a bathroom, the infrared lens 15 may be mounted on the ceiling above, in front of, or to the left/right of the toilet, so that it covers the areas where the monitored person moves and stays. The "effective detection area" that the monitoring host 10 can monitor includes the area around the toilet when it is in use, the area on the toilet, the walkway around the toilet, and so on.
In the monitoring backend 20, a cloud host 21, a fixed point host 22 or a mobile device 23 may be included. The cloud host 21 is connected to each monitoring host 10 and receives the thermal image and the warning message sent by the monitoring host 10. The fixed-point host 22 is fixedly installed at a fixed point, such as a nursing station, and can be connected to the cloud host 21 and display the warning message. The mobile device 23 is provided for the nursing staff or the care staff to carry about, and has an application program (APP) installed therein, and connects with the cloud host 21 and displays the picture taken by the infrared lens 15 through the APP, and displays the warning message.
Please refer to fig. 4, which is a flowchart illustrating a tracking, identifying and monitoring method according to the present invention, in this embodiment, taking a situation that movement in a patient room is prone to danger as an example, and the tracking, identifying and monitoring method includes the following steps:
setting the detection area range S41: the user can set a "valid detection area" and one or more "monitoring ranges" by himself, wherein the monitoring range can be a "bed range". For example, a length range of 0-80% of the left side of the visual area is selected as an effective detection area, and a length range of 10-40% of the left side is selected as a bed range, wherein the bed range can partially or completely fall within the effective detection area. Referring to fig. 5A to 5D and fig. 6A to 6D, the thermal image pictures are shown, wherein the white long rectangle frames shown in fig. 5A to 5D represent the bed range, and the white long rectangle frames on the left and right sides of each of fig. 6A to 6D also represent the bed range.
Setting the detection frequency S42: the user can set the number of thermal image pictures to be processed per unit time; for example, the infrared lens 15 can be set to capture real-time images at any frequency from 1 to 12 FPS (frames per second). If not explicitly set, a predetermined value (e.g., 3 FPS) is used as the image capture frequency of the infrared lens 15. Steps S43 to S46 below are then performed on each frame of thermal image using the AI humanoid detection model constructed as described above.
Thermal image human shape detection S43: if the AI humanoid detecting model detects one or more humanoids, then further determines whether the humanoid is located in the "valid detecting region", if so, then proceeds to the next step S44, otherwise, discards the humanoid. Referring to FIGS. 5A-5D, a single person monitoring mode is shown, wherein a black frame region indicates that a detected human exists; in the multi-person monitoring mode shown in fig. 6A to 6D, the black frame area in each thermal image picture represents that a person exists.
ID assignment and humanoid tracking S44: an independent identification code (ID) is assigned to each recognized humanoid, for example using the numbers 0, 1, 2, and so on as IDs, and the AI humanoid detection model starts tracking it; if a new humanoid appears in the effective detection area, it is assigned a new ID. If the humanoid performs an action, the method proceeds to the next step S45. If the humanoid leaves the effective detection area, its ID is removed. For example, in the thermal image pictures of fig. 6A to 6D, the two humanoids are designated IDs "0" and "1" respectively.
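ID assignment can be illustrated with a minimal nearest-centroid tracker. The patent only says each humanoid gets an ID (0, 1, 2, ...) that is removed when it leaves the effective detection area; the matching-by-distance rule and the `max_dist` value here are assumptions:

```python
import math

class CentroidTracker:
    """Minimal ID-assignment sketch: match each detected humanoid centroid
    to the nearest existing track; unmatched detections get a new ID, and
    tracks that vanish from the frame have their IDs removed."""
    def __init__(self, max_dist=80.0):
        self.next_id = 0
        self.tracks = {}              # id -> (cx, cy) last known centroid
        self.max_dist = max_dist      # assumed matching radius in pixels

    def update(self, centroids):
        assigned = {}
        unmatched = dict(self.tracks)
        for c in centroids:
            best = min(unmatched,
                       key=lambda i: math.dist(unmatched[i], c),
                       default=None)
            if best is not None and math.dist(unmatched[best], c) <= self.max_dist:
                assigned[best] = c    # same humanoid, keep its ID
                del unmatched[best]
            else:
                assigned[self.next_id] = c   # new humanoid: new ID
                self.next_id += 1
        self.tracks = assigned        # IDs of departed humanoids are removed
        return assigned

tracker = CentroidTracker()
frame1 = tracker.update([(100, 100), (300, 300)])  # two humanoids appear
frame2 = tracker.update([(104, 98), (310, 305)])   # both move slightly
frame3 = tracker.update([(104, 98)])               # one humanoid leaves
print(sorted(frame1), sorted(frame2), sorted(frame3))
```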
Humanoid motion determination S45: the humanoid motion in each frame of thermal image is compared against the previously trained AI humanoid detection model; that is, the pre-trained action with the highest similarity to the humanoid's posture is selected, and that action's count is incremented. Actions not in the pre-trained set are discarded and not counted. As shown in fig. 5A to 5D, if the monitored environment is a ward, the humanoid motion can be classified as "lying down", "ready to get out of bed (sitting)", "out of bed (standing)", or "fallen"; in fig. 6A to 6D, the left humanoid in each thermal image remains lying down, while the right humanoid is in turn lying down, ready to get out of bed, out of bed, and fallen.
Issuing an alert S46: if the action of the person in the ward is judged to be ready to get out of bed, out of bed, or fallen, and the accumulated count for that action reaches its threshold, the corresponding warning message is issued. Different actions can be given different thresholds; for example, the threshold for "ready to get out of bed" can be set relatively large, while the thresholds for "out of bed" and "fallen" can be set relatively small.
Taking getting up from bed as an example, the AI model detects the humanoid and determines it has gone from lying down to a sitting posture. If step S42 is set to detect 3 frames of thermal images per second (3 FPS) and the threshold is set to 15, then if the humanoid maintains the sitting posture (representing preparation to get out of bed) for more than five seconds, the count for the sitting action will exceed the threshold of 15, a warning message is sent to the monitoring backend 20, the count is reset to 0, and detection continues with the next frame. Taking the humanoid of fig. 5B as an example, when the judged state is the sitting posture and the action count exceeds its threshold, the warning "wants to leave the bed" is issued first; in fig. 5C, after a series of actions the humanoid has left the set bed range and the accumulated count exceeds its threshold, so it is determined that the person has got out of bed; in fig. 5D, when the humanoid is outside the set bed range but still within the effective detection area and the motion is a fall, with the accumulated count exceeding its threshold, the warning "has got out of bed and fallen" is issued. Similarly, the right-hand humanoids in fig. 6B, 6C, and 6D trigger the warnings "wants to leave the bed", "out of bed", and "fallen", respectively.
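The arithmetic in this example checks out directly: at 3 frames per second, a count threshold of 15 corresponds to five seconds of sustained sitting posture.

```python
# Worked check of the getting-up example: detection at 3 FPS with a count
# threshold of 15 means the sitting posture must be held for 15 / 3 seconds
# before the "wants to leave the bed" warning fires.
fps = 3                      # detection frequency chosen in step S42
threshold = 15               # count threshold for the sitting posture
seconds_to_alert = threshold / fps
print(seconds_to_alert)      # 5.0
```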
Please refer to fig. 7, which is another flowchart of the tracking, identifying and monitoring method of the present invention, in the present embodiment, a situation that is dangerous when standing still for a long time is taken as an example, and the tracking, identifying and monitoring method includes the following steps:
setting the detection area range S71: taking 100% of the whole thermal image captured by the infrared lens 15 as the visible area, the user can set an "effective detection area" and one or more "monitoring ranges", for example, in the bathroom, the length range of the visible area is 0-100% as the "effective detection area", and the monitoring range can be a "toilet range", a "specific office range", or a "work place range", etc. which are the areas to be monitored. The frame covers the toilet bowl and the surrounding area as the "toilet bowl area", wherein the toilet bowl area may partially or completely fall within the effective detection area. Referring to fig. 8A to 8D, thermal image pictures are shown, wherein the white rectangle shown in fig. 8A to 8D represents the range of the toilet bowl.
Setting the detection frequency S72: the user can set the number of thermal image pictures to be processed per unit time; for example, the infrared lens 15 can be set to capture real-time images at any frequency from 1 to 12 FPS (frames per second). If not explicitly set, a predetermined value (e.g., 3 FPS) is used as the image capture frequency of the infrared lens 15. Steps S73 to S76 below are then performed on each frame of thermal image using the AI humanoid detection model constructed as described above.
Thermal image human shape detection S73: if the AI human shape detection model detects one or more human shapes, then further determines whether the human shape is located in the "valid detection region", if so, proceeds to the next step S74, and if not, discards the human shape. Referring to FIGS. 8A-8D, the black frame area indicates that a human is detected.
ID assignment and humanoid tracking S74: each recognized humanoid is assigned an independent identification code (ID), for example using the numbers 0, 1, 2, and so on, and the AI humanoid detection model starts tracking it; a new humanoid appearing in the effective detection area is assigned a new ID. If the humanoid performs an action, the method proceeds to the next step S75. If the humanoid leaves the effective detection area, its ID is removed.
Humanoid motion determination S75: the humanoid motion in each frame of thermal image is compared against the previously trained AI humanoid detection model; that is, the pre-trained action with the highest similarity to the humanoid's posture is selected and its count incremented. If the image is blurred or the action judgment cannot be confirmed, the action judgments of the previous 3-10 frames are used to correct it: the action with the most records in that history, or with heavier weight or higher probability, is taken as the continuing action and its count accumulated. This corrects judgments that cannot easily be confirmed and ensures the monitored person's actions trigger correct and immediate warnings. Actions not in the pre-trained set are discarded and not counted. As shown in fig. 8A to 8D, if the monitored environment is inside a bathroom, the humanoid motion can be classified as "sedentary (sitting)" or "fallen (fall)", among others.
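The frame-history correction in step S75 can be sketched as a majority vote over recent judgments; the window size of 5 and the tie-breaking of `Counter.most_common` are assumptions, since the text only specifies a 3-10 frame history favoring the action with the most records or heaviest weight:

```python
from collections import Counter, deque

class ActionSmoother:
    """When a frame's action judgment cannot be confirmed, fall back on the
    most frequent action over the previous few frames, as step S75 describes."""
    def __init__(self, window=5):          # assumed window within the 3-10 range
        self.history = deque(maxlen=window)

    def correct(self, action, confident=True):
        if confident:
            self.history.append(action)
            return action
        if self.history:                   # uncertain frame: majority vote
            corrected = Counter(self.history).most_common(1)[0][0]
            self.history.append(corrected)
            return corrected
        self.history.append(action)        # no history yet: keep the judgment
        return action

s = ActionSmoother(window=5)
for _ in range(4):
    s.correct("sitting")                   # four confident "sitting" frames
result = s.correct("fallen", confident=False)  # blurred frame gets corrected
print(result)                              # -> sitting
```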
Issuing an alert S76: if the action of the person in the toilet is judged to be sedentary, fallen, or in danger, and the accumulated count for that action reaches its threshold, the corresponding warning message is issued. Different actions can be given different thresholds; for example, the threshold for toilet sedentariness can be set relatively large, while the thresholds for falls and danger can be set relatively small.
The above description is related to the monitoring of the bathroom and toilet in combination with the thermal image, and the same applies to the monitoring of the areas where dangerous and safe events occur frequently, such as standing still for a long time. Fig. 8A to 8D show that the person to be protected or the person to be monitored issues a sedentary warning when the toilet is sedentary and continues to sedentary for a predetermined time after the sedentary warning is issued, a sedentary and dangerous warning when the person continues to sedentary for a predetermined time after the sedentary warning is issued, a fall warning when the person falls down within the monitoring range around the toilet, and a fall and dangerous warning when the person falls down for a predetermined time. The warning report is provided in real time through different behaviors and different danger degrees of the behaviors, so that the timely life safety of the person to be monitored is ensured. When determining whether to send out a long-time sitting, falling, danger warning, the principle is similar to the warning message of ward monitoring, for example, as follows:
Sedentary warning: FIG. 8B shows that when the person sits on the toilet for a first predetermined time (e.g., 1 to 10 minutes), a "sedentary warning" is generated. If a second predetermined time (e.g., 20 minutes) is exceeded, a "danger warning" is immediately issued.
Fall warning: FIG. 8C shows that when the person falls or sits outside the designated toilet range within the effective detection area for a predetermined time (e.g., 1 to 5 seconds), a "fall warning" is issued.
Danger warning: FIG. 8D shows that when the person remains fallen or seated outside the toilet range within the effective detection area for a predetermined time (e.g., 1 to 300 seconds), a "danger warning" is immediately issued.
The aforementioned "sedentary warning" can also be extended to a specific office or workplace to warn of a person under care or under monitoring standing motionless for a long time; in that case the warning message can be changed to "motionless", covering behaviors that frequently lead to dangerous safety events, such as sitting for a long time or lying on the floor. If the "motionless" state lasts for a longer time (e.g., 500 seconds), a "danger warning" is immediately issued.
In summary, in order to detect abnormal and urgent behaviors of the person under care, the present invention uses infrared thermal images captured by a thermal imager as the data source, which has the following advantages:
1. The trained AI human-figure detection model simultaneously tracks and detects the motions of multiple persons; dangerous behaviors are judged by comparison against programmed rules, and an emergency report is issued when the conditions are met, ensuring the safety of the person under care. Behaviors detectable by the present invention include, but are not limited to, dangerous actions that frequently cause safety events, such as sitting up in bed in preparation to leave the bed, leaving the bed, falling at the bedside, sitting on the toilet for a long time, falling in the bathroom, and standing motionless for a long time.
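The multi-person tracking named in this advantage (assign an ID to each human figure inside the effective detection area, remove the ID when the figure leaves) can be sketched as follows. Nearest-centroid matching with a 50-pixel cutoff is an assumption for illustration; the patent does not specify the association method.

```python
import math
from itertools import count

class HumanTracker:
    """Sketch of per-figure ID assignment and removal across frames."""

    def __init__(self, max_dist=50.0):
        self._ids = count(1)      # monotonically increasing ID source
        self.tracks = {}          # id -> last known (x, y) centroid
        self.max_dist = max_dist  # assumed matching cutoff in pixels

    def update(self, centroids_in_area):
        """`centroids_in_area`: centroids of human figures already filtered
        to the effective detection area. Returns {id: centroid}."""
        new_tracks = {}
        unmatched = list(centroids_in_area)
        for tid, prev in self.tracks.items():
            if not unmatched:
                break
            nearest = min(unmatched, key=lambda c: math.dist(c, prev))
            if math.dist(nearest, prev) <= self.max_dist:
                new_tracks[tid] = nearest     # same person, ID carried over
                unmatched.remove(nearest)
        for c in unmatched:                   # new figure entering the area
            new_tracks[next(self._ids)] = c
        self.tracks = new_tracks              # IDs of figures that left are dropped
        return new_tracks
```

Each retained ID lets the per-action counts of the previous sketches be kept separately for each person in the scene.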
2. Details of the face and limbs are blurred and difficult to identify in thermal images, so personal privacy is not infringed. A conventional camera, by contrast, clearly captures a person's face; even if the face is erased in post-processing, the absence of personal-data leakage cannot be guaranteed 100%.
3. The human body is a stable heat source that can be sensed and separated from the surrounding environment, so the human figure can be clearly identified even when the ambient light is insufficient, whereas a conventional camera cannot obtain a clear image in dim light and identification is difficult. In addition, the present invention can also judge from the human thermal image whether a patient's body temperature is too high and poses a health hazard.
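The body-temperature judgment mentioned here could, under assumptions, be a simple threshold on the hottest pixels of the detected human-figure region. The 38.0 degree Celsius fever threshold and the averaging of the ten hottest pixels are illustrative choices, not values from the patent:

```python
def fever_suspected(region_temps_c, threshold_c=38.0, top_n=10):
    """`region_temps_c`: per-pixel temperatures (deg C) of a detected
    human-figure region. Averaging the hottest pixels rather than taking a
    single maximum reduces sensitivity to sensor noise (an assumed design)."""
    hottest = sorted(region_temps_c, reverse=True)[:top_n]
    return sum(hottest) / len(hottest) >= threshold_c
```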
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A thermal image based tracking, identification and monitoring system is characterized by comprising:
at least one monitoring host, adapted to be installed at an environmental location to monitor the state of persons at that environmental location, each monitoring host comprising:
a control unit connected with at least one infrared lens, wherein the infrared lens continuously photographs the environmental location to obtain multiple frames of thermal images;
an arithmetic unit connected with the control unit, which receives the multiple frames of thermal images through the control unit, analyzes the continuously received frames with a trained AI human-figure detection model to determine whether a human body is present in an effective detection area of the thermal images and what motion the human body performs within a monitoring range, and issues a warning message when the motion of the human body meets the condition for issuing one, wherein the warning message includes at least one of preparing to get out of bed, getting out of bed, falling, sedentary, dangerous, or motionless behaviors;
a memory unit connected with the control unit and the arithmetic unit for storing data and programs;
an input/output unit connected with the control unit and the arithmetic unit, comprising at least one transmission interface for establishing connection and data transmission between the monitoring host and other external devices;
a monitoring backend in communication connection with each monitoring host, wherein the monitoring backend comprises:
a cloud host in communication connection with each monitoring host to receive the thermal images and warning messages from each monitoring host;
a fixed-point host connected with the cloud host to display the warning messages;
wherein, when the AI human-figure detection model identifies each frame of thermal image, the following procedure is executed:
judging whether a human figure in the thermal image is located within the effective detection area, and discarding the human figure if not;
assigning an identification code (ID) to each human figure within the effective detection area, and removing the ID when the human figure leaves the effective detection area;
identifying the motion of the human figure and incrementing the count value corresponding to that motion;
when the count value of a human-figure motion accumulates to a threshold value, the arithmetic unit issues the warning message.
2. The system according to claim 1, wherein the monitoring backend further comprises a mobile device, and an application program is installed in the mobile device, and the mobile device is connected to the cloud host through the application program and receives the warning message.
3. The system according to claim 1, wherein the monitoring range comprises a bed range, a toilet range, an office range, or a workplace range, and at least part of the monitoring range is located within the effective detection area.
4. The system according to claim 3, wherein the effective detection area and the monitoring range are set according to instructions inputted by a user.
5. The system of claim 1, wherein, when the AI human-figure detection model is recognizing the motion of a human figure, it takes the continuous motion with the most motion-recognition records, or the heaviest weight, or the highest probability over a preceding first number of frames as the human-figure motion, and increments the count corresponding to that motion.
6. A tracking, identifying and monitoring method based on thermal image is characterized by comprising the following steps:
receiving a plurality of frames of thermal images continuously shot by an infrared lens;
utilizing a pre-trained AI human shape detection model to identify each frame of thermal image, wherein the AI human shape detection model executes the following procedures:
judging whether a human figure in the thermal image is located within an effective detection area, and discarding the human figure if not;
assigning an identification code (ID) to each human figure within the effective detection area, and removing the ID when the human figure leaves the effective detection area;
identifying the motion of the human figure and incrementing the count value corresponding to that motion;
and judging whether the count value of the human-figure motion has accumulated to a threshold value, and if so, generating a warning message.
7. The method as claimed in claim 6, wherein, before the step of receiving the thermal images, the method further comprises:
setting a detection area range: taking the entire picture photographed by the infrared lens as a visible area, and designating the effective detection area and one or more monitoring ranges within the visible area according to an instruction input by a user;
setting a detection frequency: setting the number of thermal image frames to be identified by the AI human-figure detection model per unit time.
8. The method as claimed in claim 6, wherein the monitoring range includes a bed range, a toilet range, an office range, or a workplace range, and at least part of the monitoring range is located within the effective detection area.
9. The method as claimed in claim 6, wherein the warning message includes at least one of preparing to get out of bed, getting out of bed, falling, sedentary, dangerous, or motionless behaviors.
10. The method as claimed in claim 6, wherein, when the AI human-figure detection model is recognizing the motion of a human figure, it takes the continuous motion with the most motion-recognition records, or the heaviest weight, or the highest probability over a preceding first number of frames as the human-figure motion, and increments the count corresponding to that motion.
CN202211613936.2A 2022-12-15 2022-12-15 Tracking, identifying and monitoring system and method based on thermal image Pending CN115965993A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211613936.2A CN115965993A (en) 2022-12-15 2022-12-15 Tracking, identifying and monitoring system and method based on thermal image


Publications (1)

Publication Number Publication Date
CN115965993A true CN115965993A (en) 2023-04-14

Family

ID=87353715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211613936.2A Pending CN115965993A (en) 2022-12-15 2022-12-15 Tracking, identifying and monitoring system and method based on thermal image

Country Status (1)

Country Link
CN (1) CN115965993A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination