CN117132949B - All-weather fall detection method based on deep learning - Google Patents

All-weather fall detection method based on deep learning

Info

Publication number
CN117132949B
CN117132949B (Application CN202311402745.6A)
Authority
CN
China
Prior art keywords
human body
frame
detection
target detection
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311402745.6A
Other languages
Chinese (zh)
Other versions
CN117132949A (en)
Inventor
王京华
曲从程
赵永兴
徐梓毓
刘亚东
刘自卫
黄江南
郝瑞源
汤发源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN202311402745.6A priority Critical patent/CN117132949B/en
Publication of CN117132949A publication Critical patent/CN117132949A/en
Application granted granted Critical
Publication of CN117132949B publication Critical patent/CN117132949B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/44Event detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The method uses security monitoring equipment to collect image information and, compared with detection methods based on environmental devices, wearable devices, and traditional visual information, offers high accuracy, suitability for complex environments, and all-weather detection. Based on widely deployed security monitoring cameras, it detects the state of the human body, effectively distinguishes a patient's rest area from the activity area, and performs fall detection on persons in the activity area; it is therefore all-weather, highly real-time, and fault-tolerant, and it effectively reduces the cost of hardware deployment.

Description

All-weather fall detection method based on deep learning
Technical Field
The invention relates to the technical fields of the Internet of Things, computer vision, and behavior recognition, and in particular to an all-weather fall detection method based on deep learning.
Background
Fall detection based on surveillance video offers a high recognition rate, places few demands on external conditions, and requires no additional equipment such as body-worn sensors, so it is becoming one of the mainstream fall detection technologies. In a ward or nursing home, for example, patients or elderly people may be reluctant to wear devices, and such devices can interfere with daily life and even activity; this is especially true in hospitals, where patients may need infusions or wear medical equipment, which causes further inconvenience.
Existing fall detection methods fall roughly into three categories:
1. Detection based on environmental devices: one or more sensors installed in the environment collect human motion and posture information, and data fusion across the sensors determines whether a fall has occurred. This approach is strongly affected by the environment, complex to deploy, and costly.
2. Detection based on wearable sensor devices: a sensor and a microcontroller form a small device that collects signals such as body acceleration in real time to analyze posture and judge whether the wearer has fallen. The recognition rate of this approach is low, however, and false alarms are common.
3. Detection based on traditional visual information: image processing and computer vision techniques judge falls by analyzing the body's center of gravity or head position, but reliability and fault tolerance are low, and the false alarm rate is high, especially in nighttime environments.
Disclosure of Invention
The invention provides an all-weather fall detection method based on deep learning, aiming to solve the problems of existing fall detection methods: complex sensor deployment, high cost, low recognition rate, and false alarms. The method uses security monitoring equipment to perform all-weather, real-time fall detection on persons in the environment, helping to discover the falls of patients or elderly people promptly and raise a timely alarm.
An all-weather fall detection method based on deep learning is realized by the following steps:
step one, collecting video images in the monitored area, uploading them frame by frame to an upper computer through an RTSP streaming media server, and preprocessing the video images received by the upper computer;
step two, the upper computer extracts features from the video image frame and judges whether the detected image is a daytime-mode color scene or a night-mode infrared grayscale scene; if it is a color scene, executing step three; if it is an infrared grayscale scene, executing step four;
step three, judging the aspect-ratio feature of the human body through target detection; if the aspect ratio of the human target detection bounding box exceeds the set fall threshold, preliminarily judging that the human body has fallen and executing step five; otherwise, considering the posture normal, detecting the next frame, and returning to step two;
step four, colorizing the infrared grayscale scene image through a neural network model, and returning to step three;
step five, judging the human activity area from the intersection-over-union of the human target detection bounding box obtained by target detection and the bounding boxes of beds, chairs, sofas, and similar furniture; if the human body is in the activity area, cropping the region according to the size of the detected bounding box, performing human posture detection, and executing step six; otherwise, considering the person to be in the rest area, detecting the next frame, and returning to step two;
step six, performing a joint fall judgment through the set aspect-ratio feature weight and skeleton-map weight; if a fall occurs, setting the fall flag bit to 1; otherwise, detecting the next frame and returning to step two.
The invention has the beneficial effects that:
Compared with detection methods based on environmental devices, wearable devices, and traditional visual information, the present method offers high accuracy, suitability for complex environments, and all-weather detection. Based on widely deployed security monitoring cameras, it detects the state of the human body, effectively distinguishes a patient's rest area from the activity area, and performs fall detection on persons in the activity area, so it is all-weather, highly real-time, and fault-tolerant while effectively reducing hardware deployment cost.
Drawings
To make the objects, technical solutions, and implementation effects of the present invention clearer, the accompanying drawings are described below:
FIG. 1 is a detection flow chart of an all-weather fall detection method based on deep learning;
fig. 2 is an effect diagram of the fall detection system of the present invention;
fig. 3 is a diagram of the day/night discrimination principle and processing flow of the present invention.
Detailed Description
This embodiment is described with reference to figs. 1 to 3. The all-weather fall detection method based on deep learning is implemented through the following steps:
S01, the security monitoring camera collects video image information within the monitored area; the video is in the mainstream H.264 coding format and is pushed to the RTSP streaming server.
S02, the upper computer reads the video through an RTSP pull stream and applies standardization, data normalization, and other preprocessing to the acquired images, as in the sketch below.
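A minimal sketch of the RTSP pull and preprocessing of steps S01 to S02, assuming an OpenCV-readable RTSP endpoint; the URL, input size, and normalization scheme are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

RTSP_URL = "rtsp://192.168.1.64:554/stream"  # hypothetical camera endpoint

cap = cv2.VideoCapture(RTSP_URL)

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Resize to the detector input size and normalize pixel values to [0, 1]."""
    frame = cv2.resize(frame, (640, 640))    # YOLOX-S commonly takes 640x640 input
    return frame.astype(np.float32) / 255.0  # data normalization

while cap.isOpened():
    ok, frame = cap.read()                   # H.264 frames decoded by OpenCV/FFmpeg
    if not ok:
        break
    tensor = preprocess(frame)
    # ... hand `tensor` to the day/night check and the detection stages ...
```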
S03, the upper computer analyzes the image features and distinguishes whether the detected image is a daytime color scene or a nighttime infrared grayscale scene; for a daytime color scene the method proceeds to step S04, and for a nighttime infrared grayscale scene it proceeds to step S05.
In this step, the camera switches between the daytime RGB and nighttime infrared monitoring modes according to the illumination intensity. The upper computer extracts features from the detected picture and judges whether the current period is daytime or nighttime by comparing three-channel pixel values at five selected positions.
The camera switches between day and night automatically according to the illumination intensity; the corresponding outputs are a full-color RGB image and a grayscale image, both three-channel images. The difference is that a full-color image forms its colors through combinations of the three color channels red (R), green (G), and blue (B), whereas every pixel of the grayscale image satisfies R = G = B.
The upper computer therefore distinguishes full-color from grayscale images, and hence day from night, according to whether the three channel values are identical. To prevent gray surfaces of objects from interfering with this day/night decision, the upper computer performs the RGB feature judgment at five positions in the detected video frame: the upper left corner L1, the lower left corner L2, the upper right corner R1, the lower right corner R2, and the center point Center. The feature at each position is represented as a list [R, G, B]; the five lists are judged separately, and only if all five positions satisfy R (0-255) = G (0-255) = B (0-255) does the system switch to the night detection mode; otherwise it switches to the daytime detection mode. A minimal version of this check is sketched below.
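A sketch of the five-point day/night check, assuming BGR frames as delivered by OpenCV; the sampling positions and the equality test follow the description above.

```python
import numpy as np

def is_night_frame(frame: np.ndarray) -> bool:
    """Return True if all five sampled pixels satisfy R == G == B (grayscale/IR)."""
    h, w = frame.shape[:2]
    points = {
        "L1": (0, 0),                # upper left corner
        "L2": (h - 1, 0),            # lower left corner
        "R1": (0, w - 1),            # upper right corner
        "R2": (h - 1, w - 1),        # lower right corner
        "Center": (h // 2, w // 2),  # center point
    }
    for (row, col) in points.values():
        b, g, r = frame[row, col]    # OpenCV stores channels as B, G, R
        if not (int(r) == int(g) == int(b)):
            return False             # any colored pixel -> daytime RGB mode
    return True                      # all five positions gray -> night IR mode
```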
S04, the human aspect ratio is judged through the YOLOX-S target detection network with the judgment threshold set to 0.8; if the threshold is exceeded, the preliminary judgment is complete and the method proceeds to step S06.
In this step, human target detection is performed on the camera video through YOLOX-S; the aspect ratio of the human body is obtained from the coordinates of the human BBox identified by the detection box. The box is encoded as [x_min, y_min, x_max, y_max], where (x_min, y_min) is the upper left corner of the box and (x_max, y_max) is the lower right corner.
The aspect ratio is calculated as:
R = x / y = (x_max − x_min) / (y_max − y_min)
where x is the width of the bounding box and y its height. With the human bounding-box aspect-ratio threshold set to 0.8, if R ≥ 0.8 the aspect ratio is abnormal (see the sketch below).
S05, the nighttime infrared grayscale image is colorized through a neural network model: the network judges the scene in which the video frame is located, performs the coloring operation, and the method returns to S04.
In this step, features are extracted from the video frames through a neural network. In the night infrared monitoring mode, every pixel of the picture satisfies R (0-255) = G (0-255) = B (0-255). Common scene pictures are augmented with the COCO dataset and a self-built dataset to ensure generalization across different scenes. The coloring network adopts the convolutional-neural-network image colorization algorithm proposed by Luke Melas-Kyriazi et al. of Harvard University, optimized with depthwise separable convolutions so that the parameter count and computation of the network are reduced, meeting the requirements of real-time detection. The trained model analyzes the environment of the video and makes reasonable predictions: it analyzes the information of the whole scene, matches reasonable RGB colors to the pixel features of the scene, and fully colorizes the video frame by changing the values of the three RGB channels. A sketch of the depthwise-separable building block follows.
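A minimal PyTorch sketch of a depthwise-separable convolution of the kind used to slim the colorization network; the layer sizes are illustrative assumptions, not the patent's actual architecture.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (per-channel spatial filter) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)      # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # channel mixing
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.pointwise(self.depthwise(x)))

# Parameter comparison against a plain 3x3 convolution:
plain = nn.Conv2d(64, 128, kernel_size=3, padding=1)
sep = DepthwiseSeparableConv(64, 128)
print(sum(p.numel() for p in plain.parameters()))  # 73,856
print(sum(p.numel() for p in sep.parameters()))    # 8,960
```

The roughly eightfold parameter reduction illustrates why this substitution shrinks the network's size and computation.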
S06, the human activity area is judged through target detection on the daytime color image or the colorized nighttime image; if the human body is in the activity area, the region is cropped according to the size of the detection box and the method proceeds to step S07.
In this step, after receiving the video frame, the upper computer performs target analysis through the target detection network to distinguish human bodies from other objects. Because fall-like behavior is easily confused with sitting, lying, and similar behaviors, objects such as beds, chairs, and sofas are also framed by the target detection network to separate object information from human information, and the human activity area is judged from the intersection-over-union of the detection boxes. The detection boxes are encoded as:
BBox = [x_min, y_min, x_max, y_max]
Box A (person) is set as:
A(person): BBox = [x1, y1, x2, y2]
where box A is the human target detection bounding box, (x1, y1) its upper left corner, and (x2, y2) its lower right corner.
Box B (bed, sofa, etc.) is set as:
B(bed, sofa, etc.): BBox = [x3, y3, x4, y4]
where box B is the bounding box of resting furniture such as a bed or sofa, (x3, y3) its upper left corner, and (x4, y4) its lower right corner.
Area of the intersection of A and B:
S∩ = |x2 − x3| × |y2 − y3|
Area of the union of A and B:
S∪ = |x2 − x1| × |y2 − y1| + |x4 − x3| × |y4 − y3| − S∩
Intersection-over-union:
IOU = S∩ / S∪
The activity-area threshold is set to 80%: when IOU ≥ 80%, the human body is judged to be in the rest area rather than the activity area; otherwise, the human body is in the activity area. A conventional IoU computation is sketched below.
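A sketch of the activity-area test; the patent writes the intersection and union in terms of specific corner pairs, so the conventional IoU computation below is a standard reading of the same test, with the 80% rest-area threshold stated above.

```python
def iou(box_a, box_b) -> float:
    """Intersection-over-union of two [x_min, y_min, x_max, y_max] boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # zero if the boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def in_rest_area(person_box, furniture_box) -> bool:
    """IOU >= 80% with a bed/chair/sofa box -> person is resting, skip pose check."""
    return iou(person_box, furniture_box) >= 0.8
```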
S07, posture detection is performed on the detected human target. Using the human detection box coordinates, the region containing the person is cropped with the BBox values and sent to the OpenPose network for posture recognition. Various fallen and normal human posture pictures are collected, and OpenPose is used to recognize skeleton maps; setting the background of the generated picture to black preserves only the image containing the skeleton map. A posture detection network is trained on a dataset divided into skeleton maps of normal postures and skeleton maps of fallen postures, and the trained model is exported through the PyTorch deep learning framework and deployed for human posture detection, as in the sketch below.
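A sketch of the S07 pipeline: crop the person region with the BBox, render its skeleton, and classify it as normal or fallen. `estimate_skeleton` and `classifier` stand in for the OpenPose network and the trained PyTorch classifier; both names, and the tensor format the classifier expects, are hypothetical.

```python
import torch

def detect_posture(frame, bbox, estimate_skeleton, classifier) -> float:
    """Return the fall probability for the person region cropped by BBox."""
    x_min, y_min, x_max, y_max = (int(v) for v in bbox)
    person_crop = frame[y_min:y_max, x_min:x_max]  # frame-select the region by BBox
    skeleton = estimate_skeleton(person_crop)      # skeleton map on a black background
    with torch.no_grad():
        logits = classifier(skeleton)              # trained 2-class net: [normal, fallen]
        return torch.softmax(logits, dim=-1)[..., 1].item()
```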
S08, a joint fall judgment is made through the aspect-ratio feature and the human posture feature. An aspect ratio ≥ 0.8 enters the posture detection stage; to improve fall detection accuracy, the contributions of the bounding-box aspect-ratio feature α and the human posture feature β to the fall judgment are set to 0.4 and 0.6 respectively, and the two features are combined to judge the fall posture. If a fall occurs, the fall flag bit fall = 1.
The two features are fused by the joint judgment:
acc = 0.4 × α + 0.6 × β
where α is the aspect-ratio feature and β is the human posture feature. If the fall probability acc after the joint judgment is ≥ 74%, the human body is judged to be in a fallen state, and the human-fall data flag bit fall = 1. A sketch of this fused decision follows.
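A sketch of the joint decision under the stated weights; here α is taken as a 0/1 aspect-ratio indicator and β as the pose network's fall probability. Both interpretations are assumptions, since the patent gives only the weights and the fused 74% threshold.

```python
def joint_fall_decision(aspect_abnormal: bool, fall_prob: float) -> bool:
    alpha = 1.0 if aspect_abnormal else 0.0  # bounding-box aspect-ratio feature (assumed binary)
    beta = fall_prob                         # human-posture feature from the skeleton classifier
    acc = 0.4 * alpha + 0.6 * beta           # acc = 0.4 x alpha + 0.6 x beta
    return acc >= 0.74                       # fall flag set when acc >= 74%

fall = 1 if joint_fall_decision(True, 0.9) else 0  # acc = 0.94 -> fall = 1
```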
In this embodiment, the fall flag bit fall = 1 represents fall behavior. The upper computer sends the flag bit representing the human state to the alarm through a serial port (hexadecimal, HEX), realizing the alarm function. The information can be transmitted through a USB-to-RS485 device, which performs the corresponding alarm operation after receiving the HEX message; a sketch of this path follows.
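A sketch of the alarm path, assuming a USB-to-RS485 adapter exposed as a serial port and the pyserial package; the port name, baud rate, and HEX payload are placeholders, not values from the patent.

```python
import serial  # pyserial

def send_fall_alarm(port: str = "/dev/ttyUSB0") -> None:
    """Write the fall flag to the RS485 alarm device as a raw hex byte."""
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        link.write(bytes([0x01]))  # hypothetical payload: 0x01 = fall detected
```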
Finally, it is noted that the above preferred embodiments are only intended to illustrate the technical solution of the present invention. Although the invention has been described in detail through these embodiments, those skilled in the art will understand that the described solution may be modified, or some of its technical features replaced by equivalents, and such modifications and substitutions also fall within the protection scope of the present invention.

Claims (7)

1. An all-weather fall detection method based on deep learning, characterized in that the method is realized through the following steps:
step one, collecting video images in the monitored area, uploading them frame by frame to an upper computer through an RTSP streaming media server, and preprocessing the video images received by the upper computer;
step two, the upper computer extracts features from the video image frame and judges whether the detected image is a daytime-mode color scene or a night-mode infrared grayscale scene; if it is a color scene, executing step three; if it is an infrared grayscale scene, executing step four;
step three, judging the aspect-ratio feature of the human body through target detection; if the aspect ratio of the human target detection bounding box exceeds the set fall threshold, preliminarily judging that the human body has fallen and executing step five; otherwise, considering the posture normal, detecting the next frame, and returning to step two;
wherein the aspect-ratio feature of the human target detection bounding box is used to preliminarily judge the current morphological feature of the human body, the specific process being as follows:
performing human target detection on the video image through the YOLOX-S target detection network to obtain the human target detection bounding box, and obtaining the human aspect ratio from the coordinates of the bounding box BBox: with the encoding of BBox set to [x_min, y_min, x_max, y_max], the bounding-box aspect ratio is calculated as:
R = x / y = (x_max − x_min) / (y_max − y_min)
where x is the width of the bounding box, y is its height, (x_min, y_min) is its upper-left corner, and (x_max, y_max) is its lower-right corner; the fall threshold of the human aspect ratio is set to 0.8, and if R ≥ 0.8, the aspect ratio is abnormal;
step four, colorizing the infrared grayscale scene image through a neural network model, and returning to step three;
step five, judging whether the human body is in the activity area according to the intersection-over-union of the human target detection bounding box obtained by target detection and the bounding boxes of the bed, chair, and sofa; if the human body is in the activity area, cropping the region according to the size of the detected bounding box, performing human posture detection, and executing step six; otherwise, considering the person to be in the rest area, detecting the next frame, and returning to step two;
wherein posture detection is performed on the frame-selected human target, the specific process being as follows:
according to the coordinates of the human target detection bounding box, cropping the region containing the human target using the BBox values and sending it to the OpenPose network for posture recognition; collecting fallen and normal human posture images and using the OpenPose network for skeleton-map recognition, where setting the background of the generated picture to black preserves only the image containing the skeleton map; training the posture detection network on a dataset divided into skeleton maps of normal postures and skeleton maps of fallen postures, and exporting the trained model through PyTorch for deployment to detect the human posture;
step six, performing a joint fall judgment through the set aspect-ratio feature weight and skeleton-map weight; if a fall occurs, setting the fall flag bit to 1; otherwise, detecting the next frame and returning to step two.
2. The deep-learning-based all-weather fall detection method as claimed in claim 1, characterized in that in step six, when the fall flag bit is 1, the upper computer sends alarm information to the alarm sub-device.
3. The deep-learning-based all-weather fall detection method as claimed in claim 1, characterized in that in step two, the camera automatically switches between day and night according to the illumination intensity; the corresponding outputs are a full-color RGB image and a grayscale image, both three-channel images, the difference being that the full-color image forms different colors through combinations of the R, G, B color channels, whereas every pixel of the grayscale image satisfies R = G = B.
4. The deep-learning-based all-weather fall detection method as claimed in claim 1, characterized in that in step two, the upper computer extracts features from the detected image frame and judges between the color scene and the night infrared grayscale scene as follows: in the video image frame, RGB feature judgment is performed at five positions, namely the upper left corner L1, the lower left corner L2, the upper right corner R1, the lower right corner R2, and the center point Center; the image feature at each position is represented in List form, and the Lists of the five positions are judged separately.
5. The deep-learning-based all-weather fall detection method as claimed in claim 1, characterized in that in step four, the night image is colorized through a neural network model, the specific process being as follows:
night images are augmented with the COCO dataset and a self-built dataset, and the environment of the video is analyzed and predicted according to the trained model: the model analyzes the information of the whole scene, matches RGB colors according to the pixel features of the scene, and fully colorizes the video image frame by changing the values of the three RGB channels.
6. The deep-learning-based all-weather fall detection method as claimed in claim 1, characterized in that in step five, the target area is detected through the target detection network, the human activity area is judged, and the region is cropped according to the size of the detected bounding box, specifically: after the upper computer receives the video image frame, target analysis is performed through the target detection network to distinguish human bodies from objects; the intersection-over-union of the detection boxes is judged to determine the human activity area, with the activity-area threshold set to 80%; that is, when IOU ≥ 80%, the person is judged to be resting (lying, reclining, or sitting); otherwise, the human body is in the activity area.
7. The deep-learning-based all-weather fall detection method as claimed in claim 1, characterized in that the specific process of step six is as follows: when the aspect ratio of the human target detection bounding box is ≥ 0.8, posture detection judgment is performed:
the contributions of the bounding-box aspect-ratio feature α and the human posture feature β to the fall detection judgment are set to 0.4 and 0.6 respectively, and the joint fall posture is judged through the two features by the formula:
acc = 0.4 × α + 0.6 × β
if acc ≥ 74%, the human body is judged to be in a fallen state, and the fall flag bit fall = 1.
CN202311402745.6A 2023-10-27 2023-10-27 All-weather fall detection method based on deep learning Active CN117132949B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311402745.6A CN117132949B (en) 2023-10-27 2023-10-27 All-weather fall detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311402745.6A CN117132949B (en) 2023-10-27 2023-10-27 All-weather fall detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN117132949A CN117132949A (en) 2023-11-28
CN117132949B true CN117132949B (en) 2024-02-09

Family

ID=88853082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311402745.6A Active CN117132949B (en) 2023-10-27 2023-10-27 All-weather fall detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN117132949B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242844A (en) * 2020-01-19 2020-06-05 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, server, and storage medium
CN111726514A (en) * 2019-03-20 2020-09-29 浙江宇视科技有限公司 Camera and day and night mode switching method, device, equipment and medium thereof
CN113963442A (en) * 2021-10-25 2022-01-21 重庆科技学院 Fall-down behavior identification method based on comprehensive body state features
CN114495280A (en) * 2022-01-29 2022-05-13 吉林大学第一医院 Whole-day non-accompanying ward patient falling detection method based on video monitoring
CN115082825A (en) * 2022-06-16 2022-09-20 中新国际联合研究院 Video-based real-time human body falling detection and alarm method and device
CN115116127A (en) * 2022-05-25 2022-09-27 西安北斗安全技术有限公司 Fall detection method based on computer vision and artificial intelligence
CN115984967A (en) * 2023-01-05 2023-04-18 北京轩宇空间科技有限公司 Human body falling detection method, device and system based on deep learning
CN116935108A (en) * 2023-07-14 2023-10-24 蔚复来(浙江)科技股份有限公司 Method, device, equipment and medium for monitoring abnormal garbage throwing behavior

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230103112A1 (en) * 2021-09-28 2023-03-30 Darvis Inc. System and method for monitoring activity performed by subject


Also Published As

Publication number Publication date
CN117132949A (en) 2023-11-28

Similar Documents

Publication Publication Date Title
CN109919132B (en) Pedestrian falling identification method based on skeleton detection
Adhikari et al. Activity recognition for indoor fall detection using convolutional neural network
US7479980B2 (en) Monitoring system
US6774905B2 (en) Image data processing
CN102387345B (en) Safety monitoring system based on omnidirectional vision for old people living alone
CN102306304B (en) Face occluder identification method and device
EP2467805B1 (en) Method and system for image analysis
CN110321780B (en) Abnormal falling behavior detection method based on space-time motion characteristics
CN105283129A (en) Information processing device, information processing method, and program
US11298050B2 (en) Posture estimation device, behavior estimation device, storage medium storing posture estimation program, and posture estimation method
Zhang et al. Evaluating depth-based computer vision methods for fall detection under occlusions
KR100822476B1 (en) Remote emergency monitoring system and method
CN114842397A (en) Real-time old man falling detection method based on anomaly detection
Debard et al. Camera based fall detection using multiple features validated with real life video
CN102117484A (en) Processing system, processing method and image classification method using image color information
CN115331283A (en) Detection system for detecting falling of people in living space and detection method thereof
CN113392765A (en) Tumble detection method and system based on machine vision
JP6851221B2 (en) Image monitoring device
CN114898261A (en) Sleep quality assessment method and system based on fusion of video and physiological data
CN117132949B (en) All-weather fall detection method based on deep learning
CA2393932C (en) Human object surveillance using size, shape, movement
CN105718886A (en) Moving personnel safety abnormity tumbling detection method
Park et al. A track-based human movement analysis and privacy protection system adaptive to environmental contexts
WO2023096394A1 (en) Server for determining posture type and operation method thereof
US11275947B2 (en) Image processing system, image processing method, and image processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant