CN115984967A - Human body falling detection method, device and system based on deep learning

Human body falling detection method, device and system based on deep learning

Info

Publication number: CN115984967A
Application number: CN202310011636.5A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 谢冰, 刘鸿瑾, 张绍林, 朱梦尧, 李健
Assignee (current and original): Beijing Sunwise Space Technology Ltd
Priority/filing date: 2023-01-05
Publication date: 2023-04-18
Classification (Landscapes): Alarm Systems (AREA)

Abstract

A human body fall detection method, device, and system based on deep learning are provided. The method comprises the following steps: S100, a picture is input into a trained Model1 pre-deployed on an edge embedded terminal device for recognition, yielding human body region information and head region information; S200, after filtering, if both the shift of the human body's center of gravity and the change of the human body's aspect ratio (height-to-width ratio) exceed preset values, the human body is treated as a potential fall risk object; S300, a trained Model2, likewise pre-deployed on the edge embedded terminal device, performs fine-grained recognition of human body key points on the potential fall risk object, and the resulting posture judgment is fused with the filtered information from S200 to decide whether the potential fall risk object has fallen; S400, the fall state is sent to a server cloud platform through an external IoT device. The invention aims to solve the problems of low accuracy, difficult implementation, high cost, and poor privacy in existing fall detection.

Description

Human body falling detection method, device and system based on deep learning
Technical Field
The application relates to the technical field of fall detection, and in particular to a human body fall detection method, device, and system based on deep learning.
Background
With the increasingly serious aging of China's population, falls have become very common among the elderly. For a young person, a fall is a trivial matter: dust yourself off and get up. For an elderly person, however, the risk is no less serious than a heart attack or stroke. The older a person is, the more likely a fall becomes, and since most elderly people have osteoporosis or osteopenia, a fall can have severe consequences for them.
Existing fall detection equipment can be divided into wearable and non-wearable types. Wearable devices generally use an accelerometer or gravity sensor, are usually placed at a single position on the body such as the wrist or neck, and decide whether the user has fallen by processing the sensor's output signal. Although their detection accuracy is high, most users are elderly people whose memory has noticeably declined; they easily forget to wear the device, so it cannot safeguard them in time. Non-wearable equipment needs no direct contact with the user; the most common device is a camera, and image recognition technology can easily judge a person's posture. However, most such devices on the market use cloud-based recognition, which is bad for protecting the privacy of the elderly.
Disclosure of Invention
The application provides a human body fall detection method, device, and system based on deep learning, aiming to solve the problems of low accuracy, difficult implementation, high cost, and poor privacy in existing fall detection.
In order to achieve the above object, the present invention adopts the following technical solution:
A human body fall detection method based on deep learning comprises the following steps:
S100, pictures acquired by a camera in the monitored area are input into a trained Model1 pre-deployed on an edge embedded terminal device, and human body targets are identified to obtain human body region information and head region information;
S200, the human body region information and head region information are recorded and filtered; according to the filtered information, it is judged whether the shift of the human body's center of gravity and the change of the human body's aspect ratio both exceed preset values; if so, the human body is treated as a potential fall risk object and S300 is executed; if not, it is judged that no fall has occurred, and the process ends;
S300, a trained Model2 pre-deployed on the edge embedded terminal device performs human body key point feature recognition on the potential fall risk object; key point coordinate information is acquired, the posture of the potential fall risk object is judged, and the posture judgment is fused with the judgment result of S200 to finally decide whether the potential fall risk object has fallen;
and S400, when the result of S300 is a fall, the fall state is sent to a server cloud platform through an external IoT device connected to the edge embedded terminal device, so that the server cloud platform sends alarm information to a pre-bound emergency contact terminal.
A human body fall detection device based on deep learning comprises:
a first recognition module, which inputs pictures acquired by the camera in the monitored area into a trained Model1 pre-deployed on the edge embedded terminal device, identifies human body targets, and obtains human body region information and head region information;
a first judgment module, which records and filters the human body region information and head region information and judges from the filtered information whether the shift of the human body's center of gravity and the change of the human body's aspect ratio both exceed preset values; if so, it treats the human body as a potential fall risk object and invokes the second judgment module; if not, it judges that no fall has occurred, and the process ends;
a second judgment module, which uses a trained Model2 pre-deployed on the edge embedded terminal device to perform human body key point feature recognition on the potential fall risk object, acquires key point coordinate information, judges the posture of the potential fall risk object, and fuses the posture judgment with the result of the first judgment module to finally decide whether the potential fall risk object has fallen;
and a sending module, which, when the second judgment module's result is a fall, sends the fall state to the server cloud platform through the external IoT device connected to the edge embedded terminal device, so that the server cloud platform sends alarm information to the pre-bound emergency contact terminal.
A human body fall detection system based on deep learning comprises an edge embedded terminal device and an IoT device, wherein the edge embedded terminal device comprises a video processing subsystem, a neural network module, and a judgment processing module, and the IoT device comprises a model control module and a network connection module;
the video processing subsystem receives and outputs pictures acquired in the monitored area by the camera;
the neural network module loads the trained Model1 to perform human body target recognition on the pictures input by the video processing subsystem, obtaining human body region information and head region information;
the judgment processing module records and filters the human body region information and head region information and judges from the filtered information whether the shift of the human body's center of gravity and the change of the human body's aspect ratio both exceed preset values; if so, the human body is treated as a potential fall risk object; if not, it is judged that no fall has occurred, and the process ends;
the network connection module receives the judgment result of the judgment processing module and transmits it to the model control module;
the model control module, upon receiving a judgment result of potential fall risk object, controls the neural network module to load the trained Model2, which performs human body key point feature recognition on the potential fall risk object and acquires key point coordinate information;
the judgment processing module judges the posture of the potential fall risk object from the key point coordinate information and fuses the posture judgment with the earlier screening result to finally decide whether the potential fall risk object has fallen;
and the network connection module receives the final judgment result of the judgment processing module and, when the result is a fall, sends the fall state to the server cloud platform, so that the server cloud platform sends alarm information to the pre-bound emergency contact terminal.
The invention has the following beneficial effects:
1. Model1 performs recognition followed by filtering to make a first judgment of whether the object is at potential fall risk; this judgment considers the change of the human body's aspect ratio and the shift of its center of gravity, and further takes the head acceleration into account. Model2 then performs a second extraction of key point coordinate information and analyzes the posture, and the posture result is combined with the initial judgment for the final decision, improving accuracy and reducing the misjudgment rate;
2. The pre-trained models are deployed on the edge embedded device, and the fall information is sent out by the external IoT device; all detection algorithms run edge inference locally and no cloud internet technology is involved, so the user's privacy is protected to the greatest extent.
Drawings
Fig. 1 is a flowchart example of a human fall detection method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of extracting coordinate information of key points of a human body according to an embodiment of the present application.
Fig. 3 is an example of the operation flow of fall determination according to an embodiment of the present application.
Fig. 4 is a structural example of Model1 according to an embodiment of the present application.
Fig. 5 is a structural example of Model2 according to an embodiment of the present application.
Fig. 6 is a block diagram of a human fall detection apparatus according to an embodiment of the present application.
Fig. 7 is a block diagram of a human fall detection system according to an embodiment of the present application.
Fig. 8 is a demonstration of the inference results of Model1 according to an embodiment of the present application.
Fig. 9 is a demonstration of the inference results of Model2 according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings; the described embodiments are only some, not all, of the embodiments of the present invention.
In one aspect of the embodiments of the present application, a human body fall detection method based on deep learning is provided; exemplary flowcharts are shown in fig. 1 and fig. 3. Specifically, the method comprises the following steps:
S100, pictures acquired by the camera in the monitored area are input into a trained Model1 pre-deployed on the edge embedded terminal device, and human body targets are identified to obtain human body region information and head region information.
Model1 is a YOLO V2 model that adopts ResNet18 as its backbone feature extraction network; its structure is shown in fig. 4, and fig. 8 shows an example of its inference results.
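As an illustration of this step, the following minimal sketch runs a Darknet-format detector such as Model1 with OpenCV's DNN module and collects the detected body and head regions. The file names "model1.cfg"/"model1.weights", the 416x416 input size, and the class indices (0 = human body, 1 = head) are assumptions for illustration, not values given in the source.

    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("model1.cfg", "model1.weights")  # assumed file names
    out_layers = net.getUnconnectedOutLayersNames()

    def detect(frame, conf_thresh=0.5):
        h, w = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        net.setInput(blob)
        boxes = []
        for output in net.forward(out_layers):
            for det in output:  # det = [cx, cy, bw, bh, objectness, class scores...]
                scores = det[5:]
                cls = int(np.argmax(scores))
                conf = float(det[4] * scores[cls])
                if conf > conf_thresh:
                    cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                    boxes.append((cls, conf, cx - bw / 2, cy - bh / 2, bw, bh))
        return boxes  # assumed classes: 0 = human body region, 1 = head region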
In some more specific embodiments, human body images in different scenes are collected by the camera device; the shooting scenes include but are not limited to a living room, bedroom, study, corridor, and outdoor locations, and the human actions include but are not limited to standing, sitting, falling, bending, squatting, and lying on the side. The camera device should capture images both under sufficient daytime illumination and under insufficient nighttime illumination, i.e., the images acquired at night should contain infrared information.
As an optional implementation, to improve the recognition accuracy of the model algorithm, in this embodiment the motion sequences of the human body should be shot from multiple angles (the side, front, and back of the body) in the different scenes, and frame images should be extracted from the human motion videos.
S200, the human body region information and head region information are recorded and filtered, and the next model, Model2, is activated for targets whose center-of-gravity shift and aspect-ratio change are both excessive.
Specifically, after filtering, it is judged from the filtered information whether the shift of the human body's center of gravity and the change of the human body's aspect ratio both exceed a preset range; if so, the human body is treated as a potential fall risk object, the next model Model2 is activated, and S300 is executed; if not, it is judged that no fall has occurred, and the process ends.
Through step S100, the human body coordinate information and head coordinate information within the monitored region can be obtained, saved, and filtered. Because of environmental and other uncertain factors, the results inferred by the model contain some error, so a Kalman filtering algorithm is applied to the observed data.
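A minimal sketch of such filtering follows: a constant-velocity Kalman filter applied to one observed coordinate stream (e.g., the head x coordinate), with one filter instance per tracked coordinate. The noise covariances q and r are illustrative assumptions; the source does not specify the filter parameters.

    import numpy as np

    class Kalman1D:
        def __init__(self, q=1e-3, r=1e-1):
            self.x = np.zeros(2)                         # state: [position, velocity]
            self.P = np.eye(2)                           # state covariance
            self.F = np.array([[1.0, 1.0], [0.0, 1.0]])  # transition, dt = 1 frame
            self.H = np.array([[1.0, 0.0]])              # we observe position only
            self.Q = q * np.eye(2)                       # process noise (assumed)
            self.R = np.array([[r]])                     # measurement noise (assumed)

        def update(self, z):
            # predict
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            # correct with the new observation z
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + (K @ y).ravel()
            self.P = (np.eye(2) - K @ self.H) @ self.P
            return self.x[0]                             # filtered position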
When a human body falls, its center of gravity and aspect ratio change greatly. However, such changes only show that the body's motion has changed drastically; they cannot by themselves establish a fall event. The acceleration of the human head must therefore also be measured to assist the fall risk judgment: when judging whether the shift of the center of gravity and the change of the aspect ratio both exceed the preset values, it is also necessary to judge whether the head acceleration exceeds a preset value, and only if all of them do is the target judged a potential fall risk object.
Specifically, in step S200, whether the shift of the human body's center of gravity, the change of the human body's aspect ratio, and the acceleration of the human head exceed the preset values is determined by the following weighted summation method:
Let the head region coordinates obtained from the filtered information at time N be (x_h(N), y_h(N)), and let the human body region coordinates be (x_b(N), y_b(N)). The head acceleration a_h(N) at time N is computed over 10 consecutive frames, i.e., from the coordinates at times N-9 through N; its components a_hx(N) and a_hy(N), the accelerations in the x and y directions at time N, are given by formulas that appear only as equation images in the original publication.
The barycentric coordinates (x_g(N), y_g(N)) at time N are obtained as a weighted sum of the head region coordinates and the body region coordinates:
x_g(N) = x_h(N) + λ(x_b(N) - x_h(N)), y_g(N) = y_h(N) + λ(y_b(N) - y_h(N)),
where λ is a weight factor with value range (0.5, 1).
The center-of-gravity shift at time N is the difference between the barycentric coordinates at time N and those at time N-1, i.e., (Δx_g(N), Δy_g(N)) = (x_g(N) - x_g(N-1), y_g(N) - y_g(N-1)).
A potential fall risk coefficient L_r1(N) with value range (0, 1) is then defined (its formula likewise appears only as an equation image in the original publication) in terms of the center-of-gravity shift, the head acceleration, the human body width w_b, and the human body height h_b, the width and height being obtained from the filtered information.
When the potential fall risk coefficient L_r1(N) is greater than or equal to a preset threshold, the target is judged a potential fall risk object; when L_r1(N) is less than the preset threshold, it is judged that no fall has occurred.
Accordingly, whether to activate the next model, Model2, is decided by comprehensively judging the center of gravity, the aspect ratio, and the head acceleration of the human body.
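The sketch below illustrates this screening step under stated assumptions. The barycenter follows the weighted-sum formula given above; because the published formulas for the head acceleration and for L_r1(N) appear only as equation images, the mean second difference and the sigmoid-squashed weighted sum used here are stand-ins for illustration, not the patented formulas.

    import numpy as np

    LAM = 0.7  # weight factor lambda, assumed within the stated range (0.5, 1)

    def barycenter(xh, yh, xb, yb, lam=LAM):
        # weighted sum of head region and body region coordinates
        return xh + lam * (xb - xh), yh + lam * (yb - yh)

    def head_acceleration(xs, ys):
        # xs, ys: filtered head coordinates over 10 consecutive frames (N-9..N);
        # a mean second difference is an assumed stand-in for the published formula
        return float(np.hypot(np.diff(xs, n=2).mean(), np.diff(ys, n=2).mean()))

    def fall_risk(shift, ratio_change, accel, h_b, weights=(0.4, 0.3, 0.3)):
        # shift: center-of-gravity shift, normalized here by the body height h_b;
        # the weights and the sigmoid squashing into (0, 1) are assumptions
        s = weights[0] * shift / max(h_b, 1e-6) + weights[1] * ratio_change + weights[2] * accel
        return 1.0 / (1.0 + np.exp(-s))  # plays the role of L_r1(N)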
S300, a trained Model2 pre-deployed on the edge embedded terminal device performs human body key point feature recognition on the potential fall risk object; key point coordinate information is acquired, the posture of the potential fall risk object is judged, and the posture judgment is fused with the judgment result of S200 to finally decide whether the potential fall risk object has fallen.
Activation of Model2 means that a potentially fallen object is present in the area; the object's key point coordinate information can then be acquired through Model2 to judge its posture in detail. The structure of Model2, a YOLO V3 Tiny model using ResNet18 as its backbone feature extraction network, is shown in fig. 5; the recognized human body key points are shown in fig. 2, and fig. 9 shows an example of its inference results.
After the feature key points of the monitored object are obtained, the object's posture is further judged as standing, bending, falling, or sitting, and whether the object has fallen is comprehensively decided by further combining this with the result of step S200.
Specifically, when Model2 is used for recognition in S300, at time M the posture of the potential fall risk object is finely classified with a KNN classification algorithm according to the key point coordinate information, yielding the probability η(M) of the falling category.
In S300, the posture judgment result and the judgment result of S200 are fused using the following formula:
L_r(M) = η(M) · L_r1(N),
where L_r(M) is the fusion result at time M.
It is then judged whether L_r(M) is greater than or equal to a preset fall threshold: if so, the potential fall risk object is finally judged to have fallen; if not, it is judged not to have fallen.
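A minimal sketch of this stage follows: a KNN classifier (here scikit-learn's KNeighborsClassifier) estimates η(M) from the flattened key point vector, and the fusion L_r(M) = η(M) · L_r1(N) is thresholded. The training files, the feature layout, k = 5, and the 0.5 threshold are assumptions for illustration.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # assumed pre-collected training data: each row is a flattened, normalized
    # key point vector; labels are the four postures named in the text
    X_train = np.load("pose_train_X.npy")  # hypothetical file names
    y_train = np.load("pose_train_y.npy")  # e.g., "standing", "bending", "falling", "sitting"

    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_train, y_train)

    def final_decision(keypoints, l_r1, fall_thresh=0.5):
        feats = np.asarray(keypoints, dtype=float).reshape(1, -1)
        probs = knn.predict_proba(feats)[0]
        eta = probs[list(knn.classes_).index("falling")]  # eta(M)
        l_r = eta * l_r1                                  # L_r(M) = eta(M) * L_r1(N)
        return l_r >= fall_thresh                         # True -> final judgment: fall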
S400, when the result of S300 is a fall, the fall state is sent to the server cloud platform through the external IoT device connected to the edge embedded terminal device, so that the server cloud platform sends alarm information to the pre-bound emergency contact terminal.
Because the camera and the edge embedded terminal device have no internet capability of their own, when abnormal falling behavior is detected the alarm information must be sent out through the attached IoT device. In this embodiment, all detection algorithms run edge inference locally and no cloud internet technology is involved, so the user's privacy is protected to a great extent.
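The source does not name the transport between the IoT device and the server cloud platform; as one possibility, the sketch below forwards the fall state with an HTTPS POST using the requests library. The endpoint URL, payload fields, and device id are placeholders.

    import time
    import requests

    def send_fall_alert(endpoint="https://cloud.example.com/api/fall"):  # placeholder URL
        payload = {"event": "fall", "device_id": "edge-01", "ts": int(time.time())}
        try:
            r = requests.post(endpoint, json=payload, timeout=5)
            r.raise_for_status()  # the platform then alerts the bound emergency contact
        except requests.RequestException as exc:
            print("alert delivery failed:", exc)  # a real device would queue and retry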
In the above example, Model1 and Model2 are trained through the following steps:
1) Building the corresponding neural network model with the Darknet framework on a computer;
2) Annotating the features of the collected pictures;
3) Training the neural network model on the server side with the annotated pictures.
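For step 2), Darknet expects one text label file per image, each line holding a class id and a box center/size normalized to [0, 1]. The sketch below writes that format; the class ids (0 = human body, 1 = head) are assumptions consistent with the two regions Model1 detects.

    def write_darknet_label(path, boxes, img_w, img_h):
        # boxes: (class_id, x, y, w, h) tuples with pixel-space top-left corners
        with open(path, "w") as f:
            for cls, x, y, w, h in boxes:
                cx, cy = (x + w / 2) / img_w, (y + h / 2) / img_h
                f.write(f"{cls} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}\n")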
The trained Model1 and Model2 are deployed to the edge embedded terminal device through the following steps:
1) Quantizing and pruning the trained neural network model with a quantization algorithm;
2) Converting the quantized and pruned model into an instruction file recognizable by the underlying chip, obtaining a model file;
3) Deploying the model file on the edge embedded terminal device so that it can be called by a user-built application to run the corresponding neural network model.
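The instruction-file conversion in step 2) is performed by a vendor toolchain that the source does not name; the numeric ideas behind step 1), however, can be sketched. Below is a minimal illustration of magnitude pruning and symmetric 8-bit weight quantization for a non-empty weight tensor; the 50% sparsity and per-tensor scaling are assumptions.

    import numpy as np

    def prune_small(w, sparsity=0.5):
        # magnitude pruning: zero out the smallest fraction of weights
        thresh = np.quantile(np.abs(w), sparsity)
        return np.where(np.abs(w) < thresh, 0.0, w)

    def quantize_int8(w):
        # symmetric per-tensor quantization; dequantize with q * scale
        scale = float(np.abs(w).max()) / 127.0 or 1.0  # guard against all-zero tensors
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale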
In another aspect of the embodiments of the present application, a human body fall detection device based on deep learning is provided; as shown in fig. 6, it comprises a first recognition module, a first judgment module, a second judgment module, and a sending module.
The first recognition module inputs pictures acquired by the camera in the monitored area into a trained Model1 pre-deployed on the edge embedded terminal device, identifies human body targets, and obtains human body region information and head region information.
The first judgment module records and filters the human body region information and head region information and judges from the filtered information whether the shift of the human body's center of gravity and the change of the human body's aspect ratio both exceed preset values; if so, it treats the human body as a potential fall risk object and invokes the second judgment module; if not, it judges that no fall has occurred, and the process ends.
The second judgment module uses a trained Model2 pre-deployed on the edge embedded terminal device to perform human body key point feature recognition on the potential fall risk object, acquires key point coordinate information, judges the posture of the potential fall risk object, and fuses the posture judgment with the result of the first judgment module to finally decide whether the potential fall risk object has fallen.
When the second judgment module's result is a fall, the sending module sends the fall state to the server cloud platform through the external IoT device connected to the edge embedded terminal device, so that the server cloud platform sends alarm information to the pre-bound emergency contact terminal.
In another aspect of the embodiments of the present application, a deep learning based human body fall detection system is provided, which mainly comprises an edge embedded terminal device (Edge Device) and an IoT device (IoT Device).
As shown in fig. 7, the edge embedded terminal device includes a video processing subsystem (VPSS), a neural network module (CNN operators), a post-processing module (Post Processing), a video graphics subsystem (VGS), a video coding module (VC), a video output module (VO), and so on, and the IoT device includes a model control module (Model Control) and a network connection module (Network Agent). The neural network module can load the trained Model1 and Model2 from the external SD card to run inference on the image data, and its operation may be controlled by Core0/1 or by the model control module of the IoT device.
The edge embedded terminal device and the IoT device are connected through a Soft BUS. The video processing subsystem is connected to the video graphics subsystem and the neural network module; the neural network module is connected to the judgment processing module; the judgment processing module is connected to the network connection module; the network connection module is connected to the model control module; the model control module is connected to the neural network module; the video graphics subsystem is connected to the video coding module and the video output module; the video coding module is connected to the SD card through an SDIO interface; and the video output module is connected to an external display through a MIPI/HDMI interface.
The video processing subsystem receives RGB stream video data acquired in the monitored area from the camera device and outputs two image paths: channel CH0 carries a high-resolution image for storage, output, and display, while channel CH1 outputs a low-resolution image for neural network detection.
For the pictures output on channel CH1, the neural network module loads the trained Model1 to identify human body targets, obtaining human body region information and head region information, which are sent to the judgment processing module. The judgment processing module records and filters this information and judges from the filtered information whether the shift of the human body's center of gravity and the change of the human body's aspect ratio both exceed preset values; if so, the target is treated as a potential fall risk object; if not, it is judged that no fall has occurred and the process ends. The judgment result is then transmitted to the network connection module. Further, the head acceleration is also checked against its preset value: the head acceleration, the center-of-gravity shift, and the aspect-ratio change are combined by weighted summation for a comprehensive judgment.
The network connection module receives the judgment result of the judgment processing module and transmits it to the model control module; when the model control module receives a judgment result of potential fall risk object, it controls the neural network module to load the trained Model2, thereby activating it.
When Model2 is activated, it performs human body key point feature recognition on the potential fall risk object and obtains key point coordinate information. The judgment processing module judges the posture of the potential fall risk object from the key point coordinate information and fuses the posture judgment with the earlier screening result to finally decide whether the potential fall risk object has fallen; the judgment result is transmitted to the network connection module again.
The network connection module receives the final judgment result of the judgment processing module and, when the result is a fall, sends the fall state to the server cloud platform so that the server cloud platform sends alarm information to the pre-bound emergency contact terminal.
Meanwhile, the video graphics subsystem waits for the detection result of the neural network module, overlays it on the channel CH0 image, and splits the output into two paths: one is sent to the video coding module, encoded as H.264, and stored on the SD card; the other is sent to the video output module, and the video overlaid with the detection information is displayed on the external display through its interface.
As an optional choice, the IoT device may be a Raspberry Pi carrying Home Assistant, and the Soft BUS may connect via Bluetooth.
The above is only a preferred embodiment of the present application and is not intended to limit it; obviously, those skilled in the art can make various changes and modifications to the present application without departing from its spirit and scope.

Claims (10)

1. A human body fall detection method based on deep learning, characterized by comprising the following steps:
S100, inputting pictures acquired by a camera in the monitored area into a trained Model1 pre-deployed on an edge embedded terminal device, and identifying human body targets to obtain human body region information and head region information;
S200, recording and filtering the human body region information and head region information, and judging from the filtered information whether the shift of the human body's center of gravity and the change of the human body's aspect ratio both exceed preset values; if so, treating the human body as a potential fall risk object and executing S300; if not, judging that no fall has occurred, and ending;
S300, performing human body key point feature recognition on the potential fall risk object with a trained Model2 pre-deployed on the edge embedded terminal device, acquiring key point coordinate information, judging the posture of the potential fall risk object, and fusing the posture judgment with the judgment result of S200 to finally decide whether the potential fall risk object has fallen;
and S400, when the result of S300 is a fall, sending the fall state to a server cloud platform through an external IoT device connected to the edge embedded terminal device, so that the server cloud platform sends alarm information to a pre-bound emergency contact terminal.
2. The deep learning based human body fall detection method according to claim 1, wherein in step S200, when judging whether the shift of the human body's center of gravity and the change of the human body's aspect ratio both exceed the preset values, it is also necessary to judge whether the acceleration of the human head exceeds a preset value; only if all of them do is the target judged a potential fall risk object.
3. The deep learning based human body fall detection method according to claim 2, wherein in step S200 whether the shift of the human body's center of gravity, the change of the human body's aspect ratio, and the acceleration of the human head exceed the preset values is determined by the following weighted summation method:
let the head region coordinates obtained from the filtered information at time N be (x_h(N), y_h(N)), and let the human body region coordinates be (x_b(N), y_b(N)); the head acceleration a_h(N) at time N is computed over 10 consecutive frames, i.e., from the coordinates at times N-9 through N, and its components a_hx(N) and a_hy(N), the accelerations in the x and y directions at time N, are given by formulas that appear only as equation images in the original publication;
the barycentric coordinates (x_g(N), y_g(N)) at time N are obtained as a weighted sum of the head region coordinates and the body region coordinates:
x_g(N) = x_h(N) + λ(x_b(N) - x_h(N)), y_g(N) = y_h(N) + λ(y_b(N) - y_h(N)),
where λ is a weight factor with value range (0.5, 1);
the center-of-gravity shift at time N is the difference between the barycentric coordinates at time N and those at time N-1, i.e., (Δx_g(N), Δy_g(N)) = (x_g(N) - x_g(N-1), y_g(N) - y_g(N-1));
a potential fall risk coefficient L_r1(N) with value range (0, 1) is then defined (its formula likewise appears only as an equation image in the original publication) in terms of the center-of-gravity shift, the head acceleration, the human body width w_b, and the human body height h_b, the width and height being obtained from the filtered information;
when the potential fall risk coefficient L_r1(N) is greater than or equal to a preset threshold, the target is judged a potential fall risk object; when L_r1(N) is less than the preset threshold, it is judged that no fall has occurred.
4. The deep learning based human body fall detection method according to claim 1, wherein:
when Model2 is used for recognition in S300, at time M the posture of the potential fall risk object is finely classified with a KNN classification algorithm according to the key point coordinate information, yielding the probability η(M) of the falling category;
in S300, the posture judgment result and the judgment result of S200 are fused using the following formula:
L_r(M) = η(M) · L_r1(N),
where L_r(M) is the fusion result at time M;
it is then judged whether L_r(M) is greater than or equal to a preset fall threshold: if so, the potential fall risk object is finally judged to have fallen; if not, it is judged not to have fallen.
5. The deep learning based human body fall detection method according to claim 1, wherein Model1 is a YOLO V2 model adopting ResNet18 as its backbone feature extraction network.
6. The deep learning based human body fall detection method according to claim 1, wherein Model2 is a YOLO V3 Tiny model adopting ResNet18 as its backbone feature extraction network, used to detect the feature key points of the human body.
7. The deep learning based human body fall detection method according to claim 1, wherein Model1 and Model2 are trained through the following steps:
building the corresponding neural network model with the Darknet framework on a computer;
annotating the features of the collected pictures;
and training the neural network model on the server side with the annotated pictures.
8. The deep learning based human body fall detection method according to claim 6, wherein the trained Model1 and Model2 are deployed to the edge embedded terminal device through the following steps:
quantizing and pruning the trained neural network model with a quantization algorithm;
converting the quantized and pruned model into an instruction file recognizable by the underlying chip, obtaining a model file;
and deploying the model file on the edge embedded terminal device so that it can be called and run by a user-built application.
9. A human body fall detection device based on deep learning, characterized by comprising:
a first recognition module, configured to input pictures acquired by the camera in the monitored area into a trained Model1 pre-deployed on the edge embedded terminal device, identify human body targets, and obtain human body region information and head region information;
a first judgment module, configured to record and filter the human body region information and head region information, and to judge from the filtered information whether the shift of the human body's center of gravity and the change of the human body's aspect ratio both exceed preset values; if so, to treat the human body as a potential fall risk object and invoke the second judgment module; if not, to judge that no fall has occurred, and end;
a second judgment module, configured to perform human body key point feature recognition on the potential fall risk object with a trained Model2 pre-deployed on the edge embedded terminal device, acquire key point coordinate information, judge the posture of the potential fall risk object, and fuse the posture judgment with the result of the first judgment module to finally decide whether the potential fall risk object has fallen;
and a sending module, configured to send the fall state, when the second judgment module's result is a fall, to the server cloud platform through the external IoT device connected to the edge embedded terminal device, so that the server cloud platform sends alarm information to the pre-bound emergency contact terminal.
10. A human body fall detection system based on deep learning, characterized by comprising an edge embedded terminal device and an IoT device, wherein the edge embedded terminal device comprises a video processing subsystem, a neural network module, and a judgment processing module, and the IoT device comprises a model control module and a network connection module;
the video processing subsystem is configured to receive and output pictures acquired in the monitored area by the camera;
the neural network module is configured to load the trained Model1 to perform human body target recognition on the pictures input by the video processing subsystem, obtaining human body region information and head region information;
the judgment processing module is configured to record and filter the human body region information and head region information, and to judge from the filtered information whether the shift of the human body's center of gravity and the change of the human body's aspect ratio both exceed preset values; if so, the human body is treated as a potential fall risk object; if not, it is judged that no fall has occurred, and the process ends;
the network connection module is configured to receive the judgment result of the judgment processing module and transmit it to the model control module;
the model control module is configured, upon receiving a judgment result of potential fall risk object, to control the neural network module to load the trained Model2, so as to perform human body key point feature recognition on the potential fall risk object and acquire key point coordinate information;
the judgment processing module is configured to judge the posture of the potential fall risk object from the key point coordinate information and to fuse the posture judgment with the earlier screening result to finally decide whether the potential fall risk object has fallen;
and the network connection module is configured to receive the final judgment result of the judgment processing module and, when the result is a fall, to send the fall state to the server cloud platform, so that the server cloud platform sends alarm information to the pre-bound emergency contact terminal.



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination