CN111783702A - Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning - Google Patents
Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning
- Publication number
- CN111783702A (application CN202010643346.9A / CN202010643346A)
- Authority
- CN
- China
- Prior art keywords
- coordinates
- human body
- position coordinate
- image enhancement
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Abstract
The invention discloses an efficient pedestrian fall detection method based on an image enhancement algorithm and human body key point positioning, which comprises the steps of: (1) extracting a pedestrian behavior video clip; (2) processing images in the video; (3) detecting people in the video segment; (4) intercepting pictures of the tracking frame of the same tracked target person as input to human body key point detection and obtaining the position coordinates of 18 key points of the human body; (5) analyzing whether the target person has fallen according to the key point positions from step (4); (6) if a fall is determined, storing the video segment and sending alarm information; (7) staff hear the alarm prompt and take corresponding measures. By combining an image enhancement algorithm with human body key point positioning, the disclosed method can greatly reduce existing labor, material and financial costs; obtaining pedestrian fall information in time and responding quickly has strong application demand and broad market prospects in the field of social public safety.
Description
Technical Field
The invention belongs to the technical field of computer vision analysis, and particularly relates to a high-efficiency pedestrian tumble detection method based on an image enhancement algorithm and human body key point positioning.
Background
With rapid social development, China is gradually entering an aging stage, and research and investigation show that falls are the leading cause of injury and death among people aged 65 and over. A fall by an elderly person is not, as commonly assumed, a mere accident, but a persistent danger. Once an elderly person falls, timely discovery and early rescue can effectively reduce the risk of death and long-term hospitalization. Therefore, fall detection, prevention and control for the elderly are particularly important.
At present, there are two main approaches to fall detection: one based on wearable sensors, and the other based on computer vision analysis.
In a fall detection method based on wearable sensors, the elderly person wears a device composed of a gyroscope, an acceleration sensor, a pressure sensor and the like on the waist or arms. Chinese patent CN108041772A discloses an intelligent fall-detection bracelet for the elderly and a corresponding detection method, comprising a bracelet body provided with a motion sensor module, a heart rate module, and a processor that performs joint fall detection from the motion sensor and heart rate data; the motion sensor module and the heart rate module are both connected to the processor, and the processor is connected to a wireless communication module. The processor comprises a weightlessness state monitoring unit, a collision state monitoring unit, a fall state monitoring unit, an alarm sending unit and a heartbeat monitoring unit. Whether the target person has fallen is judged by detecting in real time whether the readings of the corresponding devices reach the fall threshold. However, this approach has certain drawbacks: the device must be carried at all times, which introduces adverse factors such as inconvenience to the elderly when walking, and in addition its recognition rate is low.
Fall detection based on computer vision analysis directly analyzes the people in surveillance video without requiring their cooperation. It mainly comprises: first detecting pedestrians in the video, then locating human body key points, and finally judging whether the target person has fallen by analysis against fall-related posture thresholds. For example, Chinese patent CN108764131A discloses an adaptive-threshold multi-target fall detection method based on video processing, technically characterized by the following steps: step one, acquire user image information, and record the aspect ratio, effective area ratio, and center change of the human body image within the minimum bounding rectangle during the user's normal activities and falls; step two, assign different weights to the human body image according to how sensitive the aspect ratio and the effective area ratio are to falls, obtaining new judgment parameters that fuse the fall judgment modes; step three, set the optimal threshold for users of different body types; step four, realize target fall detection according to the user image information acquired in step one, the fused fall judgment mode from step two, and the user-specific optimal fall threshold set in step three.
However, this method also has certain disadvantages: the surveillance camera can be affected by varying illumination and by wind, leaving video frames under-exposed, over-exposed, or blurred by shaking; moreover, it usually needs to run both a target detection model and a feature point localization model at the same time, which consumes substantial computer resources at a high computational cost.
To address these problems, an efficient pedestrian fall detection method based on an image enhancement algorithm and human body key point positioning needs to be designed, meeting the practical application requirements of current monitoring scenarios.
Disclosure of Invention
In view of the above, the invention provides an efficient pedestrian fall detection method based on an image enhancement algorithm and human body key point positioning. Based on pedestrian video data from the current monitoring scene, a complete solution is designed that occupies few resources and achieves a high fall recognition rate; staff only need to review the pedestrian video segment that triggered a fall alarm to judge whether a real fall occurred, which greatly improves work efficiency while greatly reducing cost. To achieve this purpose, the invention provides the following technical scheme:
the invention relates to a high-efficiency pedestrian tumble detection method based on an image enhancement algorithm and human body key point positioning, which comprises the following steps of:
step (1): extracting pedestrian behavior video clips from a real monitoring scene;
step (2): processing unclear images in the video through an enhancement algorithm;
step (3): detecting persons in the video segment processed by the image enhancement algorithm through the YOLO-V3 target detection algorithm;
step (4): intercepting a picture of the tracking frame of the same tracked target person as the input of the human body key point detection algorithm Alpha_Pose, and obtaining the position coordinates of 18 key points of the human body;
step (5): selecting 7 key point coordinates from the key point positions in step (4), and analyzing whether the target person has fallen through position judgment and angle judgment respectively;
step (6): if a fall is determined in at least three consecutive frames, storing the video segment of the process and sending out alarm information;
step (7): upon hearing the alarm prompt, staff can immediately manually confirm whether the target person has fallen from the fall video segment generated in step (6), and take corresponding measures.
Preferably, step (2) comprises
(2-1) image definition judgment: calculating the gradient value between every two adjacent pixels through the energy gradient function, and judging the picture to be unclear when the sum of gradient energies is greater than the standard threshold of 50:
D(f) = Σ_{y=1}^{n} Σ_{x=1}^{m} [ (f(x+1, y) − f(x, y))² + (f(x, y+1) − f(x, y))² ]
wherein:
m represents the width of the image;
n represents the height of the image;
f(x, y) represents the pixel value of the picture at coordinates (x, y);
D(f) represents the sum of gradient energies of the picture;
(2-2) image enhancement: performing definition enhancement processing on pictures of low definition through the Retinex image enhancement algorithm.
Preferably, in step (3), to avoid the high CPU occupancy caused by frame-by-frame detection, the video segment is detected once every 24 frames, and the intermediate 23 frames are tracked by the KCF target tracking algorithm; the IOU between the person target frame detected in the next frame and the target frame tracked in the previous frame is calculated, and if the IOU is greater than 0.4 they are considered the same person, while if it is less than 0.4 the detection is marked as a new target person. The IOU (intersection over union) is a standard measure of detection accuracy on a given data set: the overlap rate between the generated candidate box and the original ground-truth box. A larger IOU value indicates that the candidate box is closer in position to the ground-truth box.
Preferably, in step (4), the position coordinates of the 18 key points of the human body are respectively: 0: nose position coordinates; 1: left eye position coordinates; 2: right eye position coordinates; 3: left ear position coordinates; 4: right ear position coordinates; 5: left shoulder position coordinates; 6: right shoulder position coordinates; 7: left elbow position coordinates; 8: right elbow position coordinates; 9: left hand position coordinates; 10: right hand position coordinates; 11: left hip position coordinates; 12: right hip position coordinates; 13: left knee position coordinates; 14: right knee position coordinates; 15: left foot position coordinates; 16: right foot position coordinates; 17: neck position coordinates. Alpha_Pose is an accurate multi-person pose estimation system, i.e., a human key point detection algorithm that can detect the position coordinates of 18 key points of a human body from a pedestrian picture.
Preferably, in step (5), the angles used in the angle judgment for analyzing a person's fall are defined as follows:
θ1 represents the included angle between the neck and the center coordinates of the two hips;
θ2 represents the included angle between the neck and the center coordinates of the two knees;
θ3 represents the included angle between the neck and the center coordinates of the two feet;
when any two of the three included angles meet the angle condition, the target person is judged to be in a falling state.
Preferably, in step (5), the criterion for analyzing a person's fall by position judgment is: when the vertical coordinate of any one of the hip, knee or foot key points is not greater than that of the neck (i.e., the point is not below the neck in the image), the target is judged to be in a falling state.
The system used by the detection method comprises a video acquisition module, an image enhancement module, a pedestrian detection module, a pedestrian tracking module, a feature point detection module, a fall detection module, an alarm module and a human judgment module. The video acquisition module acquires video of the persons to be monitored; the pedestrian detection module detects pedestrians in the video and captures pedestrian pictures; the pedestrian tracking module continuously locates the same target person; the feature point detection module locates the relevant feature positions of the person; the fall detection module estimates the posture formed by the relevant feature points and compares it against fall postures; the alarm module issues the reminder triggered when the fall detection module judges that the target has fallen, and generates the corresponding video segment; and the human judgment module judges whether the generated video segment shows a real fall.
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects:
the method disclosed by the invention can greatly save the existing manpower, material and financial cost by combining the image enhancement algorithm with the positioning of the key points of the human body, and has very high application requirements and market prospects in the field of social public safety for timely obtaining the falling information of the pedestrian and making a quick response.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the detection method of the present invention;
fig. 2 is a schematic diagram of the human body key point positions detected by Alpha_Pose in the embodiment of the present invention.
Detailed Description
For further understanding of the present invention, the present invention will be described in detail with reference to examples, which are provided for illustration of the present invention but are not intended to limit the scope of the present invention.
Referring to fig. 1, the embodiment relates to an efficient pedestrian fall detection method based on an image enhancement algorithm and human body key point positioning, implemented with the system of modules described above. The method comprises the following steps:
step (1): and extracting pedestrian behavior video clips from the real monitoring scene.
Step (2): and processing unclear images in the video through an enhancement algorithm.
(2-1) image definition judgment: calculate the gradient value between every two adjacent pixels through the energy gradient function, and judge the picture to be unclear when the sum of gradient energies is greater than the standard threshold of 50:
D(f) = Σ_{y=1}^{n} Σ_{x=1}^{m} [ (f(x+1, y) − f(x, y))² + (f(x, y+1) − f(x, y))² ]
wherein:
m represents the width of the image;
n represents the height of the image;
f(x, y) represents the pixel value of the picture at coordinates (x, y);
D(f) represents the sum of gradient energies of the picture.
(2-2) image enhancement: perform definition enhancement processing on pictures of low definition through the Retinex image enhancement algorithm.
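As an illustrative sketch of step (2) rather than the patent's exact implementation, the energy gradient measure and a minimal single-scale Retinex can be written as follows. Grayscale numpy images are assumed; the box-blur illumination estimate and the kernel size are stand-ins for the Gaussian smoothing Retinex implementations typically use.

```python
import numpy as np

def energy_gradient(img):
    """Sum of squared differences between horizontally and vertically
    adjacent pixels -- the energy gradient measure D(f)."""
    f = img.astype(np.float64)
    dx = f[:, 1:] - f[:, :-1]  # f(x+1, y) - f(x, y)
    dy = f[1:, :] - f[:-1, :]  # f(x, y+1) - f(x, y)
    return float(np.sum(dx ** 2) + np.sum(dy ** 2))

def is_unclear(img, threshold=50.0):
    """Flag a frame for enhancement, using the comparison direction and
    the standard threshold of 50 as stated in the text."""
    return energy_gradient(img) > threshold

def single_scale_retinex(img, ksize=15):
    """Minimal single-scale Retinex: log(image) minus log of a smoothed
    illumination estimate. A naive box blur stands in for the usual
    Gaussian blur; ksize is an illustrative choice."""
    f = img.astype(np.float64) + 1.0  # +1 avoids log(0)
    pad = ksize // 2
    padded = np.pad(f, pad, mode="edge")
    blurred = np.zeros_like(f)
    for i in range(ksize):  # simple O(ksize^2) box blur
        for j in range(ksize):
            blurred += padded[i:i + f.shape[0], j:j + f.shape[1]]
    blurred /= ksize * ksize
    return np.log(f) - np.log(blurred)
```

On a perfectly flat image the gradient energy is zero and the Retinex output is zero everywhere, since the illumination estimate equals the image itself.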
Step (3): detect persons in the video segment processed by the image enhancement algorithm through the YOLO-V3 target detection algorithm.
To avoid the high CPU occupancy caused by frame-by-frame detection, the video segment is detected once every 24 frames, and the intermediate 23 frames are tracked by the KCF target tracking algorithm; the IOU between the person target frame detected in the next frame and the target frame tracked in the previous frame is calculated, and if the IOU is greater than 0.4 they are considered the same person, while if it is less than 0.4 the detection is marked as a new target person. The IOU (intersection over union) is a standard measure of detection accuracy on a given data set: the overlap rate between the generated candidate box and the original ground-truth box. A larger IOU value indicates that the candidate box is closer in position to the ground-truth box.
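The IOU matching described above can be sketched as follows; boxes are assumed to be (x1, y1, x2, y2) pixel corners, and the 0.4 threshold is the one stated in the text.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def same_person(detected_box, tracked_box, threshold=0.4):
    """IOU above the threshold means the detection matches the tracked person."""
    return iou(detected_box, tracked_box) > threshold
```

Identical boxes give IOU 1.0, disjoint boxes 0.0, and a half-overlapping pair 1/3, which is enough to exercise the 0.4 matching rule.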
Step (4): intercept the picture of the tracking frame of the same tracked target person as the input of the human body key point detection algorithm Alpha_Pose, and obtain the position coordinates of 18 key points of the human body.
Referring to fig. 2, the position coordinates of the 18 key points of the human body are respectively: 0: nose position coordinates; 1: left eye position coordinates; 2: right eye position coordinates; 3: left ear position coordinates; 4: right ear position coordinates; 5: left shoulder position coordinates; 6: right shoulder position coordinates; 7: left elbow position coordinates; 8: right elbow position coordinates; 9: left hand position coordinates; 10: right hand position coordinates; 11: left hip position coordinates; 12: right hip position coordinates; 13: left knee position coordinates; 14: right knee position coordinates; 15: left foot position coordinates; 16: right foot position coordinates; 17: neck position coordinates. Alpha_Pose is an accurate multi-person pose estimation system, i.e., a human key point detection algorithm that can detect the position coordinates of 18 key points of a human body from a pedestrian picture.
Step (5): select 7 key point coordinates from the key point positions obtained in step (4), and analyze whether the target person has fallen through position judgment and angle judgment respectively.
The angles used in the angle judgment for analyzing a person's fall are defined as follows:
θ1 represents the included angle between the neck and the center coordinates of the two hips;
θ2 represents the included angle between the neck and the center coordinates of the two knees;
θ3 represents the included angle between the neck and the center coordinates of the two feet;
when any two of the three included angles meet the angle condition, the target person is judged to be in a falling state.
The criterion for analyzing a person's fall by position judgment is: when the vertical coordinate of any one of the hip, knee or foot key points is not greater than that of the neck (i.e., the point is not below the neck in the image), the target is judged to be in a falling state.
Step (6): if a fall is determined in at least three consecutive frames, save the video segment of the process and send out alarm information.
and (7): and (4) the staff hears the alarm prompt, can immediately and manually judge whether the target character falls or not according to the falling video band generated in the step (6), and makes corresponding measures.
The present invention and its embodiments have been described above schematically and without limitation; what is shown in the drawings is only one embodiment of the present invention, and the actual structure is not limited thereto. Therefore, structures and embodiments that those skilled in the art design by modifying the invention without creative effort, and without departing from its spirit, shall fall within the protection scope of the present invention.
Claims (6)
1. An efficient pedestrian tumble detection method based on an image enhancement algorithm and human body key point positioning is characterized by comprising the following steps:
step (1): extracting pedestrian behavior video clips from a real monitoring scene;
step (2): processing unclear images in the video through an enhancement algorithm;
step (3): detecting persons in the video segment processed by the image enhancement algorithm through the YOLO-V3 target detection algorithm;
step (4): intercepting a picture of the tracking frame of the same tracked target person as the input of the human body key point detection algorithm Alpha_Pose, and obtaining the position coordinates of 18 key points of the human body;
step (5): selecting 7 key point coordinates from the key point positions in step (4), and analyzing whether the target person has fallen through position judgment and angle judgment respectively;
step (6): if a fall is determined in at least three consecutive frames, storing the video segment of the process and sending out alarm information;
step (7): upon hearing the alarm prompt, staff can immediately manually confirm whether the target person has fallen from the fall video segment generated in step (6), and take corresponding measures.
2. The efficient pedestrian fall detection method based on image enhancement algorithm and human body key point positioning as claimed in claim 1, wherein the step (2) comprises
(2-1) image definition judgment: calculating the gradient value between every two adjacent pixels through the energy gradient function, and judging the picture to be unclear when the sum of gradient energies is greater than the standard threshold of 50:
D(f) = Σ_{y=1}^{n} Σ_{x=1}^{m} [ (f(x+1, y) − f(x, y))² + (f(x, y+1) − f(x, y))² ]
wherein:
m represents the width of the image;
n represents the height of the image;
f(x, y) represents the pixel value of the picture at coordinates (x, y);
D(f) represents the sum of gradient energies of the picture;
(2-2) image enhancement: performing definition enhancement processing on pictures of low definition through the Retinex image enhancement algorithm.
3. The method for detecting a pedestrian fall according to claim 1, wherein in step (3), the video segment is detected once every 24 frames, the intermediate 23 frames are tracked by the KCF target tracking algorithm, and the IOU between the person target frame detected in the next frame and the target frame tracked in the previous frame is calculated; if the IOU is greater than 0.4 they are considered the same person, and if it is less than 0.4 the detection is labeled as a new target person.
4. The efficient pedestrian fall detection method based on the image enhancement algorithm and the human body key point positioning according to claim 1, wherein in step (4), the position coordinates of the 18 key points of the human body are respectively: 0: nose position coordinates; 1: left eye position coordinates; 2: right eye position coordinates; 3: left ear position coordinates; 4: right ear position coordinates; 5: left shoulder position coordinates; 6: right shoulder position coordinates; 7: left elbow position coordinates; 8: right elbow position coordinates; 9: left hand position coordinates; 10: right hand position coordinates; 11: left hip position coordinates; 12: right hip position coordinates; 13: left knee position coordinates; 14: right knee position coordinates; 15: left foot position coordinates; 16: right foot position coordinates; 17: neck position coordinates.
5. The efficient pedestrian fall detection method based on image enhancement algorithm and human body key point positioning as claimed in claim 1, wherein in step (5), the angles used in the angle judgment for analyzing a person's fall are defined as follows:
θ1 represents the included angle between the neck and the center coordinates of the two hips;
θ2 represents the included angle between the neck and the center coordinates of the two knees;
θ3 represents the included angle between the neck and the center coordinates of the two feet;
and when any two of the three included angles meet the angle condition, the target person is judged to be in a falling state.
6. The efficient pedestrian fall detection method based on the image enhancement algorithm and the human body key point positioning as claimed in claim 1, wherein in step (5), the criterion for analyzing a person's fall by position judgment is: when the vertical coordinate of any one of the hip, knee or foot key points is not greater than that of the neck (i.e., the point is not below the neck in the image), the target is judged to be in a falling state.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010643346.9A CN111783702A (en) | 2020-07-20 | 2020-07-20 | Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010643346.9A CN111783702A (en) | 2020-07-20 | 2020-07-20 | Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111783702A true CN111783702A (en) | 2020-10-16 |
Family
ID=72759490
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111783702A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112800900A (en) * | 2021-01-18 | 2021-05-14 | 上海云话科技有限公司 | Mine personnel land falling detection method based on visual perception |
CN112800901A (en) * | 2021-01-18 | 2021-05-14 | 上海云话科技有限公司 | Mine personnel safety detection method based on visual perception |
CN113256938A (en) * | 2021-05-31 | 2021-08-13 | 苏州优函信息科技有限公司 | Fall monitoring alarm scheme based on spectrum camera |
WO2023123214A1 (en) * | 2021-12-30 | 2023-07-06 | 焦旭 | Electronic device, hand compression depth measurement method, system, and wearable device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107392086A (en) * | 2017-05-26 | 2017-11-24 | 深圳奥比中光科技有限公司 | Apparatus for evaluating, system and the storage device of human body attitude |
CN108629300A (en) * | 2018-04-24 | 2018-10-09 | 北京科技大学 | A kind of fall detection method |
CN109635783A (en) * | 2019-01-02 | 2019-04-16 | 上海数迹智能科技有限公司 | Video monitoring method, device, terminal and medium |
CN109919132A (en) * | 2019-03-22 | 2019-06-21 | 广东省智能制造研究所 | A kind of pedestrian's tumble recognition methods based on skeleton detection |
CN110287923A (en) * | 2019-06-29 | 2019-09-27 | 腾讯科技(深圳)有限公司 | Human body attitude acquisition methods, device, computer equipment and storage medium |
CN110477925A (en) * | 2019-08-23 | 2019-11-22 | 广东省智能制造研究所 | A kind of fall detection for home for the aged old man and method for early warning and system |
CN110738154A (en) * | 2019-10-08 | 2020-01-31 | 南京熊猫电子股份有限公司 | pedestrian falling detection method based on human body posture estimation |
Non-Patent Citations (1)
Title |
---|
SONG, Chunjuan: "Research on Image Aesthetic Quality Assessment and Adaptive Enhancement", *China Master's Theses Full-text Database (Information Science and Technology)* * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109919132B (en) | Pedestrian falling identification method based on skeleton detection | |
CN111783702A (en) | Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning | |
Bobick et al. | The recognition of human movement using temporal templates | |
Jain et al. | Real-time upper-body human pose estimation using a depth camera | |
CN107766819B (en) | Video monitoring system and real-time gait recognition method thereof | |
US20220383653A1 (en) | Image processing apparatus, image processing method, and non-transitory computer readable medium storing image processing program | |
CN114842397B (en) | Real-time old man falling detection method based on anomaly detection | |
Chen et al. | Fall detection system based on real-time pose estimation and SVM | |
CN111881898B (en) | Human body posture detection method based on monocular RGB image | |
CN110472473A (en) | The method fallen based on people on Attitude estimation detection staircase | |
CN112966628A (en) | Visual angle self-adaptive multi-target tumble detection method based on graph convolution neural network | |
CN109101943A (en) | It is a kind of for detecting the machine vision method of Falls Among Old People | |
CN115116127A (en) | Fall detection method based on computer vision and artificial intelligence | |
CN114140745A (en) | Method, system, device and medium for detecting personnel attributes of construction site | |
CN113384267A (en) | Fall real-time detection method, system, terminal equipment and storage medium | |
Wang et al. | Robust pose recognition of the obscured human body | |
CN112036324A (en) | Human body posture judgment method and system for complex multi-person scene | |
JP6992900B2 (en) | Information processing equipment, control methods, and programs | |
CN115731563A (en) | Method for identifying falling of remote monitoring personnel | |
CN113408435B (en) | Security monitoring method, device, equipment and storage medium | |
CN114639168B (en) | Method and system for recognizing running gesture | |
Lee et al. | Automated abnormal behavior detection for ubiquitous healthcare application in daytime and nighttime | |
CN115331304A (en) | Running identification method | |
CN115424341A (en) | Fighting behavior identification method and device and electronic equipment | |
JP2022019988A (en) | Information processing apparatus, display device, and control method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20201016 |