CN111783702A - Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning - Google Patents


Info

Publication number
CN111783702A
CN111783702A
Authority
CN
China
Prior art keywords
coordinates
human body
position coordinate
image enhancement
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010643346.9A
Other languages
Chinese (zh)
Inventor
周戎龙
邱彦林
李华松
金国庆
张慧娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Xujian Science And Technology Co ltd
Original Assignee
Hangzhou Xujian Science And Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Xujian Science And Technology Co ltd filed Critical Hangzhou Xujian Science And Technology Co ltd
Priority to CN202010643346.9A
Publication of CN111783702A
Status: Pending

Classifications

    • G06V40/103 — Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06N3/045 — Neural networks; combinations of networks
    • G06N3/08 — Neural networks; learning methods
    • G06T2207/10016 — Image acquisition modality: video; image sequence
    • G06T2207/20081 — Special algorithmic details: training; learning
    • G06T2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T2207/30196 — Subject of image: human being; person
    • G06T2207/30232 — Subject of image: surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an efficient pedestrian fall detection method based on an image enhancement algorithm and human body key point positioning, which comprises the steps of: (1) extracting a pedestrian behavior video clip; (2) processing the images in the video; (3) detecting the persons in the video segment; (4) cropping the tracking-frame pictures of the same tracked target person as input for human body key point detection and obtaining the position coordinates of 18 human body key points; (5) analyzing whether the target person has fallen according to the key point positions from step (4); (6) if a fall is judged, storing the video segment and sending out alarm information; (7) on hearing the alarm prompt, the staff take corresponding measures. By combining the image enhancement algorithm with human body key point positioning, the disclosed method greatly reduces existing labor, material and financial costs; obtaining pedestrian fall information in time and responding quickly meets a strong application demand and has broad market prospects in the field of public safety.

Description

Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning
Technical Field
The invention belongs to the technical field of computer vision analysis, and particularly relates to a high-efficiency pedestrian tumble detection method based on an image enhancement algorithm and human body key point positioning.
Background
With rapid social development, China is gradually entering an aging stage, and research shows that falls are the leading cause of injury-related death among people aged 65 and over. A fall of an elderly person is not the accident it may appear to be, but a potential danger. Once an elderly person falls, timely discovery and early rescue can effectively reduce the risk of death and long-term hospitalization. Therefore, fall detection, prevention and control for the elderly are particularly important.
At present, there are two main methods for detecting falls: fall detection based on wearable sensors, and fall detection based on computer vision analysis.
In a fall detection method based on wearable sensors, the elderly person wears a device composed of a gyroscope, an acceleration sensor, a pressure sensor and the like on the waist or arms. Chinese patent CN108041772A discloses an intelligent fall-detection bracelet for the elderly and a corresponding detection method, comprising a bracelet body provided with a motion sensor module, a heart rate module, and a processor that performs joint fall detection from the data of the motion sensor and the heart rate sensor; the motion sensor module and the heart rate module are both connected with the processor, and the processor is connected with a wireless communication module; the processor comprises a weightlessness status monitoring unit, a collision status monitoring unit, a tumbling status monitoring unit, an alarm sending unit and a heartbeat monitoring unit. Whether the target person has fallen is judged by detecting in real time whether the values from the corresponding devices reach the fall threshold. However, this method has certain defects: it requires the wearer's cooperation, as the device must be carried at all times, which hinders the elderly person's movement; in addition, its recognition accuracy is low.
A fall detection method based on computer vision analysis directly analyzes the persons in the surveillance video without requiring their cooperation. It mainly comprises the following steps: first detecting the pedestrians in the video, then positioning the human body key points, and finally judging whether the target person has fallen by analysis against fall-related posture thresholds. For example, Chinese patent CN108764131A discloses an adaptive-threshold multi-target fall detection method based on video processing, technically characterized by the following steps: step one, acquiring user image information, and recording the aspect ratio, effective area ratio and center change of the human body image within the minimum circumscribed rectangle during the user's normal activities and falls; step two, assigning different weights to the aspect ratio and the effective area ratio according to their different sensitivities to falls, and obtaining new judgment parameters to fuse the fall judgment modes; step three, setting optimal thresholds for users of different body types; and step four, realizing target fall detection by fusing the user image information acquired in step one with the fall judgment modes of step two, combined with the optimal fall thresholds set in step three.
However, this method also has certain disadvantages. The surveillance camera can be affected to varying degrees by illumination and by wind: the video image may be under- or over-exposed under the illumination, or blurred by camera shake. In addition, this kind of method usually needs to run two models simultaneously, one for target detection and one for feature point positioning, which consumes considerable computer resources and incurs a high computational cost.
To address the above problems, an efficient pedestrian fall detection method based on an image enhancement algorithm and human body key point positioning needs to be designed, so as to meet the practical application requirements of current monitoring scenes.
Disclosure of Invention
In view of the above, the invention provides an efficient pedestrian fall detection method based on an image enhancement algorithm and human body key point positioning. Based on pedestrian video data from the current monitoring scene, a complete pedestrian fall detection solution with low resource occupation and a high recognition rate is designed; the staff only need to review the video of the pedestrian that triggered the fall alarm to judge whether it is a real fall, which greatly improves work efficiency while greatly reducing cost. In order to achieve this purpose, the invention provides the following technical scheme:
the invention relates to a high-efficiency pedestrian tumble detection method based on an image enhancement algorithm and human body key point positioning, which comprises the following steps of:
step (1): extracting pedestrian behavior video clips from a real monitoring scene;
step (2): processing unclear images in the video through an enhancement algorithm;
Step (3): detecting the persons in the video segment processed by the image enhancement algorithm through the YOLO-V3 target detection algorithm;
Step (4): cropping the picture of the tracking frame of the same tracked target person as the input of the human body key point detection algorithm Alpha_Pose, and obtaining the position coordinates of 18 human body key points;
Step (5): selecting 7 key point coordinates according to the key point positions from step (4), and analyzing whether the target person has fallen through position judgment and angle judgment respectively;
Step (6): if a fall is judged in at least three consecutive frames, storing the video segment of the process and sending out alarm information;
Step (7): on hearing the alarm prompt, the staff can immediately judge manually whether the target person has really fallen from the fall video segment generated in step (6), and take corresponding measures.
Preferably, step (2) comprises
(2-1) Image definition judgment: calculating the gradient value between every two adjacent pixels through an energy gradient function, and judging the picture to be unclear when the sum of the gradient energies is greater than the standard threshold of 50;
D(f) = Σ_{y=1}^{N} Σ_{x=1}^{M} { [f(x+1, y) − f(x, y)]² + [f(x, y+1) − f(x, y)]² }
wherein:
M represents the width of the image;
N represents the height of the image;
f(x, y) represents the pixel value of the picture at coordinates (x, y);
D(f) represents the sum of the gradient energies of the picture;
(2-2) Image enhancement: performing definition enhancement processing on pictures with low definition through the Retinex image enhancement algorithm.
Preferably, in step (3), in order to prevent the excessive CPU occupancy caused by frame-by-frame detection, the video segment is detected once every 24 frames, and the intervening 23 frames are tracked by the KCF target tracking algorithm; the IOU between the detected person target frame of the next frame and the tracked person target frame of the previous frame is calculated, and if the IOU is greater than 0.4 the two are considered the same person, while if it is less than 0.4 the detection is labeled as a new target person. The IOU (intersection over union) is a standard for measuring the accuracy of detecting corresponding objects in a specific data set; it is the overlap rate between the generated candidate frame and the original ground-truth frame, and a larger IOU value indicates that the two frames are closer in position.
Preferably, in step (4), the position coordinates of the 18 human body key points are respectively: 0: nose position coordinates; 1: left eye position coordinates; 2: right eye position coordinates; 3: left ear position coordinates; 4: right ear position coordinates; 5: left shoulder position coordinates; 6: right shoulder position coordinates; 7: left elbow position coordinates; 8: right elbow position coordinates; 9: left hand position coordinates; 10: right hand position coordinates; 11: left hip position coordinates; 12: right hip position coordinates; 13: left knee position coordinates; 14: right knee position coordinates; 15: left foot position coordinates; 16: right foot position coordinates; 17: neck position coordinates. Alpha_Pose is an accurate multi-person pose estimation system; it is a human body key point detection algorithm that can detect the position coordinates of 18 human body key points from a pedestrian picture.
Preferably, in the step (5), the calculation formula for analyzing a person's fall by angle judgment is

θi = arctan( |y_neck − y_i| / |x_neck − x_i| ), i = 1, 2, 3

wherein:
(x_1, y_1) represents the coordinates of the midpoint of the two hips;
(x_2, y_2) represents the coordinates of the midpoint of the two knees;
(x_3, y_3) represents the coordinates of the midpoint of the two feet;
θ1 represents the included angle between the neck and the midpoint coordinates of the two hips;
θ2 represents the included angle between the neck and the midpoint coordinates of the two knees;
θ3 represents the included angle between the neck and the midpoint coordinates of the two feet;
and when any two of the three included angles meet the angle condition, the target person is judged to be in a falling state.
Preferably, in the step (5), the formula for analyzing a person's fall by position judgment is:

fall state if min(y_lhip, y_rhip, y_lknee, y_rknee, y_lfoot, y_rfoot) < y_neck

wherein:
y_lhip represents the position coordinate height of the left hip;
y_rhip represents the position coordinate height of the right hip;
y_lknee represents the position coordinate height of the left knee;
y_rknee represents the position coordinate height of the right knee;
y_lfoot represents the position coordinate height of the left foot;
y_rfoot represents the position coordinate height of the right foot;
y_neck represents the position coordinate height of the neck;
and when the position coordinate height of any one of the hips, knees or feet is lower than that of the neck, the target person is judged to be in a falling state.
The system used by the detection method comprises a video acquisition module, an image enhancement module, a pedestrian detection module, a pedestrian tracking module, a feature point detection module, a fall detection module, an alarm module and a manual judgment module. The video acquisition module acquires the video of the person to be detected; the pedestrian detection module detects pedestrians in the video and crops pedestrian pictures; the pedestrian tracking module continuously locates the same target person; the feature point detection module locates the relevant feature positions of the person; the fall detection module estimates the posture formed by the relevant feature points and compares it against fall postures; the alarm module sends the reminder triggered when the fall detection module judges a fall and generates a corresponding video segment; and the manual judgment module judges whether the generated video segment shows a real fall.
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects:
the method disclosed by the invention can greatly save the existing manpower, material and financial cost by combining the image enhancement algorithm with the positioning of the key points of the human body, and has very high application requirements and market prospects in the field of social public safety for timely obtaining the falling information of the pedestrian and making a quick response.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the detection method of the present invention;
fig. 2 is a schematic diagram of the human body key point positions detected by Alpha_Pose in the embodiment of the present invention.
Detailed Description
For further understanding of the present invention, the present invention will be described in detail with reference to examples, which are provided for illustration of the present invention but are not intended to limit the scope of the present invention.
Referring to fig. 1, the embodiment relates to an efficient pedestrian fall detection method based on an image enhancement algorithm and human body key point positioning. The video acquisition module acquires the video of the person to be detected; the pedestrian detection module detects pedestrians in the video and crops pedestrian pictures; the pedestrian tracking module continuously locates the same target person; the feature point detection module locates the relevant feature positions of the person; the fall detection module estimates the posture formed by the relevant feature points and compares it against fall postures; the alarm module sends the reminder triggered when the fall detection module judges a fall and generates a corresponding video segment; and the manual judgment module judges whether the generated video segment shows a real fall. The method comprises the following steps:
step (1): and extracting pedestrian behavior video clips from the real monitoring scene.
Step (2): and processing unclear images in the video through an enhancement algorithm.
(2-1) Image definition judgment: calculating the gradient value between every two adjacent pixels through an energy gradient function, and judging the picture to be unclear when the sum of the gradient energies is greater than the standard threshold of 50;
D(f) = Σ_{y=1}^{N} Σ_{x=1}^{M} { [f(x+1, y) − f(x, y)]² + [f(x, y+1) − f(x, y)]² }
wherein:
M represents the width of the image;
N represents the height of the image;
f(x, y) represents the pixel value of the picture at coordinates (x, y);
D(f) represents the sum of the gradient energies of the picture.
(2-2) Image enhancement: performing definition enhancement processing on pictures with low definition through the Retinex image enhancement algorithm.
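As an illustrative sketch (not the patent's implementation), the energy-gradient definition check of step (2-1) and a single-scale Retinex enhancement in the spirit of step (2-2) might look as follows in Python with NumPy. The threshold of 50 comes from the text; the Gaussian sigma and all function names are assumptions:

```python
import numpy as np

def energy_gradient(f: np.ndarray) -> float:
    """Sum of squared differences between horizontally and vertically
    adjacent pixels (the energy gradient function D(f))."""
    f = f.astype(np.float64)
    dx = np.diff(f, axis=1) ** 2   # [f(x+1, y) - f(x, y)]^2
    dy = np.diff(f, axis=0) ** 2   # [f(x, y+1) - f(x, y)]^2
    return float(dx.sum() + dy.sum())

def is_unclear(f: np.ndarray, threshold: float = 50.0) -> bool:
    """Per the text, a frame whose gradient-energy sum exceeds the
    standard threshold of 50 is treated as unclear."""
    return energy_gradient(f) > threshold

def _gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian blur implemented with plain NumPy convolutions.
    The kernel must be no longer than the image side being convolved."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def single_scale_retinex(img: np.ndarray, sigma: float = 30.0) -> np.ndarray:
    """R(x, y) = log I(x, y) - log [G * I](x, y): the single-scale Retinex
    estimate of reflectance, with +1 added to avoid log(0)."""
    img = img.astype(np.float64) + 1.0
    return np.log(img) - np.log(_gaussian_blur(img, sigma) + 1.0)
```

In practice the Retinex output would be rescaled back to the 0..255 range before being fed to the detector; that normalization step is omitted here.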
And (3): and detecting the person in the video segment processed by the image enhancement algorithm through a YOLO-V3 target detection algorithm.
In order to prevent the excessive CPU occupancy caused by frame-by-frame detection, the video segment is detected once every 24 frames, and the intervening 23 frames are tracked by the KCF target tracking algorithm; the IOU between the detected person target frame of the next frame and the tracked person target frame of the previous frame is calculated, and if the IOU is greater than 0.4 the two are considered the same person, while if it is less than 0.4 the detection is labeled as a new target person. The IOU (intersection over union) is a standard for measuring the accuracy of detecting corresponding objects in a specific data set; it is the overlap rate between the generated candidate frame and the original ground-truth frame, and a larger IOU value indicates that the two frames are closer in position.
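The IOU matching rule above can be sketched as follows. The (x1, y1, x2, y2) box format and the function names are assumptions; only the 0.4 threshold comes from the text:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)  # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_same_person(detected_box, tracked_box, threshold=0.4):
    """A detection whose IOU with a tracked frame exceeds 0.4 is treated as
    the same person; otherwise it would be labeled a new target person."""
    return iou(detected_box, tracked_box) > threshold
```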
And (4): and intercepting the picture of the tracking frame of the same tracked target person as the input of a human body key point detection algorithm-Alpha _ Pose, and obtaining the position coordinates of 18 key points of the human body.
Referring to fig. 2, the position coordinates of the 18 human body key points are respectively: 0: nose position coordinates; 1: left eye position coordinates; 2: right eye position coordinates; 3: left ear position coordinates; 4: right ear position coordinates; 5: left shoulder position coordinates; 6: right shoulder position coordinates; 7: left elbow position coordinates; 8: right elbow position coordinates; 9: left hand position coordinates; 10: right hand position coordinates; 11: left hip position coordinates; 12: right hip position coordinates; 13: left knee position coordinates; 14: right knee position coordinates; 15: left foot position coordinates; 16: right foot position coordinates; 17: neck position coordinates. Alpha_Pose is an accurate multi-person pose estimation system; it is a human body key point detection algorithm that can detect the position coordinates of 18 human body key points from a pedestrian picture.
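For reference, the 18-keypoint numbering above can be held in a simple mapping; this representation and the helper below are illustrative only, not code from the patent:

```python
# Index -> keypoint name, following the numbering listed above.
KEYPOINTS = {
    0: "nose", 1: "left_eye", 2: "right_eye", 3: "left_ear", 4: "right_ear",
    5: "left_shoulder", 6: "right_shoulder", 7: "left_elbow", 8: "right_elbow",
    9: "left_hand", 10: "right_hand", 11: "left_hip", 12: "right_hip",
    13: "left_knee", 14: "right_knee", 15: "left_foot", 16: "right_foot",
    17: "neck",
}

def keypoint(coords, name):
    """Look up one (x, y) pair by keypoint name from a list of 18 (x, y)
    pairs, assumed to be ordered by the indices above."""
    index = {v: k for k, v in KEYPOINTS.items()}[name]
    return coords[index]
```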
And (5): and (4) selecting 7 key point coordinates according to the positions of the key points in the step (4), and analyzing whether the target character falls down or not through position judgment and angle judgment respectively.
The calculation formula for analyzing a person's fall by angle judgment is

θi = arctan( |y_neck − y_i| / |x_neck − x_i| ), i = 1, 2, 3

wherein:
(x_1, y_1) represents the coordinates of the midpoint of the two hips;
(x_2, y_2) represents the coordinates of the midpoint of the two knees;
(x_3, y_3) represents the coordinates of the midpoint of the two feet;
θ1 represents the included angle between the neck and the midpoint coordinates of the two hips;
θ2 represents the included angle between the neck and the midpoint coordinates of the two knees;
θ3 represents the included angle between the neck and the midpoint coordinates of the two feet;
and when any two of the three included angles meet the angle condition, the target person is judged to be in a falling state.
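Since the patent's formula images are not reproduced in this text, the following Python sketch encodes one plausible reading of the angle judgment: the included angle between the horizontal axis and the line from the neck to each of the three midpoints, which becomes small when the body lies near-horizontal. The 45° threshold and all function names are assumptions for illustration:

```python
import math

def midpoint(p, q):
    """Midpoint of two (x, y) keypoints."""
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def inclination(neck, point):
    """Angle in degrees between the horizontal axis and the line from the
    neck keypoint to `point`."""
    dx = abs(point[0] - neck[0])
    dy = abs(point[1] - neck[1])
    return math.degrees(math.atan2(dy, dx))

def fall_by_angle(neck, left_hip, right_hip, left_knee, right_knee,
                  left_foot, right_foot, threshold_deg=45.0):
    """A near-horizontal trunk yields small angles; judge a fall when at
    least two of the three angles satisfy the (assumed) angle condition."""
    angles = [
        inclination(neck, midpoint(left_hip, right_hip)),    # theta_1
        inclination(neck, midpoint(left_knee, right_knee)),  # theta_2
        inclination(neck, midpoint(left_foot, right_foot)),  # theta_3
    ]
    return sum(a < threshold_deg for a in angles) >= 2
```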
The formula for analyzing a person's fall by position judgment is:

fall state if min(y_lhip, y_rhip, y_lknee, y_rknee, y_lfoot, y_rfoot) < y_neck

wherein:
y_lhip represents the position coordinate height of the left hip;
y_rhip represents the position coordinate height of the right hip;
y_lknee represents the position coordinate height of the left knee;
y_rknee represents the position coordinate height of the right knee;
y_lfoot represents the position coordinate height of the left foot;
y_rfoot represents the position coordinate height of the right foot;
y_neck represents the position coordinate height of the neck;
and when the position coordinate height of any one of the hips, knees or feet is lower than that of the neck, the target person is judged to be in a falling state.
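The position judgment can be sketched as follows, assuming image coordinates in which y grows downward (so a keypoint whose y value is smaller than the neck's appears above the neck in the frame, as can only happen when the body is down); the function and parameter names are illustrative:

```python
def fall_by_position(neck_y, left_hip_y, right_hip_y, left_knee_y,
                     right_knee_y, left_foot_y, right_foot_y):
    """Judge a fall when any hip, knee or foot keypoint lies above the
    neck in the image (smaller y value in y-down image coordinates)."""
    parts = [left_hip_y, right_hip_y, left_knee_y, right_knee_y,
             left_foot_y, right_foot_y]
    return any(y < neck_y for y in parts)
```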
Step (6): if a fall is judged in at least three consecutive frames, storing the video segment of the process and sending out alarm information.
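The three-consecutive-frame rule of step (6) can be kept in a small counter; the class name and interface below are illustrative, not from the patent:

```python
class FallAlarm:
    """Raise an alarm only after a fall has been judged in at least three
    consecutive frames, suppressing single-frame false positives."""

    def __init__(self, required_frames=3):
        self.required_frames = required_frames
        self.streak = 0  # current run of consecutive fall frames

    def update(self, fall_detected: bool) -> bool:
        """Feed one per-frame fall judgment; return True when the alarm
        condition (>= required_frames consecutive fall frames) is met."""
        self.streak = self.streak + 1 if fall_detected else 0
        return self.streak >= self.required_frames
```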
Step (7): on hearing the alarm prompt, the staff can immediately judge manually whether the target person has really fallen from the fall video segment generated in step (6), and take corresponding measures.
The present invention and its embodiments have been described above schematically and without limitation; the embodiments of the present invention are shown in the drawings, and actual structures are not limited thereto. Therefore, those skilled in the art should understand that they can easily and effectively design and modify the structure and embodiments of the present invention without departing from its spirit and scope.

Claims (6)

1. An efficient pedestrian tumble detection method based on an image enhancement algorithm and human body key point positioning is characterized by comprising the following steps:
step (1): extracting pedestrian behavior video clips from a real monitoring scene;
step (2): processing unclear images in the video through an enhancement algorithm;
Step (3): detecting the persons in the video segment processed by the image enhancement algorithm through the YOLO-V3 target detection algorithm;
Step (4): cropping the picture of the tracking frame of the same tracked target person as the input of the human body key point detection algorithm Alpha_Pose, and obtaining the position coordinates of 18 human body key points;
Step (5): selecting 7 key point coordinates according to the key point positions from step (4), and analyzing whether the target person has fallen through position judgment and angle judgment respectively;
Step (6): if a fall is judged in at least three consecutive frames, storing the video segment of the process and sending out alarm information;
Step (7): on hearing the alarm prompt, the staff can immediately judge manually whether the target person has really fallen from the fall video segment generated in step (6), and take corresponding measures.
2. The efficient pedestrian fall detection method based on an image enhancement algorithm and human body key point positioning according to claim 1, wherein the step (2) comprises:
(2-1) Image definition judgment: calculating the gradient value between every two adjacent pixels through an energy gradient function, and judging the picture to be unclear when the sum of the gradient energies is greater than the standard threshold of 50;
D(f) = Σ_{y=1}^{N} Σ_{x=1}^{M} { [f(x+1, y) − f(x, y)]² + [f(x, y+1) − f(x, y)]² }
wherein:
M represents the width of the image;
N represents the height of the image;
f(x, y) represents the pixel value of the picture at coordinates (x, y);
D(f) represents the sum of the gradient energies of the picture;
(2-2) Image enhancement: performing definition enhancement processing on pictures with low definition through the Retinex image enhancement algorithm.
3. The pedestrian fall detection method according to claim 1, wherein in step (3), the video segment is detected once every 24 frames, the intervening 23 frames are tracked by the KCF target tracking algorithm, and the IOU between the detected person target frame of the next frame and the tracked person target frame of the previous frame is calculated; if the IOU is greater than 0.4 the two are considered the same person, and if it is less than 0.4 the detection is labeled as a new target person.
4. The efficient pedestrian fall detection method based on an image enhancement algorithm and human body key point positioning according to claim 1, wherein in step (4), the position coordinates of the 18 human body key points are respectively: 0: nose position coordinates; 1: left eye position coordinates; 2: right eye position coordinates; 3: left ear position coordinates; 4: right ear position coordinates; 5: left shoulder position coordinates; 6: right shoulder position coordinates; 7: left elbow position coordinates; 8: right elbow position coordinates; 9: left hand position coordinates; 10: right hand position coordinates; 11: left hip position coordinates; 12: right hip position coordinates; 13: left knee position coordinates; 14: right knee position coordinates; 15: left foot position coordinates; 16: right foot position coordinates; 17: neck position coordinates.
5. The efficient pedestrian fall detection method based on image enhancement algorithm and human body key point positioning as claimed in claim 1, wherein in the step (5), the calculation formula for angle judgment and analysis of the person fall is as follows:
θ1 = arctan( |y17 − yh| / |x17 − xh| ); θ2 = arctan( |y17 − yk| / |x17 − xk| ); θ3 = arctan( |y17 − yf| / |x17 − xf| );
wherein:
(xh, yh) = ((x11 + x12)/2, (y11 + y12)/2) represents the coordinates of the midpoint of the two hips;
(xk, yk) = ((x13 + x14)/2, (y13 + y14)/2) represents the coordinates of the midpoint of the two knees;
(xf, yf) = ((x15 + x16)/2, (y15 + y16)/2) represents the coordinates of the midpoint of the two feet;
(x17, y17) represents the position coordinates of the neck;
θ1 represents the included angle between the neck and the midpoint coordinates of the two hips;
θ2 represents the included angle between the neck and the midpoint coordinates of the two knees;
θ3 represents the included angle between the neck and the midpoint coordinates of the two feet;
and when any two of the three included angles satisfy the angle condition, the target person is judged to be in a falling state.
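The angle test of claim 5 can be sketched as follows. The patent does not publish the "angle condition" threshold, so the 45-degree cutoff (a near-horizontal trunk line suggests a fall), the degree units, and the function names are placeholders:

```python
import math

def incline_angle(neck, mid):
    """Angle in degrees between the neck-to-midpoint line and the
    horizontal axis of the image."""
    dx = abs(neck[0] - mid[0])
    dy = abs(neck[1] - mid[1])
    return math.degrees(math.atan2(dy, dx))

def is_fallen_by_angle(neck, hips, knees, feet, threshold=45.0):
    """Hypothetical angle condition: count how many of the three lines
    (neck to hip/knee/foot midpoints) are closer to horizontal than the
    threshold; the claim requires any two of the three angles to qualify."""
    mids = [tuple((a + b) / 2 for a, b in zip(p, q))
            for p, q in (hips, knees, feet)]
    near_horizontal = sum(incline_angle(neck, m) < threshold for m in mids)
    return near_horizontal >= 2
```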
6. The efficient pedestrian fall detection method based on the image enhancement algorithm and the human body key point positioning as claimed in claim 1, wherein in the step (5), the formula for analyzing the person fall by position judgment is as follows:
min(y11, y12, y13, y14, y15, y16) < y17;
wherein:
y11 represents the position coordinate height of the left hip;
y12 represents the position coordinate height of the right hip;
y13 represents the position coordinate height of the left knee;
y14 represents the position coordinate height of the right knee;
y15 represents the position coordinate height of the left foot;
y16 represents the position coordinate height of the right foot;
y17 represents the position coordinate height of the neck;
and when any one of the hips, knees or feet has a smaller vertical coordinate than the neck, i.e., appears above the neck in the image (the image y-axis points downward), the target is judged to be in a falling state.
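The position test of claim 6 reduces to a one-line comparison under the assumption that the "coordinate heights" are image rows, whose y-axis points downward; the function name and argument layout are illustrative:

```python
def is_fallen_by_position(neck_y, other_ys):
    """Hypothetical reading of claim 6: with image rows growing downward,
    a hip, knee or foot whose y-value is SMALLER than the neck's appears
    above the neck in the frame, which indicates a fallen posture."""
    return any(y < neck_y for y in other_ys)
```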
CN202010643346.9A 2020-07-20 2020-07-20 Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning Pending CN111783702A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010643346.9A CN111783702A (en) 2020-07-20 2020-07-20 Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning

Publications (1)

Publication Number Publication Date
CN111783702A true CN111783702A (en) 2020-10-16

Family

ID=72759490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010643346.9A Pending CN111783702A (en) 2020-07-20 2020-07-20 Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning

Country Status (1)

Country Link
CN (1) CN111783702A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112800900A (en) * 2021-01-18 2021-05-14 上海云话科技有限公司 Mine personnel land falling detection method based on visual perception
CN112800901A (en) * 2021-01-18 2021-05-14 上海云话科技有限公司 Mine personnel safety detection method based on visual perception
CN113256938A (en) * 2021-05-31 2021-08-13 苏州优函信息科技有限公司 Fall monitoring alarm scheme based on spectrum camera
WO2023123214A1 (en) * 2021-12-30 2023-07-06 焦旭 Electronic device, hand compression depth measurement method, system, and wearable device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392086A (en) * 2017-05-26 2017-11-24 深圳奥比中光科技有限公司 Apparatus for evaluating, system and the storage device of human body attitude
CN108629300A (en) * 2018-04-24 2018-10-09 北京科技大学 A kind of fall detection method
CN109635783A (en) * 2019-01-02 2019-04-16 上海数迹智能科技有限公司 Video monitoring method, device, terminal and medium
CN109919132A (en) * 2019-03-22 2019-06-21 广东省智能制造研究所 A kind of pedestrian's tumble recognition methods based on skeleton detection
CN110287923A (en) * 2019-06-29 2019-09-27 腾讯科技(深圳)有限公司 Human body attitude acquisition methods, device, computer equipment and storage medium
CN110477925A (en) * 2019-08-23 2019-11-22 广东省智能制造研究所 A kind of fall detection for home for the aged old man and method for early warning and system
CN110738154A (en) * 2019-10-08 2020-01-31 南京熊猫电子股份有限公司 pedestrian falling detection method based on human body posture estimation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Song Chunjuan: "Research on Image Aesthetic Quality Evaluation and Adaptive Enhancement", China Master's Theses Full-text Database (Information Science and Technology) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201016