CN113409348A - Fall detection system and fall detection method based on depth image - Google Patents


Info

Publication number
CN113409348A
Authority
CN
China
Prior art keywords
joint
points
depth image
human body
coordinates
Prior art date
Legal status
Pending
Application number
CN202010183817.2A
Other languages
Chinese (zh)
Inventor
金成铭
刘佩林
邹耀
应忍冬
Current Assignee
Shanghai Data Miracle Intelligent Technology Co ltd
Original Assignee
Shanghai Data Miracle Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Data Miracle Intelligent Technology Co., Ltd.
Priority: CN202010183817.2A
Publication: CN113409348A
Legal status: Pending

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/10 Segmentation; Edge detection
                        • G06T 7/136 involving thresholding
                        • G06T 7/194 involving foreground-background segmentation
                    • G06T 7/50 Depth or shape recovery
                • G06T 5/00 Image enhancement or restoration
                    • G06T 5/70 Denoising; Smoothing
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20024 Filtering details
                            • G06T 2207/20032 Median filtering
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/23 Clustering techniques
                        • G06F 18/24 Classification techniques
                            • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                                • G06F 18/2411 based on the proximity to a decision surface, e.g. support vector machines
                        • G06F 18/25 Fusion techniques
                            • G06F 18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a depth-image-based fall detection system and fall detection method. The fall detection system comprises an image reading device, which reads and preprocesses the depth image, and a coordinate processing unit, which receives the depth image information from the image reading device, computes and processes coordinate information, derives the velocity and acceleration of each joint point from the coordinate information, and judges from this feature information whether a fall has occurred. Because motion detection, behavior recognition and fall detection are performed on depth images, the method has the distinct advantage of protecting privacy: it permits around-the-clock monitoring in privacy-sensitive indoor spaces and is immune to interference from illumination.

Description

Fall detection system and fall detection method based on depth image
Technical Field
The invention relates to a detection method, in particular to a fall detection system and a fall detection method based on a depth image.
Background
In recent years, population aging in China has become increasingly severe, and the health and daily safety of the elderly have drawn wide attention from society. As people age, the risk of accidental falls rises year by year, and the social and economic costs incurred after a fall grow accordingly, so fall detection technology is essential.
At present there are three main types of fall detection system: those based on wearable devices, those based on environmental instrumentation, and those based on vision. A wearable fall detection system measures parameters of human motion such as velocity and acceleration with worn sensors and judges whether a fall has occurred against preset thresholds. Because the device is carried on the body, the monitoring range is unrestricted; however, power supply is a problem, and the comfort of the wearable device must be considered.
Fall detection systems based on environmental instrumentation generally detect the impact produced during a fall with pressure-sensitive sensors, for example pressure-sensitive floor tiles. Their advantage is that no extra equipment need be carried and the user's activities are barely affected; their disadvantage is that monitoring is confined to rooms where the sensors are installed, and because many sensors are usually required, the cost is high.
A vision-based fall detection system monitors human activity through a camera and detects whether a fall occurs. An RGB camera, however, risks leaking privacy in places such as bathrooms and bedrooms. Many elderly people are unwilling to install a camera in private spaces, yet these are precisely the scenes where accidents such as falls are most likely; a bathroom floor in particular may be slippery. Monitoring with a depth camera reasonably resolves this contradiction: only the contour features of the monitored person are captured, and no color information can be recovered from a depth image (binocular depth cameras excepted, since they are in essence two RGB cameras), so personal privacy is protected.
Disclosure of Invention
The aim of the invention is to provide a depth-image-based fall detection system and fall detection method that perform motion detection, behavior recognition and fall detection on depth images, offer the distinct advantage of privacy protection, allow around-the-clock monitoring in privacy-sensitive indoor spaces, and are immune to interference from illumination.
To solve the above technical problems, the technical solution of the invention is as follows:
A fall detection system comprising:
an image reading device, which reads and preprocesses the depth image; and
a coordinate processing unit, which receives the depth image information from the image reading device, computes and processes coordinate information, derives the velocity and acceleration of each joint point from the coordinate information, and judges from this feature information whether a fall has occurred.
Preferably, the image reading device comprises:
a filtering module, which applies median filtering to the depth image information to remove noise;
a cutting module, which cuts the depth image according to an adaptive depth threshold to coarsely remove the background region, further suppress noise interference, and set the background to black, thereby obtaining a foreground region;
a morphological filtering module, which removes small holes in the foreground region by morphological filtering; and
a region setting module, which finds the moving human body region within the foreground region by a motion detection algorithm and takes that human body region as the new foreground region.
Preferably, the coordinate processing unit comprises:
a random forest classifier, which classifies the pixels of the human body region obtained by the region setting module and decides which joint part of the human body each pixel belongs to;
a screening module, which screens out the pixels of each joint part;
a calculation module, which takes the pixels of each joint point and computes the pixel coordinates of the joint points;
a conversion module, which converts the pixel coordinates of the joint points into three-dimensional space coordinates according to the intrinsic parameters of the depth camera of the image reading device, and records the timestamp of the corresponding image frame;
a recording module, which records the three-dimensional space coordinates of each joint point;
a speed calculation module, which computes the movement speed and overall average speed of each joint point from the coordinate differences and time differences of the joint point;
an acceleration calculation module, which computes velocity differences from the calculated inter-frame velocities and thereby the accelerations; and
an SVM classifier, which combines the coordinate, velocity and acceleration information of each joint point to judge whether a fall has occurred.
Further, the invention also provides a fall detection method based on depth-image motion detection and behavior recognition, comprising the following steps:
(a) reading a depth image and preprocessing it to obtain a depth image containing only the human body region;
(b) calculating the coordinates of the human body joint points from the depth information, and returning and recording the coordinate information and the timestamps;
(c) calculating the movement velocities and accelerations of the joint points from the joint coordinates and timestamps obtained in step (b);
(d) judging whether a fall event has occurred from the joint coordinates, velocities and accelerations obtained in steps (b) and (c).
Preferably, in step (a) the depth image preprocessing comprises the following specific steps:
(a1) applying median filtering to the depth image to remove noise;
(a2) cutting the depth image according to an adaptive depth threshold to coarsely remove the background region, further suppress noise interference, and set the background to black;
(a3) removing small holes in the foreground region by morphological filtering;
(a4) finding the moving human body region within the foreground region by a motion detection algorithm, and taking that human body region as the new foreground region.
Preferably, in step (b) the calculation of the coordinates of the human body joint points comprises the following specific steps:
(b1) classifying the pixels of the human body region obtained in step (a4) with a random forest classifier, deciding which joint part of the human body each pixel belongs to;
(b2) screening the points whose prediction probability in step (b1) exceeds a set threshold, obtaining pixels of each joint part at different densities;
(b3) clustering the pixels belonging to the same part with the Mean Shift algorithm to obtain the pixel coordinates of each joint point;
(b4) converting the pixel coordinates obtained in step (b3) into the three-dimensional space coordinates of the joint points according to the intrinsic parameters of the depth camera of the image reading device, and recording the timestamp of the corresponding image frame.
Preferably, in step (c) the joint movement information is calculated as follows:
(c1) selecting the coordinates of the 3 skeletal points of the neck, the base of the spine and the left ankle obtained in step (b4), maintaining a queue, and recording the three-dimensional space coordinates and timestamps of the joint points over the most recent 10 frames;
(c2) computing coordinate differences and time differences from the information recorded in step (c1), thereby obtaining the inter-frame velocities of the joint points within the most recent 10 frames and the average velocity over the whole 10 frames;
(c3) computing velocity differences from the inter-frame velocities obtained in step (c2), thereby obtaining the accelerations.
Preferably, in step (d) the specific determination steps are as follows:
(d1) taking the joint coordinate positions obtained in step (b) as height features, and computing the joint velocity features and acceleration features from step (c);
(d2) fusing the three features over the most recent 10 consecutive frames of joint points as the classification basis, classifying with an SVM classifier, and judging whether a fall event has occurred.
Drawings
Fig. 1 is a block diagram of a fall detection system according to the invention;
fig. 2 is a basic flow chart of a fall detection method according to the invention;
fig. 3 is a basic flowchart of the specific steps of depth image preprocessing of the fall detection method according to the present invention;
fig. 4 is a basic flowchart of the specific steps of the coordinate calculation of the human joint point of the fall detection method according to the present invention;
fig. 5 is a basic flowchart of the specific steps of determining whether a fall occurs according to the fall detection method of the present invention.
Detailed Description
Embodiments of the invention are further described below with reference to the drawings. The description of the embodiments is intended to aid understanding of the invention, not to limit it. Moreover, the technical features involved in the embodiments described below may be combined with one another as long as they do not conflict.
As shown in fig. 1, the invention provides a fall detection system comprising an image reading device 10, which reads the current depth image and preprocesses it; the preprocessing includes extracting a depth image that currently contains a human body region.
A depth image can be regarded as an ordinary RGB three-channel color image plus a depth map. In 3D computer graphics, a depth map is an image or image channel containing information about the distance from the viewpoint to the surfaces of the scene objects. The depth map resembles a grayscale image, except that each pixel value is the actual distance from the sensor to the object. The RGB image and the depth image are usually registered, so their pixels correspond one to one.
Household monitoring devices generally use cameras, but an RGB camera carries a risk of privacy leakage and depends heavily on ambient light, making it difficult to work in dim light or at night. The image reading device 10 of the invention reads and processes the current depth image with few constraints on lighting or time of day, and is therefore better suited to varied environments and conditions.
The fall detection system further comprises a coordinate processing unit 20, which receives the depth image from the image reading device 10, performs coordinate calculations on the current depth image, computes the coordinates of the human body joint points, and returns the coordinate information together with the timestamps. It then computes the velocity and acceleration of each joint point from the calculated joint coordinates and timestamps.
It will be appreciated that the velocities and accelerations of the individual joint points differ between normal walking and falling, so whether a fall event has occurred can be judged from the coordinates, velocities and accelerations of the joint points.
In the above process, the specific operations of the depth image preprocessing are: median filtering, depth cutting, morphological filtering, motion detection and human body region search.
Specifically, the image reading device 10 comprises a filtering module 11, which applies median filtering to the depth image information to remove noise.
The image reading device 10 further comprises a cutting module 12, which cuts the depth image according to an adaptive depth threshold to coarsely remove the background region, further suppress noise interference, and set the background to black, thereby obtaining a foreground region.
The image reading device 10 further comprises a morphological filtering module 13, which removes small holes in the foreground region by morphological filtering.
The image reading device 10 further comprises a region setting module 14, which finds the moving human body region within the foreground region by a motion detection algorithm and takes that human body region as the new foreground region.
Through these operations, a depth image of the human body is obtained.
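The preprocessing chain above (median filtering, adaptive depth thresholding, morphological hole filling, motion detection) can be sketched roughly as follows. The window sizes, depth range and motion threshold are illustrative assumptions, not values from the patent, and simple frame differencing stands in for the unspecified motion detection algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter, binary_closing

def preprocess(depth, prev_depth, near=500, far=3000, motion_thresh=50):
    """Sketch of the preprocessing chain (hypothetical parameters).

    depth, prev_depth: uint16 depth maps in millimetres.
    Returns a boolean foreground mask restricted to moving regions.
    """
    # (a1) median filter to suppress speckle noise
    d = median_filter(depth, size=3)
    # (a2) depth-threshold cut: keep pixels inside the working range;
    # everything else becomes background (rendered black, i.e. False)
    fg = (d > near) & (d < far)
    # (a3) morphological closing fills small holes in the foreground
    fg = binary_closing(fg, structure=np.ones((3, 3)))
    # (a4) frame differencing as a minimal motion detector: keep only
    # foreground pixels whose depth changed noticeably since last frame
    moving = np.abs(d.astype(np.int32) - prev_depth.astype(np.int32)) > motion_thresh
    return fg & moving
```

The resulting mask plays the role of the "new foreground region" handed to the joint-point classifier.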
In the above process, the coordinate processing of the depth image consists of two specific operations: calculating the coordinates of the human body joint points, and calculating the movement information of the joint points.
Specifically, the coordinate processing unit 20 comprises a random forest classifier 21, which classifies the pixels of the human body region obtained by the region setting module 14 and decides which joint part of the human body each pixel belongs to.
It should be noted that a random forest classifier is a classifier that trains on and predicts samples with an ensemble of decision trees; by combining many decision trees, a random forest achieves much higher accuracy than a single decision tree.
The coordinate processing unit 20 further comprises a screening module 22, which screens out the pixels of each joint part; specifically, it keeps the points whose prediction probability exceeds a set threshold, yielding pixels of each joint part at different densities.
The coordinate processing unit 20 further comprises a calculation module 23, which takes the pixels of each joint point and computes the pixel coordinates of the joint points; specifically, it clusters the pixels belonging to the same part with the Mean Shift algorithm to obtain the pixel coordinates of each joint point.
It is worth mentioning that the Mean Shift algorithm, also known as the mean-shift algorithm, is widely used in clustering, image smoothing, segmentation and tracking. It is an iterative procedure: it first computes the mean shift vector within a region of interest, moves the center of the region to the computed centroid, and then continues from that centroid as a new starting point until a termination condition is met. During the iteration the window keeps shifting toward positions of higher density, stopping once it reaches the position of highest pixel density.
The coordinate processing unit 20 further comprises a conversion module 24, which converts the pixel coordinates of the joint points into three-dimensional space coordinates according to the intrinsic parameters of the depth camera of the image reading device 10, and records the timestamp of the corresponding image frame.
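The conversion from joint pixel coordinates to camera-space 3D coordinates follows the standard pinhole model; the intrinsic values below are hypothetical placeholders for the device's calibration parameters.

```python
import numpy as np

# Hypothetical depth-camera intrinsics (fx, fy: focal lengths in pixels;
# cx, cy: principal point). Real values come from device calibration.
fx, fy, cx, cy = 365.0, 365.0, 256.0, 212.0

def pixel_to_camera(u, v, depth_mm):
    """Back-project a joint's pixel coordinate (u, v) and depth value into
    camera-space 3D coordinates (millimetres) via the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    z = float(depth_mm)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

A joint at the principal point maps to (0, 0, Z), i.e. straight ahead of the camera at its measured depth.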
More specifically, the coordinate processing unit 20 comprises a recording module 25, which records the three-dimensional space coordinates of the above joint points, for example the skeletal points of the neck, the base of the spine and the left ankle, maintains a queue, and records the joints' three-dimensional coordinates and timestamps; the number of frames recorded is, by way of example, the most recent 10.
The coordinate processing unit 20 comprises a speed calculation module 26, which computes the inter-frame velocities of the joint points within the most recent 10 frames and the average velocity over those 10 frames from the coordinate differences and time differences of the joint points.
The coordinate processing unit 20 further comprises an acceleration calculation module 27, which computes velocity differences from the calculated inter-frame velocities and thereby the accelerations.
the coordinate processing unit 20 further includes an SVM classifier 28, which is used to determine whether a fall occurs by combining the information of the velocity and the acceleration of the coordinates of each joint point, and the specific determination process is as follows: calculating to obtain joint point speed characteristics and acceleration characteristics according to the obtained joint point coordinate position as height characteristics; three features of continuous 10 frames of joint points are fused to be used as a classification basis, and an SVM classifier 28 is used for classification to judge whether a falling event occurs.
It should be noted that the SVM classifier 28, also called a support vector machine, is a two-class model whose basic model is a linear classifier with maximum spacing defined in the feature space, and whose basic idea is to solve the separating hyperplane with maximum geometric spacing that can correctly divide the training data set.
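A toy sketch of this final classification stage: an SVM trained on fused [height, speed, acceleration] windows. All numeric ranges below are fabricated for illustration; the patent does not publish training data or feature scales.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 200
# Synthetic fused features per 10-frame window: [height, speed, acceleration].
walking = np.column_stack([rng.normal(1.5, 0.1, n),    # joint height stays high (m)
                           rng.normal(0.5, 0.2, n),    # moderate speed (m/s)
                           rng.normal(0.0, 0.5, n)])   # small acceleration (m/s^2)
falling = np.column_stack([rng.normal(0.4, 0.2, n),    # height drops toward floor
                           rng.normal(2.5, 0.5, n),    # large speed
                           rng.normal(6.0, 1.5, n)])   # large acceleration
X = np.vstack([walking, falling])
y = np.array([0] * n + [1] * n)        # 0 = normal activity, 1 = fall

clf = SVC(kernel="rbf")
clf.fit(X, y)
pred = clf.predict([[1.5, 0.4, 0.1],   # looks like walking
                    [0.3, 2.8, 7.0]])  # looks like a fall
```

In the real system the window features would come from the tracked joints of steps (b) and (c) rather than from synthetic distributions.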
As shown in figs. 2 to 5, in accordance with the specific operations above, the invention further provides a fall detection method based on depth-image motion detection and behavior recognition, comprising the following steps:
(a) reading a depth image and preprocessing it to obtain a depth image containing only the human body region;
(b) calculating the coordinates of the human body joint points from the depth information, and returning and recording the coordinate information and the timestamps;
(c) calculating the movement velocities and accelerations of the joint points from the joint coordinates and timestamps obtained in step (b);
(d) judging whether a fall event has occurred from the joint coordinates, velocities and accelerations obtained in steps (b) and (c).
Specifically, in step (a) the depth image preprocessing comprises the following specific steps:
(a1) applying median filtering to the depth image to remove noise;
(a2) cutting the depth image according to an adaptive depth threshold to coarsely remove the background region, further suppress noise interference, and set the background to black;
(a3) removing small holes in the foreground region by morphological filtering;
(a4) finding the moving human body region within the foreground region by a motion detection algorithm, and taking that human body region as the new foreground region.
Specifically, in step (b) the calculation of the coordinates of the human body joint points comprises the following specific steps:
(b1) classifying the pixels of the human body region obtained in step (a4) with a random forest classifier, deciding which joint part of the human body each pixel belongs to;
(b2) screening the points whose prediction probability in step (b1) exceeds a set threshold, obtaining pixels of each joint part at different densities;
(b3) clustering the pixels belonging to the same part with the Mean Shift algorithm to obtain the pixel coordinates of each joint point;
(b4) converting the pixel coordinates obtained in step (b3) into the three-dimensional space coordinates of the joint points according to the intrinsic parameters of the depth camera, and recording the timestamp of the corresponding image frame.
Specifically, in step (c) the joint movement information is calculated as follows:
(c1) selecting the coordinates of the 3 skeletal points of the neck, the base of the spine and the left ankle obtained in step (b4), maintaining a queue, and recording the three-dimensional space coordinates and timestamps of the joint points over the most recent 10 frames;
(c2) computing coordinate differences and time differences from the information recorded in step (c1), thereby obtaining the inter-frame velocities of the joint points within the most recent 10 frames and the average velocity over the whole 10 frames;
(c3) computing velocity differences from the inter-frame velocities obtained in step (c2), thereby obtaining the accelerations.
Specifically, in step (d) the specific determination steps are as follows:
(d1) taking the joint coordinate positions obtained in step (b) as height features, and computing the joint velocity features and acceleration features from step (c);
(d2) fusing the three features over the most recent 10 consecutive frames of joint points as the classification basis, classifying with an SVM classifier, and judging whether a fall event has occurred.
The embodiments of the invention have been described in detail with reference to the accompanying drawings, but the invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, and such variants still fall within the scope of protection of the invention.

Claims (8)

1. A fall detection system, comprising:
an image reading device, which reads and preprocesses a depth image; and
a coordinate processing unit, which receives the depth image information from the image reading device, computes and processes coordinate information, derives the velocity and acceleration of each joint point from the coordinate information, and judges from this feature information whether a fall has occurred.
2. The fall detection system according to claim 1, wherein the image reading device comprises:
a filtering module, which applies median filtering to the depth image information to remove noise;
a cutting module, which cuts the depth image according to an adaptive depth threshold to coarsely remove the background region, further suppress noise interference, and set the background to black, thereby obtaining a foreground region;
a morphological filtering module, which removes small holes in the foreground region by morphological filtering; and
a region setting module, which finds the moving human body region within the foreground region by a motion detection algorithm and takes that human body region as the new foreground region.
3. The fall detection system according to claim 2, wherein the coordinate processing unit comprises:
a random forest classifier, which classifies the pixels of the human body region obtained by the region setting module and decides which joint part of the human body each pixel belongs to;
a screening module, which screens out the pixels of each joint part;
a calculation module, which takes the pixels of each joint point and computes the pixel coordinates of the joint points;
a conversion module, which converts the pixel coordinates of the joint points into three-dimensional space coordinates according to the intrinsic parameters of the depth camera of the image reading device, and records the timestamp of the corresponding image frame;
a recording module, which records the three-dimensional space coordinates of each joint point;
a speed calculation module, which computes the movement speed and overall average speed of each joint point from the coordinate differences and time differences of the joint point;
an acceleration calculation module, which computes velocity differences from the calculated inter-frame velocities and thereby the accelerations; and
an SVM classifier, which combines the coordinate, velocity and acceleration information of each joint point to judge whether a fall has occurred.
4. A fall detection method based on a depth image is characterized by comprising the following steps:
(a) reading a depth image and preprocessing it to obtain a depth image containing only the human body region;
(b) calculating the coordinates of the human body joint points from the depth information, and returning and recording the coordinate information and the timestamps;
(c) calculating the movement velocities and accelerations of the joint points from the joint coordinates and timestamps obtained in step (b);
(d) judging whether a fall event has occurred from the joint coordinates, velocities and accelerations obtained in steps (b) and (c).
5. The fall detection method according to claim 4, wherein in step (a) the depth image preprocessing comprises the following specific steps:
(a1) applying median filtering to the depth image to remove noise;
(a2) cutting the depth image according to an adaptive depth threshold to coarsely remove the background region, further suppress noise interference, and set the background to black;
(a3) removing small holes in the foreground region by morphological filtering;
(a4) finding the moving human body region within the foreground region by a motion detection algorithm, and taking that human body region as the new foreground region.
6. The fall detection method according to claim 5, wherein in step (b) the calculation of the human body joint point coordinates comprises the following specific steps:
(b1) classifying the pixels of the human body region obtained in step (a4) with a random forest classifier to determine which joint part of the human body each pixel belongs to;
(b2) screening the points whose prediction probability in step (b1) exceeds a set threshold, obtaining pixels of varying density for each joint part;
(b3) clustering the pixels belonging to the same part using the Mean Shift algorithm to obtain the pixel coordinates of each joint point;
(b4) converting the pixel coordinates obtained in step (b3) using the parameters of the image reading device to obtain the three-dimensional space coordinates of the joint points, and recording the timestamp of the corresponding image frame.
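Steps (b3) and (b4) can be illustrated with a toy single-mode mean shift and a pinhole back-projection (a Python sketch; the Gaussian bandwidth and the intrinsics `fx, fy, cx, cy` are illustrative caller-supplied values, and the per-pixel random forest classification of steps (b1)–(b2) is assumed to have already produced the candidate pixels):

```python
import numpy as np

def mean_shift_mode(points, bandwidth=5.0, iters=30):
    # Single-mode mean shift with a Gaussian kernel: start from the centroid
    # and iterate toward the densest region of the candidate pixels (b3).
    mode = points.mean(axis=0)
    for _ in range(iters):
        d2 = ((points - mode) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        mode = (points * w[:, None]).sum(axis=0) / w.sum()
    return mode

def pixel_to_camera(u, v, z, fx, fy, cx, cy):
    # (b4) back-project pixel (u, v) with depth z into 3-D camera coordinates
    # using a pinhole model with the device intrinsics fx, fy, cx, cy.
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```

A full implementation would run one mean shift per joint class and seek multiple modes; the single-mode version above suffices when each joint yields one dense cluster.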
7. The fall detection method according to claim 6, wherein in step (c) the calculation of the joint movement information comprises the following specific steps:
(c1) selecting the coordinates of 3 skeletal points of the human body obtained in step (b4) (the neck, the base of the spine, and the left ankle), and maintaining a queue recording the three-dimensional space coordinates and timestamps of these joint points over the latest 10 frames;
(c2) calculating the coordinate differences and time differences from the information recorded in step (c1), thereby obtaining the inter-frame velocity of each joint point within the latest 10 frames and the average velocity over all 10 frames;
(c3) calculating the velocity differences from the inter-frame velocities obtained in step (c2), thereby obtaining the acceleration.
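The 10-frame queue of steps (c1)–(c3) can be sketched as follows (Python; the class and method names are invented for illustration, and the acceleration time base is approximated by the later of the two frame intervals):

```python
from collections import deque
import numpy as np

class JointMotionQueue:
    """Keeps the last 10 (coordinates, timestamp) pairs of one joint (c1)."""
    def __init__(self, maxlen=10):
        self.buf = deque(maxlen=maxlen)

    def push(self, coords, t):
        self.buf.append((np.asarray(coords, dtype=float), float(t)))

    def velocities(self):
        # (c2) inter-frame velocity: coordinate difference over time difference
        frames = list(self.buf)
        return [(p2 - p1) / (t2 - t1)
                for (p1, t1), (p2, t2) in zip(frames, frames[1:])]

    def features(self):
        v = self.velocities()
        # (c2) average speed over the whole window
        mean_speed = float(np.mean([np.linalg.norm(x) for x in v]))
        # (c3) acceleration from successive velocity differences
        ts = [t for _, t in self.buf]
        accels = [float(np.linalg.norm(v2 - v1)) / (ts[i + 2] - ts[i + 1])
                  for i, (v1, v2) in enumerate(zip(v, v[1:]))]
        return mean_speed, accels
```

One such queue would be maintained per tracked joint (neck, spine base, left ankle).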
8. The fall detection method according to claim 7, wherein in step (d) the specific judging steps are as follows:
(d1) taking the joint point coordinates obtained in step (b) as height features, and the joint point velocities and accelerations calculated in step (c) as velocity and acceleration features;
(d2) fusing the three features of the joint points over the latest 10 consecutive frames as the classification basis, classifying with an SVM classifier, and judging whether a fall event has occurred.
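Step (d2) can be illustrated with the decision function of an already-trained linear SVM (a numpy-only sketch; the weight vector `w` and bias `b` are hypothetical, and a real implementation would train a classifier such as scikit-learn's `sklearn.svm.SVC` on labeled fall/non-fall sequences):

```python
import numpy as np

def fuse_features(heights, speeds, accels):
    # (d2) concatenate the height, velocity, and acceleration features of the
    # latest 10 consecutive frames into one vector for the classifier
    return np.concatenate([heights, speeds, accels])

def svm_decide(x, w, b):
    # Linear SVM decision function sign(w . x + b):
    # +1 = fall event, -1 = no fall. w and b come from offline training.
    return 1 if float(np.dot(w, x) + b) > 0 else -1
```

With 10 height samples, 9 inter-frame speeds, and 8 accelerations per joint, the fused vector for one joint has 27 entries; the vectors for all tracked joints would be concatenated in the same way.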
CN202010183817.2A 2020-03-16 2020-03-16 Fall detection system and fall detection method based on depth image Pending CN113409348A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010183817.2A CN113409348A (en) 2020-03-16 2020-03-16 Fall detection system and fall detection method based on depth image

Publications (1)

Publication Number Publication Date
CN113409348A true CN113409348A (en) 2021-09-17

Family

ID=77676861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010183817.2A Pending CN113409348A (en) 2020-03-16 2020-03-16 Fall detection system and fall detection method based on depth image

Country Status (1)

Country Link
CN (1) CN113409348A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI820784B (en) * 2022-07-04 2023-11-01 百一電子股份有限公司 A fall and posture identifying method with safety caring and high identification handling

Similar Documents

Publication Publication Date Title
KR101808587B1 (en) Intelligent integration visual surveillance control system by object detection and tracking and detecting abnormal behaviors
CN110119676B (en) Driver fatigue detection method based on neural network
JP3785456B2 (en) Safety monitoring device at station platform
KR101788269B1 (en) Method and apparatus for sensing innormal situation
JP4792069B2 (en) Image recognition device
CN110287825B (en) Tumble action detection method based on key skeleton point trajectory analysis
CN111860274B (en) Traffic police command gesture recognition method based on head orientation and upper half skeleton characteristics
KR101839827B1 (en) Smart monitoring system applied with recognition technic of characteristic information including face on long distance-moving object
CN107578012B (en) Driving assistance system for selecting sensitive area based on clustering algorithm
CN105187785A (en) Cross-checkpost pedestrian identification system and method based on dynamic obvious feature selection
CN106251363A A smart golden-eye recognition based people flow counting method and device
CN113396423A (en) Method of processing information from event-based sensors
JP2010039580A (en) Traveling object tracking device
CN105354540A (en) Video analysis based method for implementing person fall-down behavior detection
CN111144174A (en) System for identifying falling behavior of old people in video by using neural network and traditional algorithm
CN112270807A (en) Old man early warning system that tumbles
CN110781735A (en) Alarm method and system for identifying on-duty state of personnel
CN113409348A (en) Fall detection system and fall detection method based on depth image
CN111985295A (en) Electric bicycle behavior recognition method and system, industrial personal computer and camera
CN103049748A (en) Behavior-monitoring method and behavior-monitoring system
CN110688969A (en) Video frame human behavior identification method
KR20020079758A (en) Image data processing
CN113052140A (en) Video-based substation personnel and vehicle violation detection method and system
EP1261951B1 (en) Surveillance method, system and module
CN115865988A (en) Passenger ship passenger treading event monitoring system and method utilizing mobile phone sensor network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210917