CN111160179A - Fall detection method based on head segmentation and convolutional neural network - Google Patents

Fall detection method based on head segmentation and convolutional neural network

Info

Publication number
CN111160179A
Authority
CN
China
Prior art keywords: head, steps, following, method comprises, torso
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911323121.9A
Other languages
Chinese (zh)
Inventor
闵卫东
姚晨光
邓志峰
胡军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lattice Power Jiangxi Corp
Nanchang University
Original Assignee
Lattice Power Jiangxi Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lattice Power Jiangxi Corp filed Critical Lattice Power Jiangxi Corp
Priority to CN201911323121.9A
Publication of CN111160179A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

The invention discloses a fall detection method based on head segmentation and a convolutional neural network, which comprises the following steps: step A, collecting video image information of a target object; step B, processing the collected images and representing the head and torso with two different ellipses; step C, extracting three features (the major-to-minor axis ratio, the orientation angle, and the vertical velocity) from each of the two ellipses in every frame and fusing them into a time-series-based motion feature; step D, performing fall detection by finding the correlation between the head and torso contour features, thereby detecting falls and distinguishing similar activities; and step E, analyzing the experimental results. The fall detection method based on head segmentation and a convolutional neural network has a high detection rate and can effectively distinguish some similar activities.

Description

Fall detection method based on head segmentation and convolutional neural network
Technical Field
The invention relates to the field of posture estimation, and in particular to a fall detection method based on head segmentation and a convolutional neural network.
Background
With the development of computer vision, indoor fall detection is attracting increasing attention. Such methods only require the installation of a few cameras indoors; the system automatically locates and tracks the subject and detects falls by analyzing its movements.
In the prior art, a bounding box is often used to represent the shape of a person; however, it cannot distinguish certain highly similar activities. This may lead to erroneous judgments, because the bounding box undergoes a large shape change when a pedestrian suddenly extends an arm during normal walking. Ellipse fitting can effectively reduce this problem and can also exclude elongated objects carried by pedestrians, but sudden activities such as sitting down or squatting quickly can still easily be mistaken for falls. In addition, different parts of the human body move according to different rules. Therefore, for some highly similar activities, a traditional global geometric representation of the overall human shape is not accurate.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a fall detection method based on head segmentation and a convolutional neural network that has a high detection rate and can effectively distinguish some similar activities.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention provides a tumble detection method based on head segmentation and a convolutional neural network, which comprises the following steps:
step A, collecting video image information of a target object; b, processing the collected images, and respectively representing the head and the trunk by using two different ellipses; step C, respectively extracting three characteristics of ratio of long and short axes, direction angle and vertical speed from two ellipses in each frame, and fusing the three characteristics into motion characteristics based on time sequence; step D, carrying out tumble detection, finding out the correlation between the head and trunk contour characteristics, and achieving the purposes of tumble detection and similar activity distinguishing; step E, analyzing an experimental result;
Preferably, the image processing in step B comprises the following steps B1, B2 and B3: step B1, after training the background model, distinguishing background from foreground, extracting the foreground and suppressing shadows; step B2, after pre-positioning the head, segmenting it by selecting the candidate model that maximizes the similarity function and obtaining the average displacement vector of the target model; step B3, connecting each side of the fitted polygon to its midpoint and repeating this an even number of times so that the torso contour tends toward an ellipse, and finally fitting the torso with an ellipse.
Preferably, morphological operations such as dilation and erosion are used to remove the holes and noise produced in step B1.
Preferably, background and foreground are distinguished in step B1 by a Gaussian mixture model.
preferably, the head is segmented in step B2 using a mean-shift tracking method;
Preferably, in step B3, after polygon fitting of the torso contour, each side of the polygon is connected to its midpoint, and this is repeated an even number of times so that the shape of the torso contour tends toward an ellipse.
Preferably, a shallow CNN architecture is adopted in step D for fall detection analysis.
Preferably, in step E, the fall detection rate and false alarm rate obtained by the method are compared with those of the bounding box ratio analysis method, the ellipse shape analysis method, and Chua's method.
The invention has the beneficial effects that:
1. a head and torso segmentation method is used to extract geometric features of the head and torso, which overcomes the instability of traditional geometric-feature-based methods;
2. the introduction of a multi-channel shallow CNN not only improves accuracy but also ensures real-time performance;
3. a trained shallow CNN architecture can discover the correlation between the two elliptical contour features, detecting indoor falls and distinguishing some similar activities.
Drawings
FIG. 1 is a schematic diagram of the head pre-positioning method in an embodiment of the present invention;
FIG. 2 shows head tracking results in an embodiment of the present invention, where panels (a) to (f) show tracking at different angles and with different persons;
FIG. 3 shows the conventional ellipse fitting method in an embodiment of the present invention, where panels (a) and (c) are whole-body ellipse fits using the conventional method, and panels (b) and (d) are the corresponding torso ellipse fits;
FIG. 4 shows the ellipse extraction of the torso in an embodiment of the present invention;
FIG. 5 shows ellipse fitting results in an embodiment of the present invention, with panels (a) and (c) showing the conventional method and panels (b) and (d) showing our method;
FIG. 6 shows head and torso ellipse fits in an embodiment of the present invention, with panels (a) to (d) showing different activity scenarios;
FIG. 7 is a feature extraction diagram in an embodiment of the present invention: in panel (a), the red ellipse is a whole-body fit, the blue ellipse a head fit, and the green ellipse a torso fit; panel (b) is a schematic representation of the fall features;
FIG. 8 shows the time-series-based motion features in an embodiment of the present invention;
FIG. 9 shows the architecture of the shallow CNN in an embodiment of the present invention;
FIG. 10 shows different activities in the self-collected data set in an embodiment of the present invention: (a) sitting down; (b), (c) lying down; (d) walking; (e) squatting down; (f) to (h) different falls;
FIG. 11 details the experimental data of the self-collected data set in an embodiment of the present invention;
FIG. 12 shows the detection rate and false alarm rate of the method in an embodiment of the present invention;
FIG. 13 analyzes a lateral fall and squatting in an embodiment of the present invention: (a) a lateral fall; (b) the corresponding time-series head and torso features; (c) squatting down; (d) the corresponding time-series head and torso features;
FIG. 14 analyzes falling backward and sitting down in an embodiment of the present invention: (a) falling backward; (b) the corresponding time-series head and torso features; (c) sitting down; (d) the corresponding time-series head and torso features;
FIG. 15 shows comparison data against some classical methods in embodiments of the present invention.
Detailed Description
The technical solution of the invention is further explained below through a specific embodiment in combination with the accompanying drawings.
The fall detection method based on head segmentation and a convolutional neural network provided in this embodiment comprises the following steps.
Step A: collect video image information of the target object.
Step B: process the collected images and represent the head and torso with two different ellipses, as follows.
Step B1: After the background model is trained, each pixel in each frame is modeled with a Gaussian mixture model. If a pixel matches the background Gaussian model, it is classified as background; otherwise it is classified as foreground. The foreground is then extracted and shadow suppression is applied. Because holes and noise may appear in the resulting image, morphological operations such as dilation and erosion are applied to remove them. A minimal sketch of this step follows.
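The following Python/OpenCV sketch illustrates such a foreground extraction pipeline. It is an assumed implementation rather than the patent's code: the input file name, the MOG2 parameters, and the 5×5 kernel size are illustrative choices.

```python
import cv2

# Gaussian mixture background model with shadow detection enabled;
# history and varThreshold are illustrative values.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("indoor_scene.avi")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # MOG2 marks shadow pixels as 127; keeping only confident
    # foreground (255) suppresses the shadows.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    # Opening removes isolated noise; closing fills small holes.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
cap.release()
```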
Step B2: Because a single ellipse fitted to the overall human contour cannot effectively reflect the difference between some similar activities, it easily causes erroneous judgments. To improve the discernibility of similar activities, the invention therefore segments the head and fits two different ellipses to the head and torso, respectively, as shown in FIG. 1. To avoid manual operation and realize intelligent monitoring, the head is pre-positioned before tracking; after pre-positioning, the head is segmented using a mean-shift tracking method. This algorithm requires little computation and can track in real time once the target region is known; it is also insensitive to edge occlusion, target rotation, deformation, and background motion. A mean-shift-based target tracking algorithm describes the target model and the candidate model by computing the feature-value probabilities of the pixels in the target region and the candidate region, respectively. A similarity function then measures the similarity between the target model of the initial frame and the candidate template of the current frame, and the candidate model that maximizes the similarity function is selected. This yields the average displacement vector of the target model, i.e., the vector along which the target moves from its initial position to the correct position. Because the mean-shift algorithm converges rapidly to the true position of the target, tracking is achieved by iteratively computing the mean-shift vector: the approximate position of the head is found by head pre-positioning, and the head is then tracked by the mean-shift method. Head tracking results are shown in FIG. 2. A minimal tracking sketch follows.
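A minimal Python/OpenCV sketch of mean-shift tracking of the pre-positioned head region. This is an assumed implementation: the initial head window coordinates and the hue-histogram target model are illustrative, not taken from the patent.

```python
import cv2

cap = cv2.VideoCapture("indoor_scene.avi")      # hypothetical input video
ok, frame = cap.read()
head_window = (220, 40, 60, 60)                 # (x, y, w, h) from head pre-positioning
x, y, w, h = head_window

# Target model: a hue histogram of the pre-positioned head region.
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 mean-shift iterations or when the window moves < 1 px.
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # The candidate-model similarity is expressed as a back-projection map.
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # Mean shift iterates the window toward the similarity maximum.
    _, head_window = cv2.meanShift(back_proj, head_window, criteria)
cap.release()
```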
Step B3: After tracking the head, we fit the head and torso with two separate ellipses. The conventional ellipse fitting method, however, cannot effectively reflect the difference between the whole human body and the torso, as shown in FIG. 3. The invention therefore modifies the torso contour to obtain a compact torso ellipse distinct from the whole-body ellipse. First, a polygon is fitted to the torso contour; second, each side of the polygon is connected to its midpoint, and this is repeated an even number of times, after which the shape of the torso contour tends toward an ellipse; finally, the torso is fitted with an ellipse to obtain a more compact elliptical representation. The torso ellipse extraction is depicted in FIG. 4, and the fitting results in FIG. 5; the same procedure is used for head fitting. FIG. 6 shows the fitted head and torso ellipses: compared with the conventional method, the method adopted in the invention yields a compact torso ellipse that can be effectively distinguished from the minimum enclosing ellipse of the whole-body contour. A sketch of this refinement is given below.
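A minimal sketch of the torso contour refinement and ellipse fit, assuming the torso contour has already been isolated from the foreground mask. The approxPolyDP tolerance and the number of midpoint passes are illustrative assumptions, and the midpoint step is one plausible reading of "connecting each side of the polygon to its midpoint".

```python
import cv2
import numpy as np

def refine_and_fit_torso(torso_contour, rounds=4):
    """Midpoint-smooth a torso contour, then fit an ellipse to it.

    torso_contour: contour points as an (N, 1, 2) int array, e.g. one
                   result of cv2.findContours on the torso mask.
    rounds:        number of midpoint passes (an even number, per the patent).
    """
    # Polygon fitting of the raw torso contour.
    poly = cv2.approxPolyDP(torso_contour, 3.0, True).reshape(-1, 2)
    poly = poly.astype(np.float32)
    # Replace the vertices by the midpoints of consecutive sides; each
    # pass rounds the corners so the contour tends toward an ellipse.
    for _ in range(rounds):
        poly = (poly + np.roll(poly, -1, axis=0)) / 2.0
    # Final compact elliptical representation (requires >= 5 points).
    (cx, cy), (major, minor), angle = cv2.fitEllipse(poly)
    return (cx, cy), (major, minor), angle
```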
Step C: After fitting the head and torso with two ellipses, an appropriate feature representation is extracted to represent the human motion. In conventional methods that detect falls using geometric features, elliptical features represent human motion better than bounding boxes. The invention therefore extracts three elliptical features from each of the head ellipse and the torso ellipse. The extracted features comprise contour features and a velocity feature. The contour features are the tilt angle θ of the ellipse and the ratio ρ of its major axis to its minor axis; when the person's motion changes, θ and ρ change accordingly. Once a fall occurs, the velocity in the vertical direction changes rapidly, so the vertical velocity of the ellipse centroid is extracted as Formula 1:
v_v = fps × √((x_n − x_{n−1})² + (y_n − y_{n−1})²) × sin θ    (Formula 1)
where v_v represents the velocity in the vertical direction; (x_{n−1}, y_{n−1}) are the coordinates of the ellipse center in frame n−1; (x_n, y_n) are the coordinates of the center in frame n; fps is the number of frames per second; and sin θ is the sine of the ellipse tilt angle. A feature extraction diagram is shown in FIG. 7: in panel (b), a and b represent the major and minor axes of the ellipse, θ the tilt angle, and v_v the velocity perpendicular to the center of the elliptical contour. These six extracted features are then integrated into a time-series-based motion feature, shown in FIG. 8. A sketch of the per-frame feature computation follows.
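A minimal Python sketch of the per-frame feature extraction. The ellipse tuple format follows cv2.fitEllipse, and the vertical-velocity computation follows Formula 1 as reconstructed above; both are assumptions rather than code from the patent.

```python
import math

def ellipse_features(ellipse, prev_center, fps):
    """Extract (rho, theta, v_v) from one fitted ellipse.

    ellipse:     ((cx, cy), (major, minor), angle_deg), as returned
                 by cv2.fitEllipse.
    prev_center: (x, y) centroid of the same ellipse in the previous frame.
    fps:         frames per second of the video.
    """
    (cx, cy), (major, minor), angle_deg = ellipse
    rho = major / minor                # ratio of major to minor axis
    theta = math.radians(angle_deg)    # tilt (orientation) angle
    # Formula 1: centroid displacement per second, projected by sin(theta).
    dx, dy = cx - prev_center[0], cy - prev_center[1]
    v_v = fps * math.hypot(dx, dy) * math.sin(theta)
    return rho, theta, v_v

# Per frame this yields six features: (rho, theta, v_v) for the head
# ellipse and for the torso ellipse, which are stacked over frames
# into the time-series motion feature of FIG. 8.
```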
Step D: To detect falls by finding the correlation between the head ellipse and the torso ellipse, the invention uses deep learning to learn that correlation. Deeper architectures can in theory learn more abstract features, but they often cause overfitting and high computational complexity. To balance the two, the invention learns the correlation between the two ellipses from the extracted motion features using a shallow CNN architecture, which does not require a large number of training samples, is simple, and produces accurate classification results.
The main components of a CNN are as follows:
and (3) rolling layers: the input original image is convolved with a plurality of trainable filters (or convolution kernels) and an addable offset vector to obtain a plurality of mapping feature maps.
A pooling layer: usually after the convolutional layer, to perform down-sampling to reduce the dimensionality of the features. The most traditional two pooling methods are maximum pooling and mean pooling.
Full connection layer: after the original image is processed through multiple convolutional and pooling layers, the output features are compressed into one-dimensional vectors and used for classification. In this layer, other features may be added to this one-dimensional vector.
In the present invention, a shallow CNN architecture is used to detect falls. The architecture is shown in FIG. 9, where the CNN trains on and learns the time-series motion features. Specifically, feature maps are first learned from three partitions of the time series in the convolutional layer, using 196 filters of size 1×12, to obtain a rich feature representation; there is only one convolutional layer. Then, after applying the ReLU activation function to the 196 feature maps, the dimensionality is reduced by a factor of four with a max pooling layer of size 1×4. The pooled feature maps are flattened and concatenated with some statistical features (e.g., mean values), and the fully connected layer produces 1024 features. Finally, the features from the fully connected layer are classified with a softmax function. The model is trained to minimize a cross-entropy loss function augmented with L2-norm regularization of the CNN weights; a back-propagation algorithm computes the gradients, and the network parameters are optimized with a modified stochastic gradient descent method. A sketch of such an architecture is given below.
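A minimal Keras sketch of a shallow CNN of this shape. The sequence length, the size of the statistical-feature branch, and the optimizer settings are illustrative assumptions; only the 196 filters of size 12, the pooling factor of 4, the 1024-unit fully connected layer, the softmax output, and the L2-regularized cross-entropy loss follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import Model, layers, regularizers

SEQ_LEN = 64   # assumed number of frames per time-series sample
N_FEAT = 6     # three features per ellipse, head + torso
N_STATS = 12   # assumed number of appended statistical features

# Time-series motion feature: six channels over the time axis.
motion_in = layers.Input(shape=(SEQ_LEN, N_FEAT))
# Single convolutional layer: 196 temporal filters of size 12 with
# L2 regularization on the weights, followed by ReLU.
x = layers.Conv1D(196, 12, padding="same", activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4))(motion_in)
# Max pooling of size 4 reduces the temporal dimension by a factor of 4.
x = layers.MaxPooling1D(4)(x)
x = layers.Flatten()(x)

# Statistical features (e.g., mean values) appended after flattening.
stats_in = layers.Input(shape=(N_STATS,))
x = layers.Concatenate()([x, stats_in])

# Fully connected layer producing 1024 features, then a two-class
# (fall / non-fall) softmax output.
x = layers.Dense(1024, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4))(x)
out = layers.Dense(2, activation="softmax")(x)

model = Model([motion_in, stats_in], out)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01,
                                                momentum=0.9),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```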
Step E: The experimental platform was a notebook computer with a 1.9 GHz Intel(R) i5-4300U CPU and 4 GB of RAM. To test the CNN architecture of the invention, falls and normal daily activities were simulated to collect a large number of video samples for training and testing. Multiple monocular cameras were used to take 102 short videos from different perspectives and heights, of which 74 are training videos and 28 are test videos. The videos include normal activities such as lying down, walking, squatting, and sitting down; the test data set contains 30 simulated fall activities and 28 normal activities. FIG. 10 shows the different normal activities and simulated fall scenarios.
Sufficient training and test samples were collected from the self-collected data set; a detailed description of the experimental samples is shown in FIG. 11: the training data set contains 14284 positive sample images and 18614 negative sample images, and the test data set contains 4247 positive sample images and 5530 negative sample images. The six feature streams are fused into the time-series-based motion feature of FIG. 8, which is input into the CNN for training and testing. As shown in FIG. 12, the proposed method achieves a detection accuracy of 90.5% and a false alarm rate of 10.0% on the self-collected data set. Two pairs of similar activities are analyzed in FIGS. 13 and 14 to illustrate the detection principle of the method.
As shown in FIG. 13, panels (a) and (c) show a lateral fall and squatting down, two very similar activities. If only a single global geometric representation is used to detect falls, false positives easily result. When two ellipses are used to represent the head and torso, respectively, these similar activities can be effectively distinguished, as shown in panels (b) and (d), which plot the head and torso feature changes of the two activities. Thus, after training, the CNN learns the correlation between such similar activities and thereby detects falls effectively.
To further demonstrate the effectiveness of the approach, more extensive experiments were conducted to compare it with some classical methods. Three classical algorithms were implemented for the comparative experiments: bounding box ratio analysis, ellipse shape analysis, and Chua's method. Chua's method uses three points to represent the human body and extracts features from them to detect falls; the bounding box ratio analysis method uses the aspect ratio of the human bounding box; and the ellipse shape analysis method fuses elliptical features with motion history images. FIG. 15 shows the results on the self-collected data set: the proposed method achieves a detection accuracy of 90.5% and a false alarm rate of 10.0%, a higher accuracy and a lower false alarm rate than the other traditional geometric-feature methods.

Claims (8)

1. A fall detection method based on head segmentation and a convolutional neural network, characterized in that the method comprises the following steps:
step A, collecting video image information of a target object;
step B, processing the collected images and representing the head and torso with two different ellipses;
step C, extracting three features (the major-to-minor axis ratio, the orientation angle, and the vertical velocity) from each of the two ellipses in every frame, and fusing them into a time-series-based motion feature;
step D, performing fall detection by finding the correlation between the head and torso contour features, thereby detecting falls and distinguishing similar activities;
and step E, analyzing the experimental results.
2. The method of claim 1, characterized in that the image processing in step B comprises the following steps B1, B2 and B3:
step B1, after training the background model, distinguishing background from foreground, extracting the foreground and suppressing shadows;
step B2, after pre-positioning the head, segmenting it by selecting the candidate model that maximizes the similarity function and obtaining the average displacement vector of the target model;
step B3, fitting the torso using ellipse fitting.
3. The method of claim 2, characterized in that the holes and noise produced in step B1 are removed by morphological operations such as dilation and erosion.
4. The method of claim 2, characterized in that background and foreground are distinguished in step B1 by a Gaussian mixture model.
5. The method of claim 2, characterized in that the head is segmented in step B2 using a mean-shift tracking method.
6. The method of claim 2, characterized in that in step B3, after polygon fitting of the torso contour, each side of the polygon is connected to its midpoint, and this is repeated an even number of times so that the shape of the torso contour tends toward an ellipse.
7. The method of claim 1, characterized in that step D adopts a shallow CNN architecture for fall detection analysis.
8. The method of claim 1, characterized in that in step E, the fall detection rate and false alarm rate obtained by the method are compared with those of a bounding box ratio analysis method, an ellipse shape analysis method, and Chua's method.
CN201911323121.9A 2019-12-20 2019-12-20 Fall detection method based on head segmentation and convolutional neural network Pending CN111160179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911323121.9A CN111160179A (en) 2019-12-20 2019-12-20 Fall detection method based on head segmentation and convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911323121.9A CN111160179A (en) 2019-12-20 2019-12-20 Fall detection method based on head segmentation and convolutional neural network

Publications (1)

Publication Number Publication Date
CN111160179A (en) 2020-05-15

Family

ID=70557500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911323121.9A Pending CN111160179A (en) Fall detection method based on head segmentation and convolutional neural network

Country Status (1)

Country Link
CN (1) CN111160179A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102436636A (en) * 2010-09-29 2012-05-02 中国科学院计算技术研究所 Method and system for segmenting hair automatically
US20170061763A1 (en) * 2011-04-04 2017-03-02 Alarm.Com Incorporated Fall detection and reporting technology
US20180336773A1 (en) * 2011-04-04 2018-11-22 Alarm.Com Incorporated Fall detection and reporting technology
CN102521563A (en) * 2011-11-19 2012-06-27 江苏大学 Method for indentifying pig walking postures based on ellipse fitting
US20160210562A1 (en) * 2013-09-30 2016-07-21 Kun Hu Method and system for building a human fall detection model
CN105046246A (en) * 2015-08-31 2015-11-11 广州市幸福网络技术有限公司 Identification photo camera capable of performing human image posture photography prompting and human image posture detection method
CN106127148A (en) * 2016-06-21 2016-11-16 华南理工大学 A kind of escalator passenger's unusual checking algorithm based on machine vision
CN107133604A (en) * 2017-05-25 2017-09-05 江苏农林职业技术学院 A kind of pig abnormal gait detection method based on ellipse fitting and predictive neutral net
WO2019036805A1 (en) * 2017-08-22 2019-02-28 Orpyx Medical Technologies Inc. Method and system for activity classification
CN107423730A (en) * 2017-09-20 2017-12-01 湖南师范大学 A kind of body gait behavior active detecting identifying system and method folded based on semanteme

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JOKANOVIĆ B: "Fall Detection Using Deep Learning in Range-Doppler Radars", IEEE Transactions on Aerospace & Electronic Systems *
曾星 et al.: "基于深度图像的嵌入式人体坐姿检测系统的实现" (Implementation of an embedded human sitting-posture detection system based on depth images), 《计算机测量与控制》 (Computer Measurement & Control) *
邓志锋: "一种基于CNN和人体椭圆轮廓运动特征的摔倒检测方法" (A fall detection method based on CNN and elliptical contour motion features of the human body), 《图学学报》 (Journal of Graphics) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762002A (en) * 2020-10-14 2021-12-07 天翼智慧家庭科技有限公司 Method and apparatus for detecting human falls

Similar Documents

Publication Publication Date Title
CN107832672B (en) Pedestrian re-identification method for designing multi-loss function by utilizing attitude information
WO2020108362A1 (en) Body posture detection method, apparatus and device, and storage medium
Kamal et al. A hybrid feature extraction approach for human detection, tracking and activity recognition using depth sensors
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
CN108062525B (en) Deep learning hand detection method based on hand region prediction
CN102324025B (en) Human face detection and tracking method based on Gaussian skin color model and feature analysis
CN104680559B (en) The indoor pedestrian tracting method of various visual angles based on motor behavior pattern
CN106023257A (en) Target tracking method based on rotor UAV platform
CN107808376B (en) Hand raising detection method based on deep learning
CN114187665B (en) Multi-person gait recognition method based on human skeleton heat map
CN110298297A (en) Flame identification method and device
WO2013075295A1 (en) Clothing identification method and system for low-resolution video
CN106529441B (en) Depth motion figure Human bodys' response method based on smeared out boundary fragment
Fang et al. Partial attack supervision and regional weighted inference for masked face presentation attack detection
CN107886057B (en) Robot hand waving detection method and system and robot
Chan et al. A 3-D-point-cloud system for human-pose estimation
Ali et al. Deep Learning Algorithms for Human Fighting Action Recognition.
Nguyen et al. Combined YOLOv5 and HRNet for high accuracy 2D keypoint and human pose estimation
CN114170686A (en) Elbow bending behavior detection method based on human body key points
Pathak et al. A framework for dynamic hand gesture recognition using key frames extraction
CN111160179A (en) Tumble detection method based on head segmentation and convolutional neural network
CN107886060A (en) Pedestrian's automatic detection and tracking based on video
Radwan et al. Regression based pose estimation with automatic occlusion detection and rectification
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
CN103020631A (en) Human movement identification method based on star model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200515)