WO2020107847A1 - Fall detection method based on bone points and associated fall detection device - Google Patents

Fall detection method based on bone points and associated fall detection device

Info

Publication number
WO2020107847A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
neural network
points
layer
behavior
Prior art date
Application number
PCT/CN2019/089500
Other languages
English (en)
Chinese (zh)
Inventor
周涛涛
周宝
陈远旭
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2020107847A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1116 Determining posture transitions
    • A61B5/1117 Fall detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb, using a particular sensing technique
    • A61B5/1128 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb, using a particular sensing technique using image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Definitions

  • the present application relates to the field of machine vision deep learning technology, and in particular to a fall detection method, device, computer equipment, and storage medium based on bone points.
  • Existing fall detection methods mainly include fall detection based on wearable devices, fall detection based on depth cameras, and fall detection based on ordinary cameras.
  • The method based on wearable devices requires the device to be carried at all times, which causes great inconvenience to users and has little practical value; the method based on depth cameras is expensive and difficult to promote in practice; and the method based on ordinary cameras is cheap and convenient to use, but places higher demands on the algorithm.
  • the purpose of this application is to provide a fall detection method, device, computer equipment and storage medium based on bone points, which are used to solve the problems in the prior art.
  • the present application provides a bone point-based fall detection method, including the following steps:
  • Train a first feature extraction neural network through a first picture sample, where the first feature extraction neural network is used to extract a plurality of first feature points in the first picture sample, and the first feature points represent key bone points on the human body;
  • Input a second video sample into the trained first feature extraction neural network to obtain a plurality of second feature points characterizing key bone points of the human body in the second video sample;
  • Encode the plurality of second feature points to generate a predicted feature map characterizing the distribution of the plurality of second feature points;
  • Train a second behavior classification neural network through the predicted feature map, where the second behavior classification neural network is used to classify the behavior represented in the predicted feature map; and
  • Sequentially input the video data of the monitored object into the trained first feature extraction neural network and the second behavior classification neural network to output the behavior category of the monitored object.
  • the present application also proposes a fall detection device based on bone points, including:
  • The first neural network training module is adapted to train a first feature extraction neural network through a first picture sample; the first feature extraction neural network is used to extract multiple first feature points in the first picture sample, and the first feature points represent the key bone points on the human body;
  • the feature point extraction module is adapted to input the second video sample into the trained first feature extraction neural network to obtain a plurality of second feature points characterizing key bone points of the human body in the second video sample;
  • a feature map generation module adapted to encode the multiple second feature points to generate a predicted feature map characterizing the distribution of the multiple second feature points
  • a second neural network training module adapted to train a second behavior classification neural network through the predicted feature map, and the second behavior classification neural network is used to classify the behavior represented in the predicted feature map;
  • the classification module is adapted to sequentially input the video data of the monitored object into the trained first feature extraction neural network and the second behavior classification neural network to output the behavior category of the monitored object.
  • The present application also provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the following steps are implemented:
  • Train a first feature extraction neural network through a first picture sample, where the first feature extraction neural network is used to extract a plurality of first feature points in the first picture sample, and the first feature points represent key bone points on the human body;
  • Input a second video sample into the trained first feature extraction neural network to obtain a plurality of second feature points characterizing key bone points of the human body in the second video sample;
  • Encode the plurality of second feature points to generate a predicted feature map characterizing the distribution of the plurality of second feature points;
  • Train a second behavior classification neural network through the predicted feature map, where the second behavior classification neural network is used to classify the behavior represented in the predicted feature map; and
  • Sequentially input the video data of the monitored object into the trained first feature extraction neural network and the second behavior classification neural network to output the behavior category of the monitored object.
  • the present application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by the processor, the following steps are realized:
  • Train a first feature extraction neural network through a first picture sample, where the first feature extraction neural network is used to extract a plurality of first feature points in the first picture sample, and the first feature points represent key bone points on the human body;
  • Input a second video sample into the trained first feature extraction neural network to obtain a plurality of second feature points characterizing key bone points of the human body in the second video sample;
  • Encode the plurality of second feature points to generate a predicted feature map characterizing the distribution of the plurality of second feature points;
  • Train a second behavior classification neural network through the predicted feature map, where the second behavior classification neural network is used to classify the behavior represented in the predicted feature map; and
  • Sequentially input the video data of the monitored object into the trained first feature extraction neural network and the second behavior classification neural network to output the behavior category of the monitored object.
  • This application addresses the problem of insufficient fall detection data in the prior art by using other data to train the human bone point feature extraction neural network, and addresses the problem of detecting fall behavior from raw frame information by using bone point information to classify fall behavior.
  • This application trains the first feature extraction neural network on an image sample library to extract key bone point information of the human body, and trains the second behavior classification neural network on a video sample library to judge, based on the extracted key bone point information, whether the human movement in the video is a fall.
  • the bone point information of the monitored object can be accurately extracted, and according to the bone point information, it can be judged in time whether the monitored object has fallen down.
  • This makes it possible to provide timely and effective care for elderly and disabled persons, which helps improve people's quality of life.
  • FIG. 1 is a flowchart of Embodiment 1 of a fall detection method based on bone points of the present application
  • FIG. 2 is a schematic structural diagram of a first feature extraction neural network in Embodiment 1 of the present application.
  • FIG. 3 is a schematic structural diagram of a second behavior classification neural network in Embodiment 1 of the present application.
  • FIG. 4 is a schematic diagram of a program module of a first embodiment of a fall detection device based on a bone point according to this application;
  • FIG. 5 is a schematic diagram of the hardware structure of the first embodiment of the computer device of the present application.
  • the fall detection method, device, computer equipment and storage medium provided by the present application are applicable to the field of machine vision technology, and provide a fall detection method and device for the elderly or disabled persons living alone to detect fall behavior in time.
  • This application trains the first feature extraction neural network on an image sample library to extract key bone point information of the human body, and trains the second behavior classification neural network on a video sample library to judge, based on the extracted key bone point information, whether the human movement in the video is a fall.
  • The first feature extraction neural network and the second behavior classification neural network trained by this application can accurately extract the bone point information of the monitored object and determine in time, according to the bone point information, whether the monitored object has fallen, which helps greatly improve people's quality of life.
  • a fall detection method based on bone points in this embodiment includes the following steps:
  • S1 Train a first feature extraction neural network through a first picture sample: the first picture sample is selected from the picture sample library to train the first feature extraction neural network.
  • the first picture sample is preferably a full-body picture of the person.
  • The first picture samples are divided into training picture samples and test picture samples, where the training picture samples are used to train the first feature extraction neural network, and the test picture samples are used to verify how well the trained first feature extraction neural network extracts the feature information in a picture.
  • The above training picture samples and test picture samples may be subjected to data-enhancement preprocessing, such as performing contrast transformation and brightness transformation on each sample, adding local random Gaussian noise, and performing uniform normalization, thereby obtaining data-enhanced training picture samples and test picture samples.
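  • As an illustration only, such preprocessing could be expressed with torchvision transforms as in the sketch below; the jitter strengths, noise level, and normalization statistics are assumptions rather than values taken from the application, and the Gaussian noise is added over the whole image rather than locally for simplicity.

```python
import torch
import torchvision.transforms as T

# Sketch of the data-enhancement preprocessing described above (assumed parameters).
augment = T.Compose([
    T.ColorJitter(brightness=0.3, contrast=0.3),          # brightness and contrast transformation
    T.ToTensor(),
    T.Lambda(lambda x: x + 0.01 * torch.randn_like(x)),   # random Gaussian noise
    T.Normalize(mean=[0.485, 0.456, 0.406],                # uniform normalization
                std=[0.229, 0.224, 0.225]),
])
```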
  • the structure of the first feature extraction neural network in this step will be described in detail below with a test picture sample as an example, as shown in FIG. 2.
  • the test picture sample first enters the feature extraction module to extract the features in the test picture sample.
  • the feature extraction module in this embodiment uses a ResNet residual network to ensure better feature extraction performance.
  • After the test picture sample passes through the ResNet residual network, first extracted data D1 is obtained; the first extracted data D1 then enters four convolution modules with different expansion coefficients respectively, yielding four pieces of second extracted data D2 with different feature channels.
  • The four pieces of second extracted data D2 with different feature channels are combined and enter a first convolutional layer stacked from residual modules, yielding four pieces of third extracted data D3 with different receptive fields.
  • After the four pieces of third extracted data D3 with different receptive fields are fused, the result enters a second convolutional layer stacked from residual modules, and finally a plurality of first feature points representing key bone points on the human body are output.
  • The convolution module includes the following layers in sequence: a convolution layer, a batch normalization layer, a ReLU activation function layer, a convolution layer, a batch normalization layer, a ReLU activation function layer, and a pooling layer, where the convolutional layers of the different convolution modules have different expansion coefficients.
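  • To make the structure above concrete, a minimal PyTorch sketch is given below. It follows only the overall flow described in the text (ResNet features D1, four dilated convolution branches D2, fusion through further convolutional layers, one output map per key bone point); the ResNet-18 backbone, channel widths, and the expansion (dilation) coefficients 1, 2, 4 and 8 are illustrative assumptions, since the application does not specify them here.

```python
import torch
import torch.nn as nn
import torchvision

class ConvModule(nn.Module):
    """Convolution module as described above: conv, BN, ReLU, conv, BN, ReLU, pooling,
    with a configurable expansion (dilation) coefficient."""
    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
    def forward(self, x):
        return self.block(x)

class FeatureExtractionNet(nn.Module):
    """Sketch of the first feature extraction network: ResNet features (D1), four dilated
    branches (D2), fusion through further convolutional layers, one map per bone point."""
    def __init__(self, num_keypoints=14, dilations=(1, 2, 4, 8)):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # D1: ResNet features
        self.branches = nn.ModuleList(
            [ConvModule(512, 128, d) for d in dilations]              # D2: four dilated branches
        )
        self.fuse = nn.Sequential(                                    # fusion and output head
            nn.Conv2d(128 * len(dilations), 256, 3, padding=1),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_keypoints, 1),                         # one heatmap per bone point
        )
    def forward(self, x):                     # x: (N, 3, H, W) picture sample
        d1 = self.backbone(x)
        d2 = torch.cat([branch(d1) for branch in self.branches], dim=1)
        return self.fuse(d2)
```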
  • the feature information is the bone feature points on the human body, including the feature points at the main joints of the body, such as the elbow joint, shoulder joint, knee joint, hip joint, etc.
  • the target feature point associated with the preset behavior can be further selected from the bone feature points.
  • the preset behavior may be squatting, bending over, standing up, falling, etc.
  • The feature points that undergo displacement may differ between behaviors, so the target feature points that best reflect the characteristics of a given behavior can be selected according to the behavior to be detected.
  • In this embodiment, a total of 14 bone points, including the head, neck, shoulders, elbows, hands, hips, knees, and feet, are selected as target feature points.
  • On the one hand, this selection keeps the number of bone feature points as small as possible to reduce the amount of calculation in the subsequent behavior analysis; on the other hand, the above 14 target feature points are evenly distributed at the major joints of the human body and can reflect the overall trend of human behavior.
  • the positions of the bone points listed above are only used as examples, and are not used to limit specific feature point information.
  • Points may also be removed from or added to the above bone point information, or the locations of specific feature points in the human body may be changed; this application does not limit this.
  • The plurality of first feature points in this embodiment may preferably be represented as a distribution map of the above bone points marked on the human body.
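  • For use in the later code sketches, the 14 target feature points listed above can be given a fixed index order; the naming and ordering below are assumptions for illustration, not values taken from the application.

```python
# Illustrative ordering of the 14 target bone points (head, neck, two shoulders,
# two elbows, two hands, two hips, two knees, two feet); the index assignment is assumed.
KEYPOINTS = [
    "head", "neck",
    "left_shoulder", "right_shoulder",
    "left_elbow", "right_elbow",
    "left_hand", "right_hand",
    "left_hip", "right_hip",
    "left_knee", "right_knee",
    "left_foot", "right_foot",
]
KEYPOINT_INDEX = {name: i for i, name in enumerate(KEYPOINTS)}
assert len(KEYPOINTS) == 14
```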
  • During training, x_p and y_p represent the predicted coordinates of a first feature point extracted by the first feature extraction neural network, and x_g and y_g represent the actual coordinates of that first feature point.
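  • The expression of this loss function is not reproduced in the text above. A common form consistent with the symbol definitions, and therefore only an assumption here, is the squared error between predicted and actual coordinates summed over the N first feature points:

```latex
L(X, Y) = \sum_{i=1}^{N} \left[ (x_{p,i} - x_{g,i})^{2} + (y_{p,i} - y_{g,i})^{2} \right]
```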
  • S2 Input the second video sample into the trained first feature extraction neural network to obtain a plurality of second feature points that represent key bone points of the human body in the second video sample.
  • this step uses the trained first feature extraction neural network to extract the second feature point in the video sample.
  • The second feature points are preferably the 14 bone feature points mentioned above.
  • The present application performs fall detection based on video of the monitored person collected by an ordinary camera. Therefore, the object of feature point extraction in this step is a continuous video rather than a single picture. Since a video is formed by a series of picture frames changing over time, the video first needs to be sampled to extract target pictures; for example, the video is sampled at 20 frames per second, with 3 seconds taken as one sample. At the same time, in order to generate diverse samples, the starting frame can be randomly selected near the starting point of the behavior in the video.
  • the feature point information in the target pictures can be extracted through the first feature extraction neural network, preferably the 14 bone feature points mentioned above.
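  • A sketch of this sampling step is shown below using OpenCV; the function name and its defaults (20 frames per second, 3-second samples, a randomly chosen start frame) mirror the example above, and the sketch simply assumes the source video is already recorded at the sampling frame rate.

```python
import random
import cv2  # OpenCV, assumed available for reading video files

def sample_clip(video_path, fps=20, seconds=3, start_frame=None):
    """Extract fps * seconds consecutive frames from a video as one sample.

    If start_frame is None, the start is chosen at random so that several
    different samples can be drawn from the same video.
    """
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    need = fps * seconds
    if start_frame is None:
        start_frame = random.randint(0, max(0, total - need))
    cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
    frames = []
    for _ in range(need):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames  # e.g. 60 frames (H x W x 3 arrays) for one 3-second sample
```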
  • S3 Encoding the plurality of second feature points to generate a predicted feature map.
  • This step is used to process the extracted second feature points to obtain a predicted feature map. Taking the above 14 bone feature points as an example, the following processing steps are included:
  • any two feature points from the 14 bone feature points are paired, and the calculation formula is as follows:
  • x_it and y_it respectively represent the horizontal and vertical coordinates of the i-th second feature point at time t; l_ijt represents the Euclidean distance between the i-th and j-th second feature points at time t; v_xit represents the velocity of the i-th second feature point in the x direction at time t, and v_yit represents its velocity in the y direction.
  • The vectors calculated above are arranged in the order of the image frames to obtain a matrix; this matrix is the predicted feature map.
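  • A sketch of this encoding step is given below. The exact formulas are not reproduced in the text, so the pairwise Euclidean distance and the frame-to-frame velocity used here are assumptions consistent with the symbol definitions above, and the rows of the returned matrix follow the frame order as described.

```python
import numpy as np

def encode_feature_map(keypoints):
    """Encode a sequence of key bone points into a predicted feature map.

    keypoints: array of shape (T, 14, 2) holding (x, y) for each of the 14
    bone points over T frames. Returns an array of shape (T, F) where each
    row concatenates pairwise distances and per-point velocities for frame t.
    """
    keypoints = np.asarray(keypoints, dtype=np.float32)
    T, N, _ = keypoints.shape
    rows = []
    for t in range(T):
        pts = keypoints[t]
        # Pairwise Euclidean distances l_ijt for all i < j (assumed formula).
        dists = [np.linalg.norm(pts[i] - pts[j])
                 for i in range(N) for j in range(i + 1, N)]
        # Velocities v_xit, v_yit as frame-to-frame differences (assumed formula).
        prev = keypoints[t - 1] if t > 0 else pts
        vel = (pts - prev).reshape(-1)
        rows.append(np.concatenate([dists, vel]))
    return np.stack(rows)  # (T, 91 + 28) for 14 points: the predicted feature map
```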
  • S4 Train a second behavior classification neural network through the predicted feature map. The purpose of this step is to train the second behavior classification neural network to classify the behavior represented in the predicted feature map and determine whether a fall has occurred.
  • the structure of the second behavior classification neural network in this application is shown in FIG. 3, which will be described in detail below.
  • The predicted feature map first passes through a conventional convolution module to obtain first classification data R1. The first classification data R1 then respectively passes through four convolution modules with different expansion coefficients to obtain four pieces of second classification data R2 with different feature channels; preferably, the expansion coefficients of the four convolution modules are 1, 3, 6 and 12, respectively. Next, the four pieces of second classification data R2 with different feature channels are combined and then sequentially passed through three conventional convolution modules, and finally a behavior classification is output, which is used to judge which behavior category the behavior represented in the above predicted feature map belongs to.
  • the convolution module includes the following layers in sequence: a convolution layer, a batch normalization layer, a Relu activation function layer, a convolution layer, a batch normalization layer, a Relu activation function layer, and a pooling layer.
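  • A minimal PyTorch sketch of this classification network follows; the expansion (dilation) coefficients 1, 3, 6 and 12 are taken from the description above, while the channel widths and the default number of behavior categories are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_module(in_ch, out_ch, dilation=1):
    """Convolution module as described: conv, BN, ReLU, conv, BN, ReLU, pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class BehaviorClassificationNet(nn.Module):
    """Sketch of the second behavior classification network: one conventional convolution
    module (R1), four dilated branches (R2), fusion, three further convolution modules,
    and a classifier over the behavior categories."""
    def __init__(self, in_ch=1, num_classes=6, dilations=(1, 3, 6, 12)):
        super().__init__()
        self.stem = conv_module(in_ch, 64)                          # yields R1
        self.branches = nn.ModuleList(
            [conv_module(64, 64, d) for d in dilations]             # yields the four R2
        )
        self.tail = nn.Sequential(                                  # three conventional modules
            conv_module(64 * len(dilations), 128),
            conv_module(128, 128),
            conv_module(128, 128),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(128, num_classes))
    def forward(self, x):            # x: predicted feature map as a (N, 1, T, F) tensor
        r1 = self.stem(x)
        r2 = torch.cat([branch(r1) for branch in self.branches], dim=1)
        return self.head(self.tail(r2))
```

  • For a predicted feature map covering 60 frames with 119 columns per frame (91 pairwise distances plus 28 velocity components for 14 points), the input tensor would have shape (1, 1, 60, 119).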
  • The second behavior classification neural network is trained with the loss function L_H(X, Y), whose specific expression is as follows:
  • x_k represents the parameter value of the k-th behavior category, and z_k represents the predicted probability of the k-th behavior category.
  • In this embodiment, the second behavior classification neural network can recognize categories such as squatting, standing, waving, bending, falling, and lying down, and each behavior corresponds to its own parameter value. For example, when the monitored person is falling, x_k represents the parameter value of the monitored person's falling behavior, and z_k represents the predicted probability of the monitored person's falling behavior.
  • This embodiment adds an L2 regularization term to the loss function to prevent overfitting.
  • the resulting cost function is as follows:
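  • The expressions of L_H(X, Y) and of the regularized cost function are not reproduced in the text above. A standard formulation consistent with the symbol definitions, and therefore only an assumption here, is a softmax cross-entropy with an L2 penalty on the network weights W:

```latex
z_k = \frac{e^{x_k}}{\sum_{j} e^{x_j}}, \qquad
L_H(X, Y) = -\sum_{k} y_k \log z_k, \qquad
J = L_H(X, Y) + \frac{\lambda}{2} \lVert W \rVert_2^{2}
```

  • Here y_k is 1 for the true behavior category and 0 otherwise, and λ controls the strength of the L2 regularization term.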
  • S5 Input the video data of the monitored object into the trained first feature extraction neural network and the second behavior classification neural network in order to output the behavior category of the monitored object.
  • Through the above steps, the present application can detect fall behavior in actual surveillance video.
  • the video information of the monitored object is photographed in real time through a common camera, and the video information is sampled to extract a certain number of target images.
  • the target image first undergoes a trained first feature extraction neural network to extract multiple feature points in the target image, such as bone feature points.
  • The multiple bone feature points are then calculated and combined: for example, the Euclidean distance between every two bone feature points and the velocity of each point in the x and y directions are calculated, and the resulting vectors are arranged in the order of the image frames to finally obtain the predicted feature map.
  • you can obtain the category to which the behavior included in the prediction feature map belongs, for example, whether it is a fall behavior.
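  • Chaining the earlier sketches together, the detection flow on live video could look like the following; sample_clip, encode_feature_map, FeatureExtractionNet and BehaviorClassificationNet refer to the illustrative components defined above, and the heatmap-to-coordinate step and the ordering of the behavior categories are assumptions.

```python
import torch

# Assumed ordering of categories; must match the training labels of the classifier.
BEHAVIORS = ["squatting", "standing", "waving", "bending", "falling", "lying down"]

def detect_fall(video_path, extractor, classifier):
    """End-to-end sketch: video -> key bone points -> predicted feature map -> behavior category."""
    frames = sample_clip(video_path)                           # sampled target pictures
    keypoints = []
    with torch.no_grad():
        for frame in frames:
            x = torch.from_numpy(frame).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            heatmaps = extractor(x)[0]                         # (14, H, W), one heatmap per bone point
            coords = [divmod(int(h.argmax()), h.shape[1]) for h in heatmaps]  # (row, col) of each peak
            keypoints.append([(col, row) for row, col in coords])             # store as (x, y)
        fmap = encode_feature_map(keypoints)                   # (T, F) predicted feature map
        fmap = torch.from_numpy(fmap).float()[None, None]      # add batch and channel dimensions
        probs = torch.softmax(classifier(fmap), dim=1)[0]
    category = BEHAVIORS[int(probs.argmax())]
    return category, category == "falling"
```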
  • The fall detection device 10 may include or be divided into one or more program modules, and the one or more program modules are stored in a storage medium and executed by one or more processors to complete this application and implement the above fall detection method.
  • the program module referred to in this application refers to a series of computer program instruction segments capable of performing specific functions, and is more suitable for describing the execution process of the fall detection device 10 in the storage medium than the program itself. The following description will specifically introduce the functions of the program modules of this embodiment:
  • the first neural network training module 11 is adapted to train a first feature extraction neural network through a first picture sample.
  • the first feature extraction neural network is used to extract multiple first feature points in the first picture sample.
  • the first feature point represents the key bone point on the human body;
  • the feature point extraction module 12 is adapted to input the second video sample into the trained first feature extraction neural network to obtain a plurality of second feature points characterizing key bone points of the human body in the second video sample;
  • the feature map generation module 13 is adapted to encode the multiple second feature points to generate a predicted feature map characterizing the distribution of the multiple second feature points;
  • the second neural network training module 14 is adapted to train a second behavior classification neural network through the predicted feature map, and the second behavior classification neural network is used to classify the behavior represented in the predicted feature map;
  • the classification module 15 is adapted to sequentially input the video data of the monitored object into the trained first feature extraction neural network and the second behavior classification neural network to output the behavior category of the monitored object.
  • This embodiment also provides a computer device, such as a smartphone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server, a tower server, or a cabinet server (including an independent server, or a server cluster composed of multiple servers), etc.
  • the computer device 20 of this embodiment includes at least but not limited to: a memory 21 and a processor 22 that can be communicatively connected to each other through a system bus, as shown in FIG. 5. It should be noted that FIG. 5 only shows the computer device 20 having components 21-22, but it should be understood that it is not required to implement all the components shown, and that more or fewer components may be implemented instead.
  • the memory 21 (ie, readable storage medium) includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), Read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, etc.
  • the memory 21 may be an internal storage unit of the computer device 20, such as a hard disk or memory of the computer device 20.
  • The memory 21 may also be an external storage device of the computer device 20, such as a plug-in hard disk equipped on the computer device 20, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.
  • the memory 21 may also include both the internal storage unit of the computer device 20 and its external storage device.
  • the memory 21 is generally used to store the operating system and various application software installed in the computer device 20, such as the program code of the fall detection device 10 of the first embodiment.
  • the memory 21 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 22 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments.
  • the processor 22 is generally used to control the overall operation of the computer device 20.
  • the processor 22 is used to run the program code or process data stored in the memory 21, for example, to run the fall detection device 10, so as to implement the fall detection method of the first embodiment.
  • This embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, server, App store, etc., on which a computer program is stored; when the program is executed by a processor, the corresponding function is realized.
  • the computer-readable storage medium of this embodiment is used to store the fall detection device 10, and when executed by the processor, the fall detection method of the first embodiment is implemented.
  • Any process or method description in a flowchart or otherwise described herein may be understood to represent a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which the functions may not be performed in the order shown or discussed, including performing the functions in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as will be understood by those skilled in the art to which the embodiments of the present application belong.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Engineering & Computer Science (AREA)
  • Physiology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to a fall detection method based on bone points and an associated fall detection device. The method comprises: training a first feature extraction neural network through a first picture sample, the first feature extraction neural network being used to extract a plurality of first feature points representing key bone points of a human body; inputting a second video sample into the trained first feature extraction neural network to obtain a plurality of second feature points representing the key bone points of the human body in the second video sample; encoding the plurality of second feature points to generate a predicted feature map; training a second behavior classification neural network through the predicted feature map, the second behavior classification neural network being used to classify behaviors represented in the predicted feature map; and sequentially inputting video data of a monitored object into the trained first feature extraction neural network and the trained second behavior classification neural network to output a behavior category of the monitored object.
PCT/CN2019/089500 2018-11-28 2019-05-31 Procédé de détection de chute sur la base des points osseux et dispositif de détection de chute associé WO2020107847A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811433808.3 2018-11-28
CN201811433808.3A CN109492612A (zh) 2018-11-28 2018-11-28 基于骨骼点的跌倒检测方法及其跌倒检测装置

Publications (1)

Publication Number Publication Date
WO2020107847A1 true WO2020107847A1 (fr) 2020-06-04

Family

ID=65698053

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089500 WO2020107847A1 (fr) 2018-11-28 2019-05-31 Procédé de détection de chute sur la base des points osseux et dispositif de détection de chute associé

Country Status (2)

Country Link
CN (1) CN109492612A (fr)
WO (1) WO2020107847A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860312A (zh) * 2020-07-20 2020-10-30 上海汽车集团股份有限公司 一种驾乘环境调节方法和装置
CN112364695A (zh) * 2020-10-13 2021-02-12 杭州城市大数据运营有限公司 一种行为预测方法、装置、计算机设备和存储介质
CN112541576A (zh) * 2020-12-14 2021-03-23 四川翼飞视科技有限公司 Rgb单目图像的生物活体识别神经网络及其构建方法
CN114882596A (zh) * 2022-07-08 2022-08-09 深圳市信润富联数字科技有限公司 行为预警方法、装置、电子设备及存储介质
CN115546491A (zh) * 2022-11-28 2022-12-30 中南财经政法大学 一种跌倒报警方法、系统、电子设备及存储介质

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109492612A (zh) * 2018-11-28 2019-03-19 平安科技(深圳)有限公司 基于骨骼点的跌倒检测方法及其跌倒检测装置
CN110378245B (zh) * 2019-06-26 2023-07-21 平安科技(深圳)有限公司 基于深度学习的足球比赛行为识别方法、装置及终端设备
CN110276332B (zh) * 2019-06-28 2021-12-24 北京奇艺世纪科技有限公司 一种视频特征处理方法及装置
CN110532874B (zh) * 2019-07-23 2022-11-11 深圳大学 一种物体属性识别模型的生成方法、存储介质及电子设备
CN110633736A (zh) * 2019-08-27 2019-12-31 电子科技大学 一种基于多源异构数据融合的人体跌倒检测方法
SE1951443A1 (en) * 2019-12-12 2021-06-13 Assa Abloy Ab Improving machine learning for monitoring a person
CN111209848B (zh) * 2020-01-03 2023-07-21 北京工业大学 一种基于深度学习的实时跌倒检测方法
CN113792595A (zh) * 2021-08-10 2021-12-14 北京爱笔科技有限公司 目标行为检测方法、装置、计算机设备和存储介质
CN113712538A (zh) * 2021-08-30 2021-11-30 平安科技(深圳)有限公司 基于wifi信号的跌倒检测方法、装置、设备和存储介质
CN115661943B (zh) * 2022-12-22 2023-03-31 电子科技大学 一种基于轻量级姿态评估网络的跌倒检测方法
CN117238026B (zh) * 2023-07-10 2024-03-08 中国矿业大学 一种基于骨骼和图像特征的姿态重建交互行为理解方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220604A (zh) * 2017-05-18 2017-09-29 清华大学深圳研究生院 一种基于视频的跌倒检测方法
US20170316578A1 (en) * 2016-04-29 2017-11-02 Ecole Polytechnique Federale De Lausanne (Epfl) Method, System and Device for Direct Prediction of 3D Body Poses from Motion Compensated Sequence
CN107392131A (zh) * 2017-07-14 2017-11-24 天津大学 一种基于人体骨骼节点距离的动作识别方法
CN107784654A (zh) * 2016-08-26 2018-03-09 杭州海康威视数字技术股份有限公司 图像分割方法、装置及全卷积网络系统
CN108647776A (zh) * 2018-05-08 2018-10-12 济南浪潮高新科技投资发展有限公司 一种卷积神经网络卷积膨胀处理电路及方法
CN109492612A (zh) * 2018-11-28 2019-03-19 平安科技(深圳)有限公司 基于骨骼点的跌倒检测方法及其跌倒检测装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590099B (zh) * 2015-12-22 2019-02-01 中国石油大学(华东) 一种基于改进卷积神经网络的多人行为识别方法
CN108294759A (zh) * 2017-01-13 2018-07-20 天津工业大学 一种基于cnn眼部状态识别的驾驶员疲劳检测方法
CN108280455B (zh) * 2018-01-19 2021-04-02 北京市商汤科技开发有限公司 人体关键点检测方法和装置、电子设备、程序和介质
CN108717569B (zh) * 2018-05-16 2022-03-22 中国人民解放军陆军工程大学 一种膨胀全卷积神经网络装置及其构建方法
CN108776775B (zh) * 2018-05-24 2020-10-27 常州大学 一种基于权重融合深度及骨骼特征的老年人室内跌倒检测方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170316578A1 (en) * 2016-04-29 2017-11-02 Ecole Polytechnique Federale De Lausanne (Epfl) Method, System and Device for Direct Prediction of 3D Body Poses from Motion Compensated Sequence
CN107784654A (zh) * 2016-08-26 2018-03-09 杭州海康威视数字技术股份有限公司 图像分割方法、装置及全卷积网络系统
CN107220604A (zh) * 2017-05-18 2017-09-29 清华大学深圳研究生院 一种基于视频的跌倒检测方法
CN107392131A (zh) * 2017-07-14 2017-11-24 天津大学 一种基于人体骨骼节点距离的动作识别方法
CN108647776A (zh) * 2018-05-08 2018-10-12 济南浪潮高新科技投资发展有限公司 一种卷积神经网络卷积膨胀处理电路及方法
CN109492612A (zh) * 2018-11-28 2019-03-19 平安科技(深圳)有限公司 基于骨骼点的跌倒检测方法及其跌倒检测装置

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860312A (zh) * 2020-07-20 2020-10-30 上海汽车集团股份有限公司 一种驾乘环境调节方法和装置
CN112364695A (zh) * 2020-10-13 2021-02-12 杭州城市大数据运营有限公司 一种行为预测方法、装置、计算机设备和存储介质
CN112541576A (zh) * 2020-12-14 2021-03-23 四川翼飞视科技有限公司 Rgb单目图像的生物活体识别神经网络及其构建方法
CN112541576B (zh) * 2020-12-14 2024-02-20 四川翼飞视科技有限公司 Rgb单目图像的生物活体识别神经网络构建方法
CN114882596A (zh) * 2022-07-08 2022-08-09 深圳市信润富联数字科技有限公司 行为预警方法、装置、电子设备及存储介质
CN115546491A (zh) * 2022-11-28 2022-12-30 中南财经政法大学 一种跌倒报警方法、系统、电子设备及存储介质
CN115546491B (zh) * 2022-11-28 2023-03-10 中南财经政法大学 一种跌倒报警方法、系统、电子设备及存储介质

Also Published As

Publication number Publication date
CN109492612A (zh) 2019-03-19

Similar Documents

Publication Publication Date Title
WO2020107847A1 (fr) Procédé de détection de chute sur la base des points osseux et dispositif de détection de chute associé
Yang et al. Uncertainty-guided transformer reasoning for camouflaged object detection
CN109558832B (zh) 一种人体姿态检测方法、装置、设备及存储介质
Wang et al. A deep network solution for attention and aesthetics aware photo cropping
WO2021129064A1 (fr) Procédé et dispositif d'acquisition de posture, et procédé et dispositif de formation de modèle de positionnement de coordonnées de point clé
CN108205655B (zh) 一种关键点预测方法、装置、电子设备及存储介质
WO2021227726A1 (fr) Procédés et appareils d'apprentissage de détection de visage et réseaux neuronaux de détection d'image, et dispositif
WO2021238548A1 (fr) Procédé, appareil et dispositif de reconnaissance de région, et support de stockage lisible
CN111368672A (zh) 一种用于遗传病面部识别模型的构建方法及装置
WO2021218238A1 (fr) Procédé et appareil de traitement d'image
CN112766186B (zh) 一种基于多任务学习的实时人脸检测及头部姿态估计方法
CN110738650B (zh) 一种传染病感染识别方法、终端设备及存储介质
WO2022111387A1 (fr) Procédé de traitement de données et appareil associé
CN111738074B (zh) 基于弱监督学习的行人属性识别方法、系统及装置
Gupta et al. Advanced security system in video surveillance for COVID-19
CN116897012A (zh) 中医体质识别方法、装置、电子设备、存储介质及程序
CN111340213B (zh) 神经网络的训练方法、电子设备、存储介质
Zhou et al. A study on attention-based LSTM for abnormal behavior recognition with variable pooling
CN114724251A (zh) 一种在红外视频下基于骨架序列的老人行为识别方法
CN112381118B (zh) 一种大学舞蹈考试测评方法及装置
Liu et al. Key algorithm for human motion recognition in virtual reality video sequences based on hidden markov model
CN117037244A (zh) 人脸安全检测方法、装置、计算机设备和存储介质
Fan et al. EGFNet: Efficient guided feature fusion network for skin cancer lesion segmentation
CN112633224B (zh) 一种社交关系识别方法、装置、电子设备及存储介质
CN115187660A (zh) 一种基于知识蒸馏的多人人体姿态估计方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19889552

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19889552

Country of ref document: EP

Kind code of ref document: A1