CN109492612A - Fall detection method and its falling detection device based on skeleton point - Google Patents


Info

Publication number
CN109492612A
CN109492612A (application CN201811433808.3A)
Authority
CN
China
Prior art keywords
first feature
point
neural network
data
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811433808.3A
Other languages
Chinese (zh)
Inventor
周涛涛
周宝
陈远旭
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201811433808.3A priority Critical patent/CN109492612A/en
Publication of CN109492612A publication Critical patent/CN109492612A/en
Priority to PCT/CN2019/089500 priority patent/WO2020107847A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1116: Determining posture transitions
    • A61B5/1117: Fall detection
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126: Measuring movement of the entire body or parts thereof using a particular sensing technique
    • A61B5/1128: Measuring movement of the entire body or parts thereof using a particular sensing technique using image analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques

Abstract

The present invention provides a fall detection method and device based on skeleton points. The method includes: training a first feature-extraction neural network with first picture samples, the network being used to extract multiple first feature points characterizing key skeleton points of the human body; inputting second video samples into the trained first feature-extraction neural network to obtain multiple second feature points characterizing the key skeleton points of the human body in the second video samples; encoding the multiple second feature points to generate a prediction feature map; training a second behavior-classification neural network with the prediction feature map, the network being used to classify the behavior represented in the prediction feature map; and inputting the video data of a monitored target sequentially into the trained first feature-extraction neural network and the second behavior-classification neural network, so as to output the behavior category of the monitored target.

Description

Fall detection method and its falling detection device based on skeleton point
Technical field
The present invention relates to the field of machine vision and deep learning, and in particular to a fall detection method, device, computer equipment and storage medium based on skeleton points.
Background technique
As China becomes an aging society, elderly care is an increasingly serious problem. The physical functions of the elderly decline and their mobility decreases; deficits in balance, reaction speed and coordination in particular may lead to accidental falls. After an elderly person falls at home, the absence of timely help may even result in death. Fall detection for the elderly, at home or in other environments, is therefore a meaningful research question in computer vision and machine learning.
There are currently three main approaches to fall detection: fall detection based on wearable devices, fall detection based on depth cameras, and fall detection based on ordinary cameras. Wearable devices must be carried at all times, which is very inconvenient for users and limits their practical value; depth-camera methods are expensive and hard to deploy widely; ordinary-camera methods are cheap and convenient to use, but place higher demands on the algorithm.
Since ordinary cameras already cover most places, the hardware foundation is mature, and many methods for fall detection with ordinary cameras have been proposed: for example, classifying fall behavior directly from image-sequence information, or using a detection algorithm to classify changes in a person's bounding box. However, fall-detection data are currently scarce and limited to a single scene, so such methods do not transfer to diverse real scenes. Classification from raw image sequences cannot train a strong network with so little data. Bounding-box methods can exploit large external datasets to detect people effectively, but because box information is limited, classification based on it does not generalize well.
Summary of the invention
The object of the present invention is to provide a fall detection method, device, computer equipment and storage medium based on skeleton points, so as to solve the above problems of the prior art.
To achieve the above object, the present invention provides a kind of fall detection method based on skeleton point, comprising the following steps:
training a first feature-extraction neural network with first picture samples, the network being used to extract multiple first feature points in the first picture samples, the first feature points characterizing key skeleton points on the human body;
inputting second video samples into the trained first feature-extraction neural network to obtain multiple second feature points characterizing the key skeleton points of the human body in the second video samples;
encoding the multiple second feature points to generate a prediction feature map;
training a second behavior-classification neural network with the prediction feature map, the network being used to classify the behavior represented in the prediction feature map;
inputting the video data of a monitored target sequentially into the trained first feature-extraction neural network and the second behavior-classification neural network, so as to output the behavior category of the monitored target.
Further, training the first feature-extraction neural network with the first picture samples comprises:
feeding the first picture samples into a ResNet residual network to obtain first extracted data;
passing the first extracted data through multiple convolution modules with different dilation coefficients to obtain multiple second extracted data with different feature channels;
combining the multiple second extracted data and feeding them into a first convolutional layer stacked from residual modules, obtaining multiple third extracted data with different receptive fields;
fusing the multiple third extracted data and feeding them into a second convolutional layer stacked from residual modules, finally outputting multiple first feature points characterizing the key skeleton points on the human body;
performing back-propagation training on the first feature-extraction network with a first loss function.
Further, training the second behavior-classification neural network with the prediction feature map comprises:
passing the prediction feature map through a conventional convolution module to obtain first classification data;
passing the first classification data through multiple convolution modules with different dilation coefficients to obtain multiple second classification data with different feature channels;
combining the multiple second classification data and passing them sequentially through three conventional convolution modules, finally outputting the behavior category.
Further, the first loss function F is:

F = Σ [(x_p - x_g)² + (y_p - y_g)²]

where x_p and y_p denote the predicted coordinates of a first feature point extracted by the first feature-extraction neural network, x_g and y_g denote the actual coordinates of that first feature point, and the sum runs over all first feature points.
Further, the second loss function L is:

L = -Σ_k x_k · log(z_k) + L2

where x_k denotes the parameter value of the k-th behavior category, z_k denotes the predicted probability of the k-th behavior category, and L2 denotes a regularization term used to prevent over-fitting.
Further, each convolution module is formed by the following layers connected in series: a convolutional layer, a batch-normalization layer, a ReLU activation layer, a convolutional layer, a batch-normalization layer, a ReLU activation layer, and a pooling layer.
Further, encoding the multiple second feature points to generate the prediction feature map comprises:
pairing the multiple second feature points two by two;
calculating the distance and velocity between every two second feature points:

l_ijt = √((x_it - x_jt)² + (y_it - y_jt)²)
v_xit = x_it - x_i(t-1)
v_yit = y_it - y_i(t-1)

In the above formulas, x_it and y_it are the abscissa and ordinate of the i-th second feature point at time t; l_ijt is the Euclidean distance between the i-th and the j-th second feature points at time t; v_xit is the velocity of the i-th second feature point in the x direction at time t, and v_yit is its velocity in the y direction;
combining all the calculated distances and velocities to form the prediction feature map.
To achieve the above object, the present invention also provides a fall detection device based on skeleton points, comprising:
a first neural-network training module, adapted to train a first feature-extraction neural network with first picture samples, the network being used to extract multiple first feature points in the first picture samples, the first feature points characterizing key skeleton points on the human body;
a feature-point extraction module, adapted to input second video samples into the trained first feature-extraction neural network and obtain multiple second feature points characterizing the key skeleton points of the human body in the second video samples;
a feature-map generation module, adapted to encode the multiple second feature points and generate a prediction feature map characterizing the distribution of the multiple second feature points;
a second neural-network training module, adapted to train a second behavior-classification neural network with the prediction feature map, the network being used to classify the behavior represented in the prediction feature map;
a classification module, adapted to input the video data of a monitored target sequentially into the trained first feature-extraction neural network and the second behavior-classification neural network, so as to output the behavior category of the monitored target.
To achieve the above object, the present invention also provides computer equipment comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
To achieve the above object, the present invention also provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the steps of the above method when executed by a processor.
To address the shortage of fall-detection data in the prior art, the present invention trains the human-skeleton-point feature-extraction neural network on other data; to address the inadequacy of bounding-box information for detecting falls, it classifies fall behavior from skeleton-point information. The present invention trains the first feature-extraction neural network on a picture sample library to extract key skeleton-point information of the human body, and trains the second behavior-classification neural network on a video sample library to judge, on the basis of the extracted skeleton-point information, whether a human action in a video is a fall. With the trained first feature-extraction neural network and second behavior-classification neural network, the skeleton-point information of a monitored target can be extracted accurately, and from that information it can be judged in time whether the monitored target has fallen, providing timely and effective care for the elderly with limited mobility, the disabled and others, and helping improve quality of life.
Detailed description of the invention
Fig. 1 is a flow chart of Embodiment 1 of the fall detection method based on skeleton points according to the present invention;
Fig. 2 is a structural schematic diagram of the first feature-extraction neural network in Embodiment 1 of the present invention;
Fig. 3 is a structural schematic diagram of the second behavior-classification neural network in Embodiment 1 of the present invention;
Fig. 4 is a program-module schematic diagram of Embodiment 1 of the fall detection device based on skeleton points according to the present invention;
Fig. 5 is a hardware architecture schematic diagram of the computer equipment in Embodiment 1 of the present invention.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
The fall detection method, device, computer equipment and storage medium provided by the present invention are applicable to the field of machine vision, providing for elderly people living alone, the disabled and others a fall detection method and device that can discover fall behavior in time. The present invention trains a first feature-extraction neural network on a picture sample library to extract key skeleton-point information of the human body, and trains a second behavior-classification neural network on a video sample library to judge, on the basis of the extracted skeleton-point information, whether a human action in a video is a fall. With the two trained networks, the skeleton-point information of a monitored target can be accurately extracted and falls judged from it in time, which greatly helps improve quality of life.
Embodiment one
Referring to Fig. 1, the fall detection method based on skeleton points of this embodiment comprises the following steps:
S1: training a first feature-extraction neural network with first picture samples, the network being used to extract multiple first feature points in the first picture samples, the first feature points characterizing key skeleton points on the human body.
In this step, first picture samples, preferably whole-body pictures of people, are selected from a picture sample library to train the first feature-extraction neural network. In implementation, the first picture samples are divided into training picture samples, used to train the first feature-extraction neural network, and test picture samples, used to verify how well the trained network extracts the feature information in pictures. Preferably, data-enhancement preprocessing can be applied to both the training and the test picture samples: for example, each sample undergoes a contrast transformation and a brightness transformation plus local random Gaussian noise, followed by unified normalization, yielding data-enhanced training and test picture samples.
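The data-enhancement preprocessing described above can be sketched as follows. This is a minimal illustration on a flat list of pixel intensities; the contrast, brightness and noise parameter values are assumptions for the example, not values given in the patent.

```python
import random
import statistics

def augment_and_normalize(pixels, contrast=1.2, brightness=10.0, noise_std=2.0, seed=0):
    """Contrast and brightness transform plus local random Gaussian noise,
    followed by unified normalization to zero mean and unit deviation.
    Parameter values here are illustrative, not from the patent."""
    rng = random.Random(seed)
    mean = statistics.fmean(pixels)
    # Contrast scales deviations from the mean; brightness is a constant
    # shift; each pixel then receives independent Gaussian noise.
    out = [(p - mean) * contrast + mean + brightness + rng.gauss(0.0, noise_std)
           for p in pixels]
    # Unified normalization: zero mean, unit standard deviation.
    m, s = statistics.fmean(out), statistics.pstdev(out)
    return [(p - m) / s for p in out]

sample = [float(v) for v in range(16)]   # toy 4x4 "image", flattened
normed = augment_and_normalize(sample)
```

In practice the same transformation would be applied per image channel; only the normalization statistics would typically be shared across the whole sample set.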
The structure of the first feature-extraction neural network in this step is described in detail below, taking a test picture sample as an example, as shown in Fig. 2. The test picture sample first enters a feature-extraction module to extract its features; in this embodiment the module uses a ResNet residual network to ensure good feature-extraction performance. After the ResNet residual network, the test picture yields first extracted data D1. D1 then enters four convolution modules with different dilation coefficients, producing four second extracted data D2 with different feature channels. Next, the four D2 with different feature channels are combined and fed into a first convolutional layer stacked from residual modules, producing four third extracted data D3 with different receptive fields. Finally, the four D3 with different receptive fields are fused and fed into a second convolutional layer stacked from residual modules, which outputs the multiple first feature points characterizing the key skeleton points on the human body.
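The effect of the dilation (expansion) coefficient that distinguishes the four convolution modules can be illustrated with a one-dimensional convolution: spacing the kernel taps `dilation` samples apart enlarges the receptive field without adding parameters. This toy function is an illustration of the principle, not the patent's 2-D modules.

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Valid' 1-D convolution with a dilation coefficient: kernel taps are
    spaced `dilation` samples apart, so a 3-tap kernel covers a receptive
    field of (3 - 1) * dilation + 1 samples."""
    span = (len(kernel) - 1) * dilation + 1   # receptive field size
    return [sum(kernel[k] * signal[i + k * dilation] for k in range(len(kernel)))
            for i in range(len(signal) - span + 1)]

x = [0, 0, 0, 1, 0, 0, 0, 0, 0]
print(dilated_conv1d(x, [1, 1, 1], dilation=1))  # [0, 1, 1, 1, 0, 0, 0]
print(dilated_conv1d(x, [1, 1, 1], dilation=2))  # [0, 1, 0, 1, 0]
```

Running the four modules with different dilations over the same D1 is what gives the four D2 branches their different perception ranges before they are combined.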
It should be noted that the number of convolution modules and the values of the dilation coefficients disclosed in this embodiment are only illustrative and are not limiting. Those of ordinary skill in the art may change the number of convolution modules and the values of the dilation coefficients as actually needed, all within the protection scope of the present invention.
Preferably, each of the above convolution modules is composed of the following layers in sequence: a convolutional layer, a batch-normalization layer, a ReLU activation layer, a convolutional layer, a batch-normalization layer, a ReLU activation layer and a pooling layer, the convolutional layers in different modules having different dilation coefficients.
In this step, the feature information is preferably the skeleton feature points of the human body, for example 14 skeleton points in total: head, neck, two shoulders, two elbows, two hands, two hips, two knees and two feet. Of course, the skeleton-point positions listed above are only examples and do not limit the specific feature-point information; depending on the situation, skeleton points may be deleted or added, or the positions of specific feature points changed (for example, acupoint feature points of the human body could also be obtained), and the present invention places no restriction on this. On this basis, the multiple first feature points in this embodiment may preferably be a map representing the distribution of the above skeleton points on the human body.
In this step, the first feature-extraction neural network is trained with stochastic gradient descent with momentum. The loss function measures the deviation between the predicted and the actual feature-point coordinates:

F = Σ [(x_p - x_g)² + (y_p - y_g)²]

where x_p and y_p denote the predicted coordinates of a first feature point extracted by the first feature-extraction neural network, x_g and y_g denote the actual coordinates of that first feature point, and the sum runs over all first feature points.
S2: inputting second video samples into the trained first feature-extraction neural network to obtain multiple second feature points characterizing the key skeleton points of the human body in the second video samples.
With the first feature-extraction neural network trained in step S1, this step uses it to extract the second feature points in a video sample; preferably, the second feature points are the 14 skeleton feature points mentioned above.
The present invention performs fall detection on video of the monitored subject collected by an ordinary camera, so the object of feature-point extraction in this step is continuous video rather than single pictures. Since a video is a sequence of image frames over time, the video must first be sampled to extract target pictures: for example, pictures are extracted at 20 frames per second, with 3 seconds forming one sample. To diversify the samples, the start frame can be chosen randomly near the starting point of the behavior in the video.
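The sampling scheme above (20 frames per second, 3-second clips, a random start near the behavior onset) can be sketched as follows. The `jitter` window is an assumed tuning knob; the patent does not specify how far from the onset the start frame may be drawn.

```python
import random

FPS = 20           # frames extracted per second of video
CLIP_SECONDS = 3   # one sample spans 3 seconds, i.e. 60 frames

def sample_clip(num_frames, action_start, jitter=10, seed=None):
    """Pick 60 consecutive frame indices for one training sample, choosing
    the start frame randomly near the labelled behavior start so that the
    samples are diversified. `jitter` (in frames) is an assumption."""
    clip_len = FPS * CLIP_SECONDS
    rng = random.Random(seed)
    lo = max(0, action_start - jitter)
    hi = min(num_frames - clip_len, action_start + jitter)
    start = rng.randint(lo, max(lo, hi))
    return list(range(start, start + clip_len))

clip = sample_clip(num_frames=400, action_start=120, seed=0)
```

Each returned index list identifies the 60 target pictures that are then fed to the first feature-extraction network.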
After a sufficient number of target pictures have been extracted, the feature-point information in them, preferably the 14 skeleton feature points mentioned above, can be extracted by the first feature-extraction neural network.
S3: encoding the multiple second feature points to generate a prediction feature map.
This step processes the extracted second feature points to obtain the prediction feature map. Still taking the 14 skeleton feature points as an example, it includes the following processing steps:
S31: pairing the skeleton feature points two by two.
In this embodiment, any two of the 14 skeleton feature points are selected and paired; the number of pairs is:
C(14, 2) = 14!/(12!*2!) = 91;
S32: calculating the Euclidean distance l_ijt and the directional velocities v_xit and v_yit between every two skeleton feature points:

l_ijt = √((x_it - x_jt)² + (y_it - y_jt)²)
v_xit = x_it - x_i(t-1)
v_yit = y_it - y_i(t-1)

In the above formulas, x_it and y_it are the abscissa and ordinate of the i-th second feature point at time t; l_ijt is the Euclidean distance between the i-th and the j-th second feature points at time t; v_xit is the velocity of the i-th second feature point in the x direction at time t, and v_yit is its velocity in the y direction.
S33: combining all the calculated Euclidean distances and directional velocities to form the prediction feature map.
For any sample picture, pairing the 14 skeleton feature points two by two gives 91 combinations, i.e. 91 Euclidean distances can be calculated. Each skeleton feature point also has an x-direction velocity and a y-direction velocity, giving 14 x-velocities and 14 y-velocities in total, so integrating everything yields a feature vector of 91 + 14 + 14 = 119 dimensions.
Assuming 60 frames of images need to be processed in this step, arranging the feature vector of each frame in order yields a 60 × 119 matrix. This matrix is the prediction feature map.
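Steps S31 to S33 can be sketched end to end. The code below encodes a synthetic keypoint sequence exactly as described: 91 pairwise Euclidean distances plus 14 x-velocities and 14 y-velocities per frame, stacked over 60 frames into a 60 × 119 map. The synthetic input is of course a stand-in for real network outputs.

```python
import math
from itertools import combinations

NUM_POINTS = 14  # head, neck, shoulders, elbows, hands, hips, knees, feet

def frame_features(points, prev_points):
    """Encode one frame: Euclidean distance for each of the C(14,2) = 91
    keypoint pairs, plus per-point x and y velocities against the previous
    frame (14 + 14), giving a 119-dimensional vector."""
    dists = [math.dist(points[i], points[j])
             for i, j in combinations(range(NUM_POINTS), 2)]
    vx = [points[i][0] - prev_points[i][0] for i in range(NUM_POINTS)]
    vy = [points[i][1] - prev_points[i][1] for i in range(NUM_POINTS)]
    return dists + vx + vy

def prediction_feature_map(keypoint_seq):
    """Stack the per-frame vectors of a clip into a (frames-1) x 119 matrix;
    with 61 input frames, 60 frames have a predecessor for velocities."""
    return [frame_features(keypoint_seq[t], keypoint_seq[t - 1])
            for t in range(1, len(keypoint_seq))]

# Toy sequence: every point drifts +1 in x and +2 in y per frame.
seq = [[(float(t + i), float(2 * t)) for i in range(NUM_POINTS)] for t in range(61)]
fmap = prediction_feature_map(seq)
```

Note that handling the very first frame (which has no predecessor for velocities) is a design choice; here one extra frame is sampled so that all 60 encoded frames have one.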
S4: training a second behavior-classification neural network with the prediction feature map, the network being used to classify the behavior represented in the prediction feature map.
With the prediction feature map obtained, the purpose of this step is to train the second behavior-classification neural network to classify the behavior represented in the prediction feature map and determine whether a fall has occurred. The structure of the second behavior-classification neural network in the present invention is shown in Fig. 3 and described below.
Take a prediction feature map obtained in step S3. The map first passes through a conventional convolution module to obtain first classification data R1. R1 then passes through four convolution modules with different dilation coefficients, producing four second classification data R2 with different feature channels; preferably, the dilation coefficients of the four convolution modules are 1, 3, 6 and 12 respectively. Next, the four R2 with different feature channels are combined and passed sequentially through three conventional convolution modules, finally outputting the behavior category, which indicates which class the behavior in the prediction feature map belongs to.
It should be noted that the number of convolution modules and the values of the dilation coefficients disclosed in this embodiment are only illustrative and are not limiting. Those of ordinary skill in the art may change the number of convolution modules and the values of the dilation coefficients as actually needed, all within the protection scope of the present invention.
Preferably, each of the above convolution modules is composed of the following layers in sequence: a convolutional layer, a batch-normalization layer, a ReLU activation layer, a convolutional layer, a batch-normalization layer, a ReLU activation layer and a pooling layer.
In this step, the second behavior-classification neural network is trained with the loss function L_H(X, Y):

L_H(X, Y) = -Σ_k x_k · log(z_k)

where x_k denotes the parameter value of the k-th behavior category and z_k the predicted probability of the k-th behavior category. For example, the second behavior-classification neural network can identify behaviors such as squatting, standing, waving, bending over, falling and lying down, each corresponding to its own parameter value. When a fall is recognized in the video of the monitored person, x_k is the parameter value of the fall behavior and z_k is the predicted probability that the monitored person is falling.
To prevent over-fitting, this embodiment adds an L2 regularization term after the loss function, giving the cost function:
L(X, Y) = L_H(X, Y) + L2.
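The cost function above can be computed as follows for one sample: a cross-entropy term between the target behavior category and the softmax probabilities, plus an L2 penalty on the network weights. The regularization strength `lam` and the example parameter values are assumptions for illustration.

```python
import math

def softmax(logits):
    """Convert raw scores into class probabilities z_k."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def loss_with_l2(targets, logits, weights, lam=0.01):
    """Cross-entropy L_H between target values x_k and predicted class
    probabilities z_k, plus an L2 penalty on the network weights to curb
    over-fitting: L = L_H + lam * sum(w^2). `lam` is an assumed value."""
    z = softmax(logits)
    l_h = -sum(x * math.log(zk) for x, zk in zip(targets, z) if x > 0)
    l2 = lam * sum(w * w for w in weights)
    return l_h + l2

# Six behavior classes: squat, stand, wave, bend over, fall, lie down.
targets = [0, 0, 0, 0, 1, 0]             # ground truth: "fall"
logits = [0.1, 0.2, 0.0, 0.3, 2.5, 0.1]  # network output scores
loss = loss_with_l2(targets, logits, weights=[0.5, -0.3])
```

As the network grows more confident in the correct class, the cross-entropy term shrinks while the L2 term keeps the weights from growing without bound.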
S5: inputting the video data of the monitored target sequentially into the trained first feature-extraction neural network and the second behavior-classification neural network, so as to output the behavior category of the monitored target.
With the training of the first feature-extraction neural network and the second behavior-classification neural network completed, the present invention can detect fall behavior in actual monitoring video. Specifically, an ordinary camera shoots the video of the monitored person in real time, and a certain number of target images are extracted from the video by sampling. The target images first pass through the trained first feature-extraction neural network, which extracts multiple feature points in each image, e.g. the skeleton feature points. These skeleton feature points are then calculated and combined: for example, the Euclidean distance between every two skeleton feature points and the velocities in the x and y directions are calculated, and the resulting vectors are arranged in the order of the frames to obtain the prediction feature map. Next, the prediction feature map is input into the second behavior-classification neural network, which yields the category of the behavior contained in the prediction feature map, for example whether it is a fall.
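The deployed pipeline described above can be summarized as a simple composition of three stages. The callables passed in here are hypothetical stand-ins for the two trained networks and the S3 encoder, not the patent's actual interfaces; the stubs only exercise the dataflow.

```python
def detect_fall(frames, extract_keypoints, encode, classify):
    """Inference sketch: sampled camera frames -> first feature-extraction
    network -> S3 feature-map encoding -> second classification network.
    All three callables are hypothetical interfaces for illustration."""
    keypoint_seq = [extract_keypoints(f) for f in frames]
    feature_map = encode(keypoint_seq)
    return classify(feature_map)

# Stub stand-ins so the dataflow can be exercised end to end:
label = detect_fall(
    frames=list(range(61)),                           # 61 sampled frames
    extract_keypoints=lambda f: [(f, f)] * 14,        # 14 skeleton points
    encode=lambda kp: [[0.0] * 119 for _ in kp[1:]],  # 60 x 119 map
    classify=lambda fmap: "fall" if len(fmap) == 60 else "unknown",
)
```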
Continuing with Fig. 4, a fall detection device is shown. In this embodiment, the fall detection device 10 may comprise, or be divided into, one or more program modules, which are stored in a storage medium and executed by one or more processors to complete the present invention and realize the above fall detection method. A program module in the present invention refers to a series of computer program instruction segments capable of completing a specific function, better suited than the program itself for describing the execution of the fall detection device 10 in the storage medium. The following description specifically introduces the function of each program module of this embodiment:
a first neural-network training module 11, adapted to train a first feature-extraction neural network with first picture samples, the network being used to extract multiple first feature points in the first picture samples, the first feature points characterizing key skeleton points on the human body;
a feature-point extraction module 12, adapted to input second video samples into the trained first feature-extraction neural network and obtain multiple second feature points characterizing the key skeleton points of the human body in the second video samples;
a feature-map generation module 13, adapted to encode the multiple second feature points and generate a prediction feature map characterizing the distribution of the multiple second feature points;
a second neural-network training module 14, adapted to train a second behavior-classification neural network with the prediction feature map, the network being used to classify the behavior represented in the prediction feature map;
a classification module 15, adapted to input the video data of a monitored target sequentially into the trained first feature-extraction neural network and the second behavior-classification neural network, so as to output the behavior category of the monitored target.
This embodiment also provides a computer equipment capable of executing programs, such as a smart phone, tablet computer, notebook computer, desktop computer, rack-mount server, blade server, tower server, or cabinet server (including an independent server or a server cluster composed of multiple servers). The computer equipment 20 of this embodiment includes at least, but is not limited to, a memory 21 and a processor 22 that can communicate with each other through a system bus, as shown in Fig. 5. It should be pointed out that Fig. 5 only shows the computer equipment 20 with components 21-22; it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
In this embodiment, the memory 21 (i.e., a readable storage medium) includes flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random-access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like. In some embodiments, the memory 21 may be an internal storage unit of the computer equipment 20, such as the hard disk or memory of the computer equipment 20. In other embodiments, the memory 21 may also be an external storage device of the computer equipment 20, such as a plug-in hard disk, smart media card (SMC), secure digital (SD) card, or flash card equipped on the computer equipment 20. Of course, the memory 21 may also include both the internal storage unit and the external storage device of the computer equipment 20. In this embodiment, the memory 21 is generally used to store the operating system and various application software installed on the computer equipment 20, such as the program code of the fall detection device 10 of embodiment one. In addition, the memory 21 may also be used to temporarily store various kinds of data that have been output or will be output.
In some embodiments, the processor 22 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 22 is generally used to control the overall operation of the computer equipment 20. In this embodiment, the processor 22 is used to run the program code or process the data stored in the memory 21, for example, to run the fall detection device 10, so as to implement the fall detection method of embodiment one.
This embodiment also provides a computer-readable storage medium, such as flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random-access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, server, or app store, on which a computer program is stored; the corresponding function is realized when the program is executed by a processor. The computer-readable storage medium of this embodiment is used to store the fall detection device 10, and implements the fall detection method of embodiment one when executed by a processor.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Any process or method description in a flowchart, or otherwise described herein, may be understood to represent a module, segment, or portion of code comprising one or more executable instructions for implementing a specific logical function or step of the process. The scope of the preferred embodiments of the present invention includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order according to the function involved, as should be understood by those skilled in the art to which embodiments of the present invention belong.
Those skilled in the art will appreciate that all or part of the steps for implementing the methods of the above embodiments may be completed by instructing the relevant hardware through a program, which may be stored in a computer-readable medium and which, when executed, includes one or a combination of the steps of the method embodiments.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the preferable implementation.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.

Claims (10)

1. A fall detection method based on skeleton points, characterized by comprising the following steps:
training a first feature extraction neural network with first picture samples, the first feature extraction neural network being used to extract multiple first feature points characterizing key skeleton points on a human body;
inputting second video samples into the trained first feature extraction neural network to obtain multiple second feature points characterizing the key skeleton points of the human body in the second video samples;
encoding the multiple second feature points to generate a prediction feature map;
training a second behavior classification neural network with the prediction feature map, the second behavior classification neural network being used to classify the behavior represented in the prediction feature map; and
sequentially inputting video data of a monitored target into the trained first feature extraction neural network and the second behavior classification neural network to output the behavior class of the monitored target.
2. The fall detection method according to claim 1, characterized in that the training a first feature extraction neural network with first picture samples comprises:
inputting the first picture samples into a Resnet residual network to obtain first extracted data;
passing the first extracted data through multiple convolution modules with different dilation coefficients to obtain multiple second extracted data with different feature channels;
combining the multiple second extracted data with different feature channels and then passing the result into a first convolutional layer stacked with residual convolutions to obtain multiple third extracted data with different receptive fields;
merging the multiple third extracted data with different receptive fields and then passing the result into a second convolutional layer stacked with residual modules to finally output the multiple first feature points characterizing the key skeleton points on the human body; and
performing backward training of the first feature extraction neural network through a first loss function.
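The parallel dilated-convolution branches described in claim 2 can be sketched as follows. This is a sketch assuming PyTorch; the class name, the 3×3 kernels, the channel counts, and the dilation values 1/2/4 are illustrative assumptions, as the claim fixes none of them:

```python
import torch
import torch.nn as nn

class MultiDilationBlock(nn.Module):
    """The same input passes through convolutions with different dilation
    coefficients; the branch outputs (different receptive fields) are
    concatenated along the channel dimension before the next stage."""
    def __init__(self, in_ch, branch_ch, dilations=(1, 2, 4)):
        super().__init__()
        # padding = dilation keeps the spatial size unchanged for 3x3 kernels
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, 3, padding=d, dilation=d)
            for d in dilations)

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

x = torch.randn(1, 8, 32, 32)
y = MultiDilationBlock(8, 16)(x)  # 3 branches x 16 channels = 48 channels
```

Each branch sees a different receptive field, which matches the claim's "multiple third extracted data with different receptive fields" before the residual stage.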
3. The fall detection method according to claim 1, characterized in that the training a second behavior classification neural network with the prediction feature map comprises:
passing the prediction feature map through a conventional convolution module to obtain first classification data;
passing the first classification data through multiple convolution modules with different dilation coefficients to obtain multiple second classification data with different feature channels; and
combining the multiple second classification data with different feature channels and then passing the result sequentially through three conventional convolution modules to finally output the behavior class.
4. The fall detection method according to claim 2, characterized in that the first loss function F is as follows:
wherein x_p and y_p represent the predicted coordinates of the first feature points extracted by the first feature extraction neural network, and x_g and y_g represent the actual coordinates of the first feature points.
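The formula for F was an image in the original filing and does not survive in this text extraction. A plausible reconstruction, consistent with the variable definitions but an assumption rather than the filed equation, is a squared-error loss summed over the first feature points:

```latex
F = \sum_{p} \left[ (x_p - x_g)^2 + (y_p - y_g)^2 \right]
```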
5. The fall detection method according to claim 3, characterized in that the second loss function L is as follows:
wherein x_k represents the parameter value of the k-th behavior class, z_k represents the prediction probability of the k-th behavior class, and L2 represents a regularization term that prevents overfitting.
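As with claim 4, the formula for L was an image in the original filing and is missing here. A plausible reconstruction, an assumption consistent with x_k as the label value for class k and z_k as the predicted probability, is a cross-entropy loss plus the L2 regularization term:

```latex
L = -\sum_{k} x_k \log z_k + L2
```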
6. The fall detection method according to claim 2 or 3, characterized in that the convolution module is composed of the following layers connected in series: a convolutional layer, a batch normalization layer, a ReLU activation layer, a convolutional layer, a batch normalization layer, a ReLU activation layer, and a pooling layer.
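The layer sequence of claim 6 maps directly onto a small sketch (assuming PyTorch; the 3×3 kernels, max pooling, and channel counts are illustrative, since the claim does not specify them):

```python
import torch
import torch.nn as nn

def make_conv_module(in_ch, out_ch, dilation=1):
    # Claim 6: conv -> batch norm -> ReLU -> conv -> batch norm -> ReLU -> pool
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),  # halves the spatial resolution
    )

x = torch.randn(1, 3, 64, 64)
y = make_conv_module(3, 16, dilation=2)(x)
```

Varying the `dilation` argument yields the "convolution modules with different dilation coefficients" used in claims 2 and 3.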
7. The fall detection method according to claim 1, characterized in that the encoding the multiple second feature points to generate a prediction feature map comprises:
pairing the multiple second feature points two by two;
calculating the distance and the velocity for the second feature points:
l_ijt = √((x_it − x_jt)² + (y_it − y_jt)²)
v_xit = x_it − x_i(t−1)
v_yit = y_it − y_i(t−1)
where x_it and y_it respectively represent the abscissa and ordinate of the i-th second feature point at time t; l_ijt represents the Euclidean distance between the i-th second feature point and the j-th second feature point at time t; v_xit represents the velocity of the i-th second feature point in the x direction at time t; and v_yit represents the velocity of the i-th second feature point in the y direction; and
combining all the calculated distance and velocity data to form the prediction feature map.
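A minimal Python sketch of the encoding of claim 7 (the function name and the row layout are illustrative; the patent does not specify how the rows are arranged into the feature map):

```python
import itertools
import math

def encode_feature_points(frames):
    """frames: a list of frames, each frame a list of (x, y) skeleton points.
    For each frame, emit one row of pairwise Euclidean distances followed by
    per-point (vx, vy) velocities relative to the previous frame."""
    rows, prev = [], None
    for pts in frames:
        # distance between every two feature points in this frame
        dists = [math.dist(pts[i], pts[j])
                 for i, j in itertools.combinations(range(len(pts)), 2)]
        if prev is None:
            vels = [0.0] * (2 * len(pts))  # no previous frame: zero velocity
        else:
            vels = [c - p for cur, pre in zip(pts, prev)
                    for c, p in zip(cur, pre)]
        rows.append(dists + vels)
        prev = pts
    return rows
```

Stacking the returned rows in frame order gives the prediction feature map that the second behavior classification neural network consumes.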
8. A fall detection device based on skeleton points, characterized by comprising:
a first neural network training module, adapted to train a first feature extraction neural network with first picture samples, the first feature extraction neural network being used to extract multiple first feature points from the first picture samples, the first feature points characterizing key skeleton points on a human body;
a feature point extraction module, adapted to input second video samples into the trained first feature extraction neural network to obtain multiple second feature points characterizing the key skeleton points of the human body in the second video samples;
a feature map generation module, adapted to encode the multiple second feature points to generate a prediction feature map;
a second neural network training module, adapted to train a second behavior classification neural network with the prediction feature map, the second behavior classification neural network being used to classify the behavior represented in the prediction feature map; and
a classification module, adapted to sequentially input video data of a monitored target into the trained first feature extraction neural network and the second behavior classification neural network to output the behavior class of the monitored target.
9. A computer equipment comprising a memory, a processor, and a computer program stored on the memory and runnable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program implements the steps of the method of any one of claims 1 to 7 when executed by a processor.
CN201811433808.3A 2018-11-28 2018-11-28 Fall detection method and its falling detection device based on skeleton point Pending CN109492612A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811433808.3A CN109492612A (en) 2018-11-28 2018-11-28 Fall detection method and its falling detection device based on skeleton point
PCT/CN2019/089500 WO2020107847A1 (en) 2018-11-28 2019-05-31 Bone point-based fall detection method and fall detection device therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811433808.3A CN109492612A (en) 2018-11-28 2018-11-28 Fall detection method and its falling detection device based on skeleton point

Publications (1)

Publication Number Publication Date
CN109492612A true CN109492612A (en) 2019-03-19

Family

ID=65698053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811433808.3A Pending CN109492612A (en) 2018-11-28 2018-11-28 Fall detection method and its falling detection device based on skeleton point

Country Status (2)

Country Link
CN (1) CN109492612A (en)
WO (1) WO2020107847A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276332A (en) * 2019-06-28 2019-09-24 北京奇艺世纪科技有限公司 A kind of video features processing method, device and Three dimensional convolution neural network model
CN110633736A (en) * 2019-08-27 2019-12-31 电子科技大学 Human body falling detection method based on multi-source heterogeneous data fusion
CN111209848A (en) * 2020-01-03 2020-05-29 北京工业大学 Real-time fall detection method based on deep learning
WO2020107847A1 (en) * 2018-11-28 2020-06-04 平安科技(深圳)有限公司 Bone point-based fall detection method and fall detection device therefor
WO2020258498A1 (en) * 2019-06-26 2020-12-30 平安科技(深圳)有限公司 Football match behavior recognition method and apparatus based on deep learning, and terminal device
WO2021012348A1 (en) * 2019-07-23 2021-01-28 深圳大学 Method for generating object attribute recognition model, storage medium and electronic device
SE1951443A1 (en) * 2019-12-12 2021-06-13 Assa Abloy Ab Improving machine learning for monitoring a person
CN113712538A (en) * 2021-08-30 2021-11-30 平安科技(深圳)有限公司 Fall detection method, device, equipment and storage medium based on WIFI signal
CN113792595A (en) * 2021-08-10 2021-12-14 北京爱笔科技有限公司 Target behavior detection method and device, computer equipment and storage medium
CN115661943A (en) * 2022-12-22 2023-01-31 电子科技大学 Fall detection method based on lightweight attitude assessment network
CN117238026A (en) * 2023-07-10 2023-12-15 中国矿业大学 Gesture reconstruction interactive behavior understanding method based on skeleton and image features

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN111860312A (en) * 2020-07-20 2020-10-30 上海汽车集团股份有限公司 Driving environment adjusting method and device
CN112364695A (en) * 2020-10-13 2021-02-12 杭州城市大数据运营有限公司 Behavior prediction method and device, computer equipment and storage medium
CN112541576B (en) * 2020-12-14 2024-02-20 四川翼飞视科技有限公司 Biological living body identification neural network construction method of RGB monocular image
CN114882596B (en) * 2022-07-08 2022-11-15 深圳市信润富联数字科技有限公司 Behavior early warning method and device, electronic equipment and storage medium
CN115546491B (en) * 2022-11-28 2023-03-10 中南财经政法大学 Fall alarm method, system, electronic equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN105590099A (en) * 2015-12-22 2016-05-18 中国石油大学(华东) Multi-user behavior identification method based on improved convolutional neural network
CN108280455A (en) * 2018-01-19 2018-07-13 北京市商汤科技开发有限公司 Human body critical point detection method and apparatus, electronic equipment, program and medium
CN108294759A (en) * 2017-01-13 2018-07-20 天津工业大学 A kind of Driver Fatigue Detection based on CNN Eye state recognitions
CN108717569A (en) * 2018-05-16 2018-10-30 中国人民解放军陆军工程大学 It is a kind of to expand full convolutional neural networks and its construction method
CN108776775A (en) * 2018-05-24 2018-11-09 常州大学 Fall detection method in a kind of the elderly room based on weight fusion depth and skeleton character

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US20170316578A1 (en) * 2016-04-29 2017-11-02 Ecole Polytechnique Federale De Lausanne (Epfl) Method, System and Device for Direct Prediction of 3D Body Poses from Motion Compensated Sequence
CN107784654B (en) * 2016-08-26 2020-09-25 杭州海康威视数字技术股份有限公司 Image segmentation method and device and full convolution network system
CN107220604A (en) * 2017-05-18 2017-09-29 清华大学深圳研究生院 A kind of fall detection method based on video
CN107392131A (en) * 2017-07-14 2017-11-24 天津大学 A kind of action identification method based on skeleton nodal distance
CN108647776A (en) * 2018-05-08 2018-10-12 济南浪潮高新科技投资发展有限公司 A kind of convolutional neural networks convolution expansion process circuit and method
CN109492612A (en) * 2018-11-28 2019-03-19 平安科技(深圳)有限公司 Fall detection method and its falling detection device based on skeleton point


Cited By (13)

Publication number Priority date Publication date Assignee Title
WO2020107847A1 (en) * 2018-11-28 2020-06-04 平安科技(深圳)有限公司 Bone point-based fall detection method and fall detection device therefor
WO2020258498A1 (en) * 2019-06-26 2020-12-30 平安科技(深圳)有限公司 Football match behavior recognition method and apparatus based on deep learning, and terminal device
CN110276332B (en) * 2019-06-28 2021-12-24 北京奇艺世纪科技有限公司 Video feature processing method and device
CN110276332A (en) * 2019-06-28 2019-09-24 北京奇艺世纪科技有限公司 A kind of video features processing method, device and Three dimensional convolution neural network model
WO2021012348A1 (en) * 2019-07-23 2021-01-28 深圳大学 Method for generating object attribute recognition model, storage medium and electronic device
CN110633736A (en) * 2019-08-27 2019-12-31 电子科技大学 Human body falling detection method based on multi-source heterogeneous data fusion
SE1951443A1 (en) * 2019-12-12 2021-06-13 Assa Abloy Ab Improving machine learning for monitoring a person
CN111209848A (en) * 2020-01-03 2020-05-29 北京工业大学 Real-time fall detection method based on deep learning
CN113792595A (en) * 2021-08-10 2021-12-14 北京爱笔科技有限公司 Target behavior detection method and device, computer equipment and storage medium
CN113712538A (en) * 2021-08-30 2021-11-30 平安科技(深圳)有限公司 Fall detection method, device, equipment and storage medium based on WIFI signal
CN115661943A (en) * 2022-12-22 2023-01-31 电子科技大学 Fall detection method based on lightweight attitude assessment network
CN117238026A (en) * 2023-07-10 2023-12-15 中国矿业大学 Gesture reconstruction interactive behavior understanding method based on skeleton and image features
CN117238026B (en) * 2023-07-10 2024-03-08 中国矿业大学 Gesture reconstruction interactive behavior understanding method based on skeleton and image features

Also Published As

Publication number Publication date
WO2020107847A1 (en) 2020-06-04

Similar Documents

Publication Publication Date Title
CN109492612A (en) Fall detection method and its falling detection device based on skeleton point
CN110728209B (en) Gesture recognition method and device, electronic equipment and storage medium
Zhang et al. Demeshnet: Blind face inpainting for deep meshface verification
Liu et al. Cross‐ethnicity face anti‐spoofing recognition challenge: A review
Liu et al. Learning discriminative representations from RGB-D video data
CN110889672B (en) Student card punching and class taking state detection system based on deep learning
CN107862270B (en) Face classifier training method, face detection method and device and electronic equipment
CN110532884A (en) Pedestrian recognition methods, device and computer readable storage medium again
CN109145766A (en) Model training method, device, recognition methods, electronic equipment and storage medium
CN112543936B (en) Motion structure self-attention-drawing convolution network model for motion recognition
CN109190475A (en) A kind of recognition of face network and pedestrian identify network cooperating training method again
CN107437083B (en) Self-adaptive pooling video behavior identification method
CN107944398A (en) Based on depth characteristic association list diagram image set face identification method, device and medium
CN110008793A (en) Face identification method, device and equipment
CN109670517A (en) Object detection method, device, electronic equipment and target detection model
WO2023165616A1 (en) Method and system for detecting concealed backdoor of image model, storage medium, and terminal
CN116052218B (en) Pedestrian re-identification method
CN112651342A (en) Face recognition method and device, electronic equipment and storage medium
CN115223239A (en) Gesture recognition method and system, computer equipment and readable storage medium
CN113255557B (en) Deep learning-based video crowd emotion analysis method and system
CN111046213A (en) Knowledge base construction method based on image recognition
Begampure et al. Intelligent video analytics for human action detection: a deep learning approach with transfer learning
CN116701706B (en) Data processing method, device, equipment and medium based on artificial intelligence
CN112381118A (en) Method and device for testing and evaluating dance test of university
CN115424335B (en) Living body recognition model training method, living body recognition method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination