CN110443150A - Fall detection method, device, and storage medium - Google Patents

Fall detection method, device, and storage medium

Info

Publication number
CN110443150A
Authority
CN
China
Prior art keywords
human body
image
convolutional neural network
fall detection
Prior art date
Legal status
Pending
Application number
CN201910621095.1A
Other languages
Chinese (zh)
Inventor
田志博
李邦庚
Current Assignee
Sparta Internet of Things Technology (Beijing) Co., Ltd.
Original Assignee
Sparta Internet of Things Technology (Beijing) Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Sparta Internet of Things Technology (Beijing) Co., Ltd.
Priority to CN201910621095.1A
Publication of CN110443150A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition


Abstract

This application discloses a fall detection method, apparatus, and storage medium. The fall detection method comprises: using a human-body detection model based on a convolutional neural network to determine, in an image to be detected, a human-body image region containing a human target; and using a fall detection model based on a convolutional neural network to determine, from the human-body image region, whether the human target is in a fallen state. The technical solution of this application can acquire video data from multiple image-acquisition devices simultaneously and can produce detection results in real time with high accuracy using the convolutional-neural-network-based detection models. Moreover, because the scheme uses detection models based on convolutional neural networks, it requires no sensors, unlike traditional detection methods; it is easy to use, its computation does not occupy many resources, its cost is relatively low, and it is easy to popularize.

Description

Fall detection method, device, and storage medium
Technical field
This application relates to the field of behavior detection methods, and in particular to a fall detection method, device, and storage medium.
Background art
As human lifespans lengthen and birth rates decline, population aging has become a global problem and is likely to be the norm of the future world. China is also moving toward a deeply aged society. Because of changes in the working pressure and lifestyle of younger people, they cannot spare large amounts of time to accompany and care for the elderly, and elderly people living alone are increasingly common. Owing to the degeneration of physiological functions in the elderly, falls have become a major obstacle to guaranteeing their health: a fall may cause serious bodily harm, and if treatment is not given in time, even more serious consequences may follow. It is therefore necessary to study a real-time, efficient fall detection method, so that an elderly person who has fallen can be rescued promptly and their healthy living can be protected.
The main problems of the human fall detection methods currently on the market are: 1) sensor-based detection requires the user to carry a wearable sensor; such detection equipment is relatively expensive, easily lost, and inconvenient to use, its coverage area is small, and the cost of market popularization is also high; 2) detection methods based on video image analysis obtain moving-target foreground blocks through video background modeling and judge falls from extracted foreground-block features; because there are many kinds of moving objects, such as various vehicles and animals, it is impossible to accurately judge whether a person is passing by, false detections are frequent, accuracy is low, and commercial standards cannot be reached; 3) detection methods based on deep-learning analysis of pictures and video consume considerable resources and have high latency, so they cannot perform real-time detection on video streams, cannot detect on multiple camera channels at the same time, and cannot be ported to run on mobile terminals, so a fallen elderly person cannot be rescued quickly and commercial standards are difficult to reach.
No effective solution has yet been proposed for the technical problems of the above existing human fall detection methods, namely low detection accuracy, inconvenient use, high cost, high computational resource occupation, and difficulty of popularization.
Summary of the invention
Embodiments of the present disclosure provide a fall detection method, an apparatus, and an image capture device, so as to at least solve the technical problems of low detection accuracy, inconvenient use, high cost, high computational resource occupation, and difficulty of popularization in existing human fall detection methods.
According to one aspect of the embodiments of the present disclosure, a fall detection method is provided, comprising: using a human-body detection model based on a convolutional neural network to determine, in an image to be detected, a human-body image region containing a human target; and using a fall detection model based on a convolutional neural network to determine, from the human-body image region, whether the human target is in a fallen state.
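For illustration only, the two-stage flow described above can be sketched as follows; the model objects, function names, and the 0.5 threshold default are assumptions made for this sketch and are not prescribed by the disclosure.

```python
def detect_fall(image, human_detector, fall_classifier, threshold=0.5):
    """Two-stage flow: locate the human-body image region, then classify its state.

    `human_detector` and `fall_classifier` stand in for the two CNN-based
    models described in the disclosure; their exact interfaces are assumed here.
    """
    # Stage 1: the human-body detection model proposes a rectangular region (x, y, w, h).
    box = human_detector(image)            # e.g. (x, y, w, h) in pixel coordinates
    if box is None:
        return None                        # no human target in the image to be detected
    x, y, w, h = box
    region = image[y:y + h, x:x + w]       # crop the human-body image region

    # Stage 2: the fall detection model scores the cropped region.
    score = fall_classifier(region)        # score value in [0, 1]
    return score > threshold               # True -> the human target is judged fallen
```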
Optionally, the operation of using the human-body detection model based on a convolutional neural network to determine, in the image to be detected, the human-body image region containing the human target comprises: using the human-body detection model to generate, from the image to be detected, multiple vectors corresponding respectively to multiple rectangular-box regions in the image to be detected, wherein each vector includes at least the following information: position information of the corresponding rectangular-box region, size information of the corresponding rectangular-box region, and confidence information indicating that the corresponding rectangular box contains a human target; and determining the rectangular-box region corresponding to the vector with the largest confidence information as the human-body image region.
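A minimal sketch of the selection step, assuming the detector's outputs have already been collected into an array of (x, y, w, h, confidence) vectors; the array layout is an assumption made for illustration.

```python
import numpy as np

def pick_human_region(vectors):
    """vectors: array of shape (N, 5), each row (x, y, w, h, confidence).

    Returns the rectangular-box region whose vector has the largest
    confidence, as described above.
    """
    vectors = np.asarray(vectors, dtype=np.float32)
    best = vectors[np.argmax(vectors[:, 4])]   # row with the highest confidence
    x, y, w, h, conf = best
    return (x, y, w, h), conf
```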
Optionally, the operation of generating multiple vectors corresponding respectively to multiple rectangular-box regions in the image to be detected comprises: using the human-body detection model to generate multiple feature matrices from the image to be detected; and using the elements at the same position of at least some of the multiple feature matrices to construct one of the multiple vectors.
Optionally, the operation of using the elements at the same position of at least some of the multiple feature matrices to construct one of the multiple vectors comprises: dividing the multiple feature matrices into multiple matrix groups, wherein each matrix group contains the same number of feature matrices; and using the elements at the same position of the feature matrices in the same matrix group to construct one of the multiple vectors.
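As an illustration of this construction (array shapes and names are assumptions for the sketch): given one matrix group of five 56x56 feature matrices, the vector at grid position (i, j) is read off element-wise as follows.

```python
import numpy as np

def vector_at(matrix_group, i, j):
    """matrix_group: five 56x56 feature matrices belonging to the same matrix group.

    Takes the element at the same position (i, j) of each feature matrix in the
    group to build one 5-dimensional vector (x, y, w, h, confidence).
    """
    group = np.asarray(matrix_group)   # shape (5, 56, 56)
    return group[:, i, j]              # one element per matrix, same position
```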
Optionally, the operation of using the fall detection model based on a convolutional neural network to determine, from the human-body image region, whether the human target is in a fallen state comprises: using the fall detection model based on a convolutional neural network to generate, from the human-body image region, a score value identifying whether the human target has fallen; and when the score value is greater than a predetermined threshold, determining that the human body is in a fallen state.
Optionally, the operation of using the fall detection model based on a convolutional neural network to generate, from the human-body image region, the score value identifying whether the human target has fallen comprises: using the neural network structure in the fall detection model to extract image features from the human-body image region; and using the classification structure in the fall detection model to generate, from the extracted image features, the score value identifying whether the human target has fallen.
Optionally, the operation of using the fall detection model based on a convolutional neural network to generate, from the human-body image region, the score value identifying whether the human target has fallen further comprises converting the score value into a score value in probability form.
According to another aspect of the embodiments of the present disclosure, a fall detection apparatus is further provided, comprising: a human-body image region determination module, configured to use a human-body detection model based on a convolutional neural network to determine, in an image to be detected, a human-body image region containing a human target; and a fall state determination module, configured to use a fall detection model based on a convolutional neural network to determine, from the human-body image region, whether the human target is in a fallen state.
According to another aspect of the embodiments of the present disclosure, a fall detection apparatus is further provided, comprising: a processor; and a memory connected to the processor, configured to provide the processor with instructions for handling the following processing steps: using a human-body detection model based on a convolutional neural network, determine, in an image to be detected, a human-body image region containing a human target; and using a fall detection model based on a convolutional neural network, determine, from the human-body image region, whether the human target is in a fallen state.
Thus, according to the technical solution of this embodiment, a processor uses a human-body detection model based on a convolutional neural network to analyze an image and determine the human-body image region containing a human target, and uses a fall detection model based on a convolutional neural network to determine, from the detected human-body image region, whether the human target is in a fallen state. The technical solution of this application can acquire video data from multiple image-acquisition devices simultaneously, and the convolutional-neural-network-based detection models can produce detection results in real time with high accuracy. Moreover, because the scheme uses detection models based on convolutional neural networks, the user does not need to carry a wearable sensor as in traditional detection methods; it is easy to use, its computation does not occupy many resources, its cost is relatively low, and it is easy to popularize.
Therefore, the technical solution of this embodiment solves the technical problems of low detection accuracy, inconvenient use, high cost, high computational resource occupation, and difficulty of popularization in existing human fall detection methods.
Brief description of the drawings
The accompanying drawings described herein are provided for further understanding of the disclosure and form part of this application. The exemplary embodiments of the disclosure and their descriptions are used to explain the disclosure and do not improperly limit the disclosure. In the drawings:
Fig. 1 is a flow diagram of the fall detection method according to Embodiment 1 of the present disclosure;
Fig. 2 is a parameter table of the convolutional neural network used by the human-body detection method according to Embodiment 1 of the present disclosure;
Fig. 3 is a schematic diagram of how the human-body detection method according to Embodiment 1 of the present disclosure processes feature maps;
Fig. 4 is a schematic diagram of how the human-body detection method according to Embodiment 1 of the present disclosure processes feature maps;
Fig. 5 is a parameter table of the convolutional neural network used by the fall detection method according to Embodiment 1 of the present disclosure;
Fig. 6 is a schematic diagram of the fall detection apparatus according to Embodiment 2 of the present disclosure;
Fig. 7 is a schematic diagram of the fall detection apparatus according to Embodiment 3 of the present disclosure.
Detailed description of the embodiments
To help those skilled in the art better understand the technical solution of the disclosure, the technical solutions in the embodiments of the disclosure are described clearly and completely below in conjunction with the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the disclosure. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the disclosure without creative effort shall fall within the protection scope of the disclosure.
It should be noted that the terms "first", "second", and so on in the specification, claims, and the above drawings of the disclosure are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the disclosure described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product, or device.
Embodiment 1
Fig. 1 shows a flow diagram of the fall detection method according to this embodiment; Fig. 2 shows a parameter table of the convolutional neural network used by the human-body detection method of this embodiment; Fig. 3 shows a schematic diagram of how the human-body detection method of this embodiment processes feature maps; Fig. 4 shows another schematic diagram of how the human-body detection method of this embodiment processes feature maps; Fig. 5 shows a parameter table of the convolutional neural network used by the fall detection method of this embodiment.
Referring to Fig. 1, the fall detection method of this embodiment comprises the following steps:
S102: using a human-body detection model based on a convolutional neural network, determine, in an image to be detected, a human-body image region containing a human target; and
S104: using a fall detection model based on a convolutional neural network, determine, from the human-body image region, whether the human target is in a fallen state.
Specifically, the human-body detection model based on a convolutional neural network is used to determine, in the image to be detected, the human-body image region containing the human target. Then the fall detection model based on a convolutional neural network is used to determine, from the determined human-body image region, whether the human target in the human-body image region is in a fallen state.
As described in the background section, the current human fall detection methods are mainly sensor-based detection methods, detection methods based on video image analysis, and detection methods based on deep-learning analysis of pictures and video.
Existing sensor-based detection methods mainly rely on a wearable sensor to measure the motion acceleration or angular velocity of the elderly person in real time and then judge from the measured information whether the person has fallen. The shortcoming of such methods is that a wearable sensor must be carried: the equipment cost is high, it is very inconvenient to use and easily lost, the coverage area is small, and the cost of market popularization is also high.
Existing detection methods based on video image analysis obtain moving-target foreground blocks through video background modeling and judge falls from the extracted foreground-block features. Because there are many kinds of moving objects, such as various vehicles and animals, it is impossible to accurately judge whether a person is passing by; the foreground-target feature error is large, causing many false detections and missed detections, the accuracy is low, and commercial standards cannot be reached.
Existing methods based on deep-learning analysis of pictures and video collect scene image data with a camera and determine a target detection area; from the scene image data, they compute with preset logic the instantaneous velocity field of all pixels in the current frame to obtain a pixel motion-velocity field image; they aggregate similar pixels according to the pixel motion-velocity field image and preset similarity conditions to form candidate moving-target areas, and obtain candidate moving targets in those areas; they filter out new moving targets from the candidate moving targets; they identify pedestrian targets from the new moving targets by a deep-learning method; they update the target-tracking list information of the pedestrian targets; and based on the change state of the target-tracking list information and preset judgment conditions, they judge whether a pedestrian target has fallen and raise an alarm. Compared with the earlier methods, this approach shows some improvement, but the algorithm flow is complicated, it consumes considerable resources, and the running time is long, so it is difficult to detect falls and raise alarms in real time; it cannot detect on multiple camera channels at the same time and is difficult to port to mobile terminals, the hardware cost of popularization remains high, and commercial standards are difficult to reach.
In view of the problems in the prior art, the technical solution of this embodiment provides a human fall detection method: a processor uses a human-body detection model based on a convolutional neural network to analyze an image and determine the human-body image region containing a human target, and uses a fall detection model based on a convolutional neural network to determine, from the detected human-body image region, whether the human target is in a fallen state. The technical solution of this application can acquire video data from multiple image-acquisition devices simultaneously and obtain detection results in real time using the convolutional-neural-network-based detection models. The detection neural network of this embodiment learns the visual features of a large number of human-body pictures and the posture features of fallen-human-body pictures, including the visual features of parts such as the hand, elbow joint, shoulder, back, waist, hip, knee joint, and foot, and the relationships between these part features during a fall, so its accuracy is high. Moreover, because the scheme uses detection models based on convolutional neural networks, it requires no sensors, unlike traditional detection methods; it is easy to use, its computation does not occupy many resources, its cost is relatively low, and it is easy to popularize.
Thus, the technical solution of this embodiment solves the technical problems of low detection accuracy, inconvenient use, high cost, high computational resource occupation, and difficulty of popularization in existing human fall detection methods.
In addition, when obtaining the image to be detected, the technical solution of this embodiment can, for example, use an image-acquisition device such as a camera and send the image to be detected to the processor. The image to be detected can, for example, come from video captured by the camera: the processor obtains video data from the video stream and decodes the video data into image-frame data, and the image-frame data is the image to be detected, as sketched below.
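As one possible way to obtain the image to be detected from a camera video stream (the disclosure does not prescribe a specific library; OpenCV is assumed here purely for illustration):

```python
import cv2  # assumption: OpenCV is used for video decoding

def frames_from_stream(stream_url):
    """Yield decoded image-frame data from a video stream, one frame at a time."""
    cap = cv2.VideoCapture(stream_url)
    try:
        while True:
            ok, frame = cap.read()     # decode the next frame of the video stream
            if not ok:
                break                  # stream ended or decoding failed
            yield frame                # frame is the image to be detected
    finally:
        cap.release()
```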
Optionally, the operation of using the human-body detection model based on a convolutional neural network to determine, in the image to be detected, the human-body image region containing the human target comprises: using the human-body detection model to generate, from the image to be detected, multiple vectors corresponding respectively to multiple rectangular-box regions in the image to be detected, wherein each vector includes at least the following information: position information of the corresponding rectangular-box region, size information of the corresponding rectangular-box region, and confidence information indicating that the corresponding rectangular box contains a human target; and determining the rectangular-box region corresponding to the vector with the largest confidence information as the human-body image region.
Specifically, the human-body detection model based on a convolutional neural network generates, from the image to be detected, multiple vectors corresponding respectively to multiple rectangular-box regions in the image to be detected. As a concrete example, referring to Fig. 2, in the network structure of Fig. 2 the input is a 448x448 image, which first passes through convolutional layers to produce a 14x14x256 feature map. It is then upsampled twice to finally produce a 56x56x15 feature map, i.e., fifteen 56x56 feature maps are output. In "56x56x15", "56x56" means that after the original image is reduced by a factor of 8, a 56x56 feature map is generated; the generated feature maps are used to predict targets of different scales, for example three scales: large, medium, and small.
In addition, in the 56x56x15 feature map, the last dimension "15" means (x, y, w, h, Confidence) x 3; that is, the fifteen feature maps are divided into 3 groups, each containing five 56x56 feature maps. Referring to Fig. 3, within each group of feature maps, the elements at the same position (such as the black dots marked at the same position in the figure) represent the elements of one 5-dimensional vector. In this way, each group of feature maps corresponds to 56x56 5-dimensional vectors, so the fifteen feature maps correspond to 3x56x56 5-dimensional vectors.
Here, "x" is the horizontal offset of the left boundary of the rectangular-box region within the grid cell (i, j) in the i-th row and j-th column of the image to be detected, expressed as a multiple of the grid-cell width; "y" is the vertical offset of the upper boundary of the rectangular-box region within the grid cell (i, j), expressed as a multiple of the grid-cell height; "w" is the width of the box in the image to be detected expressed as a multiple of the grid-cell width; "h" is the height of the box expressed as a multiple of the grid-cell height; and "Confidence" is the confidence score of the rectangular-box region in the image to be detected.
Thus, using the human-body detection model, a vector (x, y, w, h, Confidence) is generated from the image to be detected for each rectangular-box region in the image: x and y represent the position information of the corresponding rectangular-box region, w and h represent its size information, and Confidence represents the confidence information that the corresponding rectangular box contains a human target.
Further, the rectangular-box region corresponding to the vector with the highest confidence score in the image to be detected can be determined as the human-body image region. After the human-body image region is determined, the human-body coordinates (x, y, w, h) in the image can be further determined, where x, y, w, and h have the grid-relative meanings defined above.
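Under the coordinate convention just described (offsets and sizes expressed as multiples of the grid-cell width and height), the pixel-space rectangle can be recovered as in the following sketch. The 8-pixel cell size follows from the 448-pixel input being reduced 8 times to a 56x56 grid; the function name and exact decoding rule are assumptions made for illustration.

```python
def decode_box(x, y, w, h, i, j, cell_w=8.0, cell_h=8.0):
    """Convert a grid-relative (x, y, w, h) prediction from grid cell (i, j)
    back to pixel coordinates in the 448x448 image to be detected.

    x, y: offsets of the left/top boundary inside cell (i, j), in cell units.
    w, h: box width/height, in cell units.
    """
    left = (j + x) * cell_w      # column index j locates the cell horizontally
    top = (i + y) * cell_h       # row index i locates the cell vertically
    width = w * cell_w
    height = h * cell_h
    return left, top, width, height
```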
Optionally, the operation of generating multiple vectors corresponding respectively to multiple rectangular-box regions in the image to be detected comprises: using the human-body detection model to generate multiple feature matrices from the image to be detected; and using the elements at the same position of at least some of the multiple feature matrices to construct one of the multiple vectors.
Specifically, the human-body detection model generates multiple feature matrices from the image to be detected. For example, in the network structure of Fig. 2, the input is a 448x448 image, which passes through convolutional layers and two upsampling steps to finally produce a 56x56x15 feature map. In "56x56x15", "56x56" means that after the original image is reduced by a factor of 8, a 56x56 feature map is generated; that is, fifteen 56x56 feature maps (56x56x15) are finally produced. The fifteen feature maps are divided into 3 groups, each containing five 56x56 feature maps. Referring to Fig. 3, within each group of feature maps, the elements at the same position (such as the black dots marked at the same position in the figure) represent the elements of one 5-dimensional vector. In this way, each group of feature maps corresponds to 56x56 5-dimensional vectors, so the fifteen feature maps correspond to 3x56x56 5-dimensional vectors.
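The grouping of the 56x56x15 feature map into 3 groups of 5-dimensional vectors, one vector per grid position per group, can be sketched as follows; the channel ordering (five contiguous channels per group) is an assumption about the in-memory layout made for this sketch.

```python
import numpy as np

def feature_map_to_vectors(feature_map):
    """feature_map: array of shape (56, 56, 15), i.e. fifteen 56x56 feature matrices.

    Splits the fifteen matrices into 3 matrix groups of 5, then reads the elements
    at the same spatial position within a group as one 5-dimensional vector
    (x, y, w, h, confidence).  Returns an array of shape (3, 56, 56, 5).
    """
    h, w, c = feature_map.shape                # expected: 56, 56, 15
    groups = feature_map.reshape(h, w, 3, 5)   # 3 groups x 5 channels per position
    return np.transpose(groups, (2, 0, 1, 3))  # -> (group, row, col, 5)
```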
Optionally, the operation of using the elements at the same position of at least some of the multiple feature matrices to construct one of the multiple vectors comprises: dividing the multiple feature matrices into multiple matrix groups, wherein each matrix group contains the same number of feature matrices; and using the elements at the same position of the feature matrices in the same matrix group to construct one of the multiple vectors.
Specifically, referring to Fig. 4, the fifteen feature maps are divided into 3 groups, each containing five 56x56 feature maps. Within each group of feature maps, the elements at the same position (such as the black dots marked at the same position in the figure) represent the elements of one 5-dimensional vector, so each group of feature maps corresponds to 56x56 5-dimensional vectors. Each group of feature maps predicts targets of a different scale. Referring to Fig. 4, the different scales can, for example, be three scales: large, medium, and small, which can correspond, for example, to prefabricated templates of 28x28, 56x56, and 112x112.
For example, when the rectangular-box region corresponding to the human-body image region is regressed against the 28x28 prefabricated template to obtain its coordinate vector, if the true human-body rectangular box is 25x32, the regressed vector for the rectangular-box region of the human target has a w value of about 0.8928571 and an h value of about 1.142857; in this way the coordinate vector corresponding to the rectangular-box region of the human target can be regressed.
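A worked check of the arithmetic in this example, under the reading that the regression values are the true box size divided by the 28x28 template size (this reading is inferred from the numbers above):

```python
# Worked example: true human-body rectangle 25x32, prefabricated template 28x28.
template_w, template_h = 28.0, 28.0
true_w, true_h = 25.0, 32.0

w_value = true_w / template_w   # 25 / 28 = 0.8928571...
h_value = true_h / template_h   # 32 / 28 = 1.1428571...
print(w_value, h_value)
```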
In this way, the detection of the human target in the image to be detected can be completed.
Optionally, the operation of using the fall detection model based on a convolutional neural network to determine, from the human-body image region, whether the human target is in a fallen state comprises: using the fall detection model based on a convolutional neural network to generate, from the human-body image region, a score value identifying whether the human target has fallen; and when the score value is greater than a predetermined threshold, determining that the human body is in a fallen state.
Specifically, before the fall detection model based on a convolutional neural network generates, from the human-body image region, the score value identifying whether the human target has fallen, the image size is changed for the fall detection model. The way of changing the image size can be, for example, transforming the input of the fall detection model to the model's input size by image interpolation.
Referring to Fig. 5, in the network structure of Fig. 5, the input image of the fall detection neural network is resized by image interpolation to produce a 112x112 input image. After compression by convolutional layers, a fully connected layer, and average pooling, a prediction output value is finally produced by a softmax layer. This output value is the score value used to identify whether the human target has fallen, and its value range is 0 to 1. When judging the fall state, a predetermined threshold can be set, for example 0.5: when the output score value is greater than 0.5, the human body is judged to be in a fallen state; otherwise it is judged not to be in a fallen state.
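A minimal sketch of the inference step just described, assuming the model returns two raw outputs (normal, fallen) before the softmax layer; the interpolation call, the two-class layout, and the function name are assumptions made for illustration.

```python
import numpy as np
import cv2  # assumption: OpenCV interpolation is used to change the image size

def classify_fall(region, fall_model, threshold=0.5):
    """Resize the human-body image region to the 112x112 model input,
    run the fall detection model, and apply the 0.5 decision threshold."""
    inp = cv2.resize(region, (112, 112), interpolation=cv2.INTER_LINEAR)
    logits = fall_model(inp)                 # e.g. raw scores for (normal, fallen)
    exp = np.exp(logits - np.max(logits))    # numerically stable softmax
    probs = exp / exp.sum()
    fall_score = float(probs[1])             # score value in [0, 1] for "fallen"
    return fall_score > threshold, fall_score
```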
Optionally, the operation of using the fall detection model based on a convolutional neural network to generate, from the human-body image region, the score value identifying whether the human target has fallen comprises: using the neural network structure in the fall detection model to extract image features from the human-body image region; and using the classification structure in the fall detection model to generate, from the extracted image features, the score value identifying whether the human target has fallen.
Specifically, referring to Fig. 5, the fall detection model extracts the image features in the human-body image region through the convolutional layers of its neural network structure, and then uses the classification structure in the fall detection model to generate, from the extracted image features, the score value identifying whether the human target has fallen.
Optionally, the operation of using the fall detection model based on a convolutional neural network to generate, from the human-body image region, the score value identifying whether the human target has fallen further comprises converting the score value into a score value in probability form.
Specifically, after the fall detection model based on a convolutional neural network extracts the image features of the human-body image region through its convolutional layers and passes them through the fully connected layer, the softmax layer converts the resulting score value into a score value in probability form, whose value range is 0 to 1. Whether the human target is in a fallen state can then be determined from this score value.
In addition, when constructing the human-body detection model of this embodiment, a sufficient amount of first human-body image data is collected first, where the first human-body image data covers human bodies of different nationalities, skin colors, sexes, ages, postures, and scenes; the first human-body image data is then cleaned, and the minimum enclosing rectangle coordinates (x, y, w, h) of the human bodies in the first human-body image data are annotated; finally, the training set, validation set, and test set of the first human-body image data are organized.
Constructing the human-body detection model of this embodiment further includes: first collecting new first human-body image data, where the new first human-body image data covers human bodies of different nationalities, skin colors, sexes, ages, postures, and scenes; then detecting the new first human-body image data with the model trained on the above training set, and re-annotating the new first human-body image data for which the detection results are inaccurate; and finally organizing the training set, validation set, and test set of the new first human-body image data, thereby obtaining the human-body detection model of this embodiment.
In addition, when constructing the fall detection model of this embodiment, a sufficient amount of second human-body image data is collected first, where the second human-body image data includes human bodies in fall postures; the minimum enclosing rectangle coordinates (x, y, w, h) of the human bodies in the second human-body image data are annotated, and the pictures inside the minimum enclosing rectangles are cropped out; the cropped pictures inside the minimum enclosing rectangles are given classification annotations, for example 0 for a normal picture and 1 for a fall picture; finally, the training set, validation set, and test set of the second human-body image data are organized.
Constructing the fall detection model of this embodiment further includes: collecting new second human-body image data, where the new second human-body image data includes human bodies in fall postures and human bodies in non-fall postures; using the model trained on the above training set to classify the new second human-body image data, and re-annotating the new second human-body image data whose classification results are inaccurate; and finally organizing the training set, validation set, and test set of the new second human-body image data, thereby obtaining the fall detection model of this embodiment.
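For illustration, the labeling procedure for the fall detection training data described above (crop the minimum enclosing rectangle, then assign class 0 for normal and 1 for fallen) could be sketched as follows; the annotation record format is an assumption made for this sketch.

```python
def build_fall_samples(image, annotations):
    """annotations: list of dicts like {"box": (x, y, w, h), "fallen": bool},
    giving the minimum enclosing rectangle of each annotated human body.

    Returns (cropped_region, label) pairs: label 0 = normal picture, 1 = fall picture.
    """
    samples = []
    for ann in annotations:
        x, y, w, h = ann["box"]
        crop = image[y:y + h, x:x + w]     # picture inside the minimum enclosing rectangle
        label = 1 if ann["fallen"] else 0  # classification annotation
        samples.append((crop, label))
    return samples
```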
Thus, according to the technical solution of this embodiment, a processor uses a human-body detection model based on a convolutional neural network to analyze an image and determine the human-body image region containing a human target, and uses a fall detection model based on a convolutional neural network to determine, from the detected human-body image region, whether the human target is in a fallen state. The technical solution of this application can acquire video data from multiple image-acquisition devices simultaneously, and the convolutional-neural-network-based detection models can produce detection results in real time with high accuracy. Moreover, because the scheme uses detection models based on convolutional neural networks, it requires no sensors, unlike traditional detection methods; it is easy to use, its computation does not occupy many resources, its cost is relatively low, and it is easy to popularize.
In addition, for ease of understanding, the steps of the technical solution of this embodiment are explained below in chronological order.
This embodiment provides the following technical solution:
Step 1: construct the human-body detection model of this embodiment. First, collect a sufficient amount of first human-body image data, where the first human-body image data covers human bodies of different nationalities, skin colors, sexes, ages, postures, and scenes; then clean the first human-body image data and annotate the minimum enclosing rectangle coordinates (x, y, w, h) of the human bodies in the first human-body image data; finally, organize the training set, validation set, and test set of the first human-body image data.
Step 2: construct the fall detection model of this embodiment. First, collect a sufficient amount of second human-body image data, where the second human-body image data includes human bodies in fall postures; annotate the minimum enclosing rectangle coordinates (x, y, w, h) of the human bodies in the second human-body image data, and crop out the pictures inside the minimum enclosing rectangles; give classification annotations to the cropped pictures inside the minimum enclosing rectangles; finally, organize the training set, validation set, and test set of the second human-body image data.
Step 3: build the human-body detection neural network, which regresses the human-body coordinates (x, y, w, h), and train the model using a combination of first-order and second-order moment estimates of the gradient to improve accuracy. The parameters of the neural network structure used are shown in Fig. 2. The input is a 448x448 image, which passes through convolutional layers and two upsampling steps to finally produce a 56x56x15 feature map. In "56x56x15", "56x56" means that after the original image is reduced by a factor of 8, a 56x56 feature map is generated; the generated feature maps are used to predict targets of different scales, for example three scales: large, medium, and small.
The last dimension "15" means (x, y, w, h, Confidence) x 3; that is, the fifteen feature maps are divided into 3 groups, each containing five 56x56 feature maps. Here, "x" is the horizontal offset of the left boundary of the rectangular-box region within the grid cell (i, j) in the i-th row and j-th column of the image to be detected, expressed as a multiple of the grid-cell width; "y" is the vertical offset of the upper boundary of the rectangular-box region within the grid cell (i, j), expressed as a multiple of the grid-cell height; "w" is the width of the box in the image to be detected expressed as a multiple of the grid-cell width; "h" is the height of the box expressed as a multiple of the grid-cell height; and "Confidence" is the confidence score of the rectangular-box region in the image to be detected.
Thus, the rectangular-box region with the highest target confidence score is chosen according to the value of Confidence, and the coordinates (x, y, w, h) of the vector corresponding to the rectangular-box region of the human target can be further computed.
Step 4: build the fall detection neural network, which performs fall-state detection on the to-be-detected pictures containing a human target output by the human-body detection network, i.e., detects whether the state is a fallen state, again training the model using a combination of first-order and second-order moment estimates of the gradient to improve accuracy. The parameters of the neural network structure used are shown in Fig. 5. The input image of the fall detection neural network is resized by image interpolation and, after compression by convolutional layers, a fully connected layer, and average pooling, a prediction output value is finally produced by a softmax layer. This output value is the score value used to identify whether the human target has fallen, and its value range is 0 to 1. Whether the human target is in a fallen state can then be determined from this score value.
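The "combination of first-order and second-order moment estimates of the gradient" used to train both networks corresponds to an Adam-style update. A minimal sketch of one such parameter update, written out explicitly; the hyper-parameter values shown are common defaults and are assumptions here, not values stated in the disclosure.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One parameter update combining the first-moment estimate m and the
    second-moment estimate v of the gradient, as used to train the models."""
    m = beta1 * m + (1 - beta1) * grad            # first-order moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-order moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```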
Step 5: collect new first human-body image data, where the new first human-body image data covers human bodies of different nationalities, skin colors, sexes, ages, postures, and scenes; then detect the new first human-body image data with the human-body detection model trained in Step 3, and re-annotate the new first human-body image data for which the detection results are inaccurate; finally, organize the training set, validation set, and test set of the new first human-body image data, until the accuracy meets the requirement.
Step 6: repeat Step 5 until the accuracy on millions of pictures meets the requirement, then stop training.
Step 7: collect new second human-body image data, where the new second human-body image data includes human bodies in fall postures and human bodies in non-fall postures; use the fall detection model trained in Step 4 to classify the new second human-body image data, and re-annotate the new second human-body image data whose classification results are inaccurate; finally, organize the training set, validation set, and test set of the new second human-body image data, until the accuracy meets the requirement.
Step 8: repeat Step 7 until the test accuracy on millions of pictures meets the requirement, then stop training.
Step 9: obtain data from the video stream and decode it into image-frame data with a decoder module.
Step 10: predict the minimum enclosing rectangle of the human body in the image-frame data with the pre-trained human-body detection model, thereby completing the detection of the human target.
Step 11: with the pre-trained fall detection model, detect whether the human target in the image data containing a human target is in a fallen state, and output the detection result; the detection result can, for example, be sent to a terminal for display. Steps 9 to 11 are sketched together below.
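Putting Steps 9 to 11 together, a hedged end-to-end sketch that reuses the helper functions sketched earlier in this description (`frames_from_stream` and `classify_fall`); the reporting callback is an assumption for illustration.

```python
def run_fall_detection(stream_url, human_detector, fall_classifier, report):
    """Decode frames from the video stream, detect the human body, classify
    its state with the pre-trained fall detection model, and output the result."""
    for frame in frames_from_stream(stream_url):               # Step 9: decode frames
        box = human_detector(frame)                            # Step 10: human detection
        if box is None:
            continue
        x, y, w, h = box
        region = frame[y:y + h, x:x + w]
        fallen, score = classify_fall(region, fall_classifier) # Step 11: fall state
        report(frame, box, fallen, score)                      # e.g. send result to a terminal
```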
In addition, according to a second aspect of this embodiment, a storage medium is provided. The storage medium includes a stored program, wherein, when the program runs, a processor executes the method described in any one of the above.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc), including several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention.
Embodiment 2
Fig. 6 shows a schematic diagram of the fall detection apparatus of this embodiment. The fall detection apparatus 600 of this embodiment corresponds to the method according to Embodiment 1.
Referring to Fig. 6, the apparatus 600 includes: a human-body image region determination module 610, configured to use a human-body detection model based on a convolutional neural network to determine, in an image to be detected, a human-body image region containing a human target; and a fall state determination module 620, configured to use a fall detection model based on a convolutional neural network to determine, from the human-body image region, whether the human target is in a fallen state.
Optionally, the human-body image region determination module 610 includes: a vector generation submodule, configured to use the human-body detection model to generate, from the image to be detected, multiple vectors corresponding respectively to multiple rectangular-box regions in the image to be detected, wherein each vector includes at least the following information: position information of the corresponding rectangular-box region, size information of the corresponding rectangular-box region, and confidence information indicating that the corresponding rectangular box contains a human target; and a human-body image region determination submodule, configured to determine the rectangular-box region corresponding to the vector with the largest confidence information as the human-body image region.
Optionally, the vector generation submodule includes: a feature matrix generation unit, configured to use the human-body detection model to generate multiple feature matrices from the image to be detected; and a vector construction unit, configured to use the elements at the same position of at least some of the multiple feature matrices to construct one of the multiple vectors.
Optionally, the vector construction unit includes: a matrix group division subunit, configured to divide the multiple feature matrices into multiple matrix groups, wherein each matrix group contains the same number of feature matrices; and a vector construction subunit, configured to use the elements at the same position of the feature matrices in the same matrix group to construct one of the multiple vectors.
Optionally, the fall state determination module 620 includes: a score value generation submodule, configured to use the fall detection model based on a convolutional neural network to generate, from the human-body image region, a score value identifying whether the human target has fallen; and a fall state judgment submodule, configured to determine that the human body is in a fallen state when the score value is greater than a predetermined threshold.
Optionally, the score value generation submodule includes: an image feature extraction unit, configured to use the neural network structure in the fall detection model to extract image features from the human-body image region; and a score value generation unit, configured to use the classification structure in the fall detection model to generate, from the extracted image features, the score value identifying whether the human target has fallen.
Optionally, the score value generation unit includes a score value conversion subunit, configured to convert the score value into a score value in probability form.
Thus, according to the technical solution of this embodiment, a processor uses a human-body detection model based on a convolutional neural network to analyze an image and determine the human-body image region containing a human target, and uses a fall detection model based on a convolutional neural network to determine, from the detected human-body image region, whether the human target is in a fallen state. The technical solution of this application can acquire video data from multiple image-acquisition devices simultaneously, and the convolutional-neural-network-based detection models can produce detection results in real time with high accuracy. Moreover, because the scheme uses detection models based on convolutional neural networks, it requires no sensors, unlike traditional detection methods; it is easy to use, its computation does not occupy many resources, its cost is relatively low, and it is easy to popularize.
Therefore, the technical solution of this embodiment solves the technical problems of low detection accuracy, inconvenient use, high cost, high computational resource occupation, and difficulty of popularization in existing human fall detection methods.
Embodiment 3
Fig. 7 shows a schematic diagram of the fall detection apparatus of this embodiment. The fall detection apparatus 700 of this embodiment corresponds to the method according to Embodiment 1.
Referring to Fig. 7, the apparatus 700 includes: a processor 710; and a memory 720 connected to the processor 710, configured to provide the processor 710 with instructions for handling the following processing steps: using a human-body detection model based on a convolutional neural network, determine, in an image to be detected, a human-body image region containing a human target; and using a fall detection model based on a convolutional neural network, determine, from the human-body image region, whether the human target is in a fallen state.
Optionally, the memory 720 is further configured to provide the processor 710 with instructions for handling the following processing steps: using the human-body detection model, generate, from the image to be detected, multiple vectors corresponding respectively to multiple rectangular-box regions in the image to be detected, wherein each vector includes at least the following information: position information of the corresponding rectangular-box region, size information of the corresponding rectangular-box region, and confidence information indicating that the corresponding rectangular box contains a human target; and determine the rectangular-box region corresponding to the vector with the largest confidence information as the human-body image region.
Optionally, the memory 720 is further configured to provide the processor 710 with instructions for handling the following processing steps: using the human-body detection model, generate multiple feature matrices from the image to be detected; and use the elements at the same position of at least some of the multiple feature matrices to construct one of the multiple vectors.
Optionally, the memory 720 is further configured to provide the processor 710 with instructions for handling the following processing steps: divide the multiple feature matrices into multiple matrix groups, wherein each matrix group contains the same number of feature matrices; and use the elements at the same position of the feature matrices in the same matrix group to construct one of the multiple vectors.
Optionally, the memory 720 is further configured to provide the processor 710 with instructions for handling the following processing steps: using the fall detection model based on a convolutional neural network, generate, from the human-body image region, a score value identifying whether the human target has fallen; and when the score value is greater than a predetermined threshold, determine that the human body is in a fallen state.
Optionally, the memory 720 is further configured to provide the processor 710 with instructions for handling the following processing steps: using the neural network structure in the fall detection model, extract image features from the human-body image region; and using the classification structure in the fall detection model, generate, from the extracted image features, the score value identifying whether the human target has fallen.
Optionally, the memory 720 is further configured to provide the processor 710 with instructions for handling the following processing step: convert the score value into a score value in probability form.
Thus, according to the technical solution of this embodiment, a processor uses a human-body detection model based on a convolutional neural network to analyze an image and determine the human-body image region containing a human target, and uses a fall detection model based on a convolutional neural network to determine, from the detected human-body image region, whether the human target is in a fallen state. The technical solution of this application can acquire video data from multiple image-acquisition devices simultaneously, and the convolutional-neural-network-based detection models can produce detection results in real time with high accuracy. Moreover, because the scheme uses detection models based on convolutional neural networks, it requires no sensors, unlike traditional detection methods; it is easy to use, its computation does not occupy many resources, its cost is relatively low, and it is easy to popularize.
Therefore, the technical solution of this embodiment solves the technical problems of low detection accuracy, inconvenient use, high cost, high computational resource occupation, and difficulty of popularization in existing human fall detection methods.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely exemplary; for example, the division of the units is only a logical functional division, and there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of units or modules may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A fall detection method, characterized by comprising the following steps:
using a human detection model based on a convolutional neural network to determine, in an image to be detected, a human body image region containing a human body target; and
using a fall detection model based on a convolutional neural network to determine, according to the human body image region, whether the human body target is in a falling state.
2. The fall detection method according to claim 1, characterized in that the operation of using the human detection model based on a convolutional neural network to determine, in the image to be detected, the human body image region containing the human body target comprises:
using the human detection model to generate, according to the image to be detected, a plurality of vectors respectively corresponding to a plurality of rectangular box regions in the image to be detected, wherein each vector includes at least the following information: position information of the corresponding rectangular box region, size information of the corresponding rectangular box region, and confidence information indicating whether the corresponding rectangular box contains a human body target; and
determining the rectangular box region corresponding to the vector with the largest confidence information as the human body image region.
3. The fall detection method according to claim 2, characterized in that the operation of generating the plurality of vectors respectively corresponding to the plurality of rectangular box regions in the image to be detected comprises:
generating a plurality of feature matrices according to the image to be detected by using the human detection model; and
constructing one of the plurality of vectors from the elements at the same position of at least a part of the plurality of feature matrices.
4. The fall detection method according to claim 3, characterized in that the operation of constructing one of the plurality of vectors from the elements at the same position of at least a part of the plurality of feature matrices comprises:
dividing the plurality of feature matrices into a plurality of matrix groups, wherein each matrix group contains the same number of feature matrices; and
constructing one of the plurality of vectors from the elements at the same position of the feature matrices in a same matrix group.
5. The fall detection method according to claim 1, characterized in that the operation of using the fall detection model based on a convolutional neural network to determine, according to the human body image region, whether the human body target is in a falling state comprises:
using the fall detection model based on a convolutional neural network to generate, according to the human body image region, a score value for identifying whether the human body target has fallen; and
determining that the human body is in a falling state when the score value is greater than a predetermined threshold.
6. The fall detection method according to claim 5, characterized in that the operation of using the fall detection model based on a convolutional neural network to generate, according to the human body image region, the score value for identifying whether the human body target has fallen comprises:
extracting image features of the human body image region by using a neural network structure in the fall detection model; and
generating, by using a classification structure in the fall detection model and according to the extracted image features, the score value for identifying whether the human body target has fallen.
7. The fall detection method according to claim 6, characterized in that the operation of using the fall detection model based on a convolutional neural network to generate, according to the human body image region, the score value for identifying whether the human body target has fallen further comprises converting the score value into a score value in probability form.
8. A storage medium, characterized in that the storage medium includes a stored program, wherein, when the program runs, a processor performs the method according to any one of claims 1 to 7.
9. A fall detection device, characterized by comprising:
a human body image region determination module, configured to use a human detection model based on a convolutional neural network to determine, in an image to be detected, a human body image region containing a human body target; and
a falling state determination module, configured to use a fall detection model based on a convolutional neural network to determine, according to the human body image region, whether the human body target is in a falling state.
10. A fall detection device, characterized by comprising:
a processor; and
a memory, connected to the processor and configured to provide the processor with instructions for the following processing steps:
using a human detection model based on a convolutional neural network to determine, in an image to be detected, a human body image region containing a human body target; and
using a fall detection model based on a convolutional neural network to determine, according to the human body image region, whether the human body target is in a falling state.
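As a further illustrative aid (again, not part of the claimed subject matter), the sketch below walks through the mechanism described in claims 2 to 7 under the assumption of a single-shot, YOLO-style detection head: groups of feature matrices are read out position by position into (x, y, w, h, confidence) vectors, the highest-confidence rectangle is taken as the human body image region, and a raw fall score is converted to a probability before thresholding. The NumPy helpers candidate_vectors, pick_human_region, and is_fall, along with all shapes, group sizes, and the 0.5 threshold, are hypothetical examples.

import numpy as np

def candidate_vectors(feature_maps, group_size=5):
    """Split C feature matrices of shape (H, W) into groups of `group_size`
    matrices and build one candidate vector per group per spatial position."""
    c, h, w = feature_maps.shape
    assert c % group_size == 0, "channel count must be a multiple of the group size"
    groups = feature_maps.reshape(c // group_size, group_size, h, w)
    # -> (num_groups * H * W, group_size); each row is (x, y, w, h, confidence)
    return groups.transpose(0, 2, 3, 1).reshape(-1, group_size)

def pick_human_region(vectors):
    """Return the (x, y, w, h) and confidence of the highest-confidence vector."""
    best = vectors[np.argmax(vectors[:, 4])]
    return best[:4], best[4]

def is_fall(raw_score, threshold=0.5):
    """Map a raw classifier score to a probability with a sigmoid and threshold it."""
    prob = 1.0 / (1.0 + np.exp(-raw_score))
    return prob > threshold, prob

if __name__ == "__main__":
    maps = np.random.rand(10, 7, 7).astype(np.float32)   # 2 groups of 5 feature matrices
    box, conf = pick_human_region(candidate_vectors(maps))
    fallen, prob = is_fall(raw_score=1.2)
    print(box, conf, fallen, round(prob, 3))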
CN201910621095.1A 2019-07-10 2019-07-10 A kind of fall detection method, device, storage medium Pending CN110443150A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910621095.1A CN110443150A (en) 2019-07-10 2019-07-10 A kind of fall detection method, device, storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910621095.1A CN110443150A (en) 2019-07-10 2019-07-10 A kind of fall detection method, device, storage medium

Publications (1)

Publication Number Publication Date
CN110443150A true CN110443150A (en) 2019-11-12

Family

ID=68430120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910621095.1A Pending CN110443150A (en) 2019-07-10 2019-07-10 A kind of fall detection method, device, storage medium

Country Status (1)

Country Link
CN (1) CN110443150A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103211599A (en) * 2013-05-13 2013-07-24 桂林电子科技大学 Method and device for monitoring tumble
CN105279483A (en) * 2015-09-28 2016-01-27 华中科技大学 Fall-down behavior real-time detection method based on depth image
CN107220604A (en) * 2017-05-18 2017-09-29 清华大学深圳研究生院 A kind of fall detection method based on video
CN108154113A (en) * 2017-12-22 2018-06-12 重庆邮电大学 Tumble event detecting method based on full convolutional network temperature figure
CN108062531A (en) * 2017-12-25 2018-05-22 南京信息工程大学 A kind of video object detection method that convolutional neural networks are returned based on cascade
CN108090458A (en) * 2017-12-29 2018-05-29 南京阿凡达机器人科技有限公司 Tumble detection method for human body and device
CN108805093A (en) * 2018-06-19 2018-11-13 华南理工大学 Escalator passenger based on deep learning falls down detection algorithm
CN109299703A (en) * 2018-10-17 2019-02-01 思百达物联网科技(北京)有限公司 The method, apparatus and image capture device counted to mouse feelings
CN109800860A (en) * 2018-12-28 2019-05-24 北京工业大学 A kind of Falls in Old People detection method of the Community-oriented based on CNN algorithm

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021104007A1 (en) * 2019-11-26 2021-06-03 京东数科海益信息科技有限公司 Method and device for animal state monitoring, electronic device, and storage medium
CN111241913A (en) * 2019-12-19 2020-06-05 北京文安智能技术股份有限公司 Method, device and system for detecting falling of personnel
CN111461042A (en) * 2020-04-07 2020-07-28 中国建设银行股份有限公司 Fall detection method and system
CN111507252A (en) * 2020-04-16 2020-08-07 上海眼控科技股份有限公司 Human body falling detection device and method, electronic terminal and storage medium
CN113642361A (en) * 2020-05-11 2021-11-12 杭州萤石软件有限公司 Method and equipment for detecting falling behavior
WO2021227874A1 (en) * 2020-05-11 2021-11-18 杭州萤石软件有限公司 Falling behaviour detection method and device
CN113642361B (en) * 2020-05-11 2024-01-23 杭州萤石软件有限公司 Fall behavior detection method and equipment
CN113221661A (en) * 2021-04-14 2021-08-06 浪潮天元通信信息系统有限公司 Intelligent human body tumbling detection system and method
CN117636404A (en) * 2024-01-26 2024-03-01 贵州信邦富顿科技有限公司 Fall detection method and system based on non-wearable equipment
CN117636404B (en) * 2024-01-26 2024-04-16 贵州信邦富顿科技有限公司 Fall detection method and system based on non-wearable equipment

Similar Documents

Publication Publication Date Title
CN110443150A (en) A kind of fall detection method, device, storage medium
CN111881705B (en) Data processing, training and identifying method, device and storage medium
JP6522060B2 (en) Object recognition device, classification tree learning device and operation method thereof
CN105740780A (en) Method and device for human face in-vivo detection
CN105022982B (en) Hand motion recognition method and apparatus
CN110175993A (en) A kind of Faster R-CNN pulmonary tuberculosis sign detection system and method based on FPN
CN107808143A (en) Dynamic gesture identification method based on computer vision
CN109635875A (en) A kind of end-to-end network interface detection method based on deep learning
CN109191588A (en) Move teaching method, device, storage medium and electronic equipment
CN110532874B (en) Object attribute recognition model generation method, storage medium and electronic device
CN109670380A (en) Action recognition, the method and device of pose estimation
CN115661943B (en) Fall detection method based on lightweight attitude assessment network
CN113111767A (en) Fall detection method based on deep learning 3D posture assessment
WO2017161734A1 (en) Correction of human body movements via television and motion-sensing accessory and system
CN112861723B (en) Sports action recognition counting method and device based on human body gesture recognition and computer readable storage medium
CN110298279A (en) A kind of limb rehabilitation training householder method and system, medium, equipment
CN105912991A (en) Behavior identification method based on 3D point cloud and key bone nodes
CN110084192A (en) Quick dynamic hand gesture recognition system and method based on target detection
KR20180110443A (en) Apparatus and method for providing calorie information
CN110321820A (en) A kind of sight drop point detection method based on contactless device
US20220415091A1 (en) Ml model arrangement and method for evaluating motion patterns
CN111883229A (en) Intelligent movement guidance method and system based on visual AI
CN115294660B (en) Body-building action recognition model, training method of model and body-building action recognition method
Ferreira et al. Deep learning approaches for workout repetition counting and validation
CN103455826B (en) Efficient matching kernel body detection method based on rapid robustness characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191112