CN107657244B - Human body falling behavior detection system based on multiple cameras and detection method thereof - Google Patents

Human body falling behavior detection system based on multiple cameras and detection method thereof

Info

Publication number
CN107657244B
Authority
CN
China
Prior art keywords
human body
detection
image
behavior
multiple cameras
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710952871.7A
Other languages
Chinese (zh)
Other versions
CN107657244A (en)
Inventor
钱惠敏
丁彬
周军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201710952871.7A
Publication of CN107657244A
Application granted
Publication of CN107657244B
Expired - Fee Related
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/25 - Fusion techniques
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training


Abstract

The invention discloses a human body falling behavior detection method based on multiple cameras, which comprises the following steps: acquiring surveillance videos from multiple cameras, detecting and classifying moving human targets, extracting contour depth map and centroid motion rate features, detecting falls with a support vector machine combined classifier, and fusing the detection results of the multiple cameras. When the system detects a human fall, it immediately sends an alarm signal to the monitoring terminal. By using multiple cameras, the invention provides intelligent home care for elderly people living alone, avoids false detections and missed detections caused by occlusion, achieves high detection accuracy, and the algorithm runs in real time.

Description

Human body falling behavior detection system based on multiple cameras and detection method thereof
Technical Field
The invention relates to the technical field of intelligent video surveillance and computer vision, and in particular to a human body falling behavior detection method based on a multi-camera intelligent video surveillance system, and to a detection system implementing the method.
Background
At present, China has entered an aging society, and the proportion of elderly people living alone continues to rise owing to socioeconomic development, increasingly pronounced population mobility, the shrinking of family structures, and similar factors. The safe care of the elderly is a problem of close concern to both families and society; among the risks, falls are the leading cause of injury in the elderly.
According to the sensors used, current human fall detection technologies can be divided into three categories: fall detection based on wearable sensors, fall detection based on sensors deployed in the environment, and fall detection based on visual sensors.
Wearable sensors have the advantages of simple setup and operation and wide applicability, but they affect body movement when worn, and the elderly often forget to wear them. Sensors deployed in the environment need not be worn and do not hinder the activities of the elderly, but they have a higher false detection rate. With the popularization of cameras, the installation cost of visual sensors has fallen; being contactless, they place no burden on the elderly, and they can additionally record video, which allows the cause of a fall to be analyzed accurately during later diagnosis and treatment.
A video surveillance system based on a single visual sensor can miss detections when objects in the home environment occlude the target. Therefore, a more complete intelligent care video surveillance system for elderly people living alone needs to be invented, one that automatically detects human falls and raises an alarm.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a human body falling behavior detection system based on multiple cameras and a detection method thereof.
In order to solve the above technical problems, the invention provides a human body falling behavior detection system based on multiple cameras, characterized by comprising a multi-camera video signal acquisition module, a human body detection and identification module, a feature extraction module, a falling behavior detection module and an alarm module;
the multi-camera video signal acquisition module is used for shooting and recording surveillance videos from different angles through multiple cameras installed at different positions of the monitored site, and for outputting video image frames to the human body detection and identification module;
the human body detection and identification module is used for detecting moving targets in each video image frame, identifying human targets among the moving targets, and inputting the resulting human body binary image sequence to the feature extraction module;
the feature extraction module is used for obtaining, from the human body binary image sequence, a space-time contour evolution diagram and a horizontal centroid motion rate feature over N consecutive frames, defining a system of two-dimensional Poisson equations on the non-zero element region of the space-time contour evolution diagram, solving the system to obtain a contour depth map, and extracting the directional gradient histogram features of the contour depth map;
the falling behavior detection module is used for performing behavior classification detection with a support vector machine combined classifier, based on the directional gradient histogram features and the horizontal centroid motion rate features obtained by the feature extraction module, to obtain a human behavior detection result that is input to the alarm module;
and the alarm module is used for outputting alarm information when, among the human behavior detection results obtained from all the cameras, the detection result of any camera is a falling behavior.
Correspondingly, the invention also provides a human body falling behavior detection method based on the multiple cameras, which is characterized by comprising the following steps of:
step S1, acquiring video image frames shot by each camera;
step S2, detecting moving objects of each video image frame, identifying human body objects from the moving objects, and obtaining a human body binary image sequence;
step S3, based on the obtained human body binary image sequence, for N consecutive frames of images {I(·,t), t=1,…,N}, obtaining a space-time contour evolution diagram and the horizontal centroid motion rate feature v_h(t);
Step S4, defining a two-dimensional Poisson equation set on a non-zero element region of the space-time contour evolution diagram, solving the equation set to obtain a contour depth diagram, and extracting the directional gradient histogram feature of the contour depth diagram;
step S5, based on the extracted histogram feature of the directional gradient and the feature of the centroid movement rate in the horizontal direction, carrying out behavior classification detection by using a support vector machine combined classifier to obtain a human behavior detection result;
and step S6, obtaining the human behavior detection results of all the cameras based on steps S2 to S5, and outputting alarm information when the detection result of any camera is a falling behavior.
Further, in step S2, a background modeling and updating method based on a Gaussian mixture model is used to detect moving targets in each video image frame.
Further, in step S3, the specific calculation process for obtaining the space-time contour evolution diagram and the centroid motion rate feature is as follows:
(3.1) for N consecutive frames of human body binary images {I(·,t), t=1,…,N}, calculating the coordinates (x_c(t), y_c(t)) of the human body centroid in each frame; and calculating the centroid motion rate v_h(t) = S·|x_c(t) - x_c(t-1)|, where S is the ratio of the human body height in the image at time t to the maximum human body height over the N frames;
(3.2) translating the human body target in each frame of image I(·,t) so that its centroid lies at the same position, thereby obtaining the translated image sequence {Ī(·,t), t=1,…,N};
(3.3) based on the translated image sequence {Ī(·,t)}, calculating the space-time contour evolution diagram E(x,y).
Further, the space-time contour evolution diagram is calculated as E(x,y) = (1/N)·Σ_{t=1}^{N} Ī(x,y,t), i.e., the average over time of the translated binary images.
Further, in step S4, a Gauss-Seidel iterative algorithm is used to solve the system of discrete Poisson equations.
Further, in step S4, the process of calculating the histogram of directional gradients from the contour depth map is as follows:
firstly, resizing the contour depth map (SDI) to a standard size, denoting the resized image by g, and calculating the gradient magnitude G(x,y) and gradient direction O(x,y) at each pixel;
then, dividing the image g into cell units of m × m pixels each, and computing a gradient direction histogram in each cell unit by weighted voting to obtain a representation vector per cell unit;
then, grouping n × n cell units into a block region, and concatenating the cell-unit representation vectors within the block region to obtain a normalized representation vector of the block region;
and finally, stacking the block-region representation vectors to obtain the feature vector corresponding to the SDI-HOG.
Further, in step S5, the multi-class support vector machine combined classifier is composed of three support vector machine classifiers {SVM1, SVM2, SVM3}.
Compared with the prior art, the invention has the following beneficial effects:
(1) falling behavior is detected using directional gradient histogram features computed from the contour depth map, and the hierarchical classification method based on a combined classifier reduces detection difficulty, increases computation speed, and improves detection accuracy;
(2) multiple cameras are used for fused monitoring, so falling behavior can be detected accurately even when the target is occluded.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a human body binary image sequence acquired in an embodiment of the present invention;
FIG. 3 is the space-time contour evolution diagram obtained by processing the image sequence shown in FIG. 2;
FIG. 4 is the contour depth map obtained by processing the space-time contour evolution diagram shown in FIG. 3;
FIG. 5 is a block diagram of a combined classifier.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
The human body falling behavior detection system based on multiple cameras disclosed by the invention comprises a multi-camera video signal acquisition module, a human body detection and identification module, a feature extraction module, a falling behavior detection module and an alarm module;
the multi-camera video signal acquisition module is used for shooting and recording surveillance videos from different angles through multiple cameras installed at different positions of the monitored site, and for outputting video image frames to the human body detection and identification module;
the human body detection and identification module is used for detecting moving targets in each video image frame, identifying human targets among the moving targets, and inputting the resulting human body binary image sequence to the feature extraction module;
the feature extraction module is used for obtaining, from the human body binary image sequence, a space-time contour evolution diagram and a horizontal centroid motion rate feature over N consecutive frames, defining a system of two-dimensional Poisson equations on the non-zero element region of the space-time contour evolution diagram, solving the system to obtain a contour depth map, and extracting the directional gradient histogram features of the contour depth map;
the falling behavior detection module is used for performing behavior classification detection with a support vector machine combined classifier, based on the directional gradient histogram features and the horizontal centroid motion rate features obtained by the feature extraction module, to obtain a human behavior detection result that is input to the alarm module;
and the alarm module is used for outputting alarm information when, among the human behavior detection results obtained from all the cameras, the detection result of any camera is a falling behavior.
Correspondingly, the flow of the human body falling behavior detection method based on multiple cameras is shown in fig. 1 and is divided into an offline training flow and an online testing flow. Each step is implemented in the same way in offline training and in online detection; taking the online testing flow as an example, the method comprises the following steps:
step S1, acquiring video image frames shot by each camera;
the method comprises the steps of respectively shooting and recording monitoring videos through a plurality of cameras arranged at different positions of a monitoring site, and obtaining video images at all viewing angles, wherein the size of a video image frame obtained by the camera at each viewing angle is 320 pixels multiplied by 240 pixels.
Step S2, detecting moving objects of each video image frame, identifying human body objects from the moving objects, and obtaining a human body binary image sequence;
In a home environment, the background is usually fixed or changes only slightly, and illumination change is the main factor affecting the target detection result. Under such environmental changes, a target detection algorithm based on a Gaussian mixture model, known in the prior art, can detect moving targets accurately.
Therefore, the invention adopts a background modeling and updating method based on a Gaussian mixture model to detect moving targets in the video image frames of each camera. In the embodiment of the invention, 3-5 Gaussian distributions are chosen when building the mixture model, in consideration of storage and computation cost.
The moving targets that may appear in the home environment of an elderly person living alone are typically only people and pets. Therefore, a minimum bounding rectangle is drawn around each detected moving target, and human targets are identified from the area of the target region within the rectangle and from the width and height of the rectangle. Pixel values in non-human regions are set to zero so that the image contains only the human target, and the human targets in the scene are given in the form of a binary image sequence, yielding a sequence of changing human silhouettes. In the embodiment of the present invention, a human body binary image sequence obtained after applying this step to the video image frames is shown in fig. 2.
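As an illustration of this step, the following is a minimal sketch of Gaussian-mixture background subtraction and human target selection using OpenCV; the function name human_binary_image and its area and size thresholds are illustrative placeholders, not values taken from the patent.

```python
import cv2
import numpy as np

# Mixture-of-Gaussians background subtractor; the embodiment suggests
# 3-5 Gaussian distributions per pixel.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
subtractor.setNMixtures(5)

def human_binary_image(frame, min_area=800):
    """Return a binary image containing only the human target, or None."""
    mask = subtractor.apply(frame)
    # MOG2 marks shadow pixels as 127; keep only confident foreground (255)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    out = np.zeros_like(mask)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)   # minimum bounding rectangle
        # crude person-vs-pet test on region area and rectangle width/height;
        # the thresholds are assumptions for illustration
        if cv2.contourArea(c) >= min_area and h >= 40 and w >= 15:
            cv2.drawContours(out, [c], -1, 255, thickness=cv2.FILLED)
            return out
    return None
```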
Step S3, based on the obtained human body binary image sequence, aiming at continuous N frames of images
Figure BDA0001433244470000071
Obtaining a space-time contour evolution diagram and a centroid motion rate characteristic v in the horizontal directionh(t);
The purpose of this step is to extract human contour evolution features and kinematic features from the human body binary image sequence obtained in the previous step, providing a feature description of human behavior. The specific process is as follows:
(3.1) for N consecutive frames of human body binary images {I(·,t), t=1,…,N}, where N may be the number of video frames recorded in 1-3 seconds, calculating the coordinates (x_c(t), y_c(t)) of the human body centroid in each frame; meanwhile, measuring the human body height in each of the N frames and recording its maximum; and calculating the centroid motion rate v_h(t) = S·|x_c(t) - x_c(t-1)|, where S is the ratio of the human body height in the image at time t to the maximum human body height over the N frames.
(3.2) translating the human target in each frame of image I(·,t) so that its centroid lies at the same position (x_0(t), y_0(t)), thereby obtaining the translated image sequence {Ī(·,t), t=1,…,N}.
Translating the human target in a binary image means moving all pixels belonging to the human target by a specified distance in the image; this distance is determined by the centroid position (x_c(t), y_c(t)) of the human target in the current image frame and the centroid position (x_0(t), y_0(t)) of the human target in the reference image frame. The reference frame may be chosen as the first frame of the binary image sequence.
The purpose of the translation is to ignore the movement of the body in the horizontal or vertical direction and instead measure how the human silhouette changes over time during motion, which in the subsequent discussion makes it easier to distinguish falling behavior from other daily behaviors by the change of the silhouette.
(3.3) based on the translated image sequence {Ī(·,t)}, calculating the space-time contour evolution diagram E(x,y).
The space-time contour evolution diagram is the superposition over time of the translated binary images: it reflects the spatial aggregation of the human silhouette on a single two-dimensional image plane and also reveals the probability that the silhouette appears at each point of that plane over time. The space-time contour evolution diagram is calculated as
E(x,y) = (1/N)·Σ_{t=1}^{N} Ī(x,y,t).
In the embodiment of the invention, the space-time contour evolution diagram computed from the human behavior binary images of fig. 2 is shown in fig. 3. The space-time contour evolution diagram is invariant to the duration of a behavior, which removes the recognition difficulty caused by differences in the timing of human behaviors.
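A minimal sketch of step S3 follows, assuming binary images with values in {0, 1} and the first frame as the reference frame; the function step_s3 and its wrap-around shift via numpy's roll are illustrative choices where the patent does not prescribe an implementation. Under the formula above, E(x,y) is the fraction of the N frames in which pixel (x,y) lies inside the silhouette.

```python
import numpy as np

def centroid(img):
    """Centroid (x_c, y_c) of the nonzero pixels of a binary image."""
    ys, xs = np.nonzero(img)
    return xs.mean(), ys.mean()

def body_height(img):
    """Pixel height of the silhouette."""
    ys = np.nonzero(img)[0]
    return ys.max() - ys.min() + 1

def step_s3(frames):
    """Return the rate sequence v_h(t), t = 2..N, and the diagram E(x, y)."""
    x0, y0 = centroid(frames[0])                # reference centroid (frame 1)
    h_max = max(body_height(f) for f in frames)
    shifted, v_h, prev_x = [], [], None
    for f in frames:
        xc, yc = centroid(f)
        if prev_x is not None:
            S = body_height(f) / h_max          # height ratio S
            v_h.append(S * abs(xc - prev_x))    # v_h(t) = S*|x_c(t)-x_c(t-1)|
        prev_x = xc
        dy, dx = int(round(y0 - yc)), int(round(x0 - xc))
        # wrap-around shift; adequate while the target stays off the border
        shifted.append(np.roll(f, (dy, dx), axis=(0, 1)))
    E = np.mean(shifted, axis=0)                # temporal superposition
    return v_h, E
```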
Step S4, defining a two-dimensional Poisson equation set on a non-zero element region of the space-time contour evolution diagram, solving the equation set to obtain a contour depth diagram, and extracting the directional gradient histogram feature of the contour depth diagram;
the specific process is as follows:
(4.1) defining a discrete Poisson equation at each non-zero pixel of the space-time contour evolution diagram, thereby obtaining a system of discrete Poisson equations for each space-time contour evolution diagram, with the boundary conditions of the system determined by the target region in the diagram;
wherein the discrete Poisson equation defined over the pixel (x, y) is
φ(x,y)=1+1/4[φ(x+h,y)+φ(x-h,y)+φ(x,y+h)+φ(x,y-h)],
(4.2) solving the system of discrete Poisson equations with the Gauss-Seidel iterative algorithm until the difference between two successive iteration results satisfies a given stopping condition (in this embodiment, the stopping condition is a difference e < 10^-3), yielding a numerical approximate solution of the system, i.e., the contour depth map; the contour depth map solved from fig. 3 is shown in fig. 4;
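The Gauss-Seidel sweep over the equation of step (4.1) can be sketched as follows; the one-pixel zero border, the boundary condition φ = 0 outside the target region, and the grid spacing h = 1 pixel are assumptions where the patent is silent.

```python
import numpy as np

def contour_depth_map(E, tol=1e-3, max_iter=10000):
    """Gauss-Seidel solution of phi = 1 + (phi_E + phi_W + phi_N + phi_S)/4."""
    E = np.pad(E, 1)                       # one-pixel zero border as boundary
    inside = np.nonzero(E > 0)             # the non-zero element region
    phi = np.zeros(E.shape, dtype=float)   # phi = 0 outside (boundary condition)
    for _ in range(max_iter):
        diff = 0.0
        for y, x in zip(*inside):
            new = 1.0 + 0.25 * (phi[y, x + 1] + phi[y, x - 1] +
                                phi[y + 1, x] + phi[y - 1, x])
            diff = max(diff, abs(new - phi[y, x]))
            phi[y, x] = new                # in-place sweep: Gauss-Seidel
        if diff < tol:                     # stopping condition e < 1e-3
            break
    return phi[1:-1, 1:-1]
```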
(4.3) calculating the directional gradient histogram (SDI-HOG) of the contour depth map (SDI), realized in four steps: firstly, resizing the SDI to a standard size, denoting the resized image by g, and calculating the gradient magnitude G(x,y) and gradient direction O(x,y) at each pixel; then, dividing the image g into cell units of m × m pixels each, and computing a gradient direction histogram in each cell unit by weighted voting to obtain a representation vector per cell unit; then, grouping n × n cell units into a block region and concatenating the cell-unit representation vectors within it to obtain a normalized representation vector of the block region; and finally, stacking the block-region representation vectors to obtain the feature vector corresponding to the SDI-HOG.
Wherein the gradient magnitude G(x,y) and gradient direction O(x,y) are
G(x,y) = sqrt(G_x(x,y)^2 + G_y(x,y)^2) and O(x,y) = arctan(G_y(x,y)/G_x(x,y)),
with G_x and G_y denoting the horizontal and vertical gradients of the image g.
In each cell unit of the partitioned image, the weighted-voting computation of the gradient histogram is as follows: the gradient direction range [0, π] is divided equally into M intervals, the interval to which the gradient direction of each pixel in the cell unit belongs is found, and the histogram is accumulated over the intervals using the pixel's gradient magnitude as the voting weight. Each cell unit thus corresponds to a representation vector of dimension 1 × M.
The parameters used in the embodiment of the invention are as follows: the contour depth map is resized to 160 × 80, the cell unit size is 8 × 8, the gradient range [0, π] is divided equally into 9 intervals when computing the cell histograms, every 2 × 2 cell units form one block, and the overlap ratio between adjacent blocks is 0.5.
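With these parameters, the SDI-HOG extraction can be sketched with scikit-image's hog function, whose default one-cell block stride corresponds to a 0.5 overlap between adjacent 2 × 2-cell blocks; the resize pre-step and the L2-Hys block norm are assumptions, as the patent does not name a normalization scheme.

```python
from skimage.feature import hog
from skimage.transform import resize

def sdi_hog(depth_map):
    """SDI-HOG feature vector of a contour depth map."""
    g = resize(depth_map, (160, 80))       # adjust the SDI to standard size
    return hog(
        g,
        orientations=9,                    # [0, pi] divided into 9 intervals
        pixels_per_cell=(8, 8),            # 8 x 8 cell units
        cells_per_block=(2, 2),            # 2 x 2 cells per block
        block_norm='L2-Hys',               # per-block normalization
    )                                      # blocks slide one cell: 0.5 overlap
```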
Step S5, based on the extracted directional gradient histogram features and the horizontal centroid motion rate features, performing behavior classification detection with a support vector machine combined classifier;
(5.1) collecting in advance samples of daily human behaviors in the monitored scene from the multiple cameras, with at least 50 samples per behavior class, to build a multi-view video sample set;
(5.2) for the sample set at each view, extracting the SDI-HOG and v_h feature vectors according to steps 2 to 4, and using them to train a multi-class support vector machine combined classifier;
(5.3) the multi-class support vector machine combined classifier at each view consists of three support vector machine classifiers {SVM1, SVM2, SVM3}, as shown in fig. 5. The feature vector of a sample is first classified by SVM1 in the first layer, and according to that result is then fed into SVM2 or SVM3 in the second layer, which produces the final classification result. The combined classifier is trained as follows: first, a two-class support vector machine SVM1 is trained on the v_h feature vectors, dividing all behavior samples into two classes, behaviors with obvious horizontal motion (class Act-I) and behaviors with little or no horizontal motion (class Act-II); then, a two-class support vector machine SVM2 is trained on the SDI-HOG feature vectors of all samples classified as Act-I, with falling behavior samples as positive samples and the remaining samples as negative samples; likewise, another two-class support vector machine SVM3 is trained by the same rule on all samples classified as Act-II.
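A minimal sketch of this two-layer combined classifier using scikit-learn SVCs is given below; the class name CombinedClassifier, the label conventions, and the use of fixed-length numpy feature vectors summarizing v_h(t) are assumptions for illustration, not details fixed by the patent.

```python
import numpy as np
from sklearn.svm import SVC

class CombinedClassifier:
    """Two-layer combined classifier: SVM1 routes samples to SVM2 or SVM3."""

    def __init__(self):
        self.svm1 = SVC()                  # Act-I vs Act-II, on v_h features
        self.svm2 = SVC()                  # fall vs other, within Act-I
        self.svm3 = SVC()                  # fall vs other, within Act-II

    def fit(self, vh_feats, hog_feats, act_labels, fall_labels):
        # act_labels: 1 = Act-I (obvious horizontal motion), 0 = Act-II
        # fall_labels: 1 = fall (positive sample), 0 = other behavior
        self.svm1.fit(vh_feats, act_labels)
        act1 = act_labels == 1
        self.svm2.fit(hog_feats[act1], fall_labels[act1])
        self.svm3.fit(hog_feats[~act1], fall_labels[~act1])
        return self

    def predict(self, vh_feat, hog_feat):
        """Return 1 if a fall is detected, 0 otherwise."""
        second = self.svm2 if self.svm1.predict([vh_feat])[0] == 1 else self.svm3
        return second.predict([hog_feat])[0]
```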
(5.4) acquiring behavior videos at each view online, extracting the SDI-HOG and v_h feature vectors according to steps 2 to 4, and detecting falling behavior with the trained support vector machine combined classifier for that view.
And step S6, obtaining the human behavior detection results of all the cameras based on steps S2 to S5, and outputting alarm information when the detection result of any camera is a falling behavior.
The fusion mechanism for the monitoring results of the multiple cameras is: if the detection result at any one camera is a falling behavior, the overall judgment is that a fall may have occurred, alarm information is immediately sent to the monitoring terminal, and the surveillance video before and after that moment is marked. The rationale for this fusion mechanism is that the injuries a fall can cause to an elderly person are often severe, and immediate discovery and treatment can mitigate them.
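This fusion rule is a simple logical OR over the per-camera results; in the sketch below, send_alarm and mark_video stand in for the monitoring-terminal interface, which the patent does not specify.

```python
def fuse_and_alarm(per_camera_results, send_alarm, mark_video):
    """per_camera_results: iterable of 1 (fall) / 0 (no fall), one per camera."""
    if any(r == 1 for r in per_camera_results):
        send_alarm()          # notify the monitoring terminal immediately
        mark_video()          # mark the video before and after this moment
        return True
    return False
```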
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (5)

1. A human body falling behavior detection method based on multiple cameras is characterized by comprising the following steps:
step S1, acquiring video image frames shot by each camera;
step S2, detecting moving objects of each video image frame, identifying human body objects from the moving objects, and obtaining a human body binary image sequence;
step S3, based on the obtained human body binary image sequence, for N consecutive frames of images {I(·,t), t=1,…,N}, obtaining a space-time contour evolution diagram and the horizontal centroid motion rate feature v_h(t);
Step S4, defining a two-dimensional Poisson equation set on a non-zero element region of the space-time contour evolution diagram, solving the equation set to obtain a contour depth diagram, and extracting the directional gradient histogram feature of the contour depth diagram;
step S5, based on the extracted histogram feature of the directional gradient and the feature of the centroid movement rate in the horizontal direction, carrying out behavior classification detection by using a support vector machine combined classifier to obtain a human behavior detection result;
step S6, obtaining the human body behavior detection results of each camera based on the steps S2 to S5, and outputting alarm information when the detection result of any camera is a falling behavior;
in step S3, the specific calculation process for obtaining the space-time contour evolution diagram and the centroid motion rate feature is as follows:
(3.1) for N consecutive frames of human body binary images {I(·,t), t=1,…,N}, calculating the coordinates (x_c(t), y_c(t)) of the human body centroid in each frame; and calculating the centroid motion rate v_h(t) = S·|x_c(t) - x_c(t-1)|, where S is the ratio of the human body height in the image at time t to the maximum human body height over the N frames;
(3.2) translating the human body target in each frame of image I(·,t) so that its centroid lies at the same position, thereby obtaining the translated image sequence {Ī(·,t), t=1,…,N};
(3.3) based on the translated image sequence {Ī(·,t)}, calculating the space-time contour evolution diagram E(x,y);
in step S4, the process of calculating the histogram of directional gradients from the contour depth map is:
firstly, resizing the contour depth map to a standard size, denoting the resized image by g, and calculating the gradient magnitude G(x,y) and gradient direction O(x,y) at each pixel;
then, dividing the image g into cell units of m × m pixels each, and computing a gradient direction histogram in each cell unit by weighted voting to obtain a representation vector per cell unit;
then, grouping n × n cell units into a block region, and concatenating the cell-unit representation vectors within the block region to obtain a normalized representation vector of the block region;
and finally, stacking the block-region representation vectors to obtain the feature vector corresponding to the directional gradient histogram.
2. The method for detecting falling behavior of a human body based on multiple cameras as claimed in claim 1, wherein in step S2, a background modeling and updating method based on a Gaussian mixture model is used to detect moving targets in each video image frame.
3. The method for detecting falling behavior of a human body based on multiple cameras as claimed in claim 1, wherein the space-time contour evolution diagram is calculated as E(x,y) = (1/N)·Σ_{t=1}^{N} Ī(x,y,t).
4. The method for detecting falling behavior of a human body based on multiple cameras as claimed in claim 1, wherein in step S4, a Gauss-Seidel iterative algorithm is used to solve the system of discrete Poisson equations.
5. The method for detecting falling behavior of a human body based on multiple cameras as claimed in claim 1, wherein in step S5, the multi-class support vector machine combined classifier is composed of three support vector machine classifiers {SVM1, SVM2, SVM3}.
CN201710952871.7A 2017-10-13 2017-10-13 Human body falling behavior detection system based on multiple cameras and detection method thereof Expired - Fee Related CN107657244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710952871.7A CN107657244B (en) 2017-10-13 2017-10-13 Human body falling behavior detection system based on multiple cameras and detection method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710952871.7A CN107657244B (en) 2017-10-13 2017-10-13 Human body falling behavior detection system based on multiple cameras and detection method thereof

Publications (2)

Publication Number Publication Date
CN107657244A CN107657244A (en) 2018-02-02
CN107657244B (en) 2020-12-01

Family

ID=61118497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710952871.7A Expired - Fee Related CN107657244B (en) 2017-10-13 2017-10-13 Human body falling behavior detection system based on multiple cameras and detection method thereof

Country Status (1)

Country Link
CN (1) CN107657244B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509938A (en) * 2018-04-16 2018-09-07 重庆邮电大学 A kind of fall detection method based on video monitoring
CN108803341A (en) * 2018-06-29 2018-11-13 炬大科技有限公司 A kind of house security monitoring system and method based on sweeping robot
TWI662514B (en) * 2018-09-13 2019-06-11 緯創資通股份有限公司 Falling detection method and electronic system using the same
CN109886102B (en) * 2019-01-14 2020-11-17 华中科技大学 Fall-down behavior time-space domain detection method based on depth image
CN110070001A (en) * 2019-03-28 2019-07-30 上海拍拍贷金融信息服务有限公司 Behavioral value method and device, computer readable storage medium
CN111221262A (en) * 2020-03-30 2020-06-02 重庆特斯联智慧科技股份有限公司 Self-adaptive intelligent household adjusting method and system based on human body characteristics
CN111724566A (en) * 2020-05-20 2020-09-29 同济大学 Pedestrian falling detection method and device based on intelligent lamp pole video monitoring system
CN111914676A (en) * 2020-07-10 2020-11-10 泰康保险集团股份有限公司 Human body tumbling detection method and device, electronic equipment and storage medium
CN113111721B (en) * 2021-03-17 2022-07-05 同济大学 Human behavior intelligent identification method based on multi-unmanned aerial vehicle visual angle image data driving
CN117671799B (en) * 2023-12-15 2024-09-10 武汉星巡智能科技有限公司 Human body falling detection method, device, equipment and medium combining depth measurement


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722715A (en) * 2012-05-21 2012-10-10 华南理工大学 Tumble detection method based on human body posture state judgment
CN204044980U (en) * 2014-08-21 2014-12-24 昆山市华正电子科技有限公司 Multi-functional human body falling detection device and human body fall alarm
US9305216B1 (en) * 2014-12-15 2016-04-05 Amazon Technologies, Inc. Context-based detection and classification of actions
CN104598896A (en) * 2015-02-12 2015-05-06 南通大学 Automatic human tumble detecting method based on Kinect skeleton tracking
CN104850850A (en) * 2015-04-05 2015-08-19 中国传媒大学 Binocular stereoscopic vision image feature extraction method combining shape and color

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Actions as Space-Time Shapes; Lena Gorelick et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; Dec. 2007; vol. 29, no. 12; pp. 2247-2253 *
Abnormal behavior detection based on the Poisson equation; 罗志琳 et al.; Science Technology and Engineering; Jan. 2014; vol. 14, no. 2; pp. 50-54 *
Research on human motion analysis in video and its applications; 钱惠敏; China Doctoral Dissertations Full-text Database, Medicine & Health Sciences; Apr. 2011; no. 4; pp. 1-124 *

Also Published As

Publication number Publication date
CN107657244A (en) 2018-02-02

Similar Documents

Publication Publication Date Title
CN107657244B (en) Human body falling behavior detection system based on multiple cameras and detection method thereof
CN108537112B (en) Image processing apparatus, image processing system, image processing method, and storage medium
CN107358149B (en) Human body posture detection method and device
CN107527009B (en) Remnant detection method based on YOLO target detection
Choudhury et al. Vehicle detection and counting using haar feature-based classifier
KR101523740B1 (en) Apparatus and method for tracking object using space mapping
WO2014092552A2 (en) Method for non-static foreground feature extraction and classification
CN109145696B (en) Old people falling detection method and system based on deep learning
CN104954747B (en) Video monitoring method and device
CN106570490B A kind of pedestrian's method for real time tracking based on quick clustering
Abdo et al. Fall detection based on RetinaNet and MobileNet convolutional neural networks
Malhi et al. Vision based intelligent traffic management system
KR101030257B1 (en) Method and System for Vision-Based People Counting in CCTV
CN108364306B (en) Visual real-time detection method for high-speed periodic motion
WO2020217812A1 (en) Image processing device that recognizes state of subject and method for same
Shalnov et al. Convolutional neural network for camera pose estimation from object detections
KR101214858B1 (en) Moving object detecting apparatus and method using clustering
CN104778676A (en) Depth ranging-based moving target detection method and system
JP2020109644A (en) Fall detection method, fall detection apparatus, and electronic device
CN110580708B (en) Rapid movement detection method and device and electronic equipment
KR101542206B1 (en) Method and system for tracking with extraction object using coarse to fine techniques
Guan et al. A video-based fall detection network by spatio-temporal joint-point model on edge devices
Hadi et al. Fusion of thermal and depth images for occlusion handling for human detection from mobile robot
Liu et al. A detection and tracking based method for real-time people counting
CN113239772B (en) Personnel gathering early warning method and system in self-service bank or ATM environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2020-12-01