CN111582231A - Fall detection alarm system and method based on video monitoring - Google Patents

Fall detection alarm system and method based on video monitoring

Info

Publication number
CN111582231A
CN111582231A (application CN202010436613.5A)
Authority
CN
China
Prior art keywords
video
ellipse
formula
image
motion
Prior art date
Legal status
Pending
Application number
CN202010436613.5A
Other languages
Chinese (zh)
Inventor
倪建军
沈健
朱金秀
Current Assignee
Changzhou Campus of Hohai University
Original Assignee
Changzhou Campus of Hohai University
Priority date
Application filed by Changzhou Campus of Hohai University filed Critical Changzhou Campus of Hohai University
Priority to CN202010436613.5A priority Critical patent/CN111582231A/en
Publication of CN111582231A publication Critical patent/CN111582231A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02: Alarms for ensuring the safety of persons
    • G08B21/04: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0407: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons, based on behaviour analysis
    • G08B21/043: Alarms for ensuring the safety of persons responsive to non-activity, based on behaviour analysis, detecting an emergency event, e.g. a fall
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention discloses a fall detection alarm system and method based on video monitoring, comprising the following steps: A, collecting a video image in the video monitoring area; B, preprocessing the collected video image; C, segmenting the human body in the video sequence, extracting the moving target in the video image, and describing the shape and direction of the human body motion with an approximate figure; D, judging whether the approximate figure shows large blob motion; if it exceeds a set threshold, entering step E, otherwise returning to step A; E, analyzing the ratio and direction change of the approximate figure; if they exceed a set threshold, entering step F, otherwise returning to step A; F, searching the image for a stationary approximate figure; if one is detected, confirming the fall behavior of the human body and sending alarm information, otherwise returning to step A. The invention uses video monitoring to recognize human falls, helping to find and handle falls of elderly people in a community in time.

Description

Fall detection alarm system and method based on video monitoring
Technical Field
The invention relates to a fall detection alarm system and method based on video monitoring, and belongs to the technical fields of the Internet of Things, computer vision, and behavior recognition.
Background
Fall detection based on video monitoring has a high recognition rate, is relatively insensitive to interference from external factors such as illumination and temperature, and provides video recording that is convenient for later review; it is increasingly becoming the mainstream fall detection technology. In an elderly-care community, for example, a fall detection alarm system based on video monitoring can replace manual patrols, making it well suited to monitoring emergencies in real time: falls of elderly residents can be found promptly, protecting their safety, and the monitoring success rate is higher than that of systems based on wearable devices.
Existing fall detection methods based on video monitoring fall mainly into two types. The first is fall detection based on static features, for example judging a fall when a person has lain on the ground for a period of time; because it relies on a single state, its recognition rate is low and recognition is slow. The second analyzes features of human shape change and head movement; it misjudges some unusual everyday actions at a high rate and has low reliability.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a fall detection alarm system and method based on video monitoring that use video monitoring to recognize human falls, helping to find and handle falls of elderly people in a community in time.
In order to achieve the purpose, the invention is realized by adopting the following technical scheme:
in a first aspect, the invention provides a fall detection alarm method based on video monitoring, which comprises the following steps:
a, collecting a video image in a video monitoring area;
b, preprocessing the collected video image;
c, segmenting the human body in the video sequence, extracting the moving target in the video image, and describing the shape and direction of the human body motion with an approximate figure;
d, judging whether the approximate figure shows large blob motion; if it exceeds a set threshold, entering step E, otherwise returning to step A;
e, analyzing the ratio and direction change of the approximate figure; if they exceed a set threshold, entering step F, otherwise returning to step A;
and F, searching the image for a stationary approximate figure; if one is detected, confirming the fall behavior of the human body and sending alarm information, otherwise returning to step A.
With reference to the first aspect, further, in the step C, the human body is segmented in the video sequence using a background subtraction method based on the motion history image, and the moving target in the image is extracted, including the following steps:
the update function is defined using the frame-to-frame difference method, and the formula is as follows:
Figure BDA0002502532950000021
D(x,y,t)=|I(x,y,t)-I(x,y,t±Δ)|
in the formula: i (x, y, t) is the intensity value of a t frame coordinate (x, y) pixel point of the video image sequence, delta is the inter-frame distance, xi is an artificially given difference threshold, D (x, y, t) is the absolute value of the difference value of the intensity values of the pixel points of the inter-frame image, and psi (x, y, t) is an updating function;
defining each pixel of the motion history image as a function of the temporal motion history at that point over a fixed duration, with the formula as follows:
H_τ(x, y, t) = τ, if Ψ(x, y, t) = 1; max(0, H_τ(x, y, t - 1) - δ), otherwise
where (x, y) and t are the position and time of the pixel respectively, τ is the duration, which determines the temporal extent of motion in frames, and δ is the decay parameter.
With reference to the first aspect, in the step C, the human body is approximated with an ellipse to describe the shape and direction of the human motion, including the following steps:
calculating the angle between the major axis of the person and the horizontal axis to define the direction of the ellipse;
evaluating the eigenvalues of the covariance matrix to calculate the minimum and maximum moments of inertia, which are used to recover the major and minor semi-axes of the ellipse, with formulas as follows:
I_min = [μ20 + μ02 - sqrt((μ20 - μ02)² + 4μ11²)] / 2
I_max = [μ20 + μ02 + sqrt((μ20 - μ02)² + 4μ11²)] / 2
J = [μ20, μ11; μ11, μ02]
in the formula: I_min is the minimum moment of inertia, I_max is the maximum moment of inertia, J is the covariance matrix, and μ20, μ11, μ02 are the central moments;
calculating the major semi-axis and the minor semi-axis of the best-fitting ellipse, with formulas as follows:
a = (4/π)^(1/4) · (I_max³ / I_min)^(1/8)
b = (4/π)^(1/4) · (I_min³ / I_max)^(1/8)
in the formula: a is the major semi-axis of the ellipse and b is the minor semi-axis of the ellipse.
With reference to the first aspect, calculating the angle between the major axis of the person and the horizontal axis to define the direction of the ellipse includes:
calculating the moments of the continuous image, with the formula as follows:
m_pq = ∬ x^p · y^q · f(x, y) dx dy
in the formula: m_pq is the moment, x is the abscissa of the continuous image f(x, y), y is the ordinate of the continuous image f(x, y), and p, q are natural numbers;
the coordinates of the centroid are calculated using the first-order and zero-order spatial moments to define the center of the ellipse, as follows:
x̄ = m10 / m00, ȳ = m01 / m00
in the formula: m10, m00, m01 are moments, x̄ is the abscissa of the ellipse center, and ȳ is the ordinate of the ellipse center;
the central moments are calculated about the centroid, with the formula as follows:
μ_pq = ∬ (x - x̄)^p · (y - ȳ)^q · f(x, y) dx dy
in the formula: μ_pq is the central moment;
the direction of the ellipse is given by the second-order central moments, with the formula as follows:
θ = (1/2) · arctan(2μ11 / (μ20 - μ02))
in the formula: θ is the angle between the major axis of the person and the horizontal axis x.
With reference to the first aspect, in the step D, the motion history image is used to quantify the human body motion and to judge whether the blob shows large motion; if it exceeds a set threshold, it is judged that a fall may have occurred. The method includes the following steps:
the coefficient C_motion is calculated to represent the motion history of the person within the blob, as follows:
C_motion = ( Σ_{(x,y) ∈ blob} H_τ(x, y, t) ) / (number of pixels in the blob)
in the formula: blob is the blob of the person extracted using background subtraction, H_τ is the motion history image, and C_motion is the motion history of the person within the blob;
the coefficient is scaled to a motion percentage between 0% and 100%, and a threshold is set for C_motion.
With reference to the first aspect, in the step E, the ratio and direction change of the ellipse are analyzed, and thresholds on the standard deviation of the ellipse direction and on the standard deviation of the ellipse ratio are set to distinguish a fall from normal motion:
the standard deviation of the ellipse direction and the standard deviation of the ellipse ratio are calculated respectively; the threshold of the direction standard deviation is set to 15 degrees, and the threshold of the ratio standard deviation is set to 0.9.
With reference to the first aspect, in the step F, determining a stationary approximate figure requires that the following conditions hold simultaneously:
the coefficient of motion history within the person's blob satisfies C_motion < 15%; and
the standard deviations of the ellipse center coordinates x̄ and ȳ are calculated, satisfying σ_x̄ < 2 pixels and σ_ȳ < 2 pixels; and
the standard deviations of a, b and θ are calculated, satisfying σ_a < 2 pixels, σ_b < 2 pixels, and σ_θ < 15 degrees.
In a second aspect, the present invention provides a fall detection alarm system based on video monitoring, including:
the data acquisition end is used for carrying out video acquisition on the monitored area and coding and converting the acquired image into a video frame;
the server side comprises a streaming media server and a local server, wherein the streaming media server is used for receiving the coded video data and transmitting the coded video data to the local server, and the local server performs autonomous identification and automatic alarm on the received video stream and stores related data;
and the user terminal is used for receiving the alarm short message sent by the local server.
With reference to the second aspect, the data acquisition end consists of cameras distributed in the community; the cameras are dome and bullet cameras, and the acquired images are H.264-encoded by the cameras' built-in hardware encoding modules.
With reference to the second aspect, the user terminal includes a computer and a mobile device, and the user terminal is interconnected with the server terminal through a network.
Compared with the prior art, the fall detection alarm system and method based on video monitoring provided by the embodiments of the invention have the following beneficial effects. Front-end video acquisition equipment collects image information of the monitored area; the relevant information is encoded and then sent over the network to the streaming media server and the local server; the server recognizes falls in the video images using a fall detection algorithm based on motion history images and human shape change, raises an alarm, and sends an alarm short message to the mobile terminal of the relevant user. The user can also log in to the server over the network to view and query the video monitoring of the relevant place. The invention realizes real-time video monitoring in an elderly-care community and uses video monitoring to recognize falls and raise alarms; it has the advantages of fast response, high recognition rate and low cost, and has good market prospects and application value.
Drawings
Fig. 1 is a schematic structural diagram of a fall detection alarm system based on video monitoring according to an embodiment of the present invention;
fig. 2 is a block flow diagram of a fall detection alarm method based on video monitoring according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
As shown in fig. 2, a flow chart of a fall detection alarm method based on video monitoring according to an embodiment of the present invention includes:
s01, the camera collects video images in the video monitoring area, encodes image information and pushes the image information to the server side;
s02, the server side carries out video framing preprocessing on the video image;
s03, segmenting a human body in a video sequence by using a background subtraction method based on the motion history image, and extracting a moving target in the image;
s04, approximating a human body by an ellipse, and describing the shape and the direction of the human body movement;
S05, quantifying the human body motion using the motion history image and judging whether the blob shows large motion; if the large blob motion exceeds the set threshold, it is judged that a fall may have occurred and the method proceeds to step S06, otherwise it returns to step S01;
S06, analyzing the ratio and direction change of the ellipse, with thresholds set on the standard deviations of the ellipse direction and of the ellipse ratio to distinguish a fall from normal motion; if a standard deviation exceeds its threshold, the method proceeds to S07, otherwise it returns to step S01;
S07, searching the image for an immobile ellipse within 5 seconds after the candidate fall; if an immobile ellipse is detected, the fall behavior is confirmed.
And S08, the server side sends the analysis result of the fall detection to the client side, and the user client side can also access the server side through the network to check the real-time monitoring of the monitoring area.
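The S01 to S08 decision chain above can be sketched as a small control loop. This is an illustrative sketch, not the patented implementation: the three predicate functions are hypothetical stand-ins for the motion-history, ellipse-change and stationarity analyses detailed in the following steps.

```python
# Illustrative sketch of the S01-S08 control flow; the three predicates are
# hypothetical stand-ins for the motion-history, ellipse-change and
# stationarity analyses of steps S05-S07.

def fall_detection_loop(frames, has_large_motion, shape_changed, is_stationary):
    """Walk the decision chain over a frame sequence.

    Returns the index of the frame at which a fall is confirmed,
    or None if no fall is detected.
    """
    for t, frame in enumerate(frames):
        if not has_large_motion(frame):   # S05: large blob-motion test
            continue                      # back to acquisition (S01)
        if not shape_changed(frame):      # S06: ellipse ratio/direction test
            continue
        if is_stationary(frame):          # S07: immobile ellipse check
            return t                      # fall confirmed -> alarm (S08)
    return None
```

For example, with frame indices standing in for frames and the tests encoded as simple predicates, the loop returns the first index at which all three tests pass.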
The step S03 includes the following steps:
S031, defining the update function using the inter-frame difference method:
Ψ(x, y, t) = 1, if D(x, y, t) ≥ ξ; 0, otherwise
where D(x, y, t) = |I(x, y, t) - I(x, y, t ± Δ)|
in the formula: I(x, y, t) is the intensity value of the pixel at coordinate (x, y) in frame t of the video image sequence, Δ is the inter-frame distance, ξ is a manually set difference threshold that is adjusted as the video scene changes, D(x, y, t) is the absolute difference between the pixel intensity values of the two frames, and Ψ(x, y, t) is the update function.
S032, defining the motion history image H_τ as a function of the temporal motion history at each point over a fixed duration τ (1 ≤ τ ≤ N for a frame sequence of length N), as follows:
H_τ(x, y, t) = τ, if Ψ(x, y, t) = 1; max(0, H_τ(x, y, t - 1) - δ), otherwise
(x, y) and t are the position and time of the pixel; τ is the duration, which determines the temporal extent of motion in frames; δ is the decay parameter; H_τ(x, y, t) is the motion history image.
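As a minimal sketch of S031 and S032, the two functions below operate on plain 2-D lists of pixel intensities. The decay parameter `delta` is an assumption (the symbol is garbled in the source text), following the standard motion-history-image update; frame sizes and threshold values are illustrative only.

```python
# Sketch of S031/S032 on plain 2-D lists of pixel intensities. The decay
# parameter delta is an assumption: the symbol is garbled in the source
# text, and this is the standard motion-history-image update.

def update_function(prev_frame, cur_frame, xi):
    """Psi(x, y, t): 1 where |I(t) - I(t - Delta)| >= xi, else 0."""
    return [[1 if abs(c - p) >= xi else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, cur_frame)]

def update_mhi(mhi, psi, tau, delta):
    """H_tau(x, y, t) = tau where Psi = 1, else max(0, H_tau(x, y, t-1) - delta)."""
    return [[tau if psi[i][j] else max(0, mhi[i][j] - delta)
             for j in range(len(mhi[0]))]
            for i in range(len(mhi))]
```

A pixel that just moved is stamped with the full duration τ; pixels that stopped moving fade out by δ per frame until they reach zero.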
The step S04 includes the following steps:
S041, defining the direction of the ellipse by calculating the angle between the major axis of the person and the horizontal axis. The steps are as follows:
①, calculating the moments of the continuous image, with the calculation formula:
m_pq = ∬ x^p · y^q · f(x, y) dx dy
in the formula: m_pq is the moment, x is the abscissa of the continuous image f(x, y), y is the ordinate of the continuous image f(x, y), and p, q are the natural numbers 0, 1, 2, 3, ...;
②, calculating the coordinates of the centroid using the first-order and zero-order spatial moments to define the center of the ellipse, with the calculation formula:
x̄ = m10 / m00, ȳ = m01 / m00
in the formula: m10 is the moment with p = 1, q = 0; m00 is the moment with p = 0, q = 0; m01 is the moment with p = 0, q = 1; x̄ is the abscissa of the ellipse center and ȳ is the ordinate of the ellipse center;
③, calculating the central moments about the centroid (x̄, ȳ), with the calculation formula:
μ_pq = ∬ (x - x̄)^p · (y - ȳ)^q · f(x, y) dx dy
in the formula: μ_pq is the central moment;
④, the direction of the ellipse is given by the second-order central moments, with the formula:
θ = (1/2) · arctan(2μ11 / (μ20 - μ02))
in the formula: θ is the angle between the major axis of the person and the horizontal axis x.
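The moment computations of S041 can be sketched as follows on a binary silhouette stored as a 2-D list. This is an illustrative discrete version of the integrals, and `atan2` is used in place of the bare arctangent so that the quadrant is resolved.

```python
import math

# Discrete sketch of S041 on a binary silhouette stored as a 2-D list
# (img[y][x]). The sums stand in for the integrals of the text; atan2 is
# used instead of a bare arctangent so the quadrant is resolved.

def raw_moment(img, p, q):
    """m_pq = sum of x^p * y^q * f(x, y) over all pixels."""
    return sum(x**p * y**q * img[y][x]
               for y in range(len(img)) for x in range(len(img[0])))

def central_moment(img, p, q):
    """mu_pq about the centroid (x_bar, y_bar) = (m10/m00, m01/m00)."""
    m00 = raw_moment(img, 0, 0)
    xb = raw_moment(img, 1, 0) / m00
    yb = raw_moment(img, 0, 1) / m00
    return sum((x - xb)**p * (y - yb)**q * img[y][x]
               for y in range(len(img)) for x in range(len(img[0])))

def ellipse_orientation(img):
    """theta = 0.5 * atan2(2*mu11, mu20 - mu02), in radians."""
    mu20 = central_moment(img, 2, 0)
    mu02 = central_moment(img, 0, 2)
    mu11 = central_moment(img, 1, 1)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)
```

A horizontal bar of pixels yields θ = 0, while a vertical bar yields θ = π/2, matching the intuition that θ tracks the major axis of the silhouette.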
S042, calculating the minimum moment of inertia I_min and the maximum moment of inertia I_max by evaluating the eigenvalues of the covariance matrix J, in order to recover the major semi-axis a and the minor semi-axis b of the ellipse:
I_min = [μ20 + μ02 - sqrt((μ20 - μ02)² + 4μ11²)] / 2
I_max = [μ20 + μ02 + sqrt((μ20 - μ02)² + 4μ11²)] / 2
J = [μ20, μ11; μ11, μ02]
S043, calculating the major semi-axis a and the minor semi-axis b of the best-fitting ellipse:
a = (4/π)^(1/4) · (I_max³ / I_min)^(1/8)
b = (4/π)^(1/4) · (I_min³ / I_max)^(1/8)
in the formula: a is the major semi-axis of the ellipse and b is the minor semi-axis of the ellipse.
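A sketch of S042 and S043, under the assumption that the formula images (not reproduced in the text) follow the standard best-fit-ellipse recovery from the second-order central moments:

```python
import math

# Sketch of S042/S043 under the assumption that the unreproduced formula
# images follow the standard best-fit-ellipse recovery from the
# second-order central moments mu20, mu11, mu02.

def inertia_extremes(mu20, mu11, mu02):
    """Eigenvalues of the covariance matrix J = [[mu20, mu11], [mu11, mu02]]."""
    disc = math.sqrt((mu20 - mu02)**2 + 4 * mu11**2)
    return (mu20 + mu02 - disc) / 2, (mu20 + mu02 + disc) / 2

def semi_axes(mu20, mu11, mu02):
    """a = (4/pi)^(1/4) * (I_max^3 / I_min)^(1/8), and symmetrically for b."""
    i_min, i_max = inertia_extremes(mu20, mu11, mu02)
    k = (4 / math.pi) ** 0.25
    return k * (i_max**3 / i_min) ** 0.125, k * (i_min**3 / i_max) ** 0.125
```

As a sanity check, for a disk of radius r one has μ20 = μ02 = πr⁴/4 and μ11 = 0, and the sketch recovers a = b = r.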
The step S05 specifically includes:
S051, calculating the coefficient C_motion to represent the motion history of the person within the blob:
C_motion = ( Σ_{(x,y) ∈ blob} H_τ(x, y, t) ) / (number of pixels in the blob)
in the formula: blob is the blob of the person extracted using background subtraction, and H_τ is the motion history image;
S052, scaling the coefficient to a motion percentage between 0% (no motion) and 100% (full motion), and setting the threshold of C_motion to 65%.
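S051 and S052 can be sketched as below. Normalising by τ (so a blob whose every pixel was just refreshed scores 100%) is an assumption; the patent states only that the coefficient is scaled to a 0 to 100% motion percentage.

```python
# Sketch of S051/S052. Normalising by tau (so a blob whose every pixel was
# just refreshed scores 100%) is an assumption; the patent only states that
# the coefficient is scaled to a 0-100% motion percentage.

def c_motion(mhi, blob, tau):
    """Percentage of recent motion inside the blob (a list of (x, y) pixels)."""
    total = sum(mhi[y][x] for (x, y) in blob)
    return 100.0 * total / (tau * len(blob))

def is_large_motion(mhi, blob, tau, threshold=65.0):
    """S052: compare against the 65% threshold stated in the embodiment."""
    return c_motion(mhi, blob, tau) >= threshold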
The step S06 includes:
respectively calculating the standard deviation σ_θ of the ellipse direction and the standard deviation σ_ρ of the ellipse ratio, and setting the threshold of σ_θ to 15 degrees and the threshold of σ_ρ to 0.9.
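A sketch of the S06 test over a window of per-frame ellipse parameters, using the stated thresholds (15 degrees, 0.9); the use of the population standard deviation and the window contents are illustrative choices.

```python
import statistics

# Sketch of the S06 test over a window of per-frame ellipse parameters,
# using the stated thresholds (15 degrees, 0.9). The use of the population
# standard deviation and the window contents are illustrative choices.

def abnormal_shape_change(thetas_deg, ratios,
                          theta_thresh=15.0, ratio_thresh=0.9):
    """True when either spread exceeds its threshold, suggesting a fall."""
    return (statistics.pstdev(thetas_deg) > theta_thresh
            or statistics.pstdev(ratios) > ratio_thresh)
```

A sudden swing in orientation or in the a/b ratio over the window then flags the frame for the stationarity check of S07.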
The step S07 specifically includes: to determine a stationary ellipse, the following three conditions must be satisfied simultaneously:
①, the coefficient of motion history within the person's blob satisfies C_motion < 15%;
②, the standard deviations of the ellipse center coordinates x̄ and ȳ are calculated, satisfying σ_x̄ < 2 pixels and σ_ȳ < 2 pixels;
③, the standard deviations of a, b and θ are calculated, satisfying σ_a < 2 pixels, σ_b < 2 pixels and σ_θ < 15 degrees.
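The three conditions of S07 can be combined into one predicate; the argument shapes (a list of centre coordinates, per-frame lists of a, b and θ values) are illustrative assumptions.

```python
import statistics

# Sketch of the three S07 conditions for a stationary ellipse, applied to
# ellipse parameters collected over the 5 s after the candidate fall. The
# argument shapes (a list of centres, per-frame a, b, theta lists) are
# illustrative assumptions.

def is_stationary_ellipse(c_motion_pct, centers, axes_a, axes_b, thetas_deg):
    """All three conditions must hold simultaneously."""
    xs = [x for x, _ in centers]
    ys = [y for _, y in centers]
    cond1 = c_motion_pct < 15.0                  # (1) little motion in blob
    cond2 = (statistics.pstdev(xs) < 2.0         # (2) centre barely moves
             and statistics.pstdev(ys) < 2.0)
    cond3 = (statistics.pstdev(axes_a) < 2.0     # (3) shape barely changes
             and statistics.pstdev(axes_b) < 2.0
             and statistics.pstdev(thetas_deg) < 15.0)
    return cond1 and cond2 and cond3
```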
The step S08 includes:
the local server sends the fall detection analysis result to the client; the result can be sent to the user's mobile terminal as a short message over wired or wireless links. The user can also actively access the local server through the network to view the real-time video monitoring pictures of the community.
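A hedged sketch of the S08 alarm path follows. The gateway URL, payload fields and JSON format are assumptions for illustration; the patent specifies only that an alarm short message reaches the user terminal.

```python
import json
import urllib.request

# Sketch of the S08 alarm path. The gateway URL, payload fields and JSON
# format are illustrative assumptions; the patent specifies only that an
# alarm short message reaches the user terminal.

def build_alarm(camera_id, timestamp, location):
    """Assemble the alarm payload describing the detected fall."""
    return {
        "event": "fall_detected",
        "camera": camera_id,
        "time": timestamp,
        "location": location,
        "message": f"Fall detected at {location} (camera {camera_id}) at {timestamp}",
    }

def send_alarm(payload, gateway_url):
    """POST the alarm as JSON to a (hypothetical) SMS-gateway endpoint."""
    req = urllib.request.Request(
        gateway_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # caller handles errors and retries
```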
As shown in fig. 1, a schematic structural diagram of a fall detection alarm system based on video monitoring according to an embodiment of the present invention includes a data acquisition end, a server end, and a user terminal;
The data acquisition end consists of cameras distributed in the community. The cameras are dome and bullet cameras used for long-duration video acquisition of the monitored area; the acquired images are H.264-encoded by the cameras' built-in hardware encoding modules and converted into video frames, and the encoded video data are then pushed to the server end over a TCP network.
The server end comprises a streaming media server and a local server. The streaming media server is responsible for responding to users' real-time streaming requests and transmits the received video data to the local server via the Real-Time Streaming Protocol (RTSP), so that users can view the community monitoring video in real time. The local server is connected to the video acquisition end and the user terminal; video transmitted to the local server is stored in folders in chronological order for users to review. The local server autonomously recognizes falls in the received video stream using the fall detection algorithm based on motion history images and human shape change, raises alarms automatically, stores the related data, and sends alarm short messages to the user terminal.
The network transmission is used for transmitting video stream data, and the video data coded by the video acquisition end is transmitted to the streaming media server and the local server through the network. The streaming media server compresses the received video stream data and transmits the compressed video stream data to the local server through the optical fiber, and the user terminal is connected with the server through a local area network and the Internet.
The user terminal comprises computers and mobile devices; the terminal equipment is interconnected with the server end through the network (wired and wireless links, each backing up the other), and the server establishes connections and responds to user requests, so that users can view real-time video monitoring images of the monitored area and call up historical monitoring images.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A fall detection alarm method based on video monitoring is characterized by comprising the following steps:
a, collecting a video image in a video monitoring area;
b, preprocessing the collected video image;
c, segmenting the human body in the video sequence, extracting the moving target in the video image, and describing the shape and direction of the human body motion with an approximate figure;
d, judging whether the approximate figure shows large blob motion; if it exceeds a set threshold, entering step E, otherwise returning to step A;
e, analyzing the ratio and direction change of the approximate figure; if they exceed a set threshold, entering step F, otherwise returning to step A;
and F, searching the image for a stationary approximate figure; if one is detected, confirming the fall behavior of the human body and sending alarm information, otherwise returning to step A.
2. The fall detection alarm method based on video monitoring as claimed in claim 1, wherein in the step C, the human body is segmented in the video sequence by using a background subtraction method based on the motion history image, and the moving object in the image is extracted, comprising the following steps:
the update function is defined using the frame-to-frame difference method, and the formula is as follows:
Ψ(x, y, t) = 1, if D(x, y, t) ≥ ξ; 0, otherwise
D(x, y, t) = |I(x, y, t) - I(x, y, t ± Δ)|
in the formula: I(x, y, t) is the intensity value of the pixel at coordinate (x, y) in frame t of the video image sequence, Δ is the inter-frame distance, ξ is a manually set difference threshold, D(x, y, t) is the absolute difference between the pixel intensity values of the two frames, and Ψ(x, y, t) is the update function;
defining each pixel of the motion history image as a function of the temporal motion history at that point over a fixed duration, with the formula as follows:
H_τ(x, y, t) = τ, if Ψ(x, y, t) = 1; max(0, H_τ(x, y, t - 1) - δ), otherwise
where (x, y) and t are the position and time of the pixel respectively, τ is the duration, δ is the decay parameter, and H_τ(x, y, t) is the motion history image.
3. The fall detection alarm method based on video monitoring as claimed in claim 1, wherein in the step C, the human body is approximated with an ellipse to describe the shape and direction of the human motion, comprising the steps of:
calculating the angle between the major axis of the person and the horizontal axis to define the direction of the ellipse;
evaluating the eigenvalues of the covariance matrix to calculate the minimum and maximum moments of inertia, which are used to recover the major and minor semi-axes of the ellipse, with formulas as follows:
I_min = [μ20 + μ02 - sqrt((μ20 - μ02)² + 4μ11²)] / 2
I_max = [μ20 + μ02 + sqrt((μ20 - μ02)² + 4μ11²)] / 2
J = [μ20, μ11; μ11, μ02]
in the formula: I_min is the minimum moment of inertia, I_max is the maximum moment of inertia, J is the covariance matrix, and μ20, μ11, μ02 are the central moments;
calculating the major semi-axis and the minor semi-axis of the best-fitting ellipse, with formulas as follows:
a = (4/π)^(1/4) · (I_max³ / I_min)^(1/8)
b = (4/π)^(1/4) · (I_min³ / I_max)^(1/8)
in the formula: a is the major semi-axis of the ellipse and b is the minor semi-axis of the ellipse.
4. A video surveillance based fall detection alarm method according to claim 3, wherein calculating the angle between the major axis and the horizontal axis of the person to define the direction of the ellipse comprises:
calculating the moments of the continuous image, with the formula as follows:
m_pq = ∬ x^p · y^q · f(x, y) dx dy
in the formula: m_pq is the moment, x is the abscissa of the continuous image f(x, y), y is the ordinate of the continuous image f(x, y), and p, q are natural numbers;
the coordinates of the centroid are calculated using the first-order and zero-order spatial moments to define the center of the ellipse, as follows:
x̄ = m10 / m00, ȳ = m01 / m00
in the formula: m10, m00, m01 are moments, x̄ is the abscissa of the ellipse center, and ȳ is the ordinate of the ellipse center;
the central moments are calculated about the centroid, with the formula as follows:
μ_pq = ∬ (x - x̄)^p · (y - ȳ)^q · f(x, y) dx dy
in the formula: μ_pq is the central moment;
the direction of the ellipse is given by the second-order central moments, with the formula as follows:
θ = (1/2) · arctan(2μ11 / (μ20 - μ02))
in the formula: θ is the angle between the major axis of the person and the horizontal axis x.
5. The fall detection alarm method based on video monitoring as claimed in claim 4, wherein in the step D, the motion history image is used to quantify the human body motion and to judge whether the blob shows large motion; if it exceeds a set threshold, it is judged that a fall may have occurred, comprising the following steps:
the coefficient C_motion is calculated to represent the motion history of the person within the blob, as follows:
C_motion = ( Σ_{(x,y) ∈ blob} H_τ(x, y, t) ) / (number of pixels in the blob)
in the formula: blob is the blob of the person extracted using background subtraction, H_τ is the motion history image, and C_motion is the motion history of the person within the blob;
the coefficient is scaled to a motion percentage between 0% and 100%, and a threshold is set for C_motion.
6. The video surveillance based fall detection alarm method according to claim 5,
in step E, the method for analyzing the ratio and direction change of the ellipse and setting the standard deviation of the ellipse direction and the standard deviation of the ratio to distinguish from the normal motion includes:
the ellipse direction standard deviation and the ellipse ratio standard deviation are calculated respectively, and the threshold value of the ellipse direction standard deviation is set to be 15 degrees and the threshold value of the ellipse ratio standard deviation is set to be 0.9.
7. The video surveillance based fall detection alarm method according to claim 5,
in step F, the method for determining a still approximate figure includes:
coefficient C of motion history in human specklemotion<15 percent; and is
Computing
Figure FDA0002502532940000041
And
Figure FDA0002502532940000042
standard deviation of (2), satisfy
Figure FDA0002502532940000043
Pixel and
Figure FDA0002502532940000044
a pixel; and is
Calculate Hx,HyAnd standard deviation of theta, satisfying sigmaa<2 pixels, σb<2 pixels and σθ<15 degrees.
8. A fall detection alarm system based on video surveillance, comprising:
a data acquisition end, configured to capture video of the monitored area and to encode the captured images into video frames;
a server side comprising a streaming media server and a local server, wherein the streaming media server receives the encoded video data and forwards it to the local server, and the local server performs autonomous recognition and automatic alarming on the received video stream and stores the related data; and
a user terminal, configured to receive the alarm SMS message sent by the local server.
9. The video surveillance based fall detection alarm system according to claim 8, wherein the data acquisition end comprises cameras distributed throughout the community, the cameras include dome and bullet types, and the captured images are H.264-encoded by the cameras' built-in hardware encoding modules.
10. The video surveillance based fall detection alarm system according to claim 8 or 9, wherein the user terminal comprises a computer and a mobile device, and the user terminal is interconnected with the server side via a network.
CN202010436613.5A 2020-05-21 2020-05-21 Fall detection alarm system and method based on video monitoring Pending CN111582231A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010436613.5A CN111582231A (en) 2020-05-21 2020-05-21 Fall detection alarm system and method based on video monitoring

Publications (1)

Publication Number Publication Date
CN111582231A 2020-08-25

Family

ID=72111030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010436613.5A Pending CN111582231A (en) 2020-05-21 2020-05-21 Fall detection alarm system and method based on video monitoring

Country Status (1)

Country Link
CN (1) CN111582231A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101180880A * 2005-02-15 2008-05-14 ObjectVideo, Inc. Video surveillance system employing video primitives
TW201810187A * 2016-08-10 2018-03-16 Asia University Falling detection device and method thereof
CN107808392A * 2017-10-31 2018-03-16 Zhongke Xinda (Fujian) Technology Development Co., Ltd. Automatic tracking and positioning method and system for security-check vehicles in open scenes
CN108830252A * 2018-06-26 2018-11-16 Harbin Institute of Technology Convolutional neural network human action recognition method fusing global spatio-temporal features
CN109887238A * 2019-03-12 2019-06-14 Zhu Li Fall detection system and detection alarm method based on vision and artificial intelligence
CN110796682A * 2019-09-25 2020-02-14 Beijing Chengfeng Technology Co., Ltd. Detection and identification method and system for moving targets

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MIAO YU et al.: "A Posture Recognition-Based Fall Detection System for Monitoring an Elderly Person in a Smart Home Environment", IEEE *
ZHANG Shunmiao et al.: "A moving object detection method using background subtraction with Surendra background updating", Journal of Nanjing Institute of Technology (Natural Science Edition) *
FAN Kaibo: China Doctoral Dissertations Full-text Database, Social Sciences II, 15 October 2019 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881898A * 2020-09-27 2020-11-03 Southwest Jiaotong University Human body posture detection method based on monocular RGB image
CN111881898B * 2020-09-27 2021-02-26 Southwest Jiaotong University Human body posture detection method based on monocular RGB image

Similar Documents

Publication Publication Date Title
CN109117827B (en) Video-based method for automatically identifying wearing state of work clothes and work cap and alarm system
EP3321844B1 (en) Action recognition in a video sequence
CN110633612B (en) Monitoring method and system for inspection robot
CN107679471B (en) Indoor personnel air post detection method based on video monitoring platform
KR101215948B1 (en) Image information masking method of monitoring system based on face recognition and body information
WO2020094088A1 (en) Image capturing method, monitoring camera, and monitoring system
CN109711318B (en) Multi-face detection and tracking method based on video stream
CN111767823A (en) Sleeping post detection method, device, system and storage medium
CN112911156B (en) Patrol robot and security system based on computer vision
CN107330414A (en) Act of violence monitoring method
Raya et al. Analysis realization of Viola-Jones method for face detection on CCTV camera based on embedded system
CN110852306A (en) Safety monitoring system based on artificial intelligence
CN112183219A (en) Public safety video monitoring method and system based on face recognition
CN111582231A (en) Fall detection alarm system and method based on video monitoring
JPH09252467A (en) Mobile object detector
CN113628172A (en) Intelligent detection algorithm for personnel handheld weapons and smart city security system
CN210072642U (en) Crowd abnormal behavior detection system based on video monitoring
CN104392201A (en) Human fall identification method based on omnidirectional visual sense
CN110572618B (en) Illegal photographing behavior monitoring method, device and system
CN114488337A (en) High-altitude parabolic detection method and device
CN115995093A (en) Safety helmet wearing identification method based on improved YOLOv5
CN112422895A (en) Image analysis tracking and positioning system and method based on unmanned aerial vehicle
KH et al. Smart CCTV surveillance system for intrusion detection with live streaming
CN111832451A (en) Airworthiness monitoring process supervision system and method based on video data processing
CN112291282B (en) Dynamic inspection alarm method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200825