CN110490148A - Method for recognizing fighting behavior - Google Patents

Method for recognizing fighting behavior

Info

Publication number
CN110490148A
Authority
CN
China
Prior art keywords
target person
data
frame
behavior
fighting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910778286.9A
Other languages
Chinese (zh)
Inventor
王稳
刘翔
何鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Free Health Information Technology Co Ltd
Original Assignee
Sichuan Free Health Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Free Health Information Technology Co Ltd
Priority to CN201910778286.9A
Publication of CN110490148A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for recognizing fighting behavior and belongs to the field of safety early warning. It addresses the prior-art problem that a monitoring system must install many alarm points, which makes construction complicated and costly. The technical solution of the invention: data are obtained from a surveillance video; video frames are extracted by a frame-skipping method; a model identifies the target persons in the video and the limb skeleton data of each target person, from which the number of target persons, the overlap rate of target-person data between frames, the movement speed of each target person, and the limb swing amplitude of each target person are judged. The invention enables gymnasium management to discover in time that monitored targets are fighting and to respond accordingly, achieving timely recognition of fighting behavior.

Description

Method for recognizing fighting behavior
Technical field
The invention belongs to the field of safety early warning, and in particular relates to a method for recognizing fighting behavior.
Background technique
In today's society, fights and brawls frequently occur owing to ignorance of the law and impulsive temperament. To safeguard public order and social stability, China cracks down severely on this class of behavior.
At present, safety monitoring systems watch the monitored scenes over long periods through many installed cameras; when an abnormality such as a fight is found in the picture, an alarm is raised through the warning device to stop it.
In the prior art, alarm sensors are connected directly to the monitoring-system host. Such a monitoring system may have to install hundreds or even thousands of alarm points, and this kind of connection causes the problems of complicated construction and high cost.
Summary of the invention
Aiming at the problem that a prior-art monitoring system must install many alarm points, which causes complicated construction and high cost, the present invention provides a method for recognizing fighting behavior. Its object is to let gymnasium management discover in time that monitored targets are fighting and respond accordingly. The recognition of fighting behavior is performed by a trained fight-recognition model that identifies the persons in the monitoring data; their movement and actions are analysed and computed, and postures such as walking, running and sitting can be judged, so that fighting behavior is determined accurately.
The technical solution adopted by the invention is as follows:
A method for recognizing fighting behavior, comprising the following steps:
Step A: obtain data from a surveillance video, extract video frames by the frame-skipping method, and identify the target persons in the video and their limb skeleton data with a model;
Step B: judge the number of detected target persons;
Step C: for each target person in the current frame, compute the overlap rate between its data and the data of each target person in the previous frame;
Step D: compute the movement speed of each target person;
Step E: judge the state of each target person and compute its limb swing amplitude;
Step F: judge fighting behavior from the results of steps A, B, C, D and E.
Step B comprises two cases:
Case 1: fewer than two targets are detected, and the judgement exits;
Case 2: at least two targets are detected, and the judgement continues.
Step C comprises the following steps:
Step C1: obtain position coordinates, class and confidence from the output of the model;
Step C2: define a data set C; if C is empty, add the position coordinates, class and confidence to C; if C is not empty, obtain the previous frame's data set C, the position coordinates, classes and confidences in C, and the positions where the Kalman filter predicts each target person may appear in the next frame, and compute the bounding rectangles;
Step C3: compute the overlap rate, i.e. the intersection over union, of the bounding rectangle of each target person in the current frame with the bounding rectangle of each target person in the previous frame.
Step D comprises the following steps:
Step D1: define the centroid (x, y) of the target person's torso as the mean of its torso skeleton points,
x = (x1 + x2 + ... + xn) / n, y = (y1 + y2 + ... + yn) / n,
where n is the number of skeleton points of the torso portion;
Step D2: from the pixel displacement between the centroid positions of the target person in the current frame and in the previous frame, and from the frame rate and the running speed of the server hosting the program, compute the movement speed of the target person,
V = pixel / s × (FPS × V_run),
where s is the scale ratio between pixel size and the real-world size used for logic judgement, V_run is the frame-skip threshold of the frame-skipping method, and pixel is the Euclidean distance between the centroids of the two consecutive processed frames.
Step E comprises the following steps:
Step E1: define state I: if the x of the left-shoulder position coordinate L(x, y) of the target person is smaller than the x of the right-shoulder position coordinate R(x, y), the target person faces the monitoring camera, otherwise the target person has his back to the monitoring camera; define state II: if the x of the left-shoulder position coordinate L(x, y) of the target person is smaller than the x of the right-shoulder position coordinate R(x, y), the person presents his left side to the monitoring camera, otherwise his right side;
Step E2: limb swing amplitude of the target person in state I:
angle = atan2(y2 - y1, x2 - x1) × (180/π);
limb swing amplitude of the target person in state II:
angle = 180 - angle_front,
angle_front = atan2(y2 - y1, x2 - x1) × (180/π),
where the coordinates in the previous frame are L(x1, y1) and R(x1, y1), and the coordinates in the current frame are L(x2, y2) and R(x2, y2).
Further, in step E: if two or more persons are close together, the limb swing amplitude or movement speed of at least one person exceeds the threshold, and the movement speed of the approaching target persons shows a change, then fighting behavior is judged.
In conclusion, by adopting the above technical solution, the beneficial effects of the present invention are:
1. The frame rate (fps) of a surveillance video is 25-35 f/s; according to the invention, frames separated by a threshold number of frame units are extracted as the data for background-model detection and recognition, which effectively reduces machine computation and saves computing resources and time.
Brief description of the drawings
Embodiments of the present invention will be described with reference to the accompanying drawings, in which:
Fig. 1 is the overall flow diagram of the invention.
Fig. 2 is the specific fight-recognition flow chart of the invention.
Fig. 3 is a schematic diagram of the target-person tracking computation of the invention.
Fig. 4 is a schematic diagram of the target-person skeleton of the invention.
Fig. 5 is a schematic diagram of the overlap-rate computation range of the invention.
Specific embodiments
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any way, except for mutually exclusive features and/or steps.
The invention is described in detail below with reference to Figs. 1-5.
A method for recognizing fighting behavior, comprising the following steps:
Step 1: obtain data from the surveillance video, extract video frames by the frame-skipping method, and identify the target persons in the video and their limb skeleton data with a model. The model is a group of human-skeleton feature maps obtained with a convolutional neural network; connecting the feature points of the same person yields the skeleton lines. Feature training is based on currently available data sets such as COCO and MPII; the present invention extracts the skeleton feature points with a network that has already been trained to maturity.
With this scheme, although the frame rate (fps) of a surveillance video is 25-35 f/s, only frames separated by a threshold number of frame units are extracted as the data for background-model detection and recognition, which effectively reduces machine computation and saves computing resources and time. A minimal sketch of this sampling follows.
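The sketch below, assuming OpenCV and an illustrative skip interval V_RUN (the patent only requires a threshold number of frame units), shows one way to extract every V_RUN-th frame:

```python
import cv2

V_RUN = 5  # frame-skip threshold (illustrative value, not from the patent)

def sample_frames(video_path: str):
    """Yield every V_RUN-th frame of a 25-35 fps surveillance video."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % V_RUN == 0:
            yield index, frame  # frame handed to the skeleton model
        index += 1
    cap.release()
```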
Step 2: judge the number of detected target persons; if it is less than 2, exit the judgement and continue with the next frame; otherwise, go to step 3.
Step 3: connect the skeleton data to obtain the basic body data of each target person, and compute the person's basic posture from them. The basic data of a target person comprise head data, shoulder data, arm data, hand data, hip data, leg data and foot data. From these basic data the person's basic posture is judged: standing, sitting or walking. Sitting persons are excluded, and the data of standing and walking persons are carried into the judgement of the next frame; a simplified sketch of such a posture rule follows.
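The patent names the postures but not the rule that separates them, so the heuristic below, its keypoint names, and the thigh-orientation test are assumptions purely for illustration:

```python
import numpy as np

def basic_posture(kpts: dict) -> str:
    """kpts maps joint names such as 'hip' and 'knee' to (x, y) pixels."""
    hip = np.array(kpts["hip"], dtype=float)
    knee = np.array(kpts["knee"], dtype=float)
    thigh = knee - hip
    # A near-horizontal thigh suggests a seated person (assumed heuristic).
    if abs(thigh[1]) < 0.5 * abs(thigh[0]):
        return "sit"
    # Standing vs. walking is separated later by centroid motion (step 5).
    return "stand_or_walk"
```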
Step 4: first, each target must be tracked and the position where it will appear in the next frame predicted. Step 1 yields the corresponding position coordinates, class and confidence: the position coordinates are the skeleton-point positions of the identified target person in the two-dimensional coordinates of the surveillance video; only the target-person label is taken as the class for judgement; the confidence is the confidence of each set of position points. A data set C is defined. If C is empty, the data are added to C and the next frame is processed. If C is not empty, the previous frame's data set C is obtained together with the position coordinates, classes and confidences it contains and the positions, total(C) in number, where the Kalman filter predicts the targets may appear in the next frame; the bounding rectangles are computed, and the overlap rate of each bounding rectangle with each previous-frame bounding rectangle is taken as the ratio of the intersection of the two sets to their union; the closer the ratio is to 1, the more alike the two sets are, as shown in Fig. 5.
The bounding rectangle is computed as follows: first obtain all coordinate points of the target person, find the minimum and maximum values along the x axis and along the y axis, and connect (min X, min Y), (min X, max Y), (max X, min Y) and (max X, max Y) (min being the minimum, max the maximum) to obtain the rectangular bounding box of the contour;
where the lower bound of the intersection is Z1 = max(x1, y1) and the upper bound is Z2 = min(x2, y2),
A(x1, x2) being the bounding rectangle of the previous frame and B(y1, y2) that of the next frame; the intersection of two intervals runs from the larger of the lower ends to the smaller of the upper ends. A sketch of the bounding-rectangle and overlap-rate computation follows.
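As a concrete illustration of the bounding rectangle and the overlap rate described above, here is a minimal Python sketch; the (min_x, min_y, max_x, max_y) box layout is an assumption of the sketch, not the patent's notation:

```python
import numpy as np

def bounding_rect(points: np.ndarray) -> tuple:
    """points: (n, 2) array of a person's skeleton coordinates."""
    min_x, min_y = points.min(axis=0)
    max_x, max_y = points.max(axis=0)
    return (min_x, min_y, max_x, max_y)

def overlap_rate(a: tuple, b: tuple) -> float:
    """Intersection over union; 1.0 means the two boxes coincide."""
    ix1 = max(a[0], b[0]); iy1 = max(a[1], b[1])   # intersection lower bound
    ix2 = min(a[2], b[2]); iy2 = min(a[3], b[3])   # intersection upper bound
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```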
Each pair thus yields an overlap-rate distance; concretely, the overlap rate of each target person of the current frame with each target person of the previous frame. These distances enter the Hungarian-algorithm matching judgement of the frame data. The Hungarian-algorithm process is as follows (a library-based equivalent is sketched after the list):
1. Add the overlap-rate distance data set to a matrix N;
2. Judge whether the algorithm has obtained the best match; if so, terminate, otherwise continue to the next step;
3. Subtract the shortest distance (the minimum) of each row of the matrix from that row, and replace it with T;
4. Starting from the matrix rows, find a row without a selected T and record this row; record all columns holding a T of this row; further, if a recorded column contains a T, record the row holding that T; execute these operations in a loop until all rows and columns of the matrix have been recorded;
5. Add the unrecorded rows and the recorded columns to a queue, then find the minimum value Vmin among them;
6. Subtract Vmin within the added queue, whereupon the original T becomes -Vmin; then add Vmin to the matrix columns of the queue and return to step 2.
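In practice, the optimal assignment that these steps compute can be obtained from a library routine. The sketch below uses SciPy's Hungarian-algorithm implementation, under the assumption that a larger overlap rate means a better match, so the cost is taken as one minus the overlap rate:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(overlap: np.ndarray) -> list:
    """overlap[i, j]: overlap rate of current target i with previous target j.
    Returns (i, j) index pairs of the optimal one-to-one matching."""
    cost = 1.0 - overlap                      # turn similarity into cost
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return list(zip(rows, cols))
```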
The frame data are then judged: if all total(C) data match normally, tracking ends; if a datum has no normal match in data set C, the target is newly appeared in the current frame, and its data are added to C; if a target person in data set C is not matched correctly, the target is considered to have disappeared in this frame, and its data are deleted from C.
After the judgement is completed, all data in data set C are included in the Kalman-filter computation to predict the positions that may appear in the next frame, as in the sketch below.
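A minimal per-target Kalman sketch, assuming a constant-velocity model over the box centre; the patent does not specify the state model, so the matrices and noise values here are illustrative:

```python
import numpy as np
import cv2

def make_kalman() -> "cv2.KalmanFilter":
    # State [x, y, vx, vy], measurement [x, y]; constant-velocity transition.
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2  # assumed noise
    return kf

kf = make_kalman()
predicted = kf.predict()  # position/velocity the target may show next frame
# After Hungarian matching, correct with the matched detection's centre
# (the coordinates below are placeholders):
kf.correct(np.array([[120.0], [80.0]], np.float32))
```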
Step 5: the movement of a human target is obtained from the distance moved and the elapsed frame time. The particle of the target person is defined as the centroid of the target person's torso, where the centroid (x, y) over the n skeleton points (xi, yi) of the torso portion is defined as
x = (x1 + x2 + ... + xn) / n, y = (y1 + y2 + ... + yn) / n.
From the pixel displacement between the centroid positions of the target person in the current frame and in the previous frame, and from the frame rate and the running speed of the server hosting the program, the movement speed of the target person is computed:
V = pixel / s × (FPS × V_run),
where s is the scale ratio between pixel size and the real-world size used for logic judgement, V_run is the frame-skip threshold of the frame-skipping method, and pixel is the Euclidean distance between the centroids Az(x1, y1) and Bz(x2, y2) of the two consecutive processed frames, i.e.
pixel = sqrt((x2 - x1)^2 + (y2 - y1)^2).
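A short sketch of the centroid and speed computation of step 5; FPS, V_RUN and the scale ratio S are placeholder values left to the deployment:

```python
import numpy as np

FPS, V_RUN, S = 25.0, 5, 1.0  # illustrative constants

def torso_centroid(torso_pts: np.ndarray) -> np.ndarray:
    """Mean of the n torso skeleton points, shape (n, 2)."""
    return torso_pts.mean(axis=0)

def movement_speed(c_prev: np.ndarray, c_curr: np.ndarray) -> float:
    pixel = float(np.linalg.norm(c_curr - c_prev))  # Euclidean distance
    return pixel / S * (FPS * V_RUN)  # speed formula as recited above
```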
Step 6: from the skeleton points produced by the model, the shoulder, neck and head position coordinates determine the person's orientation towards the monitoring camera. Define state I: if the x of the left-shoulder position coordinate L(x, y) of the target person is smaller than the x of the right-shoulder position coordinate R(x, y), the target person faces the monitoring camera; otherwise the target person has his back to the monitoring camera. Define state II: if the x of the left-shoulder position coordinate L(x, y) of the target person is smaller than the x of the right-shoulder position coordinate R(x, y), the person presents his left side to the monitoring camera; otherwise his right side.
Step 7: by analysing the positions of the target's left- and right-shoulder points and using the judgement of the person's state, compute the target person's limb swing amplitude (a sketch covering steps 6 and 7 follows):
limb swing amplitude of the target person in state I:
angle = atan2(y2 - y1, x2 - x1) × (180/π);
limb swing amplitude of the target person in state II:
angle = 180 - angle_front,
where angle_front is the angle computed as in state I.
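A sketch of the orientation judgement of step 6 and the swing-amplitude angle of step 7; reading the two-argument "tan" of the original text as atan2 is an assumption:

```python
import math

def facing_camera(left_shoulder, right_shoulder) -> bool:
    """State I: left-shoulder x smaller than right-shoulder x."""
    return left_shoulder[0] < right_shoulder[0]

def swing_amplitude(p1, p2, state_i: bool) -> float:
    """Angle in degrees between corresponding points of two frames."""
    angle = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    return angle if state_i else 180.0 - angle
```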
Step 8: if the instantaneous speed of a certain target person in the data set changes abruptly and the person approaches another person, the bounding rectangles at the position coordinates of the two persons are extracted; if the limb swing amplitude or the speed of either person exceeds the threshold, fighting behavior is judged. Likewise, if two or more persons are close together, the limb swing amplitude or speed of at least one of them exceeds the threshold, and the movement speed of the approaching target persons changes, fighting behavior is judged. When fighting is judged, the current frame data and the time are saved and a message is sent to the system alarm module.
The threshold of the movement speed is obtained as follows: the threshold is a preset value. The movement speeds of the target persons are added to a set (the speed of the same person is taken only once); when the set holds more than 10 values, the average movement speed is computed. If in a certain frame the movement speed of a target person exceeds the average movement speed by more than 30%, the threshold is exceeded. This threshold must be updated in real time, and the following data must be excluded before recomputation (a sketch of this bookkeeping follows):
(1) movement-speed data already judged to exceed the threshold;
(2) movement-speed data considered to be noise (a speed below 40 pixels/s, or a person's speed without actual displacement).
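A sketch of the adaptive speed-threshold bookkeeping; the exact update policy beyond the rules above is an assumption:

```python
NOISE_FLOOR = 40.0  # pixels/s, per the embodiment

class SpeedThreshold:
    def __init__(self):
        self.samples = []  # one speed sample per person

    def exceeds(self, speed: float) -> bool:
        if len(self.samples) <= 10:
            over = False  # not enough data yet to judge
        else:
            avg = sum(self.samples) / len(self.samples)
            over = speed > 1.3 * avg  # more than 30% above the average
        # Exclude over-threshold and noise samples from future averages.
        if not over and speed >= NOISE_FLOOR:
            self.samples.append(speed)
        return over
```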
The above embodiments only express specific implementations of the present application, and their description is comparatively specific and detailed, but they must not therefore be interpreted as limiting the scope of protection of the application. It should be pointed out that a person of ordinary skill in the art can make various modifications and improvements without departing from the concept of the technical solution of the application, and these all belong to the scope of protection of the application.

Claims (6)

1. A method for recognizing fighting behavior, characterized by comprising the following steps:
Step A: obtain data from a surveillance video, extract video frames by the frame-skipping method, and identify the target persons in the video and their limb skeleton data with a model;
Step B: judge the number of detected target persons;
Step C: for each target person in the current frame, compute the overlap rate between its data and the data of each target person in the previous frame;
Step D: compute the movement speed of each target person;
Step E: judge the state of each target person and compute its limb swing amplitude;
Step F: judge fighting behavior from the results of steps A, B, C, D and E.
2. The method for recognizing fighting behavior according to claim 1, characterized in that step B comprises two cases:
Case 1: fewer than two targets are detected, and the judgement exits;
Case 2: at least two targets are detected, and the judgement continues.
3. The method for recognizing fighting behavior according to claim 1, characterized in that step C comprises the following steps:
Step C1: obtain position coordinates, class and confidence from the output of the model;
Step C2: define a data set C; if C is empty, add the position coordinates, class and confidence to C; if C is not empty, obtain the previous frame's data set C, the position coordinates, classes and confidences in C, and the positions where the Kalman filter predicts each target person may appear in the next frame, and compute the bounding rectangles;
Step C3: compute the overlap rate, i.e. the intersection over union, of the bounding rectangle of each target person in the current frame with the bounding rectangle of each target person in the previous frame.
4. The method for recognizing fighting behavior according to claim 1, characterized in that step D comprises the following steps:
Step D1: define the centroid (x, y) of the target person's torso as the mean of its torso skeleton points,
x = (x1 + x2 + ... + xn) / n, y = (y1 + y2 + ... + yn) / n,
where n is the number of skeleton points of the torso portion;
Step D2: from the pixel displacement between the centroid positions of the target person in the current frame and in the previous frame, and from the frame rate and the running speed of the server hosting the program, compute the movement speed of the target person,
V = pixel / s × (FPS × V_run),
where s is the scale ratio between pixel size and the real-world size used for logic judgement, V_run is the frame-skip threshold of the frame-skipping method, and pixel is the Euclidean distance between the centroids of the two consecutive processed frames.
5. The method for recognizing fighting behavior according to claim 1, characterized in that step E comprises the following steps:
Step E1: define state I: if the x of the left-shoulder position coordinate L(x, y) of the target person is smaller than the x of the right-shoulder position coordinate R(x, y), the target person faces the monitoring camera, otherwise the target person has his back to the monitoring camera; define state II: if the x of the left-shoulder position coordinate L(x, y) of the target person is smaller than the x of the right-shoulder position coordinate R(x, y), the person presents his left side to the monitoring camera, otherwise his right side;
Step E2: limb swing amplitude of the target person in state I:
angle = atan2(y2 - y1, x2 - x1) × (180/π);
limb swing amplitude of the target person in state II:
angle = 180 - angle_front,
angle_front = atan2(y2 - y1, x2 - x1) × (180/π),
where the coordinates in the previous frame are L(x1, y1) and R(x1, y1), and the coordinates in the current frame are L(x2, y2) and R(x2, y2).
6. The method for recognizing fighting behavior according to claim 1, characterized in that in step E: if two or more persons are close together, the limb swing amplitude or movement speed of at least one person exceeds the threshold, and the movement speed of the approaching target persons shows a change, then fighting behavior is judged.
CN201910778286.9A 2019-08-22 2019-08-22 Method for recognizing fighting behavior Pending CN110490148A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910778286.9A CN110490148A (en) 2019-08-22 2019-08-22 Method for recognizing fighting behavior

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910778286.9A CN110490148A (en) 2019-08-22 2019-08-22 Method for recognizing fighting behavior

Publications (1)

Publication Number Publication Date
CN110490148A (en) 2019-11-22

Family

ID=68552855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910778286.9A Pending CN110490148A (en) 2019-08-22 2019-08-22 Method for recognizing fighting behavior

Country Status (1)

Country Link
CN (1) CN110490148A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015025249A2 (en) * 2013-08-23 2015-02-26 Dor Givon Methods, systems, apparatuses, circuits and associated computer executable code for video based subject characterization, categorization, identification, tracking, monitoring and/or presence response
CN105389567A (en) * 2015-11-16 2016-03-09 上海交通大学 Group anomaly detection method based on a dense optical flow histogram
CN106241534A * 2016-06-28 2016-12-21 西安特种设备检验检测院 Intelligent control method for abnormal behavior of multiple people riding an elevator
CN107146386A (en) * 2017-05-05 2017-09-08 广东小天才科技有限公司 Abnormal behavior detection method and device, and user equipment
CN108053427A * 2017-10-31 2018-05-18 深圳大学 Improved multi-object tracking method, system and device based on KCF and Kalman filtering
CN109299646A * 2018-07-24 2019-02-01 北京旷视科技有限公司 Crowd abnormal-event detection method, apparatus, system and storage medium
CN109086717A * 2018-08-01 2018-12-25 四川电科维云信息技术有限公司 Violent-behavior detection system and method based on human skeleton and motion signal features
CN109117794A * 2018-08-16 2019-01-01 广东工业大学 Moving-target behavior tracking method, apparatus, device and readable storage medium
CN109993775A * 2019-04-01 2019-07-09 云南大学 Single-target tracking method based on feature compensation
CN110072022A (en) * 2019-04-19 2019-07-30 努比亚技术有限公司 Security alarm processing method, wearable device and computer readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111611904A (en) * 2020-05-15 2020-09-01 新石器慧通(北京)科技有限公司 Dynamic target identification method based on unmanned vehicle driving process
CN111611904B (en) * 2020-05-15 2023-12-01 新石器慧通(北京)科技有限公司 Dynamic target identification method based on unmanned vehicle driving process
CN113111733A (en) * 2021-03-24 2021-07-13 广州华微明天软件技术有限公司 Posture flow-based fighting behavior recognition method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191122)