WO2020139071A1 - System and method for detecting aggressive behaviour activity - Google Patents


Info

Publication number
WO2020139071A1
WO2020139071A1 (PCT Application No. PCT/MY2019/050126)
Authority
WO
WIPO (PCT)
Prior art keywords
image
frames
processing module
aggressive behaviour
determining
Prior art date
Application number
PCT/MY2019/050126
Other languages
French (fr)
Inventor
Kadim ZULAIKHA BINTI
Chin Wee WONG
Hock Woon Hon
Mohamed Johari KHAIRUNNISA BINTI
Nik Zulkepeli NIK AHMAD AKRAM BIN
Original Assignee
Mimos Berhad
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mimos Berhad filed Critical Mimos Berhad
Publication of WO2020139071A1 publication Critical patent/WO2020139071A1/en

Classifications

    • G06T7/215: Motion-based segmentation (G: Physics › G06: Computing; calculating or counting › G06T: Image data processing or generation, in general › G06T7/00: Image analysis › G06T7/20: Analysis of motion)
    • G06T7/20: Analysis of motion
    • G06T7/579: Depth or shape recovery from multiple images from motion (G06T7/50: Depth or shape recovery › G06T7/55: from multiple images)
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects (G06V20/00: Scenes; scene-specific elements › G06V20/50: Context or environment of the image)
    • G06V40/20: Movements or behaviour, e.g. gesture recognition (G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data)
    • G06T2207/10016: Video; image sequence (G06T2207/10: Image acquisition modality)
    • G06T2207/30196: Human being; person (G06T2207/30: Subject of image; context of image processing)
    • G06T2207/30232: Surveillance

Definitions

  • The method of determining the potential aggressive behaviour activity area by analyzing the moving object areas in the image frames by a contour processing module comprises the steps of extracting contours of the identified moving object areas; filtering out contours that overlap with other object contours based on predefined parameters; computing the distance between object contours; merging object contours that are close to each other; extracting the smallest enclosing box containing the merged object contours; determining whether the merged contour location overlaps a previous detection if the merged contour size is less than a predefined threshold; and determining the potential aggressive behaviour activity area if the merged contour size is more than the predefined threshold.
  • The method of estimating the level of interaction between the moving objects within the potential aggressive behaviour activity area by computing the number of contradict pixels and the rate of change in major direction for successive frames by an aggressive behaviour module comprises the steps of quantizing the direction of motion pixels into four major directions (up, down, left and right); computing the percentage of pixels in each major direction; computing the number of contradict pixels, wherein contradict pixels are pixels that are in opposite directions and close to each other; and computing the rate of change in major direction for the processed frames within a preferred number of frames.
  • The method of determining the aggressive behaviour activity based on the estimated level of interaction between the moving object areas within the potential areas by the aggressive behaviour module comprises the steps of selecting a predefined threshold based on the status of detection in the current frame, wherein the status is either a potential new detection or a potential on-going detection; determining the level of interaction between the moving objects by comparing it with the predefined threshold; increasing the counted number of continuously detected frames if the level of interaction is more than the selected threshold; comparing the number of detection frames with a preferred detection frame window size; and detecting aggressive behaviour if the number of detection frames is more than the window size.
  • Figure 1 illustrates a block diagram of a system for detecting aggressive behaviour activity according to one embodiment of the present invention.
  • Figure 2 illustrates a flowchart for a method of detecting aggressive behaviour activity according to one embodiment of the present invention.
  • Figure 3 illustrates a flowchart of the sub-steps for regulating the frame rate of the image frames by skipping unnecessary frames according to one embodiment of the present invention.
  • Figure 4 illustrates a flowchart of the sub-steps for identifying moving object areas in the image frames according to one embodiment of the present invention.
  • Figure 5 illustrates a flowchart of the sub-steps for determining potential aggressive behaviour activity area by analyzing the moving object areas according to one embodiment of the present invention.
  • Figure 6 illustrates a flowchart of the sub-steps for estimating level of interaction between the moving objects within the potential aggressive behaviour activity area according to one embodiment of the present invention.
  • Figure 7 illustrates a flowchart of the sub-steps for determining the aggressive behaviour activity according to one embodiment of the present invention.
  • The present invention provides a system and method for detecting an aggressive behaviour activity.
  • An aggressive behaviour activity captured by at least one image capturing device is difficult to define due to variations in the movements of, and interactions between, the captured objects.
  • The system and method provided in the present invention help to detect the aggressive behaviour activity as well as to differentiate between a normal activity and an aggressive activity.
  • FIG. 1 illustrates a block diagram of the system (100) for detecting aggressive behaviour activity according to an embodiment of the present invention.
  • The system (100) is preferably implemented in a confined area, such as a detention area, a room or an elevator.
  • The system (100) comprises an image processing module (10), a frame processing module (20), a motion calculation module (30), a contour processing module (40), an aggressive behaviour detection module (50), and an extensible markup language (XML) processing module (60).
  • Any aggressive image or video in the confined area is captured beforehand by at least one image capturing device, preferably a device such as a closed-circuit television (CCTV) camera.
  • The image processing module (10) is then configured to retrieve a plurality of image frames from the at least one image capturing device, to be further regulated by the frame processing module (20).
  • The frame processing module (20), which is connected to the image processing module (10), determines the number of frames to be processed by a subsequent module by regulating the frame rate of the image frames. The number of frames to be processed is defined in the frame processing module (20) by invoking input from the XML processing module (60).
  • The image frames selected for processing by the frame processing module (20) are then transmitted to the motion calculation module (30) for identifying moving object areas in the image frames by computing differences between the current image and a background image.
  • The background image refers to the image from the capturing device without any object of interest inside the image.
  • The moving object areas are then analyzed by the contour processing module (40), which is connected to the motion calculation module (30), for identifying a potential aggressive behaviour activity area.
  • The aggressive behaviour detection module (50), which is connected to the contour processing module (40), is configured to detect the aggressive behaviour activity based on the level of interaction within the potential aggressive behaviour area.
  • A method (1000) of detecting aggressive behaviour activity begins with the step of retrieving (1100) a plurality of image frames captured by the image capturing device by the image processing module (10).
  • The frame rate of the plurality of image frames is then regulated (1200) by the frame processing module (20) to determine the number of frames to be processed by the subsequent module, i.e. the motion calculation module (30).
  • The selected frames from the frame processing module (20) are subsequently processed by the motion calculation module (30) to identify moving object areas by computing differences between the current image and a background image (1300).
  • The moving object areas in the image frames are further analyzed by the contour processing module (40) to determine a potential aggressive behaviour activity area (1400). Thereon, the level of interaction between the moving objects within the potential aggressive behaviour activity area is estimated by computing the number of contradict pixels and the rate of change in major direction of the processed frames by the aggressive behaviour module (50) (1500). Finally, based on the estimated level of interaction between the moving objects within the potential area, the aggressive behaviour activity is determined by the aggressive behaviour module (50).
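The module chain described above can be sketched as a minimal pipeline. The class and method names, the fixed frame-step policy, and the threshold values below are illustrative assumptions for clarity, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    frame_index: int
    is_aggressive: bool
    region: tuple  # (x, y, w, h) of the potential activity area

class AggressionPipeline:
    """Toy chain: frame-rate regulation -> interaction check -> temporal decision."""

    def __init__(self, process_every_n=2, interaction_threshold=0.5, min_hits=3):
        self.process_every_n = process_every_n          # frame-rate regulation
        self.interaction_threshold = interaction_threshold
        self.min_hits = min_hits                        # continuous frames needed
        self.continuous_hits = 0

    def should_process(self, frame_index):
        # Frame processing module: keep a constant time distance between frames.
        return frame_index % self.process_every_n == 0

    def process(self, frame_index, interaction_level, region):
        # Skipped frames are not analyzed at all.
        if not self.should_process(frame_index):
            return None
        # Aggressive behaviour module: count continuously interacting frames.
        if interaction_level > self.interaction_threshold:
            self.continuous_hits += 1
        else:
            self.continuous_hits = 0
        return Detection(frame_index, self.continuous_hits >= self.min_hits, region)
```

A high interaction level sustained over enough consecutive processed frames is what flips the detection flag, mirroring the counting logic in step (1600).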
  • The method of determining the number of frames to be processed by the frame processing module (20) by regulating the frame rate of the image frames is illustrated in FIG. 3 (1200).
  • Frame rate regulation is important at this stage because the solution of the present invention is based on optical flow analysis.
  • The time distance between the processed frames has to be similar in order to obtain an accurate optical flow magnitude.
  • A plurality of parameters are loaded (1210) into the XML processing module (60) in the form of an XML file to determine the number of image frames that can be processed in the frame processing module (20).
  • The parameters defined in the XML processing module (60) are preferably, but not limited to, frame rate parameters, filtering parameters, detection parameters, etc.
  • The plurality of image frames received (1230) from the image processing module (10) are combined with the parameters from the XML processing module (60) to identify a successive frame to be processed (1240) in the motion calculation module (30). If a frame is defined not to be processed, the frame processing module (20) waits for a new frame from the image processing module (10).
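A minimal sketch of this XML-driven frame regulation follows. The tag names in the parameter file and the even-spacing policy (keep every k-th frame) are assumptions; the patent does not specify the XML schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical parameter file; tag names are illustrative, not from the patent.
PARAMS_XML = """
<params>
  <frame_rate>
    <source_fps>30</source_fps>
    <process_fps>10</process_fps>
  </frame_rate>
</params>
"""

def load_frame_params(xml_text):
    """XML processing module: parse the frame-rate parameters."""
    root = ET.fromstring(xml_text)
    return {
        "source_fps": int(root.findtext("frame_rate/source_fps")),
        "process_fps": int(root.findtext("frame_rate/process_fps")),
    }

def frames_to_process(frame_indices, params):
    """Frame processing module: keep every k-th frame so that processed
    frames are evenly spaced in time, as optical flow analysis requires."""
    step = max(1, params["source_fps"] // params["process_fps"])
    return [i for i in frame_indices if i % step == 0]
```

With a 30 fps source and a 10 fps processing rate, every third frame is kept, so successive processed frames are always 100 ms apart.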
  • The motion calculation module (30) subsequently identifies the moving object areas in the image frames, as shown in the flowchart in FIG. 4.
  • The method for identifying (1300) the moving object areas in the image frames by computing differences between the current image and a background image by the motion calculation module (30) includes the step of computing a dense optical flow between the current image and the previous image (1310), wherein the previous image is retrieved from existing frames in the frame processing module (20). The current image and previous image of the frame are analyzed by the optical flow analysis to determine the movement in the current image compared to the previous image.
  • The optical flow analysis generates points of pixels of the movement in the image, whereby each point contains the information of the original x,y coordinate and the destination x,y coordinate.
  • The original x,y coordinate refers to the previous image frame, while the destination x,y coordinate refers to the current image frame.
  • The optical flow magnitude and the direction of the optical flow points are then computed in step (1320).
  • The optical flow magnitude is computed from the displacement between the destination coordinate and the original coordinate, while the direction of the optical flow points is determined by comparing the destination coordinate with the original coordinate.
  • The optical flow magnitude is then evaluated by comparing it with a predefined threshold (1330). If the optical flow magnitude is more than the predefined threshold, the direction of the optical flow points is quantized into a number of major angles, preferably eight directions (1340). For an optical flow magnitude that is less than the predefined threshold, the process ends.
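The magnitude, direction, and quantization steps above can be sketched as follows. The function names, the bin centring, and the default threshold are assumptions; only the overall scheme (threshold on magnitude, then quantize the angle into eight major directions) follows the patent:

```python
import math

def flow_magnitude_and_angle(orig, dest):
    """Displacement length and angle (degrees, 0-360) between a flow point's
    original (previous-frame) and destination (current-frame) coordinates."""
    dx, dy = dest[0] - orig[0], dest[1] - orig[1]
    magnitude = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return magnitude, angle

def quantize_direction(angle, bins=8):
    """Quantize an angle into one of `bins` major directions (0..bins-1),
    with bin 0 centred on 0 degrees."""
    width = 360.0 / bins
    return int(((angle + width / 2) % 360.0) // width)

def moving_points(flow_points, mag_threshold=1.0):
    """Keep only flow points whose magnitude exceeds the threshold,
    tagged with their quantized major direction."""
    kept = []
    for orig, dest in flow_points:
        mag, ang = flow_magnitude_and_angle(orig, dest)
        if mag > mag_threshold:
            kept.append((dest, quantize_direction(ang)))
    return kept
```

Points below the magnitude threshold are discarded as still pixels; the surviving points carry one of eight direction labels, which is what the colour marking in the next step visualizes.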
  • The corresponding point coordinates for the previous image are defined (1350) in order to mark the corresponding points in the current image as a coloured image (1360), whereby the colour of each pixel refers to its quantized direction.
  • The coloured pixels are preferably marked blue, red, green, yellow, etc.
  • Image differences between the coloured image and the background image are computed (1370) to identify the moving object areas in the image frames (1380).
  • The identified moving object areas are then transferred to the contour processing module (40) for further processing.
  • FIG. 5 illustrates the method of determining the potential aggressive behaviour activity area by analyzing the moving object areas in the image frames (1400) by the contour processing module (40).
  • The contours of the detected moving object areas are extracted (1410). Any insignificant contours that overlap, are subsets of, or intersect with other object contours are filtered out (1420) from the extracted contours based on predefined parameters.
  • The predefined parameters are preferably, but not limited to, position, size, direction and magnitude of the direction. For example, if a contour of a detected moving object area overlaps or is a subset of another, bigger contour, then that contour is filtered out.
  • The distances between the remaining object contours are computed (1430) in order to merge the object contours that are close to each other (1440).
  • The motion interactions among the object contours can thus be combined for the aggressive activity detection and, furthermore, the smallest enclosing box containing the merged object contours can be extracted from the frame (1450). If the merged contour size is less than a predefined threshold (1460), whether the merged contour location overlaps with a previous detection is determined (1470). Else, if the merged contour size is more than the predefined threshold, the potential area of the aggressive behaviour is identified (1480). Then, the identified area of the aggressive behaviour is transmitted to the aggressive behaviour module (50) for further processing.
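The merging of nearby contours and the extraction of the smallest enclosing box can be sketched with bounding boxes as a stand-in for contours. The greedy single-linkage grouping and the `max_gap` value are assumptions for illustration:

```python
def box_distance(a, b):
    """Gap between two axis-aligned boxes (x, y, w, h); 0 if they overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    dx = max(ax - (bx + bw), bx - (ax + aw), 0)
    dy = max(ay - (by + bh), by - (ay + ah), 0)
    return (dx * dx + dy * dy) ** 0.5

def enclosing_box(boxes):
    """Smallest axis-aligned box containing all given boxes."""
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[0] + b[2] for b in boxes)
    y2 = max(b[1] + b[3] for b in boxes)
    return (x1, y1, x2 - x1, y2 - y1)

def merge_close_boxes(boxes, max_gap=10.0):
    """Greedily group boxes whose gap is below max_gap, then return the
    smallest enclosing box of each group (single-linkage clustering)."""
    groups = []
    for box in boxes:
        merged = False
        for group in groups:
            if any(box_distance(box, other) <= max_gap for other in group):
                group.append(box)
                merged = True
                break
        if not merged:
            groups.append([box])
    return [enclosing_box(g) for g in groups]
```

Two people close enough to interact end up inside one enclosing box, and the size of that box is what gets compared against the threshold in step (1460).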
  • Figure 6 illustrates the method of estimating the level of interaction between the moving objects within the potential aggressive behaviour activity area by computing the number of contradict pixels and the rate of change in major direction for successive frames (1500) by the aggressive behaviour module (50).
  • All pixels within the potential aggressive behaviour area are examined and the direction of the motion pixels is quantized into four major directions (1510), whereby the directions are preferably, but not limited to, up, down, left and right.
  • The percentage of pixels in each direction is computed in order to determine (1520) the significant directions for all the pixels. For example, suppose the percentages of pixels in the major directions (up, down, left and right) are 30%, 30%, 5% and 35% respectively.
  • Only the three major directions with a percentage of more than 25% are then considered significant: up, down and right.
  • The left direction is filtered out, as the percentage of pixels in that direction is small and most probably belongs to noise.
  • Next, the number of contradict pixels is computed (1530). Contradict pixels are pixels that are in opposite directions and close to each other. The two sets of opposite directions are right-left and up-down. For example, if the significant directions include right and left, then the distances between all pixels with right and left directions are computed, and pixels that are close to each other contribute to the number of contradict pixels. After that, the rate of change in major direction across the processed frames within a preferred number of frames is computed (1540), yielding the rate of change in major directions and the number of contradict pixels.
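The three quantities above (significant directions, contradict-pixel count, rate of direction change) can be sketched as follows. The pixel representation as `(x, y, direction)` tuples, the 25% share, and the distance cutoff are illustrative assumptions:

```python
from collections import Counter

OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}

def significant_directions(pixels, min_share=0.25):
    """Directions carried by more than min_share of the motion pixels;
    e.g. shares of 30/30/5/35 percent keep up, down and right only."""
    counts = Counter(d for _, _, d in pixels)
    total = sum(counts.values())
    return {d for d, c in counts.items() if c / total > min_share}

def count_contradict_pixels(pixels, max_dist=5.0):
    """Pairs of pixels moving in opposite directions and close to each other."""
    n = 0
    for i, (x1, y1, d1) in enumerate(pixels):
        for x2, y2, d2 in pixels[i + 1:]:
            close = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= max_dist
            if d2 == OPPOSITE[d1] and close:
                n += 1
    return n

def direction_change_rate(major_directions):
    """Fraction of successive processed frames whose major direction flipped."""
    if len(major_directions) < 2:
        return 0.0
    changes = sum(1 for a, b in zip(major_directions, major_directions[1:]) if a != b)
    return changes / (len(major_directions) - 1)
```

Many nearby opposite-direction pixels and a rapidly flipping major direction are the two signatures of struggle that the interaction level aggregates.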
  • The method of determining the aggressive behaviour activity based on the estimated level of interaction between the moving objects within the potential areas (1600) by the aggressive behaviour module (50) is illustrated herein.
  • A different threshold is selected based on the status of detection in the current frame, whereby the status may be either a potential new detection or a potential on-going detection (1610).
  • The level of interaction obtained from the previous step (1500) is evaluated by comparing it with the selected threshold (1620). If the level of interaction is more than the selected threshold, then the counted number of continuously detected frames is increased (1630). Otherwise, no further action is taken. After that, the number of detection frames is compared with the preferred detection frame window size (1640). If the number of detection frames is more than the window size, the aggressive behaviour activity is detected and an alert is triggered (1650).
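The status-dependent thresholding and the detection-frame counting can be sketched like this. The two threshold values and the window size are illustrative; the patent only specifies that different thresholds are used for new versus on-going detections:

```python
def select_threshold(ongoing, new_threshold=0.6, ongoing_threshold=0.4):
    """Stricter threshold for a potential new detection, looser for an
    on-going one, so an established detection is not dropped by brief lulls."""
    return ongoing_threshold if ongoing else new_threshold

class AggressionDecider:
    def __init__(self, window=5):
        self.window = window      # preferred detection-frame count
        self.hits = 0             # continuously detected frames
        self.ongoing = False      # detection status of the current frame

    def update(self, interaction_level):
        threshold = select_threshold(self.ongoing)
        if interaction_level > threshold:
            self.hits += 1
        else:
            self.hits = 0
            self.ongoing = False
        if self.hits >= self.window:
            self.ongoing = True   # an alert would be triggered here
        return self.ongoing
```

Once a detection is on-going, the looser threshold keeps it alive through short dips in the interaction level, while a genuine lull resets the counter.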
  • The terms “a” and “an,” as used herein, are defined as one or more than one.
  • The term “plurality,” as used herein, is defined as two or more than two.
  • The term “another,” as used herein, is defined as at least a second or more.
  • The terms “including” and/or “having,” as used herein, are defined as comprising (i.e., open language).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a system and method for detecting aggressive behaviour activity in a confined area. The system of the present invention comprises an image processing module (10) for retrieving a plurality of image frames, a frame processing module (20) for determining the number of frames to be processed, a motion calculation module (30) for identifying moving object areas in the image frames, a contour processing module (40) for determining the potential area of the aggressive behaviour activity, and an aggressive behaviour detection module (50) for detecting the aggressive behaviour activity based on the level of interaction between the moving objects within the potential area. Further, methods of detecting the aggressive behaviour activity are provided herein to perform the same.

Description

SYSTEM AND METHOD FOR DETECTING AGGRESSIVE BEHAVIOUR
ACTIVITY
FIELD OF THE INVENTION
The present invention generally relates to the detection of aggressive behaviour activity, and more particularly relates to a system and method for detecting aggressive behaviour activity in a confined area based on the level of interaction between objects within a potential area of the aggressive behaviour activity.
BACKGROUND OF THE INVENTION
Aggressive behaviour, such as fighting or punching activity, is a harmful activity that may cause fatality if it is not detected or alerted early. Such actions occurring in a confined area such as a detention area, ward, dorm, elevator, etc. are hard to monitor using the fixed parameters of a conventional surveillance camera, due to large variations in the number of people involved, the duration of the action and the movement pattern of the action. False interpretations may also occur when the fighting involves only small actions, e.g. when only one person is leading the fighting scene or when a person is punching the wall or ground.
An example of a patent application that discloses a method for detecting aggressive behaviour is Chinese application CN 102902981. The application discloses a violent video detection method based on slow characteristic analysis, which comprises the steps of performing dense trajectory extraction on classified videos, learning a slow characteristic function based on a slow characteristic analysis method for trajectories, obtaining the characteristic representation of video segments through the slow characteristic function, training on the extracted characteristics, and modelling; and performing characteristic extraction on a new video and inputting the extracted characteristics into the model obtained through training, thus obtaining the classification of the video (a violent video or a non-violent video).
Another example of a system and method for detecting aggressive behaviour is disclosed in Korean patent application No. KR 20160057503. The application relates to a violence detection system for detecting violent behaviour and a method thereof. The violence detection system comprises an image obtainment unit for obtaining an image captured by a camera. A person tracking unit is used for detecting and tracking a person by setting the person as an object, based on image processing technology applied to the obtained image. The movement of the attacked person is analyzed and extracted by a multi-period movement analysis unit over a short-term period, a mid-term period and a long-term period. A multi-layered violent behaviour determination unit is then used for recognizing violent behaviour in a multi-layered structure based on the data analyzed by the multi-period movement analysis unit, and for detecting violent behaviour by integrating and analyzing the recognized results for the multi-layered violent behaviour. Finally, an alarm generation unit generates an alarm according to the detection of the violent behaviour.
Therefore, there is a need for an improved system and method for detecting aggressive behaviour activity in a confined area based on the level of interaction between objects within a potential area of the aggressive behaviour activity. Although there are systems and methods for the same in the prior art, for many practical purposes there is still considerable room for improvement.
SUMMARY OF THE INVENTION
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.
It is an object of the present invention to provide a system and method to determine an aggressive behaviour activity in a confined area.
Another objective of the present invention is to provide accurate detection of aggressive behaviour within a small confined area such as a detention area, a room or an elevator.
Accordingly, the present invention provides a system for detecting aggressive behaviour activity comprising of an image processing module for retrieving a plurality of image frames from an image capturing device; a frame processing module for determining numbers of image frames to be processed from the image processing module by regulating frame rate of the image frames; a motion calculation module for identifying a moving object areas in the image frame from the frame processing module by computing differences between a current image and background image; and a contour processing module for determining potential aggressive behavior activity area by analyzing the moving object areas from the motion calculation module. The system further comprising an aggressive behaviour detection module for detecting the aggressive behaviour activity based on level of interaction within the potential aggressive behaviour activity area from the contour processing module.
Preferably, the system further comprises an extensible markup language (XML) processing module having parameters in an XML file for defining the number of frames to be processed by the frame processing module.
In accordance with another aspect of the present invention, a method of detecting aggressive behaviour activity within a confined area is provided.
The method of the present invention comprises the steps of: retrieving a plurality of image frames from an image capturing device by an image processing module; determining the number of frames to be processed by regulating the frame rate of the image frames by a frame processing module; identifying moving object areas in the image frames by computing differences between a current image and a background image by a motion calculation module; determining a potential aggressive behaviour activity area by analyzing the moving object areas in the image frames by a contour processing module; estimating the level of interaction between the moving objects within the potential aggressive behaviour activity area by computing the number of contradict pixels and the rate of change in major direction for successive frames by an aggressive behaviour module; and determining the aggressive behaviour activity based on the estimated level of interaction between the moving object areas within the potential areas by the aggressive behaviour module.
Preferably, the method of determining the number of frames to be processed by regulating the frame rate of the image frames by the frame processing module comprises the steps of: loading a plurality of parameters in the form of an extensible markup language (XML) file into an XML processing module; defining the plurality of parameters by the XML processing module; receiving the plurality of image frames from the image processing module; and determining the number of frames to be processed based on the predefined parameters by the frame processing module.
Preferably, the method of identifying moving object areas in the image frames by computing differences between a current image and a background image by the motion calculation module comprises the steps of: computing a dense optical flow between the current image and the previous image, wherein the previous images are retrieved from existing frames in the frame processing module (20); computing the optical flow magnitude and direction of the optical flow points; evaluating the optical flow magnitude by comparing it with a predefined threshold; quantizing the directions of the optical flow points into a number of major angles if the optical flow magnitude is more than the predefined threshold; defining corresponding point coordinates for the previous image; marking the corresponding points in the current image as a coloured image; computing image differences between the coloured image and a background image; and identifying the moving object areas in the image frames.
Preferably, the method of determining the potential aggressive behaviour activity area by analyzing the moving object areas in the image frames by the contour processing module comprises the steps of: extracting contours of the identified moving object areas; filtering out contours that overlap with other object contours based on predefined parameters; computing the distance between the object contours; merging the object contours that are close to each other; extracting the smallest enclosing box containing the merged object contours; determining the overlap of the merged contour location with a previous detection if the merged contour size is less than a predefined threshold; and determining the potential aggressive behaviour activity area if the merged contour size is more than the predefined threshold.
Preferably, the method of estimating the level of interaction between the moving objects within the potential aggressive behaviour activity area by computing the number of contradict pixels and the rate of change in major direction for successive frames by the aggressive behaviour module comprises the steps of: quantizing the motion pixel directions into four major directions, wherein the directions are up, down, left and right; computing the percentage of pixels for each major direction; computing the number of contradict pixels, wherein the contradict pixels are pixels in opposite directions that are close to each other; and computing the rate of change in major direction for the processed frames within a preferred number of frames.
Preferably, the method of determining the aggressive behaviour activity based on the estimated level of interaction between the moving object areas within the potential areas by the aggressive behaviour module comprises the steps of: selecting a predefined threshold based on the status of detection in the current frame, wherein the status is either a potential new detection or a potential on-going detection; determining the level of interaction between the moving objects by comparing it with the predefined threshold; increasing the counted number of continuously detected frames if the level of interaction is more than the selected threshold; comparing the number of detection frames with a preferred detection frame size; and detecting aggressive behaviour if the number of detection frames is more than the detection frame size.
The foregoing and other objects, features, aspects and advantages of the present invention will become better understood from a careful reading of a detailed description provided herein below with appropriate reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete appreciation of the invention and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
Figure 1 illustrates a block diagram of a system for detecting aggressive behaviour activity according to one embodiment of the present invention.
Figure 2 illustrates a flowchart for a method of detecting aggressive behaviour activity according to one embodiment of the present invention.
Figure 3 illustrates a flowchart of the sub-steps for regulating frame rate of the image frame by skipping unnecessary frame according to one embodiment of the present invention.
Figure 4 illustrates a flowchart of the sub-steps for identifying moving object areas in the image frames according to one embodiment of the present invention.
Figure 5 illustrates a flowchart of the sub-steps for determining the potential aggressive behaviour activity area by analyzing the moving object areas according to one embodiment of the present invention.
Figure 6 illustrates a flowchart of the sub-steps for estimating level of interaction between the moving objects within the potential aggressive behaviour activity area according to one embodiment of the present invention.
Figure 7 illustrates a flowchart of the sub-steps for determining the aggressive behaviour activity according to one embodiment of the present invention.
It is noted that the drawings may not be to scale. The drawings are intended to depict only typical aspects of the invention, and therefore should not be considered as limiting the scope of the invention. In the drawings, like numerals represent like elements between the drawings.
DETAILED DESCRIPTION OF THE INVENTION
The above-mentioned features and objectives of this invention will become more apparent and better understood by reference to the following detailed description. It should be understood that the detailed description given below is not intended to be exhaustive or to limit the invention to the precise form disclosed, as the invention may assume various alternative forms. On the contrary, the detailed description covers all relevant modifications and alterations made to the present invention, unless the claims expressly state otherwise.
The present invention provides a system and method for detecting aggressive behaviour activity. Aggressive behaviour activity captured by at least one image capturing device is difficult to define due to variations in the movements of, and interaction between, the captured objects. Hence, the system and method provided in the present invention may help in detecting aggressive behaviour activity as well as in differentiating between a normal activity and an aggressive activity. Initial reference is made to FIG. 1, which illustrates a block diagram of the system (100) for detecting aggressive behaviour activity according to an embodiment of the present invention. The system (100) is preferably implemented in a confined area, such as a detention area, a room or an elevator. The system (100) comprises an image processing module (10), a frame processing module (20), a motion calculation module (30), a contour processing module (40), an aggressive behaviour detection module (50), and an extensible markup language (XML) processing module (60).
Any aggressive image or video in the confined area is captured beforehand by at least one image capturing device, which preferably refers to a device such as a closed-circuit television (CCTV) camera. The image processing module (10) is then configured to retrieve a plurality of image frames from the at least one image capturing device to be further regulated by the frame processing module (20). The frame processing module (20), which is connected to the image processing module (10), determines the number of frames to be processed in a subsequent module by regulating the frame rate of the image frames. The number of frames to be processed is defined in the frame processing module (20) by invoking input from the XML processing module (60).
The image frames that have been defined for processing by the frame processing module (20) are then transmitted to the motion calculation module (30) for identifying moving object areas in the image frames by computing differences between the current image and the background image. The background image refers to the image from the capturing device without any object of interest inside the image. The moving object areas are then analyzed by the contour processing module (40), which is connected to the motion calculation module (30), for identifying the potential aggressive behaviour activity area. The aggressive behaviour detection module (50), which is connected to the contour processing module (40), is configured to detect the aggressive behaviour activity based on the level of interaction within the potential aggressive behaviour area.
In a preferred embodiment, a method (1000) of detecting aggressive behaviour activity is illustrated in FIG. 2. The method (1000) begins with the step of retrieving (1100) a plurality of image frames captured by the image capturing device by the image processing module (10). The frame rate of the plurality of image frames is then regulated (1200) by the frame processing module (20) to determine the number of frames to be processed in the subsequent module, i.e. the motion calculation module (30). The determined frames from the frame processing module (20) are subsequently processed by the motion calculation module (30) to identify moving object areas by computing differences between the current image and the background image (1300).
Once the moving object areas are identified, they are further analyzed by the contour processing module (40) to determine a potential aggressive behaviour activity area (1400). Thereon, the level of interaction between the moving objects within the potential aggressive behaviour activity area is estimated by computing the number of contradict pixels and the rate of change in major direction of the processed frames by the aggressive behaviour module (50) (1500). Finally, based on the estimated level of interaction between the moving objects within the potential area, the aggressive behaviour activity is determined by the aggressive behaviour module (50).
The method of determining the number of frames to be processed by the frame processing module (20) by regulating the frame rate of the image frames is illustrated in FIG. 3 (1200). The frame rate regulation is important at this stage, as the solution of the present invention is based on optical flow analysis. Preferably, the time distance between the processed frames has to be similar in order to obtain an accurate optical flow magnitude.
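The frame-rate regulation described above can be sketched as follows. This is an illustrative sketch only: the XML element names (`input_fps`, `process_fps`) and the modulo-based skipping rule are assumptions made for the example, since the disclosure only states that frame-rate, filtering and detection parameters are supplied via an XML file.

```python
import xml.etree.ElementTree as ET

def load_parameters(xml_text):
    """Parse processing parameters from an XML string.

    The element names used here are illustrative placeholders, not part
    of the disclosure.
    """
    root = ET.fromstring(xml_text)
    return {
        "input_fps": float(root.findtext("input_fps")),
        "process_fps": float(root.findtext("process_fps")),
    }

def frames_to_process(frame_indices, params):
    """Select frame indices at a roughly uniform spacing.

    Keeping the time distance between processed frames similar is what
    makes the later optical-flow magnitudes comparable across frames.
    """
    step = max(1, round(params["input_fps"] / params["process_fps"]))
    return [i for i in frame_indices if i % step == 0]
```

With a 30 fps camera and a 10 fps processing rate, every third frame would be kept, e.g. frames 0, 3, 6 and 9 out of the first ten.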
A plurality of parameters are loaded (1210) into the XML processing module (60) in the form of an XML file to determine the number of image frames that can be processed by the frame processing module (20). The parameters defined in the XML processing module (60) preferably include, but are not limited to, frame rate parameters, filtering parameters and detection parameters. In the meantime, the plurality of image frames received (1230) from the image processing module (10) are combined with the parameters from the XML processing module (60) to identify the successive frames to be processed (1240) in the motion calculation module (30). If a frame is defined as not to be processed, the frame processing module (20) waits for a new frame from the image processing module (10). If no new frame arrives from the image processing module (10), the current frame is stored as the previous frame and no further action is performed.
Once the image frames to be processed are defined by the frame processing module (20), the motion calculation module (30) subsequently identifies the moving object areas in the image frames, as shown in the flowchart of FIG. 4. The method of identifying (1300) the moving object areas in the image frames by computing differences between the current image and the background image by the motion calculation module (30) includes the step of computing a dense optical flow between the current image and the previous image (1310), wherein the previous images are retrieved from existing frames in the frame processing module (20). The current image and the previous image of the frame are analyzed by optical flow analysis to determine the movement in the current image compared to the previous image. The optical flow analysis generates points of pixel movement in the image, whereby each point contains the information of an original x,y coordinate and a destination x,y coordinate. The original x,y coordinate refers to the previous image frame, while the destination x,y coordinate refers to the current image frame.
The optical flow magnitude and the direction of the optical flow points are then computed in step (1320). The optical flow magnitude is computed by subtracting the original coordinate from the destination coordinate, while the direction of the optical flow points is determined by comparing the destination coordinate with the original coordinate. The optical flow magnitude is then evaluated by comparing it with a predefined threshold (1330). If the optical flow magnitude is more than the predefined threshold, the directions of the optical flow points are quantized into a number of major angles, preferably eight directions (1340). If the optical flow magnitude is less than the predefined threshold, the process ends.
Corresponding point coordinates for the previous image are defined (1350) in order to mark the corresponding points in the current image as a coloured image (1360), whereby the colours represent the quantized directions of the pixels. Such points are preferably marked in colours such as blue, red, green and yellow. Thereafter, the image differences between the coloured image and the background image are computed (1370) to identify the moving object areas in the image frames (1380). The identified moving object areas are then transferred to the contour processing module (40) for further processing.
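The magnitude thresholding and direction quantization of steps (1320) to (1340) can be sketched as below. The sketch assumes a dense flow field such as the one produced by OpenCV's `cv2.calcOpticalFlowFarneback`; the threshold value and the NumPy-only implementation are illustrative choices, not taken from the disclosure.

```python
import numpy as np

def quantize_flow(flow, mag_threshold=1.0, n_bins=8):
    """Quantize per-pixel flow vectors into major directions.

    `flow` is an (H, W, 2) array of (dx, dy) displacements, e.g. the
    output of a dense optical-flow routine. Returns an (H, W) array of
    direction bin indices in 0..n_bins-1, with -1 marking pixels whose
    magnitude falls below the threshold (treated as no motion).
    """
    dx, dy = flow[..., 0], flow[..., 1]
    mag = np.hypot(dx, dy)
    ang = np.mod(np.arctan2(dy, dx), 2 * np.pi)          # angle in [0, 2*pi)
    bins = np.floor(ang / (2 * np.pi / n_bins)).astype(int) % n_bins
    bins[mag < mag_threshold] = -1                       # below-threshold pixels
    return bins
```

The bin index of each above-threshold pixel plays the role of the quantized direction that is later rendered as a colour; the below-threshold pixels are discarded as stationary.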
Now, reference is made to FIG. 5, which illustrates the method of determining the potential aggressive behaviour activity area by analyzing the moving object areas in the image frames (1400) by the contour processing module (40). The contours of the detected moving object areas are extracted (1410). Any insignificant contours that overlap, are a subset of, or intersect with other object contours are filtered out (1420) from the extracted contours based on predefined parameters. The predefined parameters preferably include, but are not limited to, position, size, direction and magnitude of the direction. For example, if a contour of a detected moving object area overlaps with or is a subset of another, bigger contour, that contour is filtered out. The distances between the remaining object contours are computed (1430) in order to merge the object contours that are close to each other (1440). In this way, the motion interaction among the object contours can be combined for aggressive activity detection and, furthermore, the smallest enclosing box containing the merged object contours can be extracted from the frame (1450). If the merged contour size is less than a predefined threshold (1460), it is determined whether the merged contour location overlaps with a previous detection (1470). Otherwise, if the merged contour size is more than the predefined threshold, the potential area of aggressive behaviour is identified (1480). The identified area of aggressive behaviour is then transmitted to the aggressive behaviour module (50) for further processing.
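The merging of nearby contours in steps (1430) to (1450) can be sketched on bounding boxes as follows. In a full implementation the boxes would come from contour extraction (e.g. OpenCV's `cv2.findContours` followed by `cv2.boundingRect`); the greedy merge rule and the gap threshold used here are illustrative assumptions.

```python
def merge_close_boxes(boxes, max_gap=10):
    """Greedily merge axis-aligned boxes (x1, y1, x2, y2) whose gap along
    both axes is at most `max_gap` pixels, so that interacting objects
    end up inside one enclosing box.
    """
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:                      # repeat until no pair can be merged
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                a, b = boxes[i], boxes[j]
                gap_x = max(a[0], b[0]) - min(a[2], b[2])  # negative if overlapping
                gap_y = max(a[1], b[1]) - min(a[3], b[3])
                if gap_x <= max_gap and gap_y <= max_gap:
                    # replace the pair with its smallest enclosing box
                    boxes[i] = [min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3])]
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return [tuple(b) for b in boxes]
```

Two boxes a few pixels apart are merged into one enclosing box, while a distant box is left untouched, mirroring how interacting objects are grouped into a single candidate area.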
Before the aggressive behaviour activity is determined, the level of interaction between the objects is estimated as illustrated in FIG. 6. Figure 6 illustrates the method of estimating the level of interaction between the moving objects within the potential aggressive behaviour activity area by computing the number of contradict pixels and the rate of change in major direction for successive frames (1500) by the aggressive behaviour module (50). Firstly, all pixels within the potential aggressive behaviour areas are examined and the directions of the motion pixels are quantized into four major directions (1510), whereby the directions are preferably but not limited to up, down, left and right. For each of the major directions, the percentage of pixels is computed in order to determine (1520) the significant directions among all the pixels. For example, if the percentages of pixels in the major directions up, down, left and right are 30%, 30%, 5% and 35% respectively, only the three major directions with a percentage of more than 25% (up, down and right) are considered significant. The left direction is filtered out, as the percentage of pixels in that direction is small and they most probably belong to noise. Next, the number of contradict pixels is computed (1530). Contradict pixels are pixels in opposite directions that are close to each other; the two sets of opposite directions are right-left and up-down. For example, if the significant directions are right and left, the distances between all pixels with right and left directions are computed, and pixels that are close to each other contribute to the number of contradict pixels. After that, the rate of change in major direction over the processed frames within a preferred number of frames is computed (1540) in order to obtain the rate of change in major directions and the number of contradict pixels.
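The direction statistics of steps (1510) to (1530) can be sketched as follows, using the 30/30/5/35 example from the text. The integer encoding of the four directions, the 25% cut-off and the distance threshold are illustrative assumptions, not values given in the disclosure.

```python
import numpy as np

# Assumed encoding of the four major directions: 0=up, 1=down, 2=left, 3=right.

def significant_directions(dir_map, min_pct=25.0):
    """Return the directions whose share of motion pixels exceeds min_pct.

    Entries of -1 (no motion) are ignored when computing percentages.
    """
    valid = dir_map[dir_map >= 0]
    result = []
    for d in range(4):
        pct = 100.0 * np.count_nonzero(valid == d) / max(valid.size, 1)
        if pct > min_pct:
            result.append(d)
    return result

def count_contradict_pixels(coords_by_dir, max_dist=5.0):
    """Count pairs of pixels moving in opposite directions (up-down or
    left-right) that lie within max_dist of each other: the 'contradict
    pixels' used as a cue for physical interaction between objects.
    """
    count = 0
    for d, opp in ((0, 1), (2, 3)):
        for p in coords_by_dir.get(d, []):
            for q in coords_by_dir.get(opp, []):
                if np.hypot(p[0] - q[0], p[1] - q[1]) <= max_dist:
                    count += 1
    return count
```

With pixel shares of 30%, 30%, 5% and 35% for up, down, left and right, only up, down and right survive the cut-off, matching the worked example in the text.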
Finally, with reference to FIG. 7, the method of determining the aggressive behaviour activity based on the estimated level of interaction between the moving objects within the potential areas (1600) by the aggressive behaviour module (50) is illustrated. A different threshold is selected based on the status of detection in the current frame, whereby the status may be either a potential new detection or a potential on-going detection (1610). The level of interaction obtained from the previous step (1500) is evaluated by comparing it with the selected threshold (1620). If the level of interaction is more than the selected threshold, the counted number of continuously detected frames is increased (1630); otherwise, no further action is taken. After that, the number of detection frames is compared with the preferred detection frame size (1640). If the number of detection frames is more than the detection frame size, the aggressive behaviour activity is detected and an alert is triggered (1650).
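The per-frame decision logic of FIG. 7 can be sketched as follows. The two threshold values (one for a potential new detection and a lower one for an on-going detection) and the required number of continuous frames are illustrative placeholders; the disclosure does not give numeric values.

```python
def update_detection(level, state, thr_new=0.6, thr_ongoing=0.4, frames_needed=5):
    """One step of the decision logic: pick the threshold according to the
    current detection status, update the continuous-frame counter, and
    report whether the alert condition is reached.

    `state` is a dict with 'count' (continuously detected frames) and
    'active' (whether a detection is currently on-going).
    """
    threshold = thr_ongoing if state["active"] else thr_new
    if level > threshold:
        state["count"] += 1
        state["active"] = True
    else:
        state["count"] = 0
        state["active"] = False
    return state["count"] >= frames_needed   # True -> trigger the alert
```

Using a lower threshold for on-going detections adds hysteresis, so a detection already in progress is not dropped because of a single slightly weaker frame.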
The terms "a" and "an," as used herein, are defined as one or more than one. The term "plurality," as used herein, is defined as two or more than two. The term "another," as used herein, is defined as at least a second or more. The terms "including" and/or "having," as used herein, are defined as comprising (i.e., open language).
While this invention has been particularly shown and described with reference to the exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention as defined by the appended claims.

Claims

1. A system (100) for detecting aggressive behaviour activity comprising:
an image processing module (10) for retrieving a plurality of image frames from an image capturing device;
a frame processing module (20) for determining numbers of image frames to be processed from the image processing module (10) by regulating frame rate of the image frames;
a motion calculation module (30) for identifying a moving object areas in the image frame from the frame processing module (20) by computing differences between a current image and background image; and
a contour processing module (40) for determining potential aggressive behaviour activity area by analyzing the moving object areas from the motion calculation module (30), characterised in that the system (100) further comprising:
an aggressive behaviour detection module (50) for detecting the aggressive behaviour activity based on level of interaction within the potential aggressive behaviour activity area from the contour processing module (40).
2. The system (100) according to claim 1, wherein the system further comprising an extensible markup language, XML processing module (60) having parameters in XML file for defining the numbers of frames to be processed by the frame processing module (20).
3. A method (1000) of detecting aggressive behaviour activity is characterized by the steps of:
retrieving a plurality of image frames from an image capturing device by an image processing module (10);
determining numbers of frames to be processed by regulating frame rate of the image frames by a frame processing module (20);
identifying a moving object areas in the image frames by computing differences between a current image and background image by a motion calculation module (30); determining potential aggressive behaviour activity area by analyzing the moving object areas in the image frames by a contour processing module (40);
estimating level of interaction between the moving objects within the potential aggressive behaviour activity area by computing number of contradict pixels and rate of change in major direction for successive frames by an aggressive behaviour module (50); and
determining the aggressive behaviour activity based on the estimated level of interaction between the moving objects areas within the potential areas by the aggressive behaviour module (50).
4. The method according to claim 3, wherein the determining numbers of frames to be processed by regulating frame rate of the image frames by a frame processing module (20) comprising the steps of:
loading a plurality of parameters in form of extensible markup language, XML file into an XML processing module (60);
defining the plurality of parameters by the XML processing module (60); receiving the plurality of image frames from the image processing module (10); and
determining numbers of frames to be processed based on the predefined parameters by the frame processing module (20).
5. The method according to claim 3, wherein the identifying a moving object areas in the image frames by computing differences between a current image and background image by a motion calculation module (30) comprising the steps of:
computing a dense optical flow between current images and previous image, wherein the previous images are retrieved from existing frames in the frame processor module (20);
computing optical flow magnitude and direction of optical flow points; determining the optical magnitude by comparing with a predefined threshold; quantizing the direction of optical flow points into a number of major angles if the optical flow magnitude is more than the predefined threshold;
defining corresponding points coordinate for the previous image;
marking the corresponding points in the current image as coloured image; computing image differences between coloured image and a background image; and
identifying the moving object areas in the image frames.
6. The method according to claim 3, wherein the determining potential aggressive behaviour activity area by analyzing the moving object areas in the image frames by a contour processing module (40) comprising the steps of:
extracting contours of the identified moving object areas;
filtering contours that overlapped with other object contours based on a predefined parameters;
computing distance between each object contours;
merging the object contours that are closed to each other;
extracting a smallest enclosing box having the merged object contours; determining overlapping of the merged contour location if the merged contour size is less than predefined threshold; and
determining the potential aggressive behaviour activity area if the merged contours size is more than predefined threshold.
7. The method according to claim 3, wherein the estimating level of interaction between the moving objects within the potential aggressive behaviour activity area by computing number of contradict pixels and rate of change in major direction for successive frames by an aggressive behaviour module (50) comprising the steps of: quantizing motion pixels direction into four major directions, wherein the directions are up, down, left and right;
computing percentage of the pixels for the major direction;
computing number of contradict pixels, wherein the contradict pixels are the pixels in opposite direction and close to each other; and
computing rate of change in major direction for the processed frame within a preferred frames.
8. The method according to claim 3, wherein the determining the aggressive behaviour activity based on the estimated level of interaction between the moving objects areas within the potential areas by the aggressive behaviour module (50) comprising the steps of:
selecting predefined threshold based on status of detection in current frame, wherein the status is either potential new detection or potential on-going detection;
determining level of interaction between the moving objects by comparing with the predefined threshold;
increasing counted numbers of continuous detected frames if the level of interaction is more than the selected threshold;
determining numbers of detection frames compared to a preferred detection frames size; and
detecting aggressive behaviour if the numbers of detection frames are more than the detection frames size.
PCT/MY2019/050126 2018-12-26 2019-12-24 System and method for detecting aggressive behaviour activity WO2020139071A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI2018002926 2018-12-26
MYPI2018002926A MY198232A (en) 2018-12-26 2018-12-26 System and method for detecting aggressive behaviour activity

Publications (1)

Publication Number Publication Date
WO2020139071A1 true WO2020139071A1 (en) 2020-07-02

Family

ID=71125804

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2019/050126 WO2020139071A1 (en) 2018-12-26 2019-12-24 System and method for detecting aggressive behaviour activity

Country Status (2)

Country Link
MY (1) MY198232A (en)
WO (1) WO2020139071A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751678A (en) * 2009-12-16 2010-06-23 北京智安邦科技有限公司 Method and device for detecting violent crowd movement
KR101484263B1 (en) * 2013-10-24 2015-01-16 주식회사 에스원 System and method for detecting violence
KR20150088613A (en) * 2014-01-24 2015-08-03 아이브스테크놀러지(주) Apparatus and method for detecting violence situation
CN107194317A (en) * 2017-04-24 2017-09-22 广州大学 A kind of act of violence detection method analyzed based on Grid Clustering


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BEN MABROUK AMIRA; ZAGROUBA EZZEDDINE: "Spatio-temporal feature using optical flow based distribution for violence detection", PATTERN RECOGNITION LETTERS., ELSEVIER, AMSTERDAM., NL, vol. 92, 24 April 2017 (2017-04-24), NL, pages 62 - 67, XP085023413, ISSN: 0167-8655, DOI: 10.1016/j.patrec.2017.04.015 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821808A (en) * 2022-05-18 2022-07-29 湖北大学 A kind of attack behavior early warning method and system
CN114821808B (en) * 2022-05-18 2023-05-26 湖北大学 Attack behavior early warning method and system

Also Published As

Publication number Publication date
MY198232A (en) 2023-08-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19901695

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19901695

Country of ref document: EP

Kind code of ref document: A1