KR101731461B1 - Apparatus and method for behavior detection of object - Google Patents

Apparatus and method for behavior detection of object

Info

Publication number
KR101731461B1
Authority
KR
South Korea
Prior art keywords
behavior
image data
target object
behavior model
model
Prior art date
Application number
KR1020150175219A
Other languages
Korean (ko)
Inventor
이성환
곽인엽
Original Assignee
고려대학교 산학협력단
Priority date
Filing date
Publication date
Application filed by 고려대학교 산학협력단
Priority to KR1020150175219A
Application granted
Publication of KR101731461B1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/183: Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • H04N 7/185: Closed-circuit television [CCTV] systems for receiving images from a single remote source from a mobile camera, e.g. for remote control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an apparatus including a video receiver, a database storing a behavior model, a memory storing a behavior detection program for an object, and a processor for executing the program. According to the execution of the program, the processor recognizes a target object from video data input through the video receiver and detects abnormal behavior of the target object from the video data based on the recognized target object and the pre-generated behavior model. The behavior model is generated based on the normal behavior of objects extracted from a plurality of image data collected to generate the behavior model, and is then stored in the database.

Description

APPARATUS AND METHOD FOR BEHAVIOR DETECTION OF OBJECT

The present invention relates to an apparatus for detecting the behavior of an object and a behavior detection method using the same.

Recently, the importance of surveillance systems using CCTV and similar equipment has been increasing. However, most CCTV surveillance systems remain labor-intensive environments focused on simple monitoring. There is therefore a growing need for intelligent surveillance systems capable of more autonomous situation recognition.

A typical approach in intelligent surveillance systems is abnormal behavior detection based on behavior analysis. Conventional abnormal behavior detection methods either learn from collected normal and abnormal behavior or judge abnormal behavior against a predetermined threshold value. For example, such a method can set a threshold value for abnormal behavior such as a fight or a fall, and detect anomalous activity in real time based on that threshold.

However, since conventional threshold-based abnormal behavior detection methods rely on low-level features such as color and brightness changes extracted from data collected in real time, they have the disadvantage of judging behavior from only fragmentary information.

To address this, abnormal behavior detection can use context information, which is a high-level characteristic. For example, among abnormal behavior detection methods using context information, there are behavior analysis methods based on trajectory information. Trajectory-based behavior analysis methods are mainly used to recognize abnormal behavior in traffic situations and typically use topic models such as latent Dirichlet allocation (LDA). However, trajectory-based analysis is difficult to apply to real-time abnormal behavior detection and does not properly reflect the characteristics of human behavior.

In this regard, Korean Patent Laid-Open Publication No. 10-2013-0056170 (entitled "Method and apparatus for detecting real-time abnormal behavior using a motion sequence") discloses a real-time abnormal behavior detection method and apparatus that detects abnormal behavior by analyzing the motion sequences constituting human behavior.

SUMMARY OF THE INVENTION The present invention has been made to solve the above problems of the prior art, and it is an object of the present invention to provide a behavior detection apparatus for an object that detects abnormal behavior of the object based on a behavior model generated by analyzing the object's normal behavior, and a behavior detection method using the same.

It should be understood, however, that the technical scope of the present invention is not limited to the above-described technical problems, and other technical problems may exist.

According to a first aspect of the present invention, there is provided an apparatus for detecting the behavior of an object, the apparatus comprising: an image receiving unit; a database storing a behavior model; a memory storing a behavior detection program for the object; and a processor for executing the program. According to the execution of the program, the processor recognizes a target object from the image data input through the image receiving unit and detects abnormal behavior of the target object from the image data based on the recognized target object and the pre-generated behavior model. The behavior model is generated based on the normal behavior of objects extracted from a plurality of image data collected to generate the behavior model, and is stored in the database.

According to a second aspect of the present invention, there is provided a behavior detection method for an object performed by a behavior detection apparatus, the method comprising: recognizing a target object from image data; and detecting abnormal behavior of the target object from the image data based on the recognized target object and the pre-generated behavior model. At this time, the behavior model is generated based on the normal behavior of objects extracted from a plurality of image data collected to generate the behavior model, and is stored in the database.

The present invention generates a behavior model based on the normal behavior of an object, so it can detect various abnormal behaviors without separately defining them. Since the present invention uses a pre-generated behavior model, it can effectively detect abnormal behavior in real time.

The present invention can classify normal and abnormal behaviors based on a global behavior model and a local behavior model, so that the user can easily understand the basis for judging behavior as normal or abnormal, and visualization is straightforward. The present invention also receives user feedback on normal and abnormal behavior and can re-learn the behavior model according to the received feedback. Therefore, as the behavior detection period grows, the present invention can reduce misrecognition and improve recognition accuracy.

FIG. 1 is a block diagram of a behavior detection apparatus for an object according to an exemplary embodiment of the present invention.
FIG. 2 is a diagram illustrating an example of a preprocessing process of the behavior detection apparatus according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating an example of a feature extraction process of the behavior detection apparatus according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating an example of a global behavior model generation process of the behavior detection apparatus according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating an example of a local behavior model generation process of the behavior detection apparatus according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating an example of a behavior recognition process of the behavior detection apparatus according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating an example of visualization of a behavior recognition result according to an embodiment of the present invention.
FIG. 8 is a block diagram of a behavior detection program according to an embodiment of the present invention.
FIG. 9 is a flowchart of a behavior detection method according to an embodiment of the present invention.
FIG. 10 is a flowchart of a behavior model generation method according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In order to clearly illustrate the present invention, parts not related to the description are omitted, and similar parts are denoted by like reference characters throughout the specification.

Throughout the specification, when a part is referred to as being "connected" to another part, this includes not only being "directly connected" but also being "electrically connected" with another part in between. Also, when a part is said to "include" an element, this does not exclude other elements unless specifically stated otherwise.

Next, a behavior detection apparatus 100 for an object according to an embodiment of the present invention will be described with reference to FIGS. 1 to 8.

FIG. 1 is a block diagram of an action detection apparatus 100 for an object according to an embodiment of the present invention.

The behavior detection device 100 generates a behavior model based on normal behavior for one or more objects. Then, the behavior detection apparatus 100 detects abnormal behavior of the target object, which is the object targeted for behavior detection, based on the generated behavior model. The behavior detection apparatus 100 includes an image receiving unit 110, a memory 120, a database 140, and a processor 130.

The image receiving unit 110 receives image data. In this case, the image receiving unit 110 may be a module that receives image data generated in real time through a camera or image sensor connected to the behavior detecting apparatus 100. Alternatively, the image receiving unit 110 may receive image data in real time via a built-in camera or image sensor.

The memory 120 stores the behavior detection program for the object. Here, the memory 120 collectively refers to non-volatile storage devices that retain stored information even without power and volatile storage devices that require power to maintain stored information.

The database 140 stores the generated behavioral model. In addition, the database 140 may store feature vectors extracted from the image data.

When image data is input through the image receiving unit 110, the processor 130 recognizes the target object from the input image data. Throughout the specification, the object is assumed to be a person. However, the object may also be a moving animal or a moving object such as a car or a bicycle, and is not limited thereto.

The processor 130 detects abnormal behavior of the target object from the image data based on the recognized target object and the pre-generated behavior model. At this time, the pre-generated behavior model is generated based on the normal behavior of a plurality of objects extracted from the plurality of image data collected to generate the behavior model, and is then stored in the database 140.

Specifically, the processor 130 may collect a plurality of image data to generate a behavior model. At this time, a plurality of image data may be collected for a plurality of objects to generate a behavior model.

For example, the processor 130 may collect a plurality of image data. The processor 130 may use the entire collected image data, or a part extracted from it, as training data for generating a behavior model. In addition, the processor 130 may use a part of the collected image data as evaluation data (test data) for evaluating the generated behavior model.

The processor 130 may collect a plurality of image data and then perform preprocessing and feature extraction on the collected image data. The preprocessing and feature extraction will be described in detail with reference to FIG. 2 and FIG.

FIG. 2 is a diagram illustrating an example of a preprocessing process of the behavior detection apparatus 100 according to an embodiment of the present invention.

When receiving the plurality of image data 200 through the image receiving unit 110, the processor 130 may learn a background model for each frame included in the plurality of image data 200 (S200). Learning the background model may involve analyzing the color, texture, and other properties present in the image data and, based on this analysis, learning a reference that constitutes the background.

For example, the processor 130 may learn a background model based on a Gaussian mixture model. The processor 130 calculates Gaussian distributions based on the RGB values of each frame. In this case, the number of Gaussian distributions used for background model learning may be from 5 to 125 depending on the complexity of the image data, but is not limited thereto.

The processor 130 may learn the background model and then detect the background and the foreground based on the learned background model (S210). The processor 130 may then generate a foreground matrix corresponding to each frame based on the detected foreground.

For example, the processor 130 may compare each of the pixels included in each frame with the learned background model. The processor 130 may classify pixels included in the learned background model as background pixels and pixels not included in the learned background model as foreground pixels. The processor 130 may then generate a foreground matrix in which values corresponding to background pixels are set to 0 and values corresponding to foreground pixels are set to 1.
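As a minimal sketch of steps S200 to S210, the snippet below uses OpenCV's Gaussian-mixture background subtractor in place of the per-pixel Gaussian model described above; the library choice and its parameters are assumptions, since the patent does not name an implementation.

```python
import cv2
import numpy as np

# Gaussian-mixture background model; detectShadows separates shadow pixels
# (mask value 127) from confident foreground (mask value 255).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def foreground_matrix(frame: np.ndarray) -> np.ndarray:
    """Return a binary matrix: 1 for foreground pixels, 0 for background pixels."""
    mask = subtractor.apply(frame)
    return (mask == 255).astype(np.uint8)  # drop shadows, keep confident foreground
```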

At this time, the detected background and foreground may include pixels misclassified due to minute changes in light, shadows, and the like. Such misclassified results included in the background and foreground may act as noise when generating the behavior model and may reduce the accuracy of normal or abnormal behavior discrimination. Therefore, after detecting the background and foreground, the processor 130 may remove the misclassified results they contain (S220).

For example, the processor 130 may extract an RGB color histogram from the RGB values of the pixels included in the extracted foreground. At this time, the processor 130 may calculate an accumulated frequency value of color values ranging from 0 to 255 for three channels of RGB colors (R channel, G channel, and B channel) to extract an RGB color histogram. The processor 130 may perform Gaussian fitting on the distribution of the RGB color histogram. At this time, the Gaussian fitting is used to find out which color value has the largest distribution in the RGB color histogram.

In other words, the processor 130 regards color values with a large distribution in the foreground's RGB color histogram as the region where the actual object exists, and judges pixels that deviate from the fitted distribution in the RGB color histogram to be misclassification results. The processor 130 may remove pixels determined to be misclassified from the foreground matrix.
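The sketch below approximates this noise-removal step: it fits a per-channel Gaussian (mean and standard deviation) to the colors of the foreground pixels and clears pixels far from the fitted distribution. The two-standard-deviation cutoff is an assumed parameter; the patent only says that pixels deviating from the fitted distribution are removed.

```python
import numpy as np

def remove_misclassified(frame: np.ndarray, fg: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Zero out foreground pixels whose color lies outside mean +/- k*std per channel."""
    fg = fg.copy()
    ys, xs = np.nonzero(fg)
    if len(ys) == 0:
        return fg
    pixels = frame[ys, xs].astype(float)           # (n, 3) colors of foreground pixels
    mean, std = pixels.mean(axis=0), pixels.std(axis=0) + 1e-6
    outlier = (np.abs(pixels - mean) > k * std).any(axis=1)
    fg[ys[outlier], xs[outlier]] = 0               # treat outliers as misclassified
    return fg
```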

The processor 130 may remove the pixels determined to be misclassified from the foreground matrix and then track the object (S230). At this time, if an object is detected successively across the frames included in the image data, the processor 130 can recognize the continuously detected object as an object for behavior model learning based on a tracking technique.

For example, the processor 130 detects objects from the foreground matrix of each frame through blob analysis. Here, blob analysis searches the foreground matrix for blobs, which are blocks of pixels with the value 1 (classified as foreground), and detects each found blob as one object. The processor 130 may then track the objects detected through blob analysis based on tracking techniques such as the Kalman filter, particle filter, and optical flow. Through tracking, the processor 130 can recognize an object detected continuously over a certain number of frames as an object for behavior model learning. At this time, the processor 130 can recognize one or more objects from the plurality of image data.
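A minimal blob-detection sketch using OpenCV's connected-components analysis is shown below; the Kalman- or particle-filter tracking step is omitted, and the minimum blob area is an assumed parameter.

```python
import cv2

def detect_blobs(fg_matrix, min_area=100):
    """Find connected foreground regions (blobs) and return bounding boxes and centroids."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg_matrix, connectivity=8)
    blobs = []
    for i in range(1, n):                          # label 0 is the background component
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            x, y, w, h = stats[i, :4]
            blobs.append({"bbox": (int(x), int(y), int(w), int(h)),
                          "centroid": tuple(centroids[i])})
    return blobs
```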

In this way, the processor 130 can preprocess the plurality of image data through the preprocessing process, recognize objects, and generate the preprocessed image data 210.

FIG. 3 is an exemplary diagram illustrating a feature extraction process of the behavior detection apparatus 100 according to an embodiment of the present invention.

The processor 130 may extract a feature vector from the preprocessed image data 210 after performing the preprocessing to recognize the object from the plurality of image data. At this time, the processor 130 may extract a feature vector including object tracking information and motion trajectory information.

For example, the processor 130 may estimate the velocity and the direction of the object through quantization based on the optical flow vectors of the pixels included in the object recognized for each frame (S300, S310).

The processor 130 may extract a cumulative histogram for the estimated directions (S320). At this time, the processor 130 divides the estimated directions into eight bins and generates a cumulative histogram over the eight directions.

The processor 130 may calculate the average velocity in each of the eight directions and express it as a vector to extract the feature vector 300 (S330). At this time, the processor 130 may combine the 8-direction cumulative histogram and the 8-direction average speeds for each frame to generate a per-frame direction and speed feature vector. The processor 130 may extract the object tracking information by merging the per-frame direction and speed feature vectors. In addition, the processor 130 may extract the motion trajectory information based on the center of gravity of the coordinates belonging to the blobs recognized in the preprocessing.
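One way to realize this per-frame feature is sketched below: dense optical flow restricted to the object's foreground pixels, quantized into eight direction bins, with the per-bin cumulative counts and average speeds concatenated into the 16-dimensional frame vector described above. Farneback flow is an assumption, as the patent does not name a flow method.

```python
import cv2
import numpy as np

def frame_feature(prev_gray, gray, fg_matrix):
    """Return the 16-D frame vector: 8-direction histogram + 8 per-direction mean speeds."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ys, xs = np.nonzero(fg_matrix)                 # only pixels belonging to the object
    dx, dy = flow[ys, xs, 0], flow[ys, xs, 1]
    speed = np.hypot(dx, dy)
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    bins = (angle // (np.pi / 4)).astype(int)      # quantize into 8 direction bins
    hist = np.bincount(bins, minlength=8).astype(float)
    avg_speed = np.array([speed[bins == b].mean() if (bins == b).any() else 0.0
                          for b in range(8)])
    return np.concatenate([hist, avg_speed])
```

Stacking these 16-dimensional vectors over τ frames yields the 16 × τ tracking matrix mentioned below.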

The processor 130 may store the extracted feature vector 300 in the database 140. That is, the processor 130 may store in the database 140 the motion trajectory information, which includes the x-coordinate and y-coordinate of the object's position in each frame together with the object's velocity. In addition, the processor 130 may store in the database 140 the object tracking information obtained by combining the eight-direction cumulative histogram and the eight-direction average speeds for each frame.

For example, when N frames are included in the image data, the processor 130 may generate the motion trajectory information as a 3 × N matrix and store it in the database 140. In addition, the processor 130 may generate the object tracking information of any object tracked over τ frames as a 16 × τ matrix and store it in the database 140.

Meanwhile, when the feature vector 300 is extracted, the processor 130 can generate a behavior model using the extracted feature vector 300. First, to generate the behavior model, the processor 130 may divide the image data into regions. The processor 130 may then generate a global behavior model that can detect normal and abnormal behavior of an object according to the object's path between regions, and a local behavior model that can detect normal and abnormal behavior of an object according to its path within each region. The methods of generating the global behavior model and the local behavior model will be described with reference to FIGS. 4 and 5.

FIG. 4 is a diagram illustrating a global behavior model generation process of the behavior detection apparatus 100 according to an embodiment of the present invention.

In order to generate the global behavior model, the processor 130 may extract three-dimensional trajectory information based on the object tracking information included in the feature vector 300 (S400). At this time, the three-dimensional trajectory information can be generated based on the frame-by-frame trajectory information.

The processor 130 may perform clustering based on the three-dimensional trajectory information (S410). At this time, the processor 130 may perform clustering based on a graph cut algorithm such as energy-minimization, but is not limited thereto. The processor 130 may perform clustering to extract a plurality of clusters.

By performing clustering, trajectories of objects that pass along similar movement paths are grouped into the same cluster. The extracted clusters can therefore be used to represent the movement paths that exist in the image data. At this time, since the feature vector includes object tracking information indicating the object's direction along with the motion trajectory information, trajectories with similar paths can still be assigned to different clusters when their directions differ.
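As a rough sketch of this step, the snippet below uses spectral clustering over the trajectory feature vectors as a stand-in for the graph-cut, energy-minimization clustering mentioned above; it is a related graph-partitioning method rather than the patent's exact algorithm, and the cluster count is an assumption.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_trajectories(features: np.ndarray, n_clusters: int = 6) -> np.ndarray:
    """features: (n_trajectories, d) vectors combining trajectory and direction info."""
    model = SpectralClustering(n_clusters=n_clusters, affinity="nearest_neighbors")
    return model.fit_predict(features)             # one cluster label per trajectory
```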

The processor 130 may perform scene segmentation, dividing the image data into a plurality of spaces based on the clusters extracted through clustering (S420). At this time, the processor 130 may use a tree-based space partitioning technique such as binary space partitioning, a k-d tree, an R-tree, or a priority R-tree. The processor 130 may represent the divided space as a tree based on the space partitioning technique.

For example, when performing scene segmentation based on a k-d tree, the processor 130 may determine the size of the spaces to be divided and the number of regions according to a predetermined depth setting. At this time, the predetermined depth may be determined empirically by the manager of the behavior detection apparatus 100 according to the image data; for example, it may be an integer between 3 and 5.

When performing scene division based on binary space partitioning, the processor 130 may divide the image data into two areas so that the distribution of data included in each divided space is the same. The processor 130 may then recursively repeat the dividing process until a tree of the predetermined depth is generated.
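A hedged sketch of this recursive binary split: each region is divided at the median of its trajectory points along alternating axes, so both halves hold an equal share of the data, down to the predetermined tree depth.

```python
import numpy as np

def split_space(points: np.ndarray, depth: int, axis: int = 0):
    """Recursively split 2-D trajectory points into equal-population regions."""
    if depth == 0 or len(points) < 2:
        return [points]
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    next_axis = 1 - axis                           # alternate between x and y splits
    return (split_space(left, depth - 1, next_axis) +
            split_space(right, depth - 1, next_axis))
```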

After completing the partitioning process, the processor 130 may represent the partitioned space from the tree as a graph (S440). The processor 130 may then calculate the transition probability between adjacent nodes based on the object trajectories and the partitioned space represented by the graph. The processor 130 may segment the trajectories based on the size of the divided spaces and then re-cluster the segmented trajectories.

Through this process, the processor 130 can obtain a representative pattern, a distribution of trajectories, and an average speed for each region. At this time, the representative pattern may be the average of the trajectories included in each region. The processor 130 may generate the global behavior model 400 using the representative pattern of each region, the distribution of trajectories, the average speed, and the transition probabilities between regions, and store the generated global behavior model 400 in the database 140.
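The transition probabilities might be estimated as in the sketch below, by counting region-to-region moves along each trajectory and normalizing each row; `region_of` is a hypothetical helper that maps an (x, y) point to its partition index.

```python
import numpy as np

def transition_matrix(trajectories, num_regions, region_of):
    """Estimate P(next region = j | current region = i) from observed trajectories."""
    counts = np.zeros((num_regions, num_regions))
    for traj in trajectories:                      # traj: sequence of (x, y) points
        regions = [region_of(p) for p in traj]
        for a, b in zip(regions, regions[1:]):
            if a != b:                             # count only actual region changes
                counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```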

FIG. 5 is a diagram illustrating an example of a local behavior model generation process of the behavior detection apparatus 100 according to an embodiment of the present invention.

After generating the global behavior model 400, the processor 130 may generate a local behavior model 500 for each partitioned space.

For this purpose, the processor 130 may first model the behaviors existing in the space for which the local behavior model is generated, using clustering of the feature vectors (S500). At this time, the clustering may use the same clustering algorithm as was used when generating the global behavior model.

The processor 130 may perform clustering within a specific space and then derive a behavioral expression and a cumulative histogram for the clusters extracted within that space (S510). At this time, the behavioral expression may consist of the moving direction and moving speed of the objects included in each cluster and the time spent in the cluster. In addition, behavioral expressions can be normalized before being expressed.

The processor 130 may generate a classifier to determine anomalous behavior based on the behavioral expressions of the clusters extracted within a particular space. At this time, the classifier can use a one-class classification technique.

For example, the processor 130 may treat the image data collected to generate the current behavior model as normal behavior and generate a classifier that classifies it into a normal class. Then, based on the generated classifier, the processor 130 can determine whether newly input image data can be classified into the normal class. That is, the generated classifier is a single-class classifier that classifies input as normal or abnormal according to whether it can be included in the normal class. At this time, the processor 130 may use a classification algorithm such as a one-class support vector machine.
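A minimal sketch of such a per-space single-class classifier, using scikit-learn's one-class SVM trained only on the behavioral expressions judged normal; the nu and gamma settings are assumptions, not values from the patent.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Stand-in training data: (n, d) normalized behavioral expressions for one space.
normal_exprs = np.random.rand(200, 16)
clf = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(normal_exprs)

new_expr = np.random.rand(1, 16)
is_abnormal = clf.predict(new_expr)[0] == -1       # -1 means outside the normal class
```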

The processor 130 may generate a classifier for discriminating normal and abnormal behavior for each divided space in the local behavior model 500 and store the generated local behavior model 500 in the database 140.

As such, the processor 130 may generate the global behavior model 400 and the local behavior model 500 as behavior models. The processor 130 can then determine whether the target object's behavior is abnormal based on the generated behavior models. The process of discriminating abnormal behavior will be described with reference to FIG. 6.

FIG. 6 is a diagram illustrating an example of a behavior recognition process of the behavior detection apparatus 100 according to an embodiment of the present invention.

When the new image data 600 for discriminating the abnormal behavior is input, the processor 130 performs preprocessing and recognizes the target object (S600).

Then, the processor 130 extracts the real-time trajectory information and extracts the feature vector (S610).

At this time, the processor 130 may use the same preprocessing and feature extraction methods as were used in generating the behavior model. In addition, the new image data for discriminating abnormal behavior may be input in real time as shown in FIG. 6, or may be recorded externally and then input to the behavior detection device 100.

The processor 130 may detect abnormal behavior of the target object based on the learned global behavior model (S620). For example, the processor 130 may detect abnormal behavior occurring as the object moves across the partitioned spaces, based on the global behavior model. That is, using the per-space transition probabilities, the processor 130 can detect as abnormal behavior a case in which the target object moves along a path different from those of other objects.
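Assuming the learned transition matrix sketched earlier, this global check might look like the following: a trajectory is flagged as an abnormal path when any of its region-to-region moves has a learned probability below a threshold, which is itself an assumed parameter.

```python
def abnormal_path(regions, trans, min_prob=0.01):
    """regions: sequence of visited region indices; trans: learned transition matrix."""
    return any(trans[a, b] < min_prob
               for a, b in zip(regions, regions[1:]) if a != b)
```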

Based on the local behavior model, the processor 130 can determine whether there is an abnormality in the behavior of the target object in each divided space (S630).

For example, the processor 130 can determine whether the target object's behavior is abnormal based on the classifier included in the local behavior model. The processor 130 may determine the behavior to be normal if the classifier result for the target object's feature vector falls in the normal class, or if the probability of inclusion in the normal class according to the classifier exceeds a predetermined value. Conversely, the processor 130 may determine that abnormal behavior has occurred when the classifier result does not fall in the normal class, or when the probability of inclusion in the normal class is less than or equal to the predetermined value.

Alternatively, the processor 130 may determine whether the target object's behavior is abnormal by comparing the behavioral expressions included in the local behavior model with the behavioral expression of the target object. The processor 130 may calculate the similarity between them and determine the behavior to be abnormal when the calculated similarity is less than a predetermined value.
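The similarity comparison might be realized as in the sketch below, scoring the target object's behavioral expression against each normal expression in the local model with cosine similarity and flagging abnormal behavior when even the best match falls below a threshold; both the metric and the threshold value are assumptions.

```python
import numpy as np

def is_abnormal_by_similarity(target_expr, model_exprs, threshold=0.7):
    """Flag abnormal when the target expression is unlike every normal expression."""
    def cos_sim(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    best = max(cos_sim(target_expr, m) for m in model_exprs)
    return best <= threshold                       # low similarity to all normal patterns
```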

The processor 130 may detect the abnormal behavior 610 based on the abnormal behavior determination result from the global behavior model and the determination result from the local behavior model (S640). At this time, as shown in FIG. 6, the result may distinguish normal behavior such as walking from abnormal behavior 610 such as prolonged stopping.

In this way, through the global behavior model, the processor 130 can detect abnormal behavior of the target object along particular paths, such as illegal intrusion. In addition, through the local behavior model, the processor 130 can detect sudden behavioral changes of the target object within a specific space, such as loitering, prolonged stopping, and fighting.

When normal or abnormal behavior of the target object is detected, the processor 130 visualizes the behavior recognition result and displays it through a display module (not shown). This will be described in detail with reference to FIG. 7.

FIG. 7 is a diagram illustrating an example of visualization of a behavior recognition result according to an embodiment of the present invention.

The processor 130 may display the current image data (P700), the preprocessed image (P703), the behavior detection history (P704), and the current behavior detection result (P705) on a display module (not shown). At this time, when the processor 130 detects an object exhibiting abnormal behavior, it may display the object in a box (P702) so that the manager of the behavior detection apparatus 100 can identify it.

Through this visualization, the administrator of the behavior detection apparatus 100 can confirm what actions the objects are currently performing. In addition, the manager of the behavior detection device 100 can identify the object that exhibited abnormal behavior when it occurs. At this time, the behavior detection result is shown in detail in FIG. 7(b).

In FIG. 7(b), the image data includes a total of six divided spaces (P711, P712, P713, P714, P715, and P716). In this case, the divided spaces where abnormal behavior occurred are 'divided space 3' (P713) and 'divided space 4' (P714). The processor 130 may display the detail screens (P720 and P730) for 'divided space 3' (P713) and 'divided space 4' (P714) in the current behavior detection result (P705) of FIG. 7(a).

The processor 130 may detect behavior of the target object based on the generated behavior model and then perform an evaluation of the detected behavior. After the evaluation is performed, the behavior model can be regenerated based on the evaluation result. At this time, the evaluation can be performed by the manager of the behavior detection apparatus 100.

For example, if the manager of the behavior detection apparatus 100 determines that a currently detected abnormal behavior was misclassified, the processor 130 may regenerate the behavior model using the misclassified image data. At this time, the processor 130 may regenerate the behavior model by adding the misclassified image data to the image data originally collected to generate the behavior model. Alternatively, the processor 130 may modify the behavior model by additionally training the previously generated behavior model on the misclassified image data.

As described above, the processor 130 can detect whether the target object's behavior is abnormal by executing the behavior detection program stored in the memory 120. The behavior detection program executed by the processor 130 will now be described in detail with reference to FIG. 8.

FIG. 8 is a block diagram of a behavior detection program according to an embodiment of the present invention.

The behavior detection program includes a preprocessing module 800, a feature extraction module 810, a behavior model learning module 820, a behavior model storage module 830, an abnormal behavior recognition module 850, an output module 860, and a result analysis module 870, together with the database 140.

In order to generate a behavior model, the behavior detection apparatus 100 can use the preprocessing module 800, the feature extraction module 810, the behavior model learning module 820, the behavior model storage module 830, and the database 140.

In particular, the preprocessing module 800 may perform preprocessing on a plurality of image data collected to generate a behavioral model. The preprocessing module 800 may transmit the preprocessed image data to the feature extraction module 810.

The feature extraction module 810 can extract feature vectors from the preprocessed image data and transmit the extracted feature vectors to the behavior model learning module 820.

The behavior model learning module 820 may generate a behavior model based on the feature vectors received from the feature extraction module 810. At this time, the behavior model learning module 820 can generate a global behavior model and a local behavior model, and may store them in the database 140 via the behavior model storage module 830.

To detect abnormal behavior of an object based on the behavior model generated through the behavior model learning module 820, the behavior detection apparatus 100 can use the preprocessing module 800, the feature extraction module 810, the abnormal behavior recognition module 850, the output module 860, and the result analysis module 870 shown in FIG. 8(b).

When the preprocessing module 800 receives image data for abnormal behavior detection, it can preprocess the received image data. At this time, the preprocessing module 800 preprocesses the received image data under the same conditions as the preprocessing used when generating the behavior model.

The feature extraction module 810 can extract features from the preprocessed image data through the preprocessing module 800.

Similar to the pre-processing module 800, the feature extraction module 810 may generate a feature vector under the same conditions as those used when the behavior model is generated.

In this way, upon receiving image data for abnormal behavior detection, the processor 130 must perform preprocessing and feature vector extraction under the same conditions as when the behavior model was generated. Therefore, the preprocessing module 800 and the feature extraction module 810 can store the conditions, such as the methods and parameters used when generating the behavior model, either internally or in the database 140. When image data is received for abnormal behavior detection, the preprocessing module 800 and the feature extraction module 810 may reuse the stored conditions or load and use the conditions stored in the database 140.

When a feature vector is extracted by the feature extraction module 810, the abnormal behavior recognition module 850 can recognize abnormal behavior from the feature vector and transmit the recognized abnormal behavior to the output module 860 and the result analysis module 870.

The output module 860 can visualize the abnormal behavior recognized by the abnormal behavior recognition module 850 and display it on a display module (not shown).

The result analysis module 870 can receive and analyze the results of the abnormal behavior recognized by the abnormal behavior recognition module 850. At this time, if the recognized abnormal behavior or normal behavior is misclassified, the result analysis module 870 can regenerate the behavior model through the behavior model learning module 820.

Next, a behavior detection method for an object of the behavior detection device 100 according to an embodiment of the present invention will be described with reference to FIGS. 9 and 10.

FIG. 9 is a flowchart of a behavior detection method according to an embodiment of the present invention.

The behavior detection apparatus 100 recognizes a target object from the image data (S900).

Then, the behavior detection apparatus 100 detects abnormal behavior of the target object from the image data based on the recognized target object and the generated behavior model (S910). At this time, the behavior model is generated based on the normal behavior of the object extracted from the plurality of image data collected to generate the behavior model, and then stored in the database 140.

FIG. 10 is a flowchart of a behavior model generation method according to an embodiment of the present invention.

At this time, the behavior detection apparatus 100 can recognize an object in any one of the plurality of image data collected to generate a behavior model (S1000).

Then, the behavior detection apparatus 100 may perform clustering on the image data in which the object was recognized and divide it into a plurality of spaces (S1010).

The behavior detection apparatus 100 may generate a global behavior model and a local behavior model for the behavior of the object in the plurality of divided spaces (S1020).

At this time, in order to generate the global behavior model, the behavior detection apparatus 100 may calculate the transition probability of the recognized object across the plurality of divided spaces, and then generate the global behavior model based on the calculated transition probability.

To generate the local behavior model, the behavior detection device 100 may generate it based on the behavior of the object in each of the divided spaces.

Meanwhile, in order to detect abnormal behavior of the target object from the image data, the behavior detection apparatus 100 may calculate the similarity corresponding to the target object based on the behavior model generated for normal behavior.

The behavior detection apparatus 100 can detect the behavior of the target object as an abnormal behavior when the calculated similarity degree is equal to or less than a predetermined value. If the calculated similarity exceeds a predetermined value, the behavior detection apparatus 100 may detect the behavior of the target object as normal behavior.

Referring again to FIG. 9, the behavior detection apparatus 100 may perform an evaluation after detecting abnormal behavior of the target object (S920).

After performing the evaluation of the abnormal behavior, the behavior detection apparatus 100 may regenerate the behavior model based on the result of the evaluation performed (S930).

The behavior detection apparatus 100 for an object and the behavior detection method using the same according to an embodiment of the present invention generate a behavior model based on the normal behavior of an object, and can therefore detect various abnormal behaviors without separately defining them. Since the behavior detection apparatus 100 and the behavior detection method use a pre-generated behavior model, they can effectively detect abnormal behavior in real time.

The behavior detection apparatus 100 for an object and the behavior detection method using the same can distinguish normal and abnormal behaviors based on a global behavior model and a local behavior model, so that the user can easily understand the basis for judging behavior as normal or abnormal, and visualization is straightforward. They can also receive the user's feedback on normal and abnormal behavior and re-learn the behavior model according to the received feedback. Therefore, as the behavior detection period grows, the behavior detection apparatus 100 and the behavior detection method can reduce misrecognition and improve recognition accuracy.

One embodiment of the present invention may also be embodied in the form of a recording medium including instructions executable by a computer, such as program modules executed by a computer. Computer-readable media can be any available media that can be accessed by a computer and include both volatile and nonvolatile media, and removable and non-removable media. The computer-readable media may also be computer storage media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data.

While the methods and systems of the present invention have been described in connection with specific embodiments, some or all of those elements or operations may be implemented using a computer system having a general purpose hardware architecture.

It will be understood by those skilled in the art that the foregoing description of the present invention is for illustrative purposes only, and that various changes and modifications may be made without departing from the spirit or essential characteristics of the present invention. It is therefore to be understood that the above-described embodiments are illustrative in all aspects and not restrictive. For example, each component described as a single entity may be implemented in a distributed fashion, and components described as distributed may also be implemented in a combined form.

The scope of the present invention is defined by the appended claims rather than by the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents are to be construed as being included within the scope of the present invention.

100: Behavior detection device
110: Image receiving unit
120: Memory
130: Processor
140: Database

Claims (15)

A behavior detection device for an object, comprising:
An image receiving unit,
A database for storing behavioral models,
A memory in which the behavior detection program for the object is stored,
And a processor for executing the program,
Wherein the processor, according to the execution of the program, recognizes a target object from the image data input through the image receiving unit, detects an abnormal path of the target object from the image data based on a global behavior model for the target object, detects a change in behavior of the target object from the image data based on a local behavior model for the target object, and detects an abnormal behavior of the target object based on the abnormal path of the target object and the change in behavior of the target object,
Wherein the global behavior model and the local behavior model are generated based on normal behavior of an object extracted from a plurality of previously collected image data and then stored in the database,
Wherein the global behavior model is generated by calculating a transition probability of the recognized object with respect to a plurality of spaces included in each image data after recognizing the object from the plurality of image data collected in the past,
Wherein the local behavior model is generated based on an action of the recognized object in each space included in the plurality of spaces.
The apparatus according to claim 1,
Wherein the processor recognizes an object in any one of the plurality of previously collected image data, performs clustering on that image data and divides it into a plurality of spaces, calculates the transition probability of the recognized object across the divided spaces, generates the global behavior model based on the calculated transition probability, and generates a local behavior model corresponding to each space based on the behavior of the recognized object in each of the divided spaces.
The apparatus according to claim 2,
Wherein the processor performs preprocessing on the image data, recognizes the object based on the performed preprocessing, extracts a trajectory of the recognized object, and performs the clustering based on the extracted trajectory.
The apparatus according to claim 2,
Wherein the clustering is performed based on a graph cut algorithm.
The apparatus according to claim 2,
Wherein the processor generates the global behavior model based on the transition probability of the recognized object in the divided plurality of spaces and a space partitioning technique.
The apparatus according to claim 2,
Wherein the processor generates the local behavior model based on the behavior of the recognized object for each partitioned space and a single class classification technique.
The apparatus according to claim 1,
Wherein the processor calculates a degree of similarity corresponding to the target object based on the global behavior model and the local behavior model,
and detects the behavior of the target object as an abnormal behavior when the calculated similarity is equal to or less than a predetermined value.
The apparatus according to claim 1,
Wherein the processor performs an evaluation of abnormal behavior detection for the target object,
and regenerates the global behavior model and the local behavior model based on the result of the evaluation.
The apparatus according to claim 1,
Wherein the processor visualizes an abnormal behavior detection result for the target object.
A behavior detection method for an object performed by a behavior detection device, the method comprising:
Recognizing a target object from image data;
Detecting an abnormal path of the target object from the image data based on a global behavior model for the target object;
Detecting a change in behavior of the target object from the image data based on a local behavior model for the recognized target object; and
Detecting abnormal behavior of the target object from the image data based on an abnormal path of the target object and a change in behavior of the target object,
Wherein the global behavior model and the local behavior model are generated based on normal behavior of the object extracted from the plurality of image data collected in advance,
Wherein the global behavior model is generated by calculating a transition probability of the recognized object with respect to a plurality of spaces included in each image data after recognizing the object from the plurality of image data collected in the past,
Wherein the local behavior model is generated based on an action of the recognized object in each space included in the plurality of spaces.
The method according to claim 10,
Recognizing an object in any one of a plurality of previously collected image data, before recognizing the target object;
Performing clustering on any one of the image data and dividing the image data into a plurality of spaces; And
Further comprising generating the global behavior model and the local behavior model for an action of an object in the plurality of divided spaces.
The method according to claim 11,
Wherein the generating the global behavior model and the local behavior model comprises:
Calculating a transition probability of the object recognized through the recognizing step in the divided plurality of spaces;
Generating the global behavior model based on the calculated transition probability; and
And generating a local behavior model corresponding to each space based on the behavior of the object for each of the divided spaces.
The method according to claim 10,
Wherein the step of detecting abnormal behavior of the target object comprises:
Calculating a similarity degree corresponding to the target object based on the global behavior model and the local behavior model; And
Detecting a behavior of the target object as an abnormal behavior when the calculated similarity is equal to or less than a predetermined value.
The method according to claim 10,
Performing an evaluation of abnormal behavior detection on the target object after the detection of abnormal behavior of the target object; And
Further comprising regenerating the global behavior model and the local behavior model based on a result of the performed evaluation.
A computer-readable recording medium recording a program for performing the method according to any one of claims 10 to 14 on a computer.
KR1020150175219A 2015-12-09 2015-12-09 Apparatus and method for behavior detection of object KR101731461B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150175219A KR101731461B1 (en) 2015-12-09 2015-12-09 Apparatus and method for behavior detection of object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150175219A KR101731461B1 (en) 2015-12-09 2015-12-09 Apparatus and method for behavior detection of object

Publications (1)

Publication Number Publication Date
KR101731461B1 true KR101731461B1 (en) 2017-05-11

Family

ID=58740821

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150175219A KR101731461B1 (en) 2015-12-09 2015-12-09 Apparatus and method for behavior detection of object

Country Status (1)

Country Link
KR (1) KR101731461B1 (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101179276B1 (en) * 2011-06-13 2012-09-03 고려대학교 산학협력단 Device and method for detecting abnormal crowd behavior

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018226882A1 (en) * 2017-06-07 2018-12-13 Amazon Technologies, Inc. Behavior-aware security systems and associated methods
US10936655B2 (en) 2017-06-07 2021-03-02 Amazon Technologies, Inc. Security video searching systems and associated methods
CN110473232A (en) * 2017-07-14 2019-11-19 腾讯科技(深圳)有限公司 Image-recognizing method, device, storage medium and electronic equipment
CN110473232B (en) * 2017-07-14 2024-02-09 腾讯科技(深圳)有限公司 Image recognition method and device, storage medium and electronic equipment
KR101900237B1 (en) * 2017-10-20 2018-09-19 주식회사 삼진엘앤디 On-site judgment method using situation judgment data on acquired images
KR101979375B1 (en) * 2018-02-23 2019-08-28 주식회사 삼알글로벌 Method of predicting object behavior of surveillance video
KR101956008B1 (en) * 2018-06-07 2019-03-08 쿠도커뮤니케이션 주식회사 Apparatus for Providing Trajectory and Data Building Method Thereof
KR102043366B1 (en) * 2018-11-21 2019-12-05 (주)터보소프트 Method for measuring trajectory similarity between geo-referenced videos using largest common view
KR102174656B1 (en) 2019-03-26 2020-11-05 연세대학교 산학협력단 Apparatus and method for recognizing activity and detecting activity area in video
KR20200119386A (en) * 2019-03-26 2020-10-20 연세대학교 산학협력단 Apparatus and method for recognizing activity and detecting activity area in video
KR20200119391A (en) * 2019-03-27 2020-10-20 연세대학교 산학협력단 Apparatus and method for recognizing activity and detecting activity duration in video
KR102174658B1 (en) 2019-03-27 2020-11-05 연세대학교 산학협력단 Apparatus and method for recognizing activity and detecting activity duration in video
KR102148607B1 (en) * 2019-07-26 2020-08-26 연세대학교 산학협력단 Audio-video matching area detection apparatus and method
KR20210084330A (en) * 2019-12-27 2021-07-07 권세기 A monitering system for wearing muzzles of dog using deep learning and monitering method
KR102581941B1 (en) 2019-12-27 2023-09-22 권세기 A monitering system for wearing muzzles of dog using deep learning and monitering method
CN111539339A (en) * 2020-04-26 2020-08-14 北京市商汤科技开发有限公司 Data processing method and device, electronic equipment and storage medium
CN112200081A (en) * 2020-10-10 2021-01-08 平安国际智慧城市科技股份有限公司 Abnormal behavior identification method and device, electronic equipment and storage medium
KR102341715B1 (en) * 2021-02-23 2021-12-21 주식회사 딥팜 Apparatus and method for livestock monitoring
KR102347811B1 (en) * 2021-05-31 2022-01-06 한국교통대학교산학협력단 Apparatus and method for detecting object of abnormal behavior
WO2023096092A1 (en) * 2021-11-25 2023-06-01 한국전자기술연구원 Method and system for detecting abnormal behavior on basis of composite image
CN117932233A (en) * 2024-03-21 2024-04-26 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) User behavior model fine-tuning method, system and medium based on similar abnormal behaviors

Similar Documents

Publication Publication Date Title
KR101731461B1 (en) Apparatus and method for behavior detection of object
US10248860B2 (en) System and method for object re-identification
US9959630B2 (en) Background model for complex and dynamic scenes
US9652863B2 (en) Multi-mode video event indexing
Afiq et al. A review on classifying abnormal behavior in crowd scene
US8548198B2 (en) Identifying anomalous object types during classification
US8374393B2 (en) Foreground object tracking
RU2475853C2 (en) Behaviour recognition system
US8218819B2 (en) Foreground object detection in a video surveillance system
US8416296B2 (en) Mapper component for multiple art networks in a video analysis system
CN109829382B (en) Abnormal target early warning tracking system and method based on intelligent behavior characteristic analysis
KR101720781B1 (en) Apparatus and method for prediction of abnormal behavior of object
KR101472674B1 (en) Method and apparatus for video surveillance based on detecting abnormal behavior using extraction of trajectories from crowd in images
Azimjonov et al. Vision-based vehicle tracking on highway traffic using bounding-box features to extract statistical information
Yang et al. Cluster-based crowd movement behavior detection
Mudjirahardjo A Study On Human Motion Detection-Toward Abnormal Motion Identification
Jiang Anomalous event detection from surveillance video
Varadharajan Object Detection in a video based on Frame Differencing using Deep learning
Li Road User Detection and Analysis in Traffic Surveillance Videos
Yang Multiple humans tracking by learning appearance and motion patterns

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant