KR101547255B1 - Object-based Searching Method for Intelligent Surveillance System - Google Patents

Object-based Searching Method for Intelligent Surveillance System

Info

Publication number
KR101547255B1
Authority
KR
South Korea
Prior art keywords
color
image
objects
information
following equation
Prior art date
Application number
KR1020150070907A
Other languages
Korean (ko)
Inventor
김태경
김수경
Original Assignee
주식회사 넥스파시스템
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 넥스파시스템 filed Critical 주식회사 넥스파시스템
Priority to KR1020150070907A priority Critical patent/KR101547255B1/en
Application granted granted Critical
Publication of KR101547255B1 publication Critical patent/KR101547255B1/en

Classifications

    • G06F17/3025
    • G06F17/30268
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The present invention relates to an object-based search method for an intelligent surveillance system, which enables a specific object to be searched for and found in video captured and recorded by a plurality of cameras. The object of the invention is to define each object based on extracted information and then to extract, track, and search a desired object based on the color information of the defined object.
To this end, the method of the present invention comprises: detecting a plurality of objects appearing in a plurality of images captured by a plurality of cameras, and extracting the color information of each object to define it; storing the image data in a database (DB) based on the color information of the objects; retrieving the image data stored in the DB by a color keyword and extracting all images that contain an object having the desired color information; and analyzing the extracted images to detect the target object and checking additional information about the detected target object.

Description

{Object-based Searching Method for Intelligent Surveillance System}

The present invention relates to an object-based search method for an intelligent surveillance system capable of searching for a specific object in video captured and recorded by a plurality of cameras. In particular, it defines each object based on its color information and extracts, tracks, and searches a desired object based on the color information of the defined object.

Generally, a CCTV system is a video display system that transmits images photographed by a camera to a small number of specific recipients, allowing a user to obtain desired information from the transmitted images and to observe the monitored area.

Such a CCTV system is used for a wide variety of purposes such as security surveillance, industrial use, video surveillance of a vehicle, and traffic management.

Meanwhile, video surveillance technology is becoming increasingly important as CCTV integrated control systems become more intelligent.

In addition, efficient object retrieval technology in a multi-camera environment is expected to solve the labor-intensive nature of current CCTV video retrieval.

In particular, intelligent multi-image retrieval is a technology that can search for objects with improved cost, efficiency, and accuracy by analyzing the associations between separately collected images. It can be usefully applied to real-time detection of incidents and accidents in CCTV footage and to the detection and tracking of suspects.

In the event of an incident or accident, manually analyzing a large number of images takes a great deal of time. Automatically analyzing the associations between separately collected images can therefore reduce cost and improve efficiency and accuracy.

Meanwhile, current intelligent video security technology is highly developed in the field of CCTV and DVR (Digital Video Recorder) equipment, but it does not yet meet the requirements of an intelligent integrated control system.

In other words, although the object tracking and search, change detection, and change measurement functions of video surveillance technology have been implemented, their performance is insufficient for the integrated control systems installed and operated by local governments.

Intelligent video security and retrieval technology is expected to expand into all areas, including the industrial, financial, sales, education, culture, housing, and personal fields, in order to secure the safety of the general public. Currently, however, only a few companies are developing such technology, each relying on its own proprietary methods.

In addition, most CCTV systems rely on monitoring by an administrator as a precautionary measure against crime.

However, in such a CCTV system it is very difficult for the administrator to keep a close watch on the monitors, and the administrator's concentration deteriorates when watching for a long time. This is one of the factors that led to the development of intelligent surveillance systems.

The intelligent surveillance system refers to a system for analyzing images inputted from a CCTV camera in real time to detect, track and classify moving objects.

The intelligent monitoring system can determine whether an object generates an event corresponding to the security policy, provide the information to the administrator in real time, and store the related data and event information, thereby maximizing follow-up management and prevention.

Conventional detection and tracking systems, however, focus only on moving objects, without analyzing what scene or situation they are part of.

Specifically, Cootes proposed the Active Shape Model (ASM) method, which builds a learning set of shape models in order to find the most similar human shape model in an image.

In addition, Haritaoglu solved the overlap problem by extracting human silhouettes and proposing a model-based analysis algorithm, and Wren modeled the person and tracked the object with a real-time blob method.

In addition, Papageorgiou used the SVM (Support Vector Machine), a statistical analysis method, to create patterns with wavelets for analyzing the characteristics of objects.

In addition, Leyrit generated a pedestrian pattern for pedestrian recognition and applied the AdaBoost algorithm.

As another pedestrian detection method, Anuj Mohan proposed exploiting human structural relations using Haar Wavelet Templates and an SVM, and methods correlating the LSSD (Local Self-Similarities Descriptor) have also been applied.

Various studies are being conducted to improve the accuracy of object detection and the ease of data analysis.

Meanwhile, Patent Document 1 below relates to an "intelligent surveillance system and intelligent surveillance method using CCTV". In the described system, images photographed by a remote surveillance unit can be monitored and controlled in real time not only by the image reception processing unit but also, via the Internet, from other locations. When images captured and combined by the remote surveillance unit are recorded integrally by the image reception processing unit, the subjects can become too small to identify; this disadvantage is overcome by recording the images individually in the remote surveillance unit. The system detects the amount of motion change in the images so that the detected images can be monitored intensively, and a pan/tilt/zoom camera together with one or more auxiliary cameras allows intensive surveillance without blind spots in real time. Self-diagnosis and voice communication are possible, and faults in the pan/tilt/zoom camera and peripheral devices can be quickly checked by the central monitoring unit, simplifying maintenance procedures. The images photographed by the pan/tilt/zoom camera and the plurality of auxiliary cameras installed in the surveillance area are combined in the remote surveillance unit and stored and transmitted as a single image, so that image transmission is performed more efficiently, and bidirectional voice communication is possible between the surveillance area and the main base without installing separate voice equipment.

Patent Document 2 relates to an "object detection method for an intelligent surveillance system". Without using an artificially set user target value, principal components obtained through PCA (Principal Component Analysis) are examined with an unsupervised learning method and a clustering algorithm to select the eigenvectors that effectively detect the human body; the selected eigenvectors are then used as weights on the input image and clustered through a MoG (Mixture of Gaussians), yielding an intelligent surveillance system capable of object detection that is robust to noise.

(Patent Document 1) KR 10-0822017 B1
(Patent Document 2) KR 10-2012-0079495 A

It is an object of the present invention to provide an object-based search method for an intelligent surveillance system in which images input from a plurality of cameras are stored in a video server, objects are defined through background-extraction-based object detection and through color definition and extraction for the detected objects, and the defined objects can then be tracked and monitored based on this information.

According to an aspect of the present invention, there is provided an object-based search method comprising: defining a plurality of objects appearing in a plurality of images captured by a plurality of cameras, by extracting color information for each object; storing the image data in a database based on the color information of the objects; retrieving the image data stored in the DB by a color keyword and extracting all images including objects having the desired color information; and analyzing the extracted images to detect a target object and checking additional information about the detected target object.

In addition, the step of defining the objects may include: detecting the plurality of objects appearing in the images input from the plurality of cameras and classifying them by object; identifying each classified object based on object information such as its center of gravity, silhouette, and size; and extracting color information for each identified object and defining the object according to the extracted color information.

The center of gravity of the object is defined by the following equation.

$$m_x = \frac{1}{N}\sum_{i=1}^{N} Obj(x_i), \qquad m_y = \frac{1}{N}\sum_{i=1}^{N} Obj(y_i)$$

Here, m_x and m_y represent the center of gravity along the x and y axes of the object, respectively; Obj(x_i) and Obj(y_i) denote the x and y coordinates of the i-th pixel of the object; and N denotes the total number of pixels.

In addition, the outline component of the object at the time of extracting the silhouette of the object is characterized by the following equation.

$$B = \{\, p_1, p_2, \ldots, p_n \,\}$$

Here, B is the set of outline components, and p_i is the coordinate of each outline component.

Also, the following equation is used to determine, from the movement trajectory, whether objects detected in consecutive images are the same object.

$$d = \sqrt{(m_x - m'_x)^2 + (m_y - m'_y)^2}$$

Here, d represents the distance between the object detected in the previous image and the object detected in the current image, (m_x, m_y) is the center of gravity detected in the previous image, and (m'_x, m'_y) is the center of gravity detected in the current image.

In addition, a detection is determined to be an object only when its size is equal to or greater than a predetermined level; otherwise it is determined to be noise and ignored. Noise and objects are discriminated using the following equation.

$$\text{Obj} = \begin{cases} \text{Object}, & Obj\_S \geq S\_T \\ \text{Noise}, & Obj\_S < S\_T \end{cases}$$

Here, Obj_S means the size of the detected object, and S_T means the size of the reference object.

In order to analyze the color pattern of an object, six colors of red, green, blue, white, black and yellow are set as characteristic patterns of an object.

The average and variance of the color channels at each pixel are obtained by the following equations.

$$\mu = \frac{R + G + B}{3}, \qquad \sigma^2 = \frac{(R-\mu)^2 + (G-\mu)^2 + (B-\mu)^2}{3}$$

Here, μ is the average color value and σ² is the variance value; for color extraction, the color having the largest distribution among the R, G, and B components is selected.

Also, the representative color of the final object is determined as the color having the largest distribution among the colors determined at all pixels in the 20 to 50% region of the extracted object.

Also, the representative color of the final object is determined by the following equation.

$$C = \begin{cases} \arg\max\,(R, G, B), & \sigma^2 \geq Th_{var} \\ \text{White}, & \sigma^2 < Th_{var} \text{ and } \mu \geq Th_{mean} \\ \text{Black}, & \sigma^2 < Th_{var} \text{ and } \mu < Th_{mean} \end{cases}$$

Here, C is the selected color; Th_var is the variance threshold that distinguishes achromatic (contrast) pixels from chromatic ones, set to 300 to 400; and Th_mean is the threshold that separates white from black for achromatic pixels, set to 100 to 150.

According to the object-based search method of the intelligent surveillance system of the present invention, each object included in the images input from the plurality of cameras is specified based on its center of gravity, silhouette, size, and color information, so that a desired object can be extracted based on this information and then tracked and monitored. This secures the reliability of object detection and also enables follow-up management and prevention of events such as crime.

FIG. 1 is a flowchart showing an object-based search method of an intelligent surveillance system according to the present invention.
FIG. 2 is a flowchart showing the object defining process that is a main part of the present invention.
FIG. 3 is a reference diagram showing the object information collection state according to the present invention.
FIG. 4 is a reference diagram showing the image change in the color naming process.
FIG. 5 is a reference diagram showing the six color components of the characteristic patterns defined through color naming.
FIG. 6 is a reference diagram showing the interface screen of the intelligent surveillance system.
FIG. 7 is a reference diagram showing an object search screen according to the present invention.
FIGS. 8 to 12 are reference diagrams showing the results of searching images by specific colors.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Hereinafter, a preferred embodiment of the object-based search method of an intelligent surveillance system according to the present invention will be described with reference to the accompanying drawings.

As shown in FIGS. 1 and 2, the object-based search method of the intelligent surveillance system according to the present invention comprises: identifying a plurality of objects appearing in video captured by a plurality of cameras and then extracting the color information of each object to define it; storing the image data in a database based on the color information of the objects; retrieving the image data stored in the DB by a color keyword and extracting all images including an object having the desired color information; and analyzing the extracted images to detect a target object and checking additional information about the detected target object.

The step of defining the objects may include: detecting the plurality of objects appearing in the images input from the plurality of cameras and classifying them by object; identifying each classified object based on object information such as its center of gravity, silhouette, and size; and extracting color information for each identified object by the color naming method and defining the object according to the extracted color information.

Specifically, the characteristics of each object appearing in the image are defined according to the center of gravity, silhouette, size, and color information of the object.

The center of gravity, silhouette, and size of an object are used to determine whether objects in adjacent images are the same, and the same object is identified from the movement trajectory information of the object. The center of gravity of a detected object can be obtained by the following Equation (1).

$$m_x = \frac{1}{N}\sum_{i=1}^{N} Obj(x_i), \qquad m_y = \frac{1}{N}\sum_{i=1}^{N} Obj(y_i) \tag{1}$$

Here, m_x and m_y represent the center of gravity along the x and y axes of the object, respectively; Obj(x_i) and Obj(y_i) denote the x and y coordinates of the i-th pixel of the object; and N denotes the total number of pixels.
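For illustration, Equation (1) can be computed directly from a binary object mask; the Python sketch below is ours, not the patent's (the function name and the NumPy mask layout are assumptions).

```python
import numpy as np

def center_of_gravity(mask: np.ndarray) -> tuple[float, float]:
    """Compute (m_x, m_y) of Equation (1) from a binary object mask."""
    ys, xs = np.nonzero(mask)      # coordinates of the N object pixels
    n = xs.size
    if n == 0:
        raise ValueError("mask contains no object pixels")
    m_x = xs.sum() / n             # mean of Obj(x_i)
    m_y = ys.sum() / n             # mean of Obj(y_i)
    return m_x, m_y
```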

Because an object's size varies across successive images, its silhouette is described by the same number of outline components in every image.

At this time, the outline is sampled while rotating in 5-degree increments to extract the object silhouette, and the outline components of the object silhouette are defined as the following Equation (2).

$$B = \{\, p_\theta \mid \theta = 0^\circ, 5^\circ, \ldots, 355^\circ \,\} \tag{2}$$

Here, B is the set of outline components of the object silhouette, and p_θ is the coordinate of the outline component at angle θ.
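As one way to realize Equation (2), the sketch below takes the outermost boundary pixel in each 5-degree sector around the center of gravity, giving 72 components; the sector-based sampling strategy and the centroid fallback for empty sectors are our assumptions, not details from the patent.

```python
import numpy as np

def outline_components(mask: np.ndarray, m_x: float, m_y: float) -> np.ndarray:
    """Sample the silhouette boundary at 5-degree steps around the centroid,
    producing the fixed-length component set B of Equation (2)."""
    ys, xs = np.nonzero(mask)
    angles = np.degrees(np.arctan2(ys - m_y, xs - m_x)) % 360
    radii = np.hypot(xs - m_x, ys - m_y)
    components = []
    for theta in range(0, 360, 5):
        idx = np.flatnonzero((angles >= theta) & (angles < theta + 5))
        if idx.size:
            i = idx[np.argmax(radii[idx])]   # outermost pixel in this sector
            components.append((xs[i], ys[i]))
        else:
            components.append((m_x, m_y))    # empty sector: fall back to centroid
    return np.asarray(components)            # 72 outline coordinates
```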

Together with the object extraction definitions above, this makes it possible to effectively locate moving objects in the video stored on the video server.

In general, detection must continue in the stages after a moving object is first detected so that the desired object can be analyzed.

In other words, when the positions where a person appears and disappears are recognized, effective management and supervision information can be provided to the manager.

Accordingly, the following Equation (3) uses the extracted object features to determine, from the movement trajectory, whether objects detected in consecutive images are the same object.

$$d = \sqrt{(m_x - m'_x)^2 + (m_y - m'_y)^2} \tag{3}$$

Here, d represents the distance between the object detected in the previous image and the object detected in the current image, (m_x, m_y) is the center of gravity detected in the previous image, and (m'_x, m'_y) is the center of gravity detected in the current image.
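A minimal sketch of this trajectory-based matching; the patent defines only the distance itself, so the threshold value below is an illustrative assumption.

```python
import math

def same_object(prev_cog: tuple[float, float],
                curr_cog: tuple[float, float],
                max_dist: float = 30.0) -> bool:
    """Equation (3): Euclidean distance between the centers of gravity in the
    previous and current images; two detections are treated as the same object
    when the distance stays under an assumed threshold."""
    d = math.hypot(prev_cog[0] - curr_cog[0], prev_cog[1] - curr_cog[1])
    return d <= max_dist
```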

FIG. 3 shows the center of gravity, silhouette, and size of a specific object obtained from the input image; in FIG. 3, (a) is the input image and (b) shows the object information extracted from it.

Meanwhile, objects in an image are detected at various sizes depending on the camera position, and when an object is far enough from the camera that its detected size becomes small, it becomes very difficult to distinguish the detection result from noise.

Accordingly, images on the video server are judged to contain an object only if the object is visually recognizable: an object size of 10 mm × 10 mm is set as the judgment criterion, and a detection is accepted only when the object is larger than this predetermined value.

At this time, the noise and the object are discriminated using the following Equation (4).

$$\text{Obj} = \begin{cases} \text{Object}, & Obj\_S \geq S\_T \\ \text{Noise}, & Obj\_S < S\_T \end{cases} \tag{4}$$

Here, Obj_S means the size of the object, and S_T means the size of the reference object.

In Equation (4), if Obj_S is larger than S_T, it is an object, otherwise it is determined as noise.
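Equation (4) reduces to a single comparison; a sketch, with the reference size left as a parameter since the patent gives only the 10 mm × 10 mm criterion above.

```python
def is_object(obj_s: int, s_t: int) -> bool:
    """Equation (4): keep a detection only when its size Obj_S reaches the
    reference size S_T; anything smaller is treated as noise and ignored."""
    return obj_s >= s_t
```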

In the present invention, in order to more accurately define an object, a color component of an object is analyzed and classified into a feature pattern.

Conventional color distribution analysis has many constraints such as camera characteristics, illumination changes, inherent color distortion, color perception depending on distance, object size, and camera installation position.

Accordingly, it is preferable to first define the size and shape of the object in the manner described above, extract only the representative color of the corresponding area, and use it for storage and retrieval.

In general, the image information input from the camera has R, G, and B color channels, and inter-channel interference and mixed components arise as the illuminance varies; the raw channel values therefore cannot be used directly for color classification.

This is because even the same R component (red) shows component differences within a region (combinations mixed with white and black) due to changes in camera position or illumination.

Accordingly, in the present invention a color distribution is defined in advance by applying the color naming method, and FIG. 4 shows the color distribution in an object detected using color naming.

Defining the color distribution is essential because environmental factors, i.e., the constraints above, cause the same color contained in an object to be expressed differently.

 In the present invention, as shown in FIG. 5, six colors of red, green, blue, yellow, white, and black are set as the characteristic patterns. This takes into account the clothes color distribution of a typical pedestrian object.

At this time, in order to distinguish colors, it is necessary to determine whether each pixel is an achromatic (white or black) pixel or a pixel of a red, green, or blue color component.

Since the R, G, and B components of achromatic pixels have values similar to one another, the average and variance of the color channels are obtained at each pixel according to the following Equation (5).

$$\mu = \frac{R + G + B}{3}, \qquad \sigma^2 = \frac{(R-\mu)^2 + (G-\mu)^2 + (B-\mu)^2}{3} \tag{5}$$

Here, μ is the average color value and σ² is the variance value; for chromatic pixels, the color having the largest distribution among the R, G, and B components is selected.

Equation (6) below shows how the color is determined.

$$C = \begin{cases} \arg\max\,(R, G, B), & \sigma^2 \geq Th_{var} \\ \text{White}, & \sigma^2 < Th_{var} \text{ and } \mu \geq Th_{mean} \\ \text{Black}, & \sigma^2 < Th_{var} \text{ and } \mu < Th_{mean} \end{cases} \tag{6}$$

Here, C is the selected color; Th_var is the variance threshold (300 to 400) that distinguishes achromatic pixels from chromatic ones; and Th_mean is the threshold (100 to 150) that separates white from black for achromatic pixels.

The color of each pixel is determined in all the pixels in the range of 20 to 50% of the extracted object using Equation (6).

The final color, that is, the representative color of the object, is the color having the largest distribution, and it is stored as the color information of the object.
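Putting Equations (5) and (6) and the representative-color vote together, a sketch of the color naming step might look like the following. The concrete threshold values, the reading of the 20 to 50% region as a band of the object's height, and the test separating yellow from red and green are all our assumptions; the patent gives only the threshold ranges and the six target colors.

```python
import numpy as np

TH_VAR = 350    # variance threshold, assumed within the patent's 300-400 range
TH_MEAN = 125   # mean threshold, assumed within the patent's 100-150 range

def name_pixel(r: int, g: int, b: int) -> str:
    """Equations (5) and (6): classify one pixel into a characteristic color."""
    mean = (r + g + b) / 3.0
    var = ((r - mean) ** 2 + (g - mean) ** 2 + (b - mean) ** 2) / 3.0
    if var < TH_VAR:                                   # achromatic pixel
        return "white" if mean >= TH_MEAN else "black"
    # Assumed yellow test: R and G comparable and both dominating B.
    if min(r, g) > 1.5 * max(b, 1) and abs(r - g) < 0.4 * max(r, g):
        return "yellow"
    return ("red", "green", "blue")[int(np.argmax([r, g, b]))]

def representative_color(obj_rgb: np.ndarray) -> str:
    """Majority vote over the pixels in the 20-50% band of the object
    (interpreted here as a band of the object's height)."""
    h = obj_rgb.shape[0]
    region = obj_rgb[int(0.2 * h):int(0.5 * h)].reshape(-1, 3)
    votes: dict[str, int] = {}
    for r, g, b in region:
        c = name_pixel(int(r), int(g), int(b))
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)
```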

The stored image DB information includes, in a DB table keyed to the individual image frames, the detected objects, their definitions from the plurality of cameras, and their color information.

The above description has presented a method for effectively detecting objects in the images input from a plurality of cameras, an analysis of the object color distribution using those detections, and a method for efficiently retrieving large volumes of data using the object color information stored in the video server.

Hereinafter, a process of searching for a desired object using this will be described with reference to the following experimental examples.

<Experimental Example>

The plurality of cameras are cameras installed at different viewpoints, such as CCTV cameras, and object detection in the images input from these cameras is based on the foreground, excluding the background component.

In other words, after the object detection area is found, the representative color of that area is detected, reconstructed into the six defined pieces of color information, and stored on the server.

At this time, it is desirable not to remove the shadow because the inherent size and information of the object may be lost.
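The patent does not name a particular background model; as one illustration, an OpenCV mixture-of-Gaussians subtractor can supply the foreground mask, with shadow detection disabled so that shadow pixels stay part of the object as suggested above. The history length and morphological cleanup are our assumptions.

```python
import cv2

# MOG2 keeps a per-pixel mixture-of-Gaussians background model.
# detectShadows=False folds shadow pixels into the foreground, matching the
# remark above that shadows should not be removed from the object.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def foreground_mask(frame):
    """Return a binary foreground mask for one video frame."""
    mask = subtractor.apply(frame)
    # Remove isolated noise pixels before object extraction.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```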

The defined color information is reconstructed in the form of metadata so that search conditions can be configured easily.

Accordingly, images stored from one camera can be queried together with images stored from other cameras using the same search condition, that is, a metadata keyword, and the objects in the images matching the input condition can be effectively detected and retrieved.
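A minimal sketch of this metadata-keyed search, with an assumed record layout (camera channel, shooting time, frame index, representative color); the patent does not specify the schema.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    camera: int        # camera channel
    timestamp: float   # shooting time of the frame
    frame: int         # frame index in the stored video
    color: str         # one of the six characteristic colors

def search(db: list[Detection], color: str,
           camera: int | None = None,
           t_from: float = 0.0, t_to: float = float("inf")) -> list[Detection]:
    """Return every stored detection matching the color keyword, optionally
    restricted to one camera channel and a time window."""
    return [d for d in db
            if d.color == color
            and (camera is None or d.camera == camera)
            and t_from <= d.timestamp <= t_to]
```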

The experimental environment used Windows 8 as the operating system, a 2.8 GHz CPU, 4 GB of memory, and the Visual Studio 2010 compiler, and the interface screen was configured as shown in FIG. 6.

In the object search, as shown in FIG. 7, the shooting time of the image, the camera channel, and a color keyword are used as search conditions, and the corresponding images can be found through the search result list.

FIGS. 8 to 12 show the search processing results using each color keyword.

In order to determine the accuracy of the object search, the color matching rate is calculated according to the following equation (7).

$$Matching\_Ratio = \frac{Matching\_true}{Total\_Frame} \times 100\,(\%) \tag{7}$$

Here, Matching_Ratio represents the color matching rate, Matching_true is the number of images detected according to the given color keyword, and Total_Frame means the total number of accumulated images.
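Equation (7) as a one-line helper, expressed as a percentage to match the results reported below.

```python
def matching_ratio(matching_true: int, total_frames: int) -> float:
    """Equation (7): percentage of accumulated frames whose detected color
    matches the given color keyword."""
    return 100.0 * matching_true / total_frames
```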

We also investigated whether an object of the same color could be detected when it appears in images taken at different places, using the feature patterns extracted from those images.

The following [Table 1] shows the results.

[Table 1] Color matching rates for each color keyword over the experimental images.

As shown in Table 1, the average detection rate over the experimental images was 88.9%; thus, the object detection method according to the present invention can be considered to have considerably high reliability.

While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. It will be understood by those skilled in the art that various changes and modifications may be made without departing from the scope of the present invention.

Claims (11)

delete

Defining a plurality of objects appearing in a plurality of images captured by a plurality of cameras, respectively, and then extracting color information of the objects to define respective objects;
Storing image data in a database based on color information of an object;
Retrieving image data stored in the DB as a color keyword, and extracting all images including objects having desired color information;
Analyzing the extracted image to detect a target object, and confirming additional information about the detected target object, in an object-based searching method of an intelligent monitoring system,
wherein defining the object comprises:
Detecting a plurality of objects appearing in an image input from a plurality of cameras and classifying them by objects;
Identifying each object based on object information such as the center of gravity, silhouette and size of each classified object; And
Extracting color information of the specified object, and defining the object according to the extracted color information.
3. The method of claim 2,
Wherein the center of gravity of the object is defined by the following equation:
$$m_x = \frac{1}{N}\sum_{i=1}^{N} Obj(x_i), \qquad m_y = \frac{1}{N}\sum_{i=1}^{N} Obj(y_i)$$

Here, m_x and m_y represent the center of gravity along the x and y axes of the object, respectively; Obj(x_i) and Obj(y_i) denote the x and y coordinates of the i-th pixel of the object; and N denotes the total number of pixels.
3. The method of claim 2,
An object-based retrieval method of an intelligent monitoring system, wherein an outline component of an object at the time of extracting a silhouette of an object is defined by the following equation:
$$B = \{\, p_1, p_2, \ldots, p_n \,\}$$

Here, B is the set of outline components of the object, and p_i is the coordinate of each outline component.
3. The method of claim 2,
An object-based retrieval method of an intelligent surveillance system, wherein the following equation is used to determine, from the movement trajectory of an object, whether objects detected in consecutive images are the same:

$$d = \sqrt{(m_x - m'_x)^2 + (m_y - m'_y)^2}$$

Here, d represents the distance between the object detected in the previous image and the object detected in the current image, (m_x, m_y) is the center of gravity detected in the previous image, and (m'_x, m'_y) is the center of gravity detected in the current image.
3. The method of claim 2,
Wherein the object is determined to be an object only when the size of the object is equal to or greater than a predetermined size, and if not, the object is determined to be noise and ignored.
The method according to claim 6,
The object-based retrieval method of an intelligent surveillance system, wherein the determination of the noise and the object is performed using the following equation:

$$\text{Obj} = \begin{cases} \text{Object}, & Obj\_S \geq S\_T \\ \text{Noise}, & Obj\_S < S\_T \end{cases}$$

Here, Obj_S means the size of the object, and S_T means the size of the reference object.
3. The method of claim 2,
The object-based retrieval method of an intelligent surveillance system, characterized by setting six colors of red, green, blue, white, black, and yellow as characteristic patterns for analyzing color patterns of objects.
9. The method of claim 8,
Wherein an average and a variance of the color channel in each pixel are obtained by the following equation.
$$\mu = \frac{R + G + B}{3}, \qquad \sigma^2 = \frac{(R-\mu)^2 + (G-\mu)^2 + (B-\mu)^2}{3}$$

Here, μ is the average color value and σ² is the variance value, and for color extraction the color having the largest distribution among the R, G, and B components is selected.
3. The method of claim 2,
Wherein the representative color of the selected final object is determined as a color having the largest distribution among the colors determined in all the pixels in the range of 20 to 50% of the extracted object.
11. The method of claim 10,
Wherein the representative color of the final object is determined by the following equation.
$$C = \begin{cases} \arg\max\,(R, G, B), & \sigma^2 \geq Th_{var} \\ \text{White}, & \sigma^2 < Th_{var} \text{ and } \mu \geq Th_{mean} \\ \text{Black}, & \sigma^2 < Th_{var} \text{ and } \mu < Th_{mean} \end{cases}$$

Here, C is the selected color; Th_var is the variance threshold (300 to 400) for distinguishing between contrast (achromatic) and color; and Th_mean is the threshold (100 to 150) for separating white from black.
KR1020150070907A 2015-05-21 2015-05-21 Object-based Searching Method for Intelligent Surveillance System KR101547255B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150070907A KR101547255B1 (en) 2015-05-21 2015-05-21 Object-based Searching Method for Intelligent Surveillance System


Publications (1)

Publication Number Publication Date
KR101547255B1 true KR101547255B1 (en) 2015-08-25

Family

ID=54061926

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150070907A KR101547255B1 (en) 2015-05-21 2015-05-21 Object-based Searching Method for Intelligent Surveillance System

Country Status (1)

Country Link
KR (1) KR101547255B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101690050B1 (en) 2015-10-30 2016-12-27 한국과학기술연구원 Intelligent video security system
KR101765722B1 (en) * 2016-01-12 2017-08-08 소프트온넷(주) System and method of generating narrative report based on cognitive computing for recognizing, tracking, searching and predicting vehicles and person attribute objects and events
KR20180109378A (en) 2017-03-28 2018-10-08 주식회사 리얼타임테크 Appartus for saving and managing of object-information for analying image data
KR20190060161A (en) 2017-11-24 2019-06-03 주식회사 리얼타임테크 Apparatus for analyzing Multi-Distributed Video Data

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101113515B1 (en) 2011-09-20 2012-02-29 제주특별자치도 Video index system using surveillance camera and the method thereof


Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20180712

Year of fee payment: 4

FPAY Annual fee payment

Payment date: 20190808

Year of fee payment: 5