KR101826669B1 - System and method for video searching - Google Patents

System and method for video searching

Info

Publication number
KR101826669B1
Authority
KR
South Korea
Prior art keywords
moving object
moving
image frame
module
descriptor
Application number
KR1020160017207A
Other languages
Korean (ko)
Other versions
KR20170095599A (en)
Inventor
이규원
정건희
Original Assignee
대전대학교 산학협력단
Application filed by 대전대학교 산학협력단
Priority to KR1020160017207A
Publication of KR20170095599A
Application granted
Publication of KR101826669B1

Classifications

    • G06F17/30023
    • G06F17/30058
    • G06F17/30784
    • G06F17/3079
    • G06F17/30811
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8405Generation or processing of descriptive data, e.g. content descriptors represented by keywords
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

Disclosed are a moving image retrieval system and a moving image retrieval method capable of easily retrieving a moving object (a thing or a person) captured in a moving image photographed by a video camera such as a CCTV camera. According to an aspect of the present invention, the system comprises: an image frame receiving module that receives each image frame periodically photographed by a video camera, each image frame being divided into a plurality of predefined divided regions; a moving object identification module that identifies moving objects from a processing target image frame when the image frame receiving module receives the processing target image frame; a position determination module that determines the position, on the processing target image frame, of each moving object identified by the moving object identification module; a descriptor management module that, for each moving object identified by the moving object identification module, adds a description element including the position information of the moving object on the processing target image frame to a descriptor generated in a predetermined storage space to manage the movement history of that moving object; a search condition input module that receives a search condition including identification information of at least some of the plurality of divided regions; and a condition determination module that determines, among the description elements included in the descriptors corresponding to the moving objects created in the storage space, search target description elements that satisfy the search condition.

Description

System and method for video searching

The present invention relates to a moving image retrieval system and a moving image retrieval method, and more particularly, to a moving image retrieval system and method capable of easily retrieving a portion in which an object (a thing or a person) moving in a moving image photographed by a video camera such as a CCTV camera is captured.

Security and surveillance are emerging as very important issues in a society where crime and terrorism occur frequently. Most places where security and surveillance are important, such as airports, stations, banks, and government offices, are equipped with closed-circuit television (CCTV). Through these closed circuits, a person either watches in real time or the captured images are stored in the form of a moving picture. Such videos can later be used for purposes such as investigating an incident, but playing back the stored video takes a very long time. Therefore, various methods of searching for a specific part of a moving picture are being studied.

Conventional image retrieval methods include semantic-based and content-based methods in addition to the existing text-based methods. Semantic-based retrieval expresses images as abstract features and analyzes the syntax or semantics of query terms. Content-based image retrieval automatically extracts and compares features from the image itself, enabling objective and automated image retrieval. The various features existing in an image can be classified into global features and local features.

A color correlogram is an algorithm that uses global features. A color correlogram expresses the probability distribution of colors occurring between pixels a certain distance apart in the whole image. Because it includes color change information of pixels at a certain distance, a color correlogram can capture both color and edge information. To obtain good image search results, however, color correlograms must be computed at various distances, and applying the color correlogram to the whole image takes a great deal of computation time. In addition, because the computed data must be stored, it is inefficient in terms of data management. That is, when a descriptor is designed using global features, the amount of data the descriptor must handle becomes large and the processing time becomes inefficient.

To address these drawbacks, descriptors using local regions have been proposed. A local descriptor is a feature-point-based technique that emphasizes robustness to distortion in an image. A local patch is extracted using pixel information around a feature point of the image, a keypoint containing feature information is extracted from the local patch, and the local descriptor is extracted using the extracted keypoint. The performance of feature-point-based image matching fundamentally depends on the robustness of the feature points and local descriptors to image distortion (rotation, enlargement, reduction, brightness change, noise, etc.). The extracted local descriptor can act as an index by attaching only the keypoints of the current image to the image to be searched. Therefore, the local descriptor offers relatively high processing speed and produces relatively little descriptor data, making the data easy to manage. However, images of different sizes or entirely different images may produce similar keypoints and cause search errors. That is, a descriptor using local regions has the disadvantage that erroneous results unrelated to the query may occur at the time of image retrieval.

SUMMARY OF THE INVENTION It is an object of the present invention to provide a moving image retrieval system and a moving image retrieval method that can quickly and accurately determine and retrieve a portion in which an object moving in a moving image photographed by a video camera is captured.

According to an aspect of the present invention, there is provided a moving image retrieval system comprising: an image frame receiving module that receives each image frame periodically photographed by a video camera, each image frame being divided into a plurality of predefined divided regions; a moving object identification module that identifies moving objects from a processing target image frame when the image frame receiving module receives the processing target image frame; a position determination module that determines the position, on the processing target image frame, of each moving object identified by the moving object identification module; a descriptor management module that, for each moving object identified by the moving object identification module, adds a description element including the position information of the moving object on the processing target image frame to a descriptor generated in a predetermined storage space to manage the movement history of that moving object; a search condition input module that receives a search condition including identification information of at least some of the plurality of divided regions; and a condition determination module that determines, among the description elements included in the descriptors corresponding to the moving objects created in the storage space, search target description elements that satisfy the search condition.

In one embodiment, the moving picture retrieval system may include a retrieval module for retrieving an image frame corresponding to the retrieval condition based on the retrieval target description element.

In one embodiment, the moving picture search system may further include a moving object determination module that determines, among the moving objects identified by the moving object identification module, a new moving object that is not present in a preceding image frame, which is the previous frame of the processing target image frame, and a descriptor generation module that generates a descriptor for the new moving object in the storage space.

In one embodiment, the moving object determination module may determine, among the moving objects extracted from the image frame to be processed, a moving object that is separated by a predetermined distance or more from the moving objects extracted from the preceding image frame to be a new moving object.

In one embodiment, the moving picture search system further includes a background learning module that generates a background model by performing background learning on learning target image frames, which are at least some of the image frames photographed by the video camera, and the moving object identification module can identify the moving object from the image frame to be processed based on the generated background model.

In one embodiment, the background learning module generates the background model by calculating a representative value of each pixel, and the moving object identification module may determine, for each pixel constituting the processing target image frame, that the pixel is included in a moving object when the difference between the value of the pixel and the representative value of the pixel is greater than or equal to a predetermined threshold value.

In one embodiment, the position determination module determines the center of gravity of each moving object identified by the moving object identification module and may determine the position of the center of gravity of the moving object to be the position of the moving object.

In one embodiment, the video camera may be a fixed camera for photographing a predetermined place.

According to another aspect of the present invention, there is provided a moving image retrieval system comprising: an image frame receiving module that receives each image frame periodically photographed by a video camera, each image frame being divided into a plurality of predefined divided regions; a moving object identification module that identifies moving objects from a processing target image frame when the image frame receiving module receives the processing target image frame; a position determination module that determines the position, on the processing target image frame, of each moving object identified by the moving object identification module; a descriptor management module that, for each moving object identified by the moving object identification module, adds a description element including the position information of the moving object on the processing target image frame to a descriptor generated in a predetermined storage space to manage the movement history of that moving object; a search condition input module that receives a search condition including identification information of at least some of the plurality of divided regions; and a search module that searches for an image frame corresponding to the search condition based on the description elements included in the descriptors corresponding to the moving objects created in the storage space.

According to another aspect of the present invention, there is provided a moving image retrieval method comprising: an image frame receiving step in which a moving image retrieval system receives each image frame periodically photographed by a video camera, each image frame being divided into a plurality of predefined divided regions; a moving object identification step in which the moving image retrieval system identifies moving objects from a processing target image frame when the processing target image frame is received in the image frame receiving step; a position determination step in which the moving image retrieval system determines the position, on the processing target image frame, of each moving object identified in the moving object identification step; a descriptor management step in which the moving image retrieval system adds, for each moving object identified in the moving object identification step, a description element including the position information of the moving object on the processing target image frame to a descriptor generated in a predetermined storage space to manage the movement history of that moving object; a search condition input step in which the moving image retrieval system receives a search condition including identification information of at least some of the plurality of divided regions; and a condition determination step in which the moving image retrieval system determines, among the description elements included in the descriptors corresponding to the moving objects created in the storage space, search target description elements that satisfy the search condition.

In one embodiment, the moving image searching method may further include a searching step in which the moving image searching system searches for an image frame corresponding to the search condition, based on the search target description element.

In one embodiment, the moving picture search method may further include a moving object determination step in which the moving picture search system determines, among the moving objects identified in the moving object identification step, a new moving object that is not present in a preceding image frame, which is the previous frame of the processing target image frame, and a descriptor generation step in which the moving picture search system generates a descriptor for the new moving object in the storage space.

In one embodiment, the moving object determination step may include determining, among the moving objects extracted from the processing target image frame, that a moving object separated by a predetermined distance or more from the moving objects extracted from the preceding image frame is a new moving object.

In one embodiment, the moving picture search method further includes a background learning step of generating a background model by performing background learning on learning target image frames, which are at least some of the image frames captured by the video camera, and the moving object identification step includes identifying the moving object from the image frame to be processed based on the generated background model.

In one embodiment, the background learning step includes calculating a representative value of each pixel to generate the background model, and the moving object identification step includes determining, for each pixel constituting the image frame to be processed, that the pixel is included in a moving object when the difference between the value of the pixel and the representative value of the pixel is equal to or greater than a predetermined threshold value.

In one embodiment, the position determination step includes determining the center of gravity of each of the moving objects identified in the moving object identification step, and determining the position of the center of gravity of the moving object to be the position of the moving object.

In one embodiment, the video camera may be a fixed camera for photographing a predetermined place.

According to another aspect of the present invention, there is provided a moving image retrieval method comprising: an image frame receiving step in which a moving image retrieval system receives each image frame periodically photographed by a video camera, each image frame being divided into a plurality of predefined divided regions; a moving object identification step in which the moving image retrieval system identifies moving objects from a processing target image frame when the processing target image frame is received in the image frame receiving step; a position determination step in which the moving image retrieval system determines the position, on the processing target image frame, of each moving object identified in the moving object identification step; a descriptor management step in which the moving image retrieval system adds, for each moving object identified in the moving object identification step, a description element including the position information of the moving object on the processing target image frame to a descriptor generated in a predetermined storage space to manage the movement history of that moving object; a search condition input step in which the moving image retrieval system receives a search condition including identification information of at least some of the plurality of divided regions; and a search step in which the moving image retrieval system searches for an image frame corresponding to the search condition based on the description elements included in the descriptors corresponding to the moving objects created in the storage space.

According to another aspect of the present invention, there is provided a computer program installed in a data processing apparatus and stored in a recording medium for performing the above-described method.

According to another aspect of the present invention, there is provided a moving picture retrieval system comprising a processor and a memory storing a computer program executed by the processor, wherein the computer program, when executed by the processor, causes the moving picture retrieval system to perform the above-described method.

Since the moving picture search system according to an embodiment of the present invention writes descriptors representing the movement of moving objects based on the feature points of the moving objects, it has the advantage of providing a retrieval speed and accuracy at the level of content-based image retrieval. In addition, since the search is performed using search conditions expressed in terms of the plurality of divided regions, a moving image portion matching a search condition concerning the trajectory of a moving object can be found quickly.

BRIEF DESCRIPTION OF THE DRAWINGS A brief description of each drawing is provided for a fuller understanding of the drawings referred to in the detailed description of the invention.
FIG. 1 is a block diagram schematically showing the configuration of a moving picture search system according to an embodiment of the present invention.
FIG. 2 is a diagram for explaining the divided regions constituting an image frame.
FIG. 3 is a diagram for explaining a method of determining a representative value of a pixel in a moving image search system according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating an example of the background of an image frame generated by a moving image search system according to an embodiment of the present invention.
FIG. 5A is a view showing an example of a specific image frame (the i-th image frame), and FIG. 5B is a view showing the centers of gravity of the moving objects included in the image frame shown in FIG. 5A.
FIG. 6A is a view showing an example of the next image frame (the (i+1)-th image frame) after the image frame shown in FIG. 5A, and FIG. 6B is a view showing the center of gravity of the moving object included in the image frame shown in FIG. 6A.
FIG. 7A is a view showing an example of the next image frame (the (i+2)-th image frame) after the image frame shown in FIG. 6A, and FIG. 7B is a view showing the centers of gravity of the moving objects included in the image frame shown in FIG. 7A.
FIGS. 8A to 8C are diagrams illustrating descriptors generated and managed by a moving image search system according to an exemplary embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The present invention is capable of various modifications and various embodiments, and specific embodiments are illustrated in the drawings and described in detail in the detailed description. It is to be understood, however, that the invention is not limited to the specific embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.

The terms first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.

The terminology used in this application is used only to describe a specific embodiment and is not intended to limit the invention. The singular expressions include plural expressions unless the context clearly dictates otherwise.

In this specification, the terms "comprises" or "having" and the like indicate the presence of stated features, numbers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

Also, in this specification, when any one element 'transmits' data to another element, the element may transmit the data directly to the other element, or may transmit the data to the other element through at least one further element. Conversely, when one element 'directly transmits' data to another element, it means that the data is transmitted to the other element without passing through any further element.

Hereinafter, the present invention will be described in detail through embodiments of the present invention with reference to the accompanying drawings. Like reference numerals in the drawings denote like elements.

FIG. 1 is a block diagram schematically showing the configuration of a moving picture search system according to an embodiment of the present invention.

Referring to FIG. 1, the moving picture search system 100 includes an image frame receiving module 110, a background learning module 120, a moving object identification module 130, a position determination module 140, a moving object determination module 150, a descriptor generation module 160, a descriptor management module 170, a search condition input module 180, a condition determination module 190, and a search module 195. According to an embodiment of the present invention, some of the above-described components may not be essential to the implementation of the present invention, and the moving picture search system 100 may include more components than these. For example, the moving picture search system 100 may further include a control module (not shown) for controlling the functions and/or resources of the other components of the moving picture search system 100 (e.g., the image frame receiving module 110, the background learning module 120, the moving object identification module 130, the position determination module 140, the moving object determination module 150, the descriptor generation module 160, the descriptor management module 170, the search condition input module 180, the condition determination module 190, and/or the search module 195).

The moving picture retrieval system 100 may include the hardware resources and/or software necessary to implement the technical idea of the present invention, and does not necessarily mean one physical component or one device. That is, the moving picture retrieval system 100 may mean a logical combination of hardware and/or software provided to implement the technical idea of the present invention, and, if necessary, may be implemented as a set of logical components installed in devices separated from each other that together realize the technical idea of the present invention. In addition, the moving picture retrieval system 100 may mean a set of components implemented separately for each function or role for implementing the technical idea of the present invention. For example, the image frame receiving module 110, the background learning module 120, the moving object identification module 130, the position determination module 140, the moving object determination module 150, the descriptor generation module 160, the descriptor management module 170, the search condition input module 180, the condition determination module 190, and/or the search module 195 may be located in different physical devices or in the same physical device. According to an embodiment, the detailed components constituting each individual module, such as the image frame receiving module 110, the background learning module 120, the moving object identification module 130, the position determination module 140, the moving object determination module 150, the descriptor generation module 160, the descriptor management module 170, the search condition input module 180, the condition determination module 190, and/or the search module 195, may also be located in different physical devices, and the detailed components located in different physical devices may be organically combined with each other to realize the function performed by each individual module.

In this specification, a module may mean a functional and structural combination of hardware for carrying out the technical idea of the present invention and software for driving the hardware. For example, the module may mean a logical unit of predetermined code and the hardware resources for executing that code, and does not necessarily mean physically connected code or a single kind of hardware, as can be easily inferred by an average expert in the technical field of the present invention.

The moving picture retrieval system 100 may be connected to the video camera 200 by wire or wirelessly, and the image frame receiving module 110 may receive the image frames periodically photographed by the video camera 200.

The video camera 200 may be a fixed camera for photographing a predetermined place (for example, a street, the inside of a building, etc.). For example, the video camera 200 may be a closed-circuit television (CCTV) camera.

The video camera 200 may photograph image frames at a predetermined period (frame rate) and provide the photographed image frames to the moving image search system 100. Meanwhile, the rate at which the image frame receiving module 110 receives image frames may differ from the frame rate at which the video camera 200 captures them. Accordingly, the image frame receiving module 110 may be implemented to receive all of the image frames captured by the video camera 200, or, according to an embodiment, to skip several image frames after receiving a specific image frame and then receive the next image frame.

Each video frame captured by the video camera 200 may be divided into a plurality of predefined divided regions. The plurality of predefined divided regions may be applied equally to all frames captured by the video camera 200.

The fact that one video frame is divided into a plurality of divided regions does not necessarily mean that the video frame is physically divided into the plurality of divided regions; it is sufficient that the moving picture search system 100 treats each image frame as if it were logically divided into the plurality of divided regions in order to implement the technical idea of the present invention.

FIG. 2 is a diagram for explaining the divided regions constituting an image frame photographed by the video camera 200.

Although FIG. 2 shows an example in which one image frame is divided into 3 × 3 = 9 divided regions of the same size, according to an embodiment, the sizes of the divided regions may differ from one another, and the image frame may be divided into a different number of divided regions.

Referring to FIG. 2, each video frame captured by the video camera 200 may be divided into 3 × 3 divided areas, that is, divided areas 1 to 9 (11 to 19).

In the following description, it is assumed that each video frame captured by the video camera 200 is divided into divided regions defined as shown in FIG. 2 for convenience of explanation.
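
The mapping of a pixel position to one of the divided regions can be handled purely logically, without physically splitting the frame. Below is a minimal sketch of such a mapping, assuming a 3 × 3 grid numbered 1 to 9 from left to right and top to bottom as in FIG. 2; the function name and parameters are illustrative and not taken from the patent.

```python
def region_id(x, y, width, height, cols=3, rows=3):
    """Return the ID (1 .. cols*rows) of the divided region containing pixel (x, y),
    numbering the regions left to right, top to bottom (assumed numbering)."""
    col = min(x * cols // width, cols - 1)
    row = min(y * rows // height, rows - 1)
    return row * cols + col + 1

# For a 640x480 frame, the top-left pixel falls in divided region 1
# and the bottom-right pixel in divided region 9.
assert region_id(0, 0, 640, 480) == 1
assert region_id(639, 479, 640, 480) == 9
```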

Referring again to FIG. 1, the moving object identification module 130 may identify moving objects from the processing target image frame when the image frame receiving module 110 receives an image frame (the processing target image frame).

The moving object identification module 130 can identify moving objects, as opposed to the background, by separating the background from the image frame to be processed.

In order to allow the moving object identification module 130 to separate the background from the image frame to be processed, the background learning module 120 may generate a background model in advance, and the moving object identification module 130 may identify the moving object from the image frame to be processed based on the generated background model.

The background learning module 120 may perform background learning on learning target image frames, which are at least some of the image frames captured by the video camera 200, to generate a background model.

The learning target image frames may be a plurality of image frames photographed by the video camera 200, may be a plurality of image frames previously photographed separately for background learning, or may be at least some of the photographed image frames.

The background learning module 120 may construct the background model by accumulating, for each pixel, the absolute value of the difference between the value of the pixel included in one learning target image frame and the value of the corresponding pixel included in the next learning target image frame. The absolute value can be calculated by the following equation.

h_t(x, y) = |f_t(x, y) - f_{t-1}(x, y)|

In the above equation, h_t(x, y) denotes the absolute value of the brightness difference of the pixel located at coordinate <x, y>, f_t(x, y) denotes the brightness of the pixel located at coordinate <x, y> of the current learning target image frame, and f_{t-1}(x, y) denotes the brightness of the pixel located at coordinate <x, y> of the previous learning target image frame.

The background learning module 120 calculates a representative value for each pixel using the calculated absolute differences and thereby obtains a statistical model of the background.

FIG. 3 is a view for explaining the above process.

Referring to FIG. 3, the background learning module 120 may obtain the absolute value of the difference between the pixel P_1 located at coordinate <x, y> of the learning target image frame F_1 and the pixel P_2 located at coordinate <x, y> of the learning target image frame F_2, and the absolute value of the difference between the pixel P_2 and the pixel P_3 located at coordinate <x, y> of the learning target image frame F_3. In the same manner, after obtaining the absolute value of the difference between the pixel P_{N-1} located at coordinate <x, y> of the learning target image frame F_{N-1} and the pixel P_N located at coordinate <x, y> of the learning target image frame F_N, the background learning module 120 can calculate a representative value for the pixel at coordinate <x, y> using these values. The background learning module 120 may calculate representative values for the pixels located at the other coordinates of the image frame in the same manner.
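
As a rough illustration of the background learning described above, the sketch below accumulates the inter-frame absolute differences h_t(x, y) over grayscale learning frames and derives a per-pixel representative value. The patent does not specify the exact statistic, so taking the mean brightness over the frames in which a pixel was stable is an assumption, as are the function name and threshold.

```python
import numpy as np

def learn_background(frames, stability_thresh=10):
    """Build a per-pixel background model from a sequence of grayscale (HxW)
    learning target image frames captured by a fixed camera."""
    f = np.asarray(frames, dtype=np.int16)      # int16 avoids uint8 wrap-around
    diffs = np.abs(f[1:] - f[:-1])              # h_t(x, y) = |f_t(x, y) - f_{t-1}(x, y)|
    stable = diffs < stability_thresh           # pixels that barely changed between frames
    counts = stable.sum(axis=0)
    # representative value: mean brightness over the frames in which the pixel was
    # stable, falling back to the overall mean for pixels that were never stable
    rep = np.where(counts > 0,
                   (f[1:] * stable).sum(axis=0, dtype=np.float64) / np.maximum(counts, 1),
                   f.mean(axis=0))
    return rep.astype(np.uint8)
```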

In addition, the background learning module 120 may generate a background model using various conventional background learning techniques such as a Gaussian mixture model, a particle filter, and on-line boosting.

For each pixel constituting the processing target image frame, if the difference between the value of the pixel and the representative value of that pixel in the background model is greater than or equal to a predetermined threshold value, the moving object identification module 130 may determine that the pixel is included in a moving object. After determining the pixels included in moving objects in this manner, the moving object identification module 130 may determine each group of connected pixels to be one moving object.
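
A hypothetical sketch of this identification step follows: pixels whose difference from the background representative value reaches the threshold are marked as foreground, and connected groups of such pixels are returned as moving objects. The use of scipy.ndimage for connected-component labeling and the minimum-size noise filter are implementation assumptions, not details from the patent.

```python
import numpy as np
from scipy import ndimage

def identify_moving_objects(frame, background, thresh=30, min_pixels=50):
    """Return one array of (x, y) pixel coordinates per identified moving object."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff >= thresh                                            # foreground pixels
    labeled, num = ndimage.label(mask, structure=np.ones((3, 3)))    # 8-connected groups
    objects = []
    for label in range(1, num + 1):
        ys, xs = np.nonzero(labeled == label)
        if xs.size >= min_pixels:                                    # assumed noise filter
            objects.append(np.stack([xs, ys], axis=1))
    return objects
```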

The location determination module 140 may determine the location of each of the moving objects identified by the moving object identification module on the processing target image frame.

In one embodiment, the position determination module 140 determines the center of gravity of each moving object identified by the moving object identification module 130 and determines the position of the center of gravity of the moving object to be the position of the moving object. The x-coordinate of the center of gravity of a specific moving object can be calculated by dividing the sum of the x-coordinates of the pixels constituting the moving object by the total number of pixels, and the y-coordinate of the center of gravity can be calculated by dividing the sum of the y-coordinates of the pixels constituting the moving object by the total number of pixels.
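
The center-of-gravity computation described above can be written directly, for example as follows (an illustrative sketch assuming the moving object is given as a list of (x, y) pixel coordinates):

```python
def center_of_gravity(pixels):
    """Center of gravity of a moving object: the sums of the x and y coordinates
    of its pixels, each divided by the total number of pixels."""
    sum_x = sum(x for x, _ in pixels)
    sum_y = sum(y for _, y in pixels)
    return sum_x / len(pixels), sum_y / len(pixels)

# A 2x2 block of pixels has its center of gravity at (0.5, 0.5).
assert center_of_gravity([(0, 0), (1, 0), (0, 1), (1, 1)]) == (0.5, 0.5)
```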

The moving object determination module 150 can classify the moving objects identified by the moving object identification module 130. That is, for each moving object identified by the moving object identification module 130, the moving object determination module 150 can determine whether it is a new moving object that does not exist in the preceding image frame, which is the previous frame of the processing target image frame, or an existing moving object that exists in the preceding image frame, and can also determine a missing moving object that is present in the preceding image frame but does not exist in the image frame to be processed.

The moving object determination module 150 may determine a moving object that is separated from the moving object extracted from the preceding image frame by a predetermined distance or more among the moving objects extracted from the image frame to be processed as a new moving object. Also, the moving object determination module 150 may determine that the moving object located within a predetermined distance from any one of the moving objects extracted from the preceding image frame among the moving objects extracted from the image frame to be processed is an existing moving object. Also, a moving object that does not correspond to any of the moving objects extracted from the processing target image frame among the moving objects extracted from the preceding image frame may be determined as a missing moving object.
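
One plausible way to realize this classification is a nearest-centroid association with a distance threshold, as sketched below. The greedy matching strategy and the threshold value are assumptions made for illustration; the patent only states that an object farther than a predetermined distance from every object in the preceding frame is treated as new.

```python
from math import dist  # Python 3.8+

def classify_objects(prev_positions, curr_positions, max_move=40.0):
    """Match moving objects between a preceding frame and the processing target frame
    by centroid distance. Returns (existing, new, missing): existing maps the index of
    a current object to the previous object it continues; new and missing are the
    unmatched current and previous indices, respectively."""
    existing, used_prev = {}, set()
    for ci, cpos in enumerate(curr_positions):
        candidates = [(dist(cpos, ppos), pi)
                      for pi, ppos in enumerate(prev_positions) if pi not in used_prev]
        if candidates:
            d, pi = min(candidates)
            if d <= max_move:          # close enough: an existing moving object
                existing[ci] = pi
                used_prev.add(pi)
    new = [ci for ci in range(len(curr_positions)) if ci not in existing]
    missing = [pi for pi in range(len(prev_positions)) if pi not in used_prev]
    return existing, new, missing
```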

Hereinafter, referring to FIG. 4 and FIGS. 5A to 7B, an example in which the moving image search system 100 identifies moving objects, determines the positions of the identified moving objects, and classifies each identified moving object will be described.

FIG. 4 is a diagram illustrating an example of the background 20 based on the background model generated by the background learning module 120.

FIG. 5A is a view showing an example of a specific image frame (the i-th image frame), and FIG. 5B is a view showing the centers of gravity of the moving objects included in the image frame shown in FIG. 5A.

The moving object identification module 130 can separate the background 20 shown in FIG. 4 from the i-th image frame 30 shown in FIG. 5A and identify two moving objects 31 and 32. Then, the position determination module 140 can determine the positions of the centers of gravity of the two moving objects 31 and 32, which are marked at the specific positions 33 and 34 on the image frame, as shown in FIG. 5B.

FIG. 6A is a view showing an example of the next image frame (the (i+1)-th image frame) after the image frame shown in FIG. 5A, and FIG. 6B is a view showing the center of gravity of the moving object included in the image frame shown in FIG. 6A.

The moving object identification module 130 can separate the background 20 shown in FIG. 4 from the (i+1)-th image frame 40 shown in FIG. 6A and identify one moving object 41. Then, the position determination module 140 can determine the position of the center of gravity of the moving object 41, which is marked at the specific position 42 on the image frame, as shown in FIG. 6B.

Meanwhile, since the position 42 of the moving object 41 identified from the (i+1)-th image frame 40 is within a certain distance of the position 34 of the moving object 32 identified from the i-th image frame 30, the moving object determination module 150 may determine that the moving object 41 of the (i+1)-th image frame 40 is the moving object 32 of the i-th image frame 30 that has moved from the position 34 to the position 42. That is, the moving object determination module 150 may determine that the moving object 41 is an existing moving object.

On the other hand, since no moving object corresponding to the moving object 31 identified from the i-th image frame 30 is present in the (i+1)-th image frame 40, the moving object determination module 150 may determine that the moving object 31 identified from the i-th image frame 30 is a missing moving object that has disappeared from the (i+1)-th image frame 40.

FIG. 7A is a view showing an example of the next image frame (the (i+2)-th image frame) after the image frame shown in FIG. 6A, and FIG. 7B is a view showing the centers of gravity of the moving objects included in the image frame shown in FIG. 7A.

The moving object identification module 130 can separate the background 20 shown in FIG. 4 from the (i+2)-th image frame 50 shown in FIG. 7A and identify two moving objects 51 and 52. Then, the position determination module 140 can determine the positions of the centers of gravity of the two moving objects 51 and 52, which are marked at the specific positions 53 and 54 on the image frame, as shown in FIG. 7B.

Meanwhile, since the position 53 of the moving object 51 identified from the (i+2)-th image frame 50 is within a certain distance of the position 42 of the moving object 41 identified from the (i+1)-th image frame 40, the moving object determination module 150 may determine that the moving object 51 of the (i+2)-th image frame 50 is the moving object 41 of the (i+1)-th image frame 40 that has moved from the position 42 to the position 53. That is, the moving object determination module 150 may determine that the moving object 51 is an existing moving object.

On the other hand, since the moving object 52 identified from the (i+2)-th image frame 50 is separated by a predetermined distance or more from every moving object (the moving object 41) on the (i+1)-th image frame 40, the moving object determination module 150 may determine that the moving object 52 identified from the (i+2)-th image frame 50 is a new moving object that newly appears in the (i+2)-th image frame 50.

Referring again to FIG. 1, the descriptor generation module 160 may generate a descriptor for a new moving object in a predetermined storage space.

The descriptor may be a predetermined data structure for managing the movement history of a moving object. A descriptor may be generated for each moving object appearing in the image frames photographed by the video camera 200, and may store identification information of the moving object corresponding to the descriptor. The descriptor may also store information on the image frame in which the moving object corresponding to the descriptor appears, information on the image frame in which the moving object disappears, and/or predetermined information from which the movement trajectory of the moving object corresponding to the descriptor can be derived.

Meanwhile, the descriptors may be implemented in the form of a linked list, an array, a queue, a stack, or the like.

For each of the moving objects identified by the moving object identification module 130, the descriptor management module 170 may add, to the descriptor corresponding to the moving object, a description element including the position information of the moving object on the processing target image frame. Accordingly, by tracking the description elements included in a specific descriptor, the movement trajectory of the moving object corresponding to that descriptor can be grasped. Meanwhile, according to an embodiment, the description element may further include identification information of the processing target image frame in which the moving object appears.

Meanwhile, the position information of the moving object included in the description element may be the coordinate value of the pixel corresponding to the center of gravity of the moving object or, according to an embodiment, may be the identification information of the divided region that includes the pixel corresponding to the center of gravity of the moving object.
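
A descriptor of this kind could be modeled, for instance, as a small record per moving object holding an ordered list of description elements, each carrying a frame number and the ID of the divided region containing the object's center of gravity. The sketch below is one such modeling; the field and type names are illustrative, and storing the region ID rather than the pixel coordinate is just one of the two options mentioned above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DescriptionElement:
    frame_number: int      # image frame in which the moving object appears
    region_id: int         # divided region containing the object's center of gravity

@dataclass
class Descriptor:
    object_id: int                                        # identification of the moving object
    elements: List[DescriptionElement] = field(default_factory=list)
    disappeared_at: Optional[int] = None                  # frame in which the object went missing

    def add_element(self, frame_number: int, region_id: int) -> None:
        self.elements.append(DescriptionElement(frame_number, region_id))

    def trajectory(self) -> List[int]:
        """Sequence of divided-region IDs visited by the moving object."""
        return [e.region_id for e in self.elements]
```

Setting disappeared_at would correspond to recording that the moving object has gone missing from a later frame.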

FIGS. 8A to 8C are diagrams illustrating descriptors generated and managed by a moving image search system according to an exemplary embodiment of the present invention. More specifically, FIG. 8A is a diagram illustrating the state of the descriptors corresponding to the processing target image frame shown in FIG. 5A, FIG. 8B is a diagram illustrating the state of the descriptors corresponding to the processing target image frame shown in FIG. 6A, and FIG. 8C is a diagram illustrating the state of the descriptors corresponding to the processing target image frame shown in FIG. 7A.

The descriptors illustrated in FIGS. 8A to 8C are merely examples, and the descriptors maintained and managed by the moving image search system 100 according to the technical idea of the present invention may be implemented in various forms capable of storing information on the image frame in which the corresponding object appears, information on the image frame in which the corresponding object disappears, and information from which the movement trajectory of the object corresponding to the descriptor can be derived.

Referring to FIG. 8A, since two moving objects 31 and 32 are identified from the i-th image frame 30 shown in FIG. 5A, the descriptor management module 170 can add the description element 61 to the descriptor 60 corresponding to the moving object 31 and the description element 71 to the descriptor 70 corresponding to the moving object 32.

Meanwhile, the description element 61 may include the frame number (i) of the i-th image frame and the information (<1,3>) on the position 33 of the moving object 31 on the i-th image frame 30, and the description element 71 may include the frame number (i) of the i-th image frame and the information on the position 34 of the moving object 32 on the i-th image frame 30.

Although FIGS. 8A to 8C illustrate the case where the position information of the moving object included in a description element is the identification information (e.g., <1,3>) of the divided region that includes the pixel corresponding to the center of gravity of the moving object, the position information of the moving object included in the description element may instead be the coordinate value of the pixel corresponding to the center of gravity of the moving object.

Referring to FIG. 8B, one moving object 41 is identified from the (i+1)-th image frame 40 shown in FIG. 6A, and the identified moving object 41 is determined to be an existing moving object, that is, the same moving object as the moving object 32 of the i-th image frame. Therefore, the descriptor management module 170 may add the description element 72 to the descriptor 70 corresponding to the moving object 41.

Meanwhile, the description element 72 may store the frame number (i+1) of the (i+1)-th image frame and the information (<2,2>) on the position 42 of the moving object 41 on the (i+1)-th image frame.

Also, as described above with reference to FIG. 6A, when the moving object determination module 150 determines that the moving object 31 on the i-th image frame 30 is a missing moving object that has disappeared from the (i+1)-th image frame 40, the descriptor management module 170 may add the information 62, indicating that the moving object 31 has disappeared from the (i+1)-th image frame 40, to the descriptor 60 corresponding to the moving object 31.

Referring to FIG. 8C, two moving objects 51 and 52 are identified from the (i+2)-th image frame 50 shown in FIG. 7A, and the moving object 51 is determined to be the same moving object as the moving object 41 of the (i+1)-th image frame. Therefore, the descriptor management module 170 may add the description element 73 to the descriptor 70 corresponding to the moving object 51.

Meanwhile, the description element 73 may store the frame number (i+2) of the (i+2)-th image frame and the information (<1,2>) on the position 53 of the moving object 51 on the (i+2)-th image frame.

Also, as described above with reference to FIG. 7A, when the moving object determination module 150 determines that the moving object 52 on the (i+2)-th image frame 50 is a new moving object that newly appears in the (i+2)-th image frame 50, the descriptor generation module 160 may generate a new descriptor 80 corresponding to the moving object 52, and the descriptor management module 170 may add the description element 81 to the descriptor 80. The description element 81 may store the frame number (i+2) of the (i+2)-th image frame and the information (<3,3>) on the position 54 of the moving object 52 on the (i+2)-th image frame.

Referring again to FIG. 1, the search condition input module 180 may receive a search condition for searching for a specific portion of a moving image photographed by the video camera 200. The search condition may indicate a movement trajectory of an arbitrary moving object.

In particular, the search condition may include identification information of at least some of the plurality of divided regions. When the search condition includes the identification information of a plurality of divided regions, the search condition may be a condition for searching for a portion in which a moving object that sequentially passes through those divided regions is captured. For example, when the search condition includes the divided region 2 (12), the divided region 5 (15), and the divided region 8 (18) in that order, the search condition may be a condition for searching for a portion in which a moving object that sequentially passes through the divided region 2 (12), the divided region 5 (15), and the divided region 8 (18) is captured.

According to an embodiment, the search condition may further include additional search parameters related to the movement trajectory, such as a time, a moving direction of the moving object, and a moving speed of the moving object, in addition to the identification information of the divided regions. In this case, the search condition may be a condition for searching for an image portion that also satisfies the included search parameters.

The condition determination module 190 may determine the search target description elements that satisfy the search condition among the description elements included in the descriptors corresponding to the moving objects created in the storage space. Since a search target description element determined by the condition determination module 190 is a description element satisfying a search condition that represents a trajectory along which a moving object moves continuously, the search target description elements may consist of a plurality of consecutive description elements.

As described above, each descriptor includes information representing the movement trajectory of the object corresponding to the descriptor, and the search condition is also a condition relating to the trajectory of a moving object. Therefore, the condition determination module 190 can compare the description elements included in the descriptor corresponding to each moving object created in the storage space with the search condition, and determine the search target description elements that satisfy the search condition among the description elements included in each descriptor.
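
Using the Descriptor sketch above, the comparison performed by the condition determination module could be realized as an ordered subsequence match between the region IDs of the search condition and the region IDs recorded in a descriptor's description elements, as illustrated below. Reading "sequentially passing through" the divided regions as an ordered subsequence is an assumption; the function name is illustrative.

```python
def find_search_target_elements(descriptor, condition_regions):
    """Return the description elements (search target description elements) showing that
    the moving object of `descriptor` passed through `condition_regions` in order,
    or None if the search condition is not satisfied."""
    matched, wanted = [], iter(condition_regions)
    target = next(wanted, None)
    for element in descriptor.elements:
        if target is None:
            break
        if element.region_id == target:
            matched.append(element)
            target = next(wanted, None)
    return matched if target is None else None
```

For example, a descriptor whose sequence of region IDs is [2, 4, 5, 7, 8] would satisfy the example condition [2, 5, 8] given above.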

If the condition determination module 190 cannot determine a search target description element that fully satisfies the search condition, it may determine the description element closest to the trajectory of the moving object represented by the search condition to be the search target description element.

If the search target description element satisfying the search condition is determined as described above, the search module 195 can search for an image frame corresponding to the search condition based on the search target description element.

As described above, since the moving picture search system 100 according to the technical idea of the present invention writes descriptors representing the movement of moving objects based on the feature points of the moving objects, it has the advantage of providing a retrieval speed and accuracy at the level of content-based image retrieval. In addition, since the search is performed using search conditions expressed in terms of the plurality of divided regions, a moving image portion matching a search condition concerning the trajectory of a moving object can be found quickly.

Meanwhile, the moving picture search system 100 may be used to search for a part satisfying the search condition in a moving picture that has been photographed by the video camera 200 and stored in the form of a predetermined video file, and may also be used to search for such a part in the real-time image captured by the video camera 200.

On the other hand, according to an embodiment, the moving picture retrieval system 100 may include a processor and a memory for storing a program executed by the processor. The processor may include a single-core CPU or a multi-core CPU. The memory may include high speed random access memory and may include non-volatile memory such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid state memory devices. Access to the memory by the processor and other components can be controlled by the memory controller. Here, when the program is executed by a processor, the program may cause the moving picture searching system 100 according to the present embodiment to perform the moving picture searching method described above.

Meanwhile, the moving picture search method according to the embodiment of the present invention may be implemented in the form of computer-readable program instructions and stored in a computer-readable recording medium; a control program and a target program according to the embodiment of the present invention may likewise be stored in a computer-readable recording medium. The computer-readable recording medium includes all kinds of recording apparatuses in which data that can be read by a computer system is stored.

The program instructions recorded on the recording medium may be those specially designed and constructed for the present invention, or those known and available to those skilled in computer software.

Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape, optical media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. The computer-readable recording medium may also be distributed over networked computer systems so that the computer-readable code can be stored and executed in a distributed manner.

Examples of program instructions include not only machine language code such as that produced by a compiler, but also high-level language code that can be executed by a device that processes information electronically, for example a computer, using an interpreter or the like.

It will be understood by those skilled in the art that the foregoing description of the present invention is for illustrative purposes only and that various changes and modifications may be made without departing from the spirit or essential characteristics of the present invention. It is therefore to be understood that the above-described embodiments are illustrative in all respects and not restrictive. For example, each component described as a single entity may be implemented in a distributed manner, and components described as distributed may also be implemented in a combined form.

It is intended that the present invention covers the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

Claims (20)

A video frame receiving module for receiving each video frame periodically photographed by a video camera, wherein each video frame is divided into a plurality of predefined divided areas;
A moving object identification module for identifying a moving object from the image frame to be processed when the image frame receiving module receives the image frame to be processed;
A position determination module for determining a position on the processing target image frame of each of the moving objects identified by the moving object identification module;
For each of the moving objects identified by the moving object identification module, a descriptor management module for adding, to a descriptor generated in a predetermined storage space to manage the movement history of the moving object, a description element including the position information of the moving object on the processing target image frame;
A search condition input module that receives a search condition including identification information of at least a part of the plurality of divided regions; And
And a condition determination module that determines a search target description element satisfying the search condition among description elements included in a descriptor corresponding to each moving object created in the storage space,
A moving object determining module that determines a new moving object that does not exist in a preceding image frame, which is a previous frame of the processing target image frame, among the moving objects identified by the moving object identifying module; And
And a descriptor generation module for generating a descriptor for the new moving object in the storage space.
The video search system according to claim 1,
And a retrieval module for retrieving an image frame corresponding to the retrieval condition based on the retrieval target description element.
A video frame receiving module for receiving each video frame periodically photographed by a video camera, wherein each video frame is divided into a plurality of predefined divided areas;
A moving object identification module for identifying a moving object from the image frame to be processed when the image frame receiving module receives the image frame to be processed;
A position determination module for determining a position on the processing target image frame of each of the moving objects identified by the moving object identification module;
For each of the moving objects identified by the moving object identification module, a descriptor management module for adding, to a descriptor generated in a predetermined storage space to manage the movement history of the moving object, a description element including the position information of the moving object on the processing target image frame;
A search condition input module that receives a search condition including identification information of at least a part of the plurality of divided regions; And
A retrieval module for retrieving an image frame corresponding to the retrieval condition based on the description element included in the descriptor corresponding to each moving object generated in the storage space,
Wherein the position determination module comprises:
Wherein the center of gravity of the moving object is determined for each of the moving objects identified by the moving object identification module and the position of the center of gravity of the determined moving object is determined as the position of the moving object.
The method according to claim 1,
The moving object judging module comprises:
And determines a moving object that is separated from the moving object extracted from the preceding image frame by a predetermined distance or more among the moving objects extracted from the image frame to be processed as a new moving object.
The video search system according to claim 1,
Wherein the video search system:
Further comprises a background learning module for performing background learning on a learning target image frame, which is at least a part of the image frames photographed by the video camera, to generate a background model,
Wherein the moving object identification module comprises:
And identifies the moving object from the image frame to be processed based on the generated background model.
6. The video search system according to claim 5,
The background learning module comprises:
A representative value of each pixel is calculated to generate the background model,
Wherein the moving object identification module comprises:
And determines, for each pixel constituting the image frame to be processed, that the pixel is a pixel included in a moving object when the difference between the value of the pixel and the representative value of the pixel is greater than or equal to a predetermined threshold value.
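The background learning and per-pixel test described in the last two claims can be sketched as follows. The patent only speaks of a per-pixel "representative value"; the median over the learning frames is used here as one plausible choice, and the threshold is an assumed parameter.

```python
# Sketch of per-pixel background learning and foreground (moving object) detection.
import numpy as np

def learn_background(learning_frames):
    # learning_frames: array of shape (num_frames, height, width), grayscale
    return np.median(learning_frames, axis=0)

def moving_object_mask(frame, background, threshold=25):
    # A pixel belongs to a moving object when it differs from its
    # representative value by at least the threshold.
    return np.abs(frame.astype(np.int16) - background.astype(np.int16)) >= threshold

learning = np.random.randint(90, 110, size=(20, 240, 320), dtype=np.uint8)
background = learn_background(learning)
test = learning[0].copy()
test[100:140, 200:260] = 255                       # simulate a bright moving object
print(moving_object_mask(test, background).sum())  # roughly 40 * 60 foreground pixels
```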
A video frame receiving module for receiving each video frame periodically photographed by a video camera, wherein each video frame is divided into a plurality of predefined divided areas;
A moving object identification module for identifying a moving object from the image frame to be processed when the image frame receiving module receives the image frame to be processed;
A position determination module for determining a position on the processing target image frame of each of the moving objects identified by the moving object identification module;
A descriptor management module for adding, for each of the moving objects identified by the moving object identification module, a description element including position information of the moving object on the processing target image frame to a descriptor that is created in a predetermined storage space and manages the position movement history of the moving object;
A search condition input module that receives a search condition including identification information of at least a part of the plurality of divided regions; And
And a condition determination module that determines a search target description element satisfying the search condition among description elements included in a descriptor corresponding to each moving object created in the storage space,
Wherein the position determination module comprises:
Wherein the center of gravity of the moving object is determined for each of the moving objects identified by the moving object identification module, and the position of the determined center of gravity is determined as the position of the moving object.
The video search system according to claim 1,
Wherein the video camera is a fixed camera for photographing a predetermined place.
A video frame receiving module for receiving each video frame periodically photographed by a video camera, wherein each video frame is divided into a plurality of predefined divided areas;
A moving object identification module for identifying a moving object from the image frame to be processed when the image frame receiving module receives the image frame to be processed;
A position determination module for determining a position on the processing target image frame of each of the moving objects identified by the moving object identification module;
A descriptor management module for adding, for each of the moving objects identified by the moving object identification module, a description element including position information of the moving object on the processing target image frame to a descriptor that is created in a predetermined storage space and manages the position movement history of the moving object;
A search condition input module that receives a search condition including identification information of at least a part of the plurality of divided regions; And
A retrieval module for retrieving an image frame corresponding to the search condition based on the description element included in the descriptor corresponding to each moving object generated in the storage space,
A moving object determining module that determines a new moving object that does not exist in a preceding image frame, which is a previous frame of the processing target image frame, among the moving objects identified by the moving object identifying module; And
And a descriptor generation module for generating a descriptor for the new moving object in the storage space.
A moving image search method comprising: an image frame receiving step in which a moving image search system receives each image frame periodically photographed by a video camera, wherein each image frame is divided into a plurality of predefined divided areas;
A moving object identifying step of identifying a moving object from the image frame to be processed when the moving image searching system receives the image frame to be processed in the image frame receiving step;
A position determination step in which the moving image search system determines a position on the processing target image frame of each of the moving objects identified in the moving object identification step;
A descriptor management step in which the moving image search system adds, for each of the moving objects identified in the moving object identification step, a description element including position information of the moving object on the processing target image frame to a descriptor that is created in a predetermined storage space and manages the movement history of the moving object;
A search condition input step in which the moving image search system receives a search condition including identification information of at least a part of the plurality of divided regions; And
A condition determination step in which the moving image search system determines a search target description element satisfying the search condition among description elements included in a descriptor corresponding to each moving object created in the storage space,
Wherein the moving image search system comprises: a moving object determining step of determining a new moving object that is not present in a preceding image frame, which is a previous frame of the processing target image frame, among the moving objects identified in the moving object identifying step; And
Wherein the moving picture search system further comprises a descriptor generation step of generating a descriptor for the new moving object in the storage space.
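The method claim above strings the same operations together as per-frame steps. The sketch below shows how that flow might look; it reuses the hypothetical helpers from the earlier sketches (moving_object_mask, center_of_gravity, region_id_for, is_new_object, DescriptorStore), so it is not standalone, and the nearest-neighbour identity matching is an assumption the claim leaves open.

```python
# End-to-end sketch of the per-frame flow, under the assumptions stated above.
import math

def process_frame(frame_id, frame, background, store, previous, next_object_id,
                  frame_w=320, frame_h=240):
    mask = moving_object_mask(frame, background)
    # A real system would split the mask into connected components; a single
    # object is assumed here to keep the sketch short.
    objects = [center_of_gravity(mask)] if mask.any() else []
    current = {}
    for position in objects:
        if is_new_object(position, list(previous.values())):
            object_id, next_object_id = next_object_id, next_object_id + 1
        else:
            object_id = min(previous, key=lambda oid: math.dist(previous[oid], position))
        descriptor = store.get_or_create(object_id)
        descriptor.add_element(frame_id, region_id_for(position, frame_w, frame_h), position)
        current[object_id] = position
    return current, next_object_id

# Typical driving loop (assumed):
#   previous, next_id = {}, 0
#   for frame_id, frame in enumerate(video_frames):
#       previous, next_id = process_frame(frame_id, frame, background, store, previous, next_id)
```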
The method of claim 10,
Wherein the moving picture search system further comprises a searching step of searching for an image frame corresponding to the search condition based on the search target description element.
A moving image search method comprising: an image frame receiving step in which a moving image search system receives each image frame periodically photographed by a video camera, wherein each image frame is divided into a plurality of predefined divided areas;
A moving object identifying step of identifying a moving object from the image frame to be processed when the moving image searching system receives the image frame to be processed in the image frame receiving step;
A position determination step in which the moving image search system determines a position on the processing target image frame of each of the moving objects identified in the moving object identification step;
A descriptor management step in which the moving image search system adds, for each of the moving objects identified in the moving object identification step, a description element including position information of the moving object on the processing target image frame to a descriptor that is created in a predetermined storage space and manages the movement history of the moving object;
A search condition input step in which the moving image search system receives a search condition including identification information of at least a part of the plurality of divided regions; And
A retrieval step in which the moving image search system searches for an image frame corresponding to the search condition based on the description elements included in the descriptor corresponding to each moving object created in the storage space,
Wherein the position determination step comprises:
Determining the center of gravity of the moving object for each of the moving objects identified in the moving object identification step, and determining the position of the determined center of gravity as the position of the moving object.
11. The method of claim 10,
Wherein the moving object determination step comprises:
Determining, as a new moving object, a moving object among the moving objects extracted from the image frame to be processed that is separated by a predetermined distance or more from the moving objects extracted from the preceding image frame.
11. The method of claim 10,
Wherein the moving picture search method further comprises:
A background learning step in which the moving image search system performs background learning on a learning target image frame, which is at least a part of the image frames photographed by the video camera, to generate a background model,
Wherein the moving object identification step comprises:
And identifying a moving object from the image frame to be processed based on the generated background model.
15. The method of claim 14,
The background learning step comprises:
And generating a background model by calculating a representative value of each pixel,
Wherein the moving object identification step comprises:
Determining, for each pixel constituting the image frame to be processed, that the pixel is a pixel included in a moving object when the difference between the value of the pixel and the representative value of the pixel is greater than or equal to a predetermined threshold value.
A moving image search method comprising: an image frame receiving step in which a moving image search system receives each image frame periodically photographed by a video camera, wherein each image frame is divided into a plurality of predefined divided areas;
A moving object identifying step of identifying a moving object from the image frame to be processed when the moving image searching system receives the image frame to be processed in the image frame receiving step;
A position determination step in which the moving image search system determines a position on the processing target image frame of each of the moving objects identified in the moving object identification step;
A descriptor management step in which the moving image search system adds, for each of the moving objects identified in the moving object identification step, a description element including position information of the moving object on the processing target image frame to a descriptor that is created in a predetermined storage space and manages the movement history of the moving object;
A search condition input step in which the moving image search system receives a search condition including identification information of at least a part of the plurality of divided regions; And
A condition determination step in which the moving image search system determines a search target description element satisfying the search condition among description elements included in a descriptor corresponding to each moving object created in the storage space,
Wherein the position determination step comprises:
Determining the center of gravity of the moving object for each of the moving objects identified in the moving object identification step, and determining the position of the determined center of gravity as the position of the moving object.
11. The method of claim 10,
Wherein the video camera is a fixed camera for photographing a predetermined place.
A moving image search method comprising: an image frame receiving step in which a moving image search system receives each image frame periodically photographed by a video camera, wherein each image frame is divided into a plurality of predefined divided areas;
A moving object identifying step of identifying a moving object from the image frame to be processed when the moving image searching system receives the image frame to be processed in the image frame receiving step;
A position determination step in which the moving image search system determines a position on the processing target image frame of each of the moving objects identified in the moving object identification step;
A descriptor management step in which the moving image search system adds, for each of the moving objects identified in the moving object identification step, a description element including position information of the moving object on the processing target image frame to a descriptor that is created in a predetermined storage space and manages the movement history of the moving object;
A search condition input step in which the moving image search system receives a search condition including identification information of at least a part of the plurality of divided regions; And
A retrieval step in which the moving image search system searches for an image frame corresponding to the search condition based on the description elements included in the descriptor corresponding to each moving object created in the storage space,
Wherein the moving image search system comprises: a moving object determining step of determining a new moving object that is not present in a preceding image frame, which is a previous frame of the processing target image frame, among the moving objects identified in the moving object identifying step; And
Wherein the moving picture search system further comprises a descriptor generation step of generating a descriptor for the new moving object in the storage space.
18. A computer program installed in a data processing apparatus and stored in a recording medium for performing the method according to any one of claims 10 to 18.
A video retrieval system comprising:
A processor; And
A memory for storing a computer program executed by the processor,
Wherein the computer program causes the moving picture retrieval system to perform the method according to any one of claims 10 to 18 when being executed by the processor.
KR1020160017207A 2016-02-15 2016-02-15 System and method for video searching KR101826669B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160017207A KR101826669B1 (en) 2016-02-15 2016-02-15 System and method for video searching

Publications (2)

Publication Number Publication Date
KR20170095599A KR20170095599A (en) 2017-08-23
KR101826669B1 (en) 2018-03-22

Family

ID=59759388

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160017207A KR101826669B1 (en) 2016-02-15 2016-02-15 System and method for video searching

Country Status (1)

Country Link
KR (1) KR101826669B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210011707A (en) 2019-07-23 2021-02-02 서강대학교산학협력단 A CNN-based Scene classifier with attention model for scene recognition in video

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101951232B1 (en) * 2018-09-12 2019-02-22 (주)휴진시스템 A High Quality CCTV Image System Using Separated Storage of Object Area and Adaptive Background Image
CN115186119B (en) * 2022-09-07 2022-12-06 深圳市华曦达科技股份有限公司 Picture processing method and system based on picture and text combination and readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100777199B1 (en) * 2006-12-14 2007-11-16 중앙대학교 산학협력단 Apparatus and method for tracking of moving target
KR101074850B1 (en) 2010-08-09 2011-10-19 이정무 Serch system of images


Also Published As

Publication number Publication date
KR20170095599A (en) 2017-08-23

Similar Documents

Publication Publication Date Title
Beery et al. Context r-cnn: Long term temporal context for per-camera object detection
Huang et al. Intelligent intersection: Two-stream convolutional networks for real-time near-accident detection in traffic video
WO2019218824A1 (en) Method for acquiring motion track and device thereof, storage medium, and terminal
Arroyo et al. Expert video-surveillance system for real-time detection of suspicious behaviors in shopping malls
Zhang et al. A new method for violence detection in surveillance scenes
Tian et al. Robust detection of abandoned and removed objects in complex surveillance videos
Bertini et al. Multi-scale and real-time non-parametric approach for anomaly detection and localization
Zabłocki et al. Intelligent video surveillance systems for public spaces–a survey
US10824935B2 (en) System and method for detecting anomalies in video using a similarity function trained by machine learning
US10970823B2 (en) System and method for detecting motion anomalies in video
Lin et al. Visual-attention-based background modeling for detecting infrequently moving objects
Fradi et al. Spatial and temporal variations of feature tracks for crowd behavior analysis
KR101826669B1 (en) System and method for video searching
Mousse et al. People counting via multiple views using a fast information fusion approach
KR101492059B1 (en) Real Time Object Tracking Method and System using the Mean-shift Algorithm
Sandifort et al. An entropy model for loiterer retrieval across multiple surveillance cameras
Shi et al. Saliency-based abnormal event detection in crowded scenes
EP4332910A1 (en) Behavior detection method, electronic device, and computer readable storage medium
Mantini et al. Camera Tampering Detection using Generative Reference Model and Deep Learned Features.
CN113869163B (en) Target tracking method and device, electronic equipment and storage medium
Narayan et al. Learning deep features for online person tracking using non-overlapping cameras: A survey
Seidenari et al. Non-parametric anomaly detection exploiting space-time features
Yang et al. Visual detection and tracking algorithms for human motion
Balasubramanian et al. Forensic video solution using facial feature‐based synoptic Video Footage Record
Constantinou et al. Spatial keyframe extraction of mobile videos for efficient object detection at the edge

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
GRNT Written decision to grant