KR101826669B1 - System and method for video searching - Google Patents
System and method for video searching
- Publication number
- KR101826669B1 (application number KR1020160017207A)
- Authority
- KR
- South Korea
- Prior art keywords
- moving object
- moving
- image frame
- module
- descriptor
- Prior art date
Classifications
- G06F17/30023
- G06F17/30058
- G06F17/30784
- G06F17/3079
- G06F17/30811
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
- H04N21/8405—Generation or processing of descriptive data, e.g. content descriptors represented by keywords
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Abstract
Disclosed are a moving image search system and method capable of easily finding the portions in which a moving object (an object or a person) is photographed in a moving image captured by a video camera such as a CCTV camera. According to an aspect of the present invention, the system comprises: an image frame receiving module for receiving each image frame periodically photographed by a video camera, wherein each image frame is divided into a plurality of predefined divided areas; a moving object identification module that identifies moving objects in a processing target image frame when the image frame receiving module receives the processing target image frame; a position determination module that determines the position, on the processing target image frame, of each moving object identified by the moving object identification module; a descriptor management module that, for each identified moving object, adds a description element including the position information of the moving object on the processing target image frame to a descriptor generated in a predetermined storage space to manage the movement history of that moving object; a search condition input module for receiving a search condition including identification information of at least some of the plurality of divided areas; and a condition determination module that determines, among the description elements included in the descriptors corresponding to the respective moving objects created in the storage space, the search target description elements that satisfy the search condition.
Description
The present invention relates to a video search system and method, and more particularly, to a moving image search system and method capable of easily finding the portions in which an object (an object or a person) moving in a moving image photographed by a video camera such as a CCTV camera is captured.
Security and surveillance are emerging as very important issues in a society where crime and terrorism occur frequently. Most places where security and surveillance matter, such as airports, stations, banks, and government offices, are equipped with closed-circuit television (CCTV). Through these closed circuits, a human either watches in real time or the captured footage is stored in the form of a moving picture. Such videos can later be used for purposes such as investigating an incident, but playing back the stored video takes a very long time. Therefore, various methods of searching for a specific part of a moving picture are being studied.
Conventional image retrieval methods include semantic-based and content-based methods in addition to the traditional text-based methods. Semantic-based retrieval expresses images as abstract features and analyzes the syntax or semantics of query terms. Content-based image retrieval automatically extracts features from the image itself, enabling objective and automated retrieval. The various features present in an image can be classified into global features and local features.
The color correlogram is an algorithm that uses global features. A color correlogram captures the probability distribution of colors occurring between pixels at a certain distance across the whole image. Because it includes color-change information between pixels at a given distance, it can encode both color and edge information. To obtain good image search results, however, color correlograms must be computed at various distances, and applying them to the whole image takes a great deal of computation time. In addition, the volume of computed data makes data management inefficient. That is, when a descriptor is designed using global features, the descriptor becomes large and the processing time may be impractical.
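As a concrete illustration of the global-feature approach described above, the following sketch estimates a simple color auto-correlogram for a small quantized image. It is a minimal illustration of the idea only, not the algorithm of any particular paper: at each distance k only the eight ring directions are sampled, and colors are assumed to be pre-quantized to small integer indices.

```python
from collections import defaultdict

def color_autocorrelogram(image, distances):
    """Estimate, for each (color, k), the probability that a pixel at
    Chebyshev distance k from a pixel of that color has the same color.
    `image` is a 2-D list of quantized color indices."""
    h, w = len(image), len(image[0])
    counts = defaultdict(int)   # (color, k) -> same-color neighbor count
    totals = defaultdict(int)   # (color, k) -> neighbors examined
    # Sample the ring at distance k in the 8 compass directions only.
    offsets = {k: [(dy, dx) for dy in (-k, 0, k) for dx in (-k, 0, k)
                   if max(abs(dy), abs(dx)) == k] for k in distances}
    for y in range(h):
        for x in range(w):
            c = image[y][x]
            for k in distances:
                for dy, dx in offsets[k]:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        totals[(c, k)] += 1
                        counts[(c, k)] += (image[ny][nx] == c)
    return {key: counts[key] / totals[key] for key in totals}

# A 4x4 two-color image: left half color 0, right half color 1.
img = [[0, 0, 1, 1] for _ in range(4)]
corr = color_autocorrelogram(img, distances=[1])
```

Even this toy version visits every pixel once per distance and per direction, which hints at why whole-image correlograms at many distances become expensive.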
To address these drawbacks, descriptors using local areas have been proposed. A local descriptor is a feature-point-based technique that emphasizes robustness to distortion in an image: a local patch is extracted using the pixel information around each feature point of the image, keypoints containing feature information are extracted from the local patch, and the local descriptor is computed from the extracted keypoints. The performance of feature-point-based image matching fundamentally depends on the robustness of the feature points and local descriptors to image distortions (rotation, enlargement, reduction, brightness changes, noise, and so on). The extracted local descriptor can act as an index by attaching only the keypoints of the current image to the images to be searched. Local descriptors are therefore relatively fast to process and their data is comparatively easy to manage. However, images of different sizes, or entirely different images, may produce similar keypoints and cause search errors. That is, descriptors using local areas have the disadvantage that erroneous search results unrelated to the query may occur at retrieval time.
SUMMARY OF THE INVENTION It is an object of the present invention to provide a moving image search system and method that can quickly and accurately find the portions in which an object moving in a moving image photographed by a video camera is captured.
According to an aspect of the present invention, there is provided a moving image search system comprising: an image frame receiving module for receiving each image frame periodically photographed by a video camera, wherein each image frame is divided into a plurality of predefined divided areas; a moving object identification module that identifies moving objects in a processing target image frame when the image frame receiving module receives the processing target image frame; a position determination module that determines the position, on the processing target image frame, of each moving object identified by the moving object identification module; a descriptor management module that, for each identified moving object, adds a description element including the position information of the moving object on the processing target image frame to a descriptor generated in a predetermined storage space to manage the movement history of that moving object; a search condition input module for receiving a search condition including identification information of at least some of the plurality of divided areas; and a condition determination module that determines, among the description elements included in the descriptors corresponding to the respective moving objects created in the storage space, the search target description elements that satisfy the search condition.
In one embodiment, the moving picture retrieval system may include a retrieval module for retrieving an image frame corresponding to the retrieval condition based on the retrieval target description element.
In one embodiment, the moving image search system may further include a moving object determination module that determines, among the moving objects identified by the moving object identification module, a new moving object that is not present in the preceding image frame (the frame immediately before the processing target image frame), and a descriptor generation module for generating a descriptor for the new moving object in the storage space.
In one embodiment, the moving object determination module may determine, among the moving objects extracted from the processing target image frame, a moving object that is separated by a predetermined distance or more from every moving object extracted from the preceding image frame to be a new moving object.
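The distance rule in this embodiment can be sketched as follows, assuming object positions are represented by their centers of gravity and distances are Euclidean; the threshold value and the coordinate-tuple representation are illustrative assumptions, not from the patent.

```python
import math

def find_new_objects(current_centroids, previous_centroids, min_distance):
    """Return centroids in the current frame that are at least
    `min_distance` away from every centroid of the preceding frame,
    i.e. candidates for newly appeared moving objects."""
    new_objects = []
    for c in current_centroids:
        if all(math.dist(c, p) >= min_distance for p in previous_centroids):
            new_objects.append(c)
    return new_objects

prev = [(10.0, 12.0), (40.0, 44.0)]
curr = [(11.0, 13.0), (90.0, 90.0)]   # first point tracks an existing object
print(find_new_objects(curr, prev, min_distance=20.0))  # [(90.0, 90.0)]
```

Any centroid that lies near some previous centroid is treated as the continuation of an existing object, so no new descriptor would be created for it.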
In one embodiment, the moving image search system further includes a background learning module for generating a background model by performing background learning on learning target image frames, which are at least some of the image frames photographed by the video camera, and the moving object identification module can identify the moving object from the processing target image frame based on the generated background model.
In one embodiment, the background learning module may generate the background model by calculating a representative value for each pixel, and the moving object identification module may determine, for each pixel constituting the processing target image frame, that the pixel is included in a moving object when the difference between the pixel value and the representative value of that pixel is greater than or equal to a predetermined threshold value.
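The patent does not fix which statistic serves as the representative value; the sketch below uses the per-pixel median over the learning frames as one plausible choice, and then thresholds the absolute difference exactly as described in this embodiment.

```python
from statistics import median

def build_background_model(frames):
    """Representative value per pixel: the median of that pixel's
    brightness over the learning target frames."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

def foreground_mask(frame, background, threshold):
    """Mark a pixel as part of a moving object when its brightness
    deviates from the representative value by at least `threshold`."""
    return [[abs(p - b) >= threshold for p, b in zip(row, brow)]
            for row, brow in zip(frame, background)]

# Three 2x2 learning frames with a stable background around brightness 100.
frames = [[[100, 101], [99, 100]],
          [[101, 100], [100, 99]],
          [[100, 100], [100, 101]]]
bg = build_background_model(frames)
mask = foreground_mask([[100, 180], [100, 101]], bg, threshold=30)
print(mask)  # [[False, True], [False, False]]
```

Only the pixel whose brightness jumped to 180 exceeds the threshold, so only it is flagged as belonging to a moving object.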
In one embodiment, the position determination module may determine the center of gravity of each moving object identified by the moving object identification module, and may determine the position of the determined center of gravity to be the position of the moving object on the processing target image frame.
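Determining the center of gravity of an identified moving object can be sketched as the mean coordinate of its foreground pixels, i.e. a plain pixel-mask centroid (the patent may weight pixels differently; this is an assumption for illustration):

```python
def center_of_gravity(mask):
    """Centroid (row, col) of the True pixels of a binary foreground mask,
    or None when the mask contains no foreground pixel."""
    pixels = [(y, x) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    if not pixels:
        return None
    n = len(pixels)
    return (sum(y for y, _ in pixels) / n, sum(x for _, x in pixels) / n)

mask = [[False, True, True],
        [False, True, True],
        [False, False, False]]
print(center_of_gravity(mask))  # (0.5, 1.5)
```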
In one embodiment, the video camera may be a fixed camera for photographing a predetermined place.
According to another aspect of the present invention, there is provided a moving image search system comprising: an image frame receiving module for receiving each image frame periodically photographed by a video camera, wherein each image frame is divided into a plurality of predefined divided areas; a moving object identification module that identifies moving objects in a processing target image frame when the image frame receiving module receives the processing target image frame; a position determination module that determines the position, on the processing target image frame, of each moving object identified by the moving object identification module; a descriptor management module that, for each identified moving object, adds a description element including the position information of the moving object on the processing target image frame to a descriptor generated in a predetermined storage space to manage the movement history of that moving object; a search condition input module for receiving a search condition including identification information of at least some of the plurality of divided areas; and a search module for searching for an image frame corresponding to the search condition based on the description elements included in the descriptors corresponding to the respective moving objects created in the storage space.
According to another aspect of the present invention, there is provided a moving image search method comprising: an image frame receiving step in which a moving image search system receives each image frame periodically photographed by a video camera, wherein each image frame is divided into a plurality of predefined divided areas; a moving object identification step in which the moving image search system identifies moving objects in a processing target image frame when the processing target image frame is received in the image frame receiving step; a position determination step of determining the position, on the processing target image frame, of each moving object identified in the moving object identification step; a descriptor management step of adding, for each identified moving object, a description element including the position information of the moving object on the processing target image frame to a descriptor generated in a predetermined storage space to manage the movement history of that moving object; a search condition input step in which the moving image search system receives a search condition including identification information of at least some of the plurality of divided areas; and a condition determination step of determining, among the description elements included in the descriptors corresponding to the respective moving objects created in the storage space, the search target description elements that satisfy the search condition.
In one embodiment, the moving image searching method may further include a searching step in which the moving image searching system searches for an image frame corresponding to the search condition, based on the search target description element.
In one embodiment, the moving image search method may further include a moving object determination step in which the moving image search system determines, among the moving objects identified in the moving object identification step, a new moving object that is not present in the preceding image frame (the frame immediately before the processing target image frame), and a descriptor generation step in which the moving image search system generates a descriptor for the new moving object in the storage space.
In one embodiment, the moving object determination step may include determining, among the moving objects extracted from the processing target image frame, a moving object that is separated by a predetermined distance or more from every moving object extracted from the preceding image frame to be a new moving object.
In one embodiment, the moving image search method further includes a background learning step of generating a background model by performing background learning on learning target image frames, which are at least some of the image frames captured by the video camera, and the moving object identification step includes identifying the moving object from the processing target image frame based on the generated background model.
In one embodiment, the background learning step includes calculating a representative value for each pixel to generate the background model, and the moving object identification step includes determining, for each pixel constituting the processing target image frame, that the pixel is included in a moving object when the difference between the pixel value and the representative value of that pixel is greater than or equal to a predetermined threshold value.
In one embodiment, the position determination step may include determining the center of gravity of each moving object identified in the moving object identification step, and determining the position of the determined center of gravity to be the position of the moving object on the processing target image frame.
In one embodiment, the video camera may be a fixed camera for photographing a predetermined place.
According to another aspect of the present invention, there is provided a moving image search method comprising: an image frame receiving step in which a moving image search system receives each image frame periodically photographed by a video camera, wherein each image frame is divided into a plurality of predefined divided areas; a moving object identification step in which the moving image search system identifies moving objects in a processing target image frame when the processing target image frame is received in the image frame receiving step; a position determination step of determining the position, on the processing target image frame, of each moving object identified in the moving object identification step; a descriptor management step of adding, for each identified moving object, a description element including the position information of the moving object on the processing target image frame to a descriptor generated in a predetermined storage space to manage the movement history of that moving object; a search condition input step in which the moving image search system receives a search condition including identification information of at least some of the plurality of divided areas; and a search step of searching for an image frame corresponding to the search condition based on the description elements included in the descriptors corresponding to the respective moving objects created in the storage space.
According to another aspect of the present invention, there is provided a computer program stored in a recording medium, which is installed in a data processing apparatus to perform the above-described method.
According to another aspect of the present invention, there is provided a moving image search system comprising a processor and a memory for storing a computer program executed by the processor, wherein the computer program, when executed by the processor, causes the moving image search system to perform the above-described method.
The moving image search system according to an embodiment of the present invention describes the movement of each moving object in a descriptor based on the feature points of the moving object, and therefore can offer a search speed and accuracy comparable to content-based image retrieval. In addition, since the search is performed using search conditions expressed in terms of the plurality of divided areas, a moving image portion matching a search condition related to the trajectory of a moving object can be found quickly.
BRIEF DESCRIPTION OF THE DRAWINGS A brief description of each drawing is provided to aid a fuller understanding of the drawings referenced in the description of the invention.
FIG. 1 is a block diagram schematically showing the configuration of a moving image search system according to an embodiment of the present invention.
FIG. 2 is a diagram for explaining the divided areas constituting an image frame.
FIG. 3 is a diagram for explaining a method of determining the representative value of a pixel in a moving image search system according to an embodiment of the present invention.
FIG. 4 is a diagram illustrating an example of the background of an image frame generated by a moving image search system according to an embodiment of the present invention.
FIG. 5A is a view showing an example of a specific image frame (the i-th image frame), and FIG. 5B is a view showing the center of gravity of each moving object included in the image frame shown in FIG. 5A.
FIG. 6A is a view showing an example of the next image frame (the (i+1)-th image frame) after the image frame shown in FIG. 5A, and FIG. 6B is a view showing the center of gravity of each moving object included in the image frame shown in FIG. 6A.
FIG. 7A is a view showing an example of the next image frame (the (i+2)-th image frame) after the image frame shown in FIG. 6A, and FIG. 7B is a view showing the center of gravity of each moving object included in the image frame shown in FIG. 7A.
FIGS. 8A to 8C are diagrams illustrating descriptors generated and managed by a moving image search system according to an exemplary embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS The present invention is capable of various modifications and embodiments, and specific embodiments are illustrated in the drawings and described in detail below. It should be understood, however, that the invention is not limited to the specific embodiments, but includes all modifications, equivalents, and alternatives falling within its spirit and scope. Hereinafter, the present invention will be described in detail with reference to the accompanying drawings.
The terms first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
The terminology used in this application is used only to describe a specific embodiment and is not intended to limit the invention. The singular expressions include plural expressions unless the context clearly dictates otherwise.
In this specification, the terms "comprises" or "having" and the like specify the presence of stated features, integers, steps, operations, elements, components, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.
Also, in this specification, when one element 'transmits' data to another element, the element may transmit the data directly to the other element or may transmit the data to the other element through at least one intervening element. Conversely, when one element 'directly transmits' data to another element, the data is transmitted to the other element without passing through any intervening element.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. Like reference symbols in the drawings denote like elements.
FIG. 1 is a block diagram schematically showing the configuration of a moving image search system according to an embodiment of the present invention.
Referring to FIG. 1, the moving image search system according to an embodiment of the present invention may include an image frame receiving module, a moving object identification module, a position determination module, a descriptor management module, a search condition input module, and a condition determination module, and receives and processes image frames photographed by a video camera.
In this specification, a module may mean a functional and structural combination of hardware for carrying out the technical idea of the present invention and software for driving that hardware. For example, a module may refer to a logical unit of predetermined code together with the hardware resources for executing that code; it does not necessarily mean physically connected code or a single kind of hardware, as can readily be inferred by an average expert in the art to which the present invention pertains.
Each video frame captured by the video camera is divided into a plurality of predefined divided areas.
The fact that one video frame is divided into a plurality of divided areas does not necessarily mean that the video frame is physically divided into a plurality of parts; each image frame may instead be treated as if it were logically divided into the plurality of divided areas.
FIG. 2 is a diagram for explaining the divided areas constituting an image frame photographed by the video camera.
Although FIG. 2 shows an example in which one image frame is divided into 3 × 3 = 9 divided areas of equal size, according to embodiments the sizes of the divided areas may differ from one another, and the image frame may be divided into nine or more divided areas.
Referring to FIG. 2, each video frame captured by the video camera is divided into nine divided areas.
In the following description, it is assumed that each video frame captured by the video camera is divided into the nine divided areas shown in FIG. 2.
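Mapping a pixel position to the identification information of its divided area, for the assumed 3 × 3 grid of equal regions, can be sketched as follows (the 1-based <row, col> labeling is an illustrative assumption; the patent's numbering scheme may differ):

```python
def divided_area_of(x, y, frame_w, frame_h, cols=3, rows=3):
    """Map pixel (x, y) to the <row, col> identifier (1-based) of the
    divided area that contains it, for a frame split into rows x cols
    equal regions."""
    col = min(int(x * cols // frame_w), cols - 1) + 1
    row = min(int(y * rows // frame_h), rows - 1) + 1
    return (row, col)

# A 300x300 frame split 3x3: pixel (250, 40) lies in the top-right area.
print(divided_area_of(250, 40, 300, 300))  # (1, 3)
```

The `min(..., cols - 1)` clamp keeps pixels on the right or bottom frame border inside the last region instead of indexing past the grid.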
Referring again to FIG. 1, the moving object identification module identifies moving objects in the processing target image frame when the image frame receiving module receives the processing target image frame.
In order for the moving object identification module to identify moving objects, the moving image search system may first perform background learning to generate a background model.
The learning target image frames may be a plurality of image frames photographed by the video camera.
h_t(x, y) = | f_t(x, y) − f_(t−1)(x, y) |

In the above equation, h_t(x, y) means the absolute value of the difference in brightness of the pixel located at coordinates (x, y); f_t(x, y) is the brightness of the pixel located at (x, y) in the current learning target image frame, and f_(t−1)(x, y) is the brightness of the pixel located at (x, y) in the previous learning target image frame.
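The equation above, applied to every pixel of two consecutive learning target frames, can be written directly as:

```python
def frame_difference(curr, prev):
    """h_t(x, y) = |f_t(x, y) - f_(t-1)(x, y)| for every pixel of two
    equally sized brightness frames (2-D lists)."""
    return [[abs(c - p) for c, p in zip(crow, prow)]
            for crow, prow in zip(curr, prev)]

prev = [[100, 100], [100, 100]]
curr = [[100, 130], [90, 100]]
print(frame_difference(curr, prev))  # [[0, 30], [10, 0]]
```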
FIG. 3 is a view for explaining the above process.
Referring to FIG. 3, the background learning module may calculate a representative value for each pixel position from the learning target image frames to generate the background model.
If the difference between the pixel value and the representative value of the pixel is greater than or equal to a predetermined threshold value, the moving object identification module may determine that the pixel is a pixel included in a moving object.
Hereinafter, the operation of the moving image search system will be described in more detail with reference to FIGS. 4 and 5A to 7B.
FIG. 4 is a diagram illustrating an example of the background of an image frame generated by the moving image search system according to an embodiment of the present invention.
FIG. 5A is a view showing an example of a specific image frame (the i-th image frame), and FIG. 5B is a view showing the center of gravity of each moving object included in the image frame shown in FIG. 5A.
The moving object identification module identifies each moving object included in the image frame shown in FIG. 5A, and the position determination module determines the center of gravity of each identified moving object, as shown in FIG. 5B.
FIG. 6A is a view showing an example of the next image frame (the (i+1)-th image frame) after the image frame shown in FIG. 5A, and FIG. 6B is a view showing the center of gravity of each moving object included in the image frame shown in FIG. 6A.
FIG. 7A is a view showing an example of the next image frame (the (i+2)-th image frame) after the image frame shown in FIG. 6A, and FIG. 7B is a view showing the center of gravity of each moving object included in the image frame shown in FIG. 7A.
Referring again to FIG. 1, the descriptor management module adds, for each moving object identified by the moving object identification module, a description element including the position information of the moving object on the processing target image frame to the descriptor of that moving object.
The descriptor may be a predetermined data structure for managing the movement history of a moving object, and may be generated for each object appearing in the image frames photographed by the video camera.
Meanwhile, the descriptors may be implemented in the form of a linked list, an array, a queue, a stack, or the like.
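One possible in-memory shape for a descriptor and its description elements is a simple list-backed structure, matching the list/array implementations mentioned above (field names such as `object_id` and `frame_index` are illustrative assumptions, not from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class DescriptionElement:
    frame_index: int
    position: tuple   # divided-area id <row, col>, or a pixel coordinate

@dataclass
class Descriptor:
    object_id: int
    # Movement history of the object, one element per processed frame.
    elements: list = field(default_factory=list)

    def add_element(self, frame_index, position):
        self.elements.append(DescriptionElement(frame_index, position))

    def trajectory(self):
        """Positions in frame order, i.e. the object's movement trajectory."""
        return [e.position for e in self.elements]

d = Descriptor(object_id=1)
d.add_element(frame_index=0, position=(1, 3))
d.add_element(frame_index=1, position=(2, 3))
print(d.trajectory())  # [(1, 3), (2, 3)]
```

A queue or linked list would serve equally well; the essential property is that description elements stay in frame order so the trajectory can be read off directly.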
For each of the moving objects identified by the moving object identification module, the descriptor management module adds a description element to the descriptor corresponding to that moving object.
Meanwhile, the position information of the moving object included in a description element may be the coordinate value of the pixel corresponding to the center of gravity of the moving object or, according to an embodiment, may be the identification information of the divided area that includes the pixel corresponding to the center of gravity of the moving object.
FIGS. 8A to 8C are diagrams illustrating descriptors generated and managed by a moving image search system according to an exemplary embodiment of the present invention. More specifically, FIG. 8A shows the state of the descriptors corresponding to the processing target image frame shown in FIG. 5A, FIG. 8B shows the state of the descriptors corresponding to the processing target image frame shown in FIG. 6A, and FIG. 8C shows the state of the descriptors corresponding to the processing target image frame shown in FIG. 7A.
The descriptors illustrated in FIGS. 8A to 8C are merely examples, and the descriptors maintained and managed by the moving image search system are not limited to this form.
Referring to FIG. 8A, since two moving objects appear in the image frame shown in FIG. 5A, two descriptors corresponding to the respective moving objects are created in the storage space, and a description element including the position information of each moving object is added to the corresponding descriptor.
Although FIGS. 8A to 8C illustrate the case where the position information of the moving object included in a description element is the identification information (e.g., <1,3>) of the divided area including the pixel corresponding to the center of gravity of the moving object, the position information included in a description element may instead be the coordinate value of the pixel corresponding to the center of gravity of the moving object.
Referring to FIG. 8B, a description element including the position information of each moving object identified in the (i+1)-th image frame is added to the corresponding descriptor.
As described above with reference to FIG. 6A, when a moving object newly appears in the (i+1)-th image frame, a new descriptor corresponding to the new moving object is created in the storage space.
Referring to FIG. 8C, a description element including the position information of each moving object identified in the (i+2)-th image frame is added to the corresponding descriptor.
As described above with reference to FIG. 6A, when a moving object extracted from the processing target image frame is separated by a predetermined distance or more from the moving objects extracted from the preceding image frame, it is determined to be a new moving object and a descriptor is generated for it.
Referring again to FIG. 1, the search condition input module receives a search condition including identification information of at least some of the plurality of divided areas.
In particular, the search condition may include identification information of at least some of the plurality of divided areas. Accordingly, when the search condition includes identification information of a plurality of divided areas, it may be a condition for finding the portions in which a moving object is photographed while sequentially passing through those divided areas. For example, when the search condition includes divided area 2 (12), divided area 5 (15), and divided area 8 (18) in that order, it is a condition for finding the portions in which a moving object is photographed while sequentially passing through divided area 2 (12), divided area 5 (15), and divided area 8 (18).
According to an embodiment, the search condition may further include, in addition to the identification information of the divided areas, additional search parameters related to the movement trajectory, such as time, the moving direction of the moving object, and the moving speed of the moving object. In this case, the search condition may be a condition for finding an image portion that also satisfies the included search parameters.
As described above, each descriptor includes information indicating the movement trajectory of the object corresponding to that descriptor. Since the search condition is also a condition related to the trajectory of a moving object, the condition determination module can determine whether a description element satisfies the search condition by comparing the trajectory recorded in the descriptor with the search condition.
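The comparison of a descriptor's trajectory against a search condition that lists divided areas in order can be sketched as in-order subsequence matching. This is one plausible reading of "sequentially passing through"; the patent does not pin down the exact matching rule.

```python
def satisfies_condition(trajectory, condition):
    """True when the divided-area ids in `condition` appear in
    `trajectory` in the given order (not necessarily adjacent)."""
    it = iter(trajectory)
    # `region in it` advances the iterator, enforcing the ordering.
    return all(region in it for region in condition)

# Object moved top-to-bottom through three areas (ids are illustrative).
trajectory = [(1, 2), (1, 2), (2, 2), (3, 2)]
condition = [(1, 2), (2, 2), (3, 2)]
print(satisfies_condition(trajectory, condition))       # True
print(satisfies_condition(trajectory, [(3, 2), (1, 2)]))  # False
```

Because the iterator is consumed as matches are found, a condition listing the same areas in reverse order correctly fails.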
Once the search target description elements satisfying the search condition have been determined as described above, the search module may search for the image frames corresponding to the search condition based on the search target description elements.
As described above, since the moving image search system performs the search using search conditions expressed in terms of the plurality of divided areas, it can quickly find the moving image portions matching a search condition related to the trajectory of a moving object.
On the other hand, according to an embodiment, the moving image search system may include a processor and a memory storing a computer program which, when executed by the processor, causes the above-described moving image search method to be performed.
Meanwhile, the moving image search method according to an embodiment of the present invention may be implemented in the form of computer-readable program instructions and stored in a computer-readable recording medium. A computer-readable recording medium includes all kinds of recording devices in which data readable by a computer system is stored.
The program instructions recorded on the recording medium may be those specially designed and constructed for the present invention, or may be those known and available to those skilled in the computer software art.
Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. The computer-readable recording medium may also be distributed over networked computer systems so that computer-readable code is stored and executed in a distributed manner.
Examples of program instructions include not only machine language code such as that produced by a compiler, but also high-level language code that can be executed, using an interpreter or the like, by a device that processes information electronically, for example a computer.
It will be understood by those of ordinary skill in the art that the foregoing description of the present invention is for illustrative purposes only, and that various changes and modifications may be made without departing from the spirit or essential characteristics of the present invention. The above-described embodiments are therefore illustrative in all aspects and not restrictive. For example, each component described as a single entity may be implemented in a distributed manner, and components described as distributed may be implemented in combined form.
It is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.
Claims (20)
A moving object identification module for identifying a moving object from the image frame to be processed when the image frame receiving module receives the image frame to be processed;
A position determination module for determining a position on the processing target image frame of each of the moving objects identified by the moving object identification module;
A descriptor management module for adding, for each of the moving objects identified by the moving object identification module, a description element including position information of the moving object on the processing target image frame to a descriptor created in a predetermined storage space for managing the movement history of the moving object;
A search condition input module that receives a search condition including identification information of at least a part of the plurality of divided regions; And
And a condition determination module that determines a search target description element satisfying the search condition among description elements included in a descriptor corresponding to each moving object created in the storage space,
A moving object determining module that determines a new moving object that does not exist in a preceding image frame, which is a previous frame of the processing target image frame, among the moving objects identified by the moving object identifying module; And
And a descriptor generation module for generating a descriptor for the new moving object in the storage space.
And a retrieval module for retrieving an image frame corresponding to the retrieval condition based on the retrieval target description element.
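The descriptor management, condition determination, and retrieval modules recited above can be illustrated with a small sketch. The following Python fragment is a hypothetical illustration, not the patented implementation: the names `cell_id`, `add_description_element`, and `search`, the 640x480 frame size, and the division of the frame into a 4x4 grid of regions are all assumptions made for the example.

```python
from collections import defaultdict

# One descriptor per moving object: a list of description elements,
# each recording the frame index and the divided region (grid cell)
# that held the object's position in that frame.
descriptors = defaultdict(list)

def cell_id(x, y, frame_w=640, frame_h=480, cols=4, rows=4):
    """Map a pixel position to the id of one of rows*cols divided regions."""
    return (y * rows // frame_h) * cols + (x * cols // frame_w)

def add_description_element(obj_id, frame_idx, x, y):
    """Descriptor management: append a description element for one frame."""
    descriptors[obj_id].append((frame_idx, cell_id(x, y)))

def search(region_ids):
    """Condition determination: return (object id, frame index) pairs whose
    description element falls in any of the requested divided regions."""
    return [(oid, f) for oid, elems in descriptors.items()
            for f, c in elems if c in region_ids]
```

A search condition is then simply a set of region ids, e.g. `search({0, 15})` returns every (object, frame) whose recorded position fell in region 0 or 15.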
A moving object identification module for identifying a moving object from the image frame to be processed when the image frame receiving module receives the image frame to be processed;
A position determination module for determining a position on the processing target image frame of each of the moving objects identified by the moving object identification module;
A descriptor management module for adding, for each of the moving objects identified by the moving object identification module, a description element including position information of the moving object on the processing target image frame to a descriptor created in a predetermined storage space for managing the position movement history of the moving object;
A search condition input module that receives a search condition including identification information of at least a part of the plurality of divided regions; And
A retrieval module for retrieving an image frame corresponding to the retrieval condition based on the description element included in the descriptor corresponding to each moving object generated in the storage space,
Wherein the position determination module comprises:
Wherein the center of gravity of the moving object is determined for each of the moving objects identified by the moving object identification module and the position of the center of gravity of the determined moving object is determined as the position of the moving object.
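As a concrete reading of this clause, the position of a moving object is the centroid (center of gravity) of its pixels. A minimal sketch, assuming the object is given as a boolean pixel mask (the function name and mask representation are assumptions for illustration):

```python
import numpy as np

def center_of_gravity(mask):
    """Position of a moving object = centroid of the (row, col)
    coordinates of its pixels, given a boolean pixel mask."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

# a 2x2 object occupying rows 1-2, columns 1-2 of a 4x4 frame
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
```

For the mask above, `center_of_gravity(mask)` yields `(1.5, 1.5)`, the middle of the object's bounding square.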
The moving object judging module comprises:
And determines, among the moving objects extracted from the image frame to be processed, a moving object that is separated by a predetermined distance or more from the moving objects extracted from the preceding image frame to be a new moving object.
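This determination can be sketched as a simple distance test. In the sketch below, `math.dist` (Python 3.8+) computes the Euclidean distance; the function name and the default threshold of 50 pixels are assumptions for illustration:

```python
import math

def is_new_moving_object(position, preceding_positions, min_distance=50.0):
    """A moving object in the processing target frame is determined to be
    new when it is separated by at least `min_distance` from every moving
    object extracted from the preceding image frame."""
    return all(math.dist(position, p) >= min_distance
               for p in preceding_positions)
```

Note that when the preceding frame contains no moving objects (e.g. the first frame), every identified object is new, since `all()` over an empty sequence is true.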
The moving picture search system comprising:
Further comprising a background learning module for performing a background learning on a learning target image frame, which is at least a part of image frames photographed by the video camera, to generate a background model,
Wherein the moving object identification module comprises:
And identifies the moving object from the image frame to be processed based on the generated background model.
The background learning module comprises:
A representative value of each pixel is calculated to generate the background model,
Wherein the moving object identification module comprises:
And determines that the pixel is a pixel included in the moving object when the difference between the pixel value and the representative value of the pixel is greater than or equal to a predetermined threshold value for each pixel constituting the image frame to be processed.
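The background learning and per-pixel classification described in these clauses can be sketched as follows. Using the per-pixel median as the representative value and a threshold of 30 grey levels are assumptions made for the example; the claim only requires some representative value and some predetermined threshold:

```python
import numpy as np

def learn_background(learning_frames):
    """Background learning: the background model stores a representative
    value (here the per-pixel median) over the learning target frames."""
    return np.median(np.stack(learning_frames), axis=0)

def moving_object_pixels(frame, background, threshold=30):
    """A pixel is determined to belong to a moving object when the
    difference between its value and the representative value is
    greater than or equal to `threshold`."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff >= threshold
```

With a fixed camera (claim 8), the median is stable against occasional passing objects, which is one common reason to prefer it over the mean as the representative value.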
A moving object identification module for identifying a moving object from the image frame to be processed when the image frame receiving module receives the image frame to be processed;
A position determination module for determining a position on the processing target image frame of each of the moving objects identified by the moving object identification module;
A descriptor management module for adding, for each of the moving objects identified by the moving object identification module, a description element including position information of the moving object on the processing target image frame to a descriptor created in a predetermined storage space for managing the position movement history of the moving object;
A search condition input module that receives a search condition including identification information of at least a part of the plurality of divided regions; And
And a condition determination module that determines a search target description element satisfying the search condition among description elements included in a descriptor corresponding to each moving object created in the storage space,
Wherein the position determination module comprises:
Wherein the center of gravity of the moving object is determined for each of the moving objects identified by the moving object identification module and the position of the center of gravity of the determined moving object is determined as the position of the moving object.
Wherein the video camera is a fixed camera for photographing a predetermined place.
A moving object identification module for identifying a moving object from the image frame to be processed when the image frame receiving module receives the image frame to be processed;
A position determination module for determining a position on the processing target image frame of each of the moving objects identified by the moving object identification module;
A descriptor management module for adding, for each of the moving objects identified by the moving object identification module, a description element including position information of the moving object on the processing target image frame to a descriptor created in a predetermined storage space for managing the position movement history of the moving object;
A search condition input module that receives a search condition including identification information of at least a part of the plurality of divided regions; And
A retrieval module for retrieving an image frame corresponding to the retrieval condition based on the description element included in the descriptor corresponding to each moving object generated in the storage space,
A moving object determining module that determines a new moving object that does not exist in a preceding image frame, which is a previous frame of the processing target image frame, among the moving objects identified by the moving object identifying module; And
And a descriptor generation module for generating a descriptor for the new moving object in the storage space.
A moving object identifying step of identifying a moving object from the image frame to be processed when the moving image searching system receives the image frame to be processed in the image frame receiving step;
The moving picture search system comprising: a position determination step of determining a position on the processing target image frame of each of the moving objects identified in the moving object identification step;
A descriptor management step in which the moving picture search system adds, for each of the moving objects identified in the moving object identification step, a description element including position information of the moving object on the processing target image frame to a descriptor created in a predetermined storage space for managing the movement history of the moving object;
The moving picture search system comprising: a search condition input step of receiving a search condition including identification information of at least a part of the plurality of divided areas; And
Wherein the moving picture search system includes a condition determination step of determining a search target description element that satisfies the search condition among the description elements included in the descriptor corresponding to each moving object created in the storage space,
Wherein the moving image search system comprises: a moving object determining step of determining a new moving object that is not present in a preceding image frame, which is a previous frame of the processing target image frame, among the moving objects identified in the moving object identifying step; And
Wherein the moving picture search system further comprises a descriptor generation step of generating a descriptor for the new moving object in the storage space.
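The method steps recited above (moving object determination, descriptor generation, descriptor management) compose into one pass per image frame. The sketch below assumes object positions have already been identified; the nearest-neighbour association used to reuse an existing object's id, and the 50-pixel threshold, are assumptions added to make the example runnable, not part of the claim:

```python
import math

def process_frame(frame_idx, positions, preceding, descriptors,
                  min_distance=50.0):
    """One per-frame pass of the claimed method.
    positions:   (y, x) object positions identified in this frame.
    preceding:   positions identified in the preceding image frame.
    descriptors: dict mapping object id -> list of description elements."""
    for pos in positions:
        # moving object determination step: new if far from every
        # object of the preceding frame
        is_new = all(math.dist(pos, p) >= min_distance for p in preceding)
        if is_new:
            # descriptor generation step: create a descriptor in storage
            obj_id = len(descriptors)
            descriptors[obj_id] = []
        else:
            # assumed association rule: reuse the nearest known object's id
            obj_id = min(descriptors,
                         key=lambda o: math.dist(pos, descriptors[o][-1][1]))
        # descriptor management step: add a description element holding
        # the frame index and the position on this frame
        descriptors[obj_id].append((frame_idx, pos))
    return descriptors
```

Running this over consecutive frames accumulates, per object, the position movement history that the search steps later query.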
Wherein the moving picture search system further comprises a searching step of searching for an image frame corresponding to the search condition based on the search target description element.
A moving object identifying step of identifying a moving object from the image frame to be processed when the moving image searching system receives the image frame to be processed in the image frame receiving step;
The moving picture search system comprising: a position determination step of determining a position on the processing target image frame of each of the moving objects identified in the moving object identification step;
A descriptor management step in which the moving picture search system adds, for each of the moving objects identified in the moving object identification step, a description element including position information of the moving object on the processing target image frame to a descriptor created in a predetermined storage space for managing the movement history of the moving object;
The moving picture search system comprising: a search condition input step of receiving a search condition including identification information of at least a part of the plurality of divided areas; And
Wherein the moving picture search system searches for an image frame corresponding to the search condition based on a description element included in a descriptor corresponding to each moving object created in the storage space,
The position determination step may include:
Determining the center of gravity of the moving object for each of the moving objects identified in the moving object identification step, and determining the position of the center of gravity of the determined moving object as the position of the moving object.
The moving object determination step may include:
Determining, among the moving objects extracted from the image frame to be processed, a moving object that is separated by a predetermined distance or more from the moving objects extracted from the preceding image frame to be a new moving object.
The moving picture search method includes:
The moving picture search system further includes a background learning step of performing a background learning on a learning target image frame that is at least a part of image frames photographed by the video camera to generate a background model,
Wherein the moving object identification step comprises:
And identifying a moving object from the image frame to be processed based on the generated background model.
The background learning step comprises:
And generating a background model by calculating a representative value of each pixel,
Wherein the moving object identification step comprises:
Determining that the pixel is a pixel included in the moving object when, for each pixel constituting the image frame to be processed, the difference between the pixel value and the representative value of the pixel is greater than or equal to a predetermined threshold value.
A moving object identifying step of identifying a moving object from the image frame to be processed when the moving image searching system receives the image frame to be processed in the image frame receiving step;
The moving picture search system comprising: a position determination step of determining a position on the processing target image frame of each of the moving objects identified in the moving object identification step;
A descriptor management step in which the moving picture search system adds, for each of the moving objects identified in the moving object identification step, a description element including position information of the moving object on the processing target image frame to a descriptor created in a predetermined storage space for managing the movement history of the moving object;
The moving picture search system comprising: a search condition input step of receiving a search condition including identification information of at least a part of the plurality of divided areas; And
Wherein the moving picture search system includes a condition determination step of determining a search target description element that satisfies the search condition among the description elements included in the descriptor corresponding to each moving object created in the storage space,
The position determination step may include:
Determining the center of gravity of the moving object for each of the moving objects identified in the moving object identification step, and determining the position of the center of gravity of the determined moving object as the position of the moving object.
Wherein the video camera is a fixed camera for photographing a predetermined place.
A moving object identifying step of identifying a moving object from the image frame to be processed when the moving image searching system receives the image frame to be processed in the image frame receiving step;
The moving picture search system comprising: a position determination step of determining a position on the processing target image frame of each of the moving objects identified in the moving object identification step;
A descriptor management step in which the moving picture search system adds, for each of the moving objects identified in the moving object identification step, a description element including position information of the moving object on the processing target image frame to a descriptor created in a predetermined storage space for managing the movement history of the moving object;
The moving picture search system comprising: a search condition input step of receiving a search condition including identification information of at least a part of the plurality of divided areas; And
Wherein the moving picture search system searches for an image frame corresponding to the search condition based on a description element included in a descriptor corresponding to each moving object created in the storage space,
Wherein the moving image search system comprises: a moving object determining step of determining a new moving object that is not present in a preceding image frame, which is a previous frame of the processing target image frame, among the moving objects identified in the moving object identifying step; And
Wherein the moving picture search system further comprises a descriptor generation step of generating a descriptor for the new moving object in the storage space.
A processor; And
A memory for storing a computer program executed by the processor,
Wherein the computer program, when executed by the processor, causes the moving picture retrieval system to perform the method according to any one of claims 10 to 18.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160017207A KR101826669B1 (en) | 2016-02-15 | 2016-02-15 | System and method for video searching |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020160017207A KR101826669B1 (en) | 2016-02-15 | 2016-02-15 | System and method for video searching |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170095599A KR20170095599A (en) | 2017-08-23 |
KR101826669B1 true KR101826669B1 (en) | 2018-03-22 |
Family
ID=59759388
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020160017207A KR101826669B1 (en) | 2016-02-15 | 2016-02-15 | System and method for video searching |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101826669B1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20210011707A (en) | 2019-07-23 | 2021-02-02 | 서강대학교산학협력단 | A CNN-based Scene classifier with attention model for scene recognition in video |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101951232B1 (en) * | 2018-09-12 | 2019-02-22 | (주)휴진시스템 | A High Quality CCTV Image System Using Separated Storage of Object Area and Adaptive Background Image |
CN115186119B (en) * | 2022-09-07 | 2022-12-06 | 深圳市华曦达科技股份有限公司 | Picture processing method and system based on picture and text combination and readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100777199B1 (en) * | 2006-12-14 | 2007-11-16 | 중앙대학교 산학협력단 | Apparatus and method for tracking of moving target |
KR101074850B1 (en) | 2010-08-09 | 2011-10-19 | 이정무 | Serch system of images |
2016-02-15: KR application KR1020160017207A granted as patent KR101826669B1 (active, IP Right Grant)
Also Published As
Publication number | Publication date |
---|---|
KR20170095599A (en) | 2017-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Beery et al. | Context r-cnn: Long term temporal context for per-camera object detection | |
Huang et al. | Intelligent intersection: Two-stream convolutional networks for real-time near-accident detection in traffic video | |
WO2019218824A1 (en) | Method for acquiring motion track and device thereof, storage medium, and terminal | |
Arroyo et al. | Expert video-surveillance system for real-time detection of suspicious behaviors in shopping malls | |
Zhang et al. | A new method for violence detection in surveillance scenes | |
Tian et al. | Robust detection of abandoned and removed objects in complex surveillance videos | |
Bertini et al. | Multi-scale and real-time non-parametric approach for anomaly detection and localization | |
Zabłocki et al. | Intelligent video surveillance systems for public spaces–a survey | |
US10824935B2 (en) | System and method for detecting anomalies in video using a similarity function trained by machine learning | |
US10970823B2 (en) | System and method for detecting motion anomalies in video | |
Lin et al. | Visual-attention-based background modeling for detecting infrequently moving objects | |
Fradi et al. | Spatial and temporal variations of feature tracks for crowd behavior analysis | |
KR101826669B1 (en) | System and method for video searching | |
Mousse et al. | People counting via multiple views using a fast information fusion approach | |
KR101492059B1 (en) | Real Time Object Tracking Method and System using the Mean-shift Algorithm | |
Sandifort et al. | An entropy model for loiterer retrieval across multiple surveillance cameras | |
Shi et al. | Saliency-based abnormal event detection in crowded scenes | |
EP4332910A1 (en) | Behavior detection method, electronic device, and computer readable storage medium | |
Mantini et al. | Camera Tampering Detection using Generative Reference Model and Deep Learned Features. | |
CN113869163B (en) | Target tracking method and device, electronic equipment and storage medium | |
Narayan et al. | Learning deep features for online person tracking using non-overlapping cameras: A survey | |
Seidenari et al. | Non-parametric anomaly detection exploiting space-time features | |
Yang et al. | Visual detection and tracking algorithms for human motion | |
Balasubramanian et al. | Forensic video solution using facial feature‐based synoptic Video Footage Record | |
Constantinou et al. | Spatial keyframe extraction of mobile videos for efficient object detection at the edge |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
GRNT | Written decision to grant |