WO2020027513A1 - Syntax-based image analysis system for compressed image, and interworking processing method - Google Patents

Syntax-based image analysis system for compressed image, and interworking processing method

Info

Publication number
WO2020027513A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
moving object
analysis system
object region
unique
Prior art date
Application number
PCT/KR2019/009374
Other languages
French (fr)
Korean (ko)
Inventor
이현우
정승훈
이성진
Original Assignee
이노뎁 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 이노뎁 주식회사 filed Critical 이노뎁 주식회사
Publication of WO2020027513A1 publication Critical patent/WO2020027513A1/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present invention relates to a technique for improving the processing performance of a plurality of compressed images in a CCTV control system.
  • more particularly, the present invention relates to a technique for improving the processing performance of compressed images in an image analysis system: instead of object identification and behavior recognition through complex image processing of the entire compressed image as in the conventional art, it uses the syntax of the compressed image (e.g., motion vector, coding type) to extract the regions in which meaningful movement exists in the image, that is, moving object regions, and analyzes the extracted result in conjunction with the image analysis system.
  • the reality is that the number of monitoring personnel is very small compared to the number of installed CCTV cameras. To perform video surveillance effectively with such a limited number of people, simply displaying CCTV images on monitor screens is not enough; it is preferable to detect the movement of objects present in each CCTV image and display something in the corresponding area in real time so that objects can be detected effectively. In this case, the monitoring personnel do not watch the entire CCTV image with uniform attention, but monitor the CCTV image centering on the parts where objects move.
  • the video surveillance system adopts compressed video for the sake of storage-space efficiency.
  • high-compression complex video compression technologies such as H.264 AVC and H.265 HEVC have been adopted.
  • CCTV cameras generate and provide compressed images according to these technical specifications, and the video control system decodes the compressed images to reverse this process.
  • after decoding a compressed image to obtain the reproduced image, that is, the decompressed original image, a further image processing step is required.
  • a video decoding apparatus includes a parser 11, an entropy decoder 12, an inverse transformer 13, a motion vector operator 14, a predictor 15, and a deblocking filter 16.
  • These hardware modules process compressed video sequentially to decompress and restore the original video data.
  • the parser 11 parses the motion vector and the coding type for the coding unit of the compressed image.
  • Such a coding unit is generally an image block such as a macroblock or a subblock.
  • FIG. 2 is a flowchart illustrating a process of extracting a moving object by analyzing a compressed image and performing object classification and event identification through the conventional art.
  • the compressed image is decoded according to H.264 AVC, H.265 HEVC, or the like (S10), and the frame images of the reproduced image are downscaled to a small size, for example 320×240 (S20).
  • the reason for this downscaling (resizing) is to reduce the processing burden in the subsequent steps.
  • the moving objects are extracted through a complicated image analysis process, and the coordinates of the moving objects are calculated (S30).
  • object classification and event identification are performed using the extracted object thumbnail images and coordinates of the moving objects (S40).
  • An object of the present invention is to provide a technique for improving the processing performance of a plurality of compressed images in a CCTV control system.
  • an object of the present invention is to provide a technique for improving the processing performance of compressed images in an image analysis system by extracting, based on the syntax of the compressed image (e.g., motion vector, coding type), the regions in which meaningful movement exists in the image, that is, moving object regions, without the object identification and behavior recognition through complex image processing of the entire compressed image required in the prior art, and by analyzing the extracted result in conjunction with the image analysis system.
  • a syntax-based image analysis system and an interworking processing method for a compressed image include a first step of parsing a bitstream of a compressed image to obtain a motion vector and a coding type for a coding unit.
  • in addition, the present invention may further include a step, performed between the sixth step and the seventh step, in which the image analysis system sorts the thumbnail images and coordinate information on the basis of the unique ID and performs image analysis processing in units of unique IDs, thereby carrying out object classification and event identification for the moving object regions.
  • the fifth step may include: a 51st step of determining, for each moving object region, whether a moving object region of the same object exists in the previous frame, based on calculating the degree of overlap of image blocks between moving object regions; a 52nd step of determining, according to the result of the 51st step, whether a unique ID is pre-assigned for each moving object region; a 53rd step of maintaining the pre-assigned unique ID for a moving object region found to be in the unique-ID-assigned state; a 54th step of newly assigning a unique ID to a moving object region found to be in the unique-ID-unassigned state; and a 55th step of, when a moving object region that was assigned a unique ID in the previous frame but has disappeared from the current frame image is identified, revoking the unique ID assigned to the disappeared moving object region.
  • the present invention may further include, performed between the fourth step and the fifth step: a) a step of identifying a plurality of image blocks adjacent to a moving object region (hereinafter, 'neighbor blocks'); b) a step of comparing the motion vector values of the plurality of neighbor blocks with a preset second threshold; c) a step of additionally marking a neighbor block whose motion vector value exceeds the second threshold as a moving object region; and d) a step of additionally marking a neighbor block whose coding type is intra picture as a moving object region;
  • the method may further include an e) step of performing interpolation on the plurality of moving object regions to additionally mark, as moving object regions, a predetermined number or fewer of unmarked image blocks surrounded by the moving object regions.
  • the computer program according to the present invention is stored in a medium, in combination with hardware, to execute the interworking processing method of the syntax-based image analysis system for compressed images as described above.
  • according to the present invention, by quickly identifying moving object regions from the syntax of the compressed image without performing complicated image processing on the entire compressed image as in the prior art, only the identified portions are selectively processed in association with the image analysis system.
  • FIG. 1 is a block diagram showing a general configuration of a video decoding apparatus.
  • FIG. 2 is a flowchart illustrating a process of performing object classification and event identification by analyzing a compressed image in the prior art.
  • FIG. 3 is a view illustrating a concept in which an image analysis system and an object region identification device interoperate in the present invention.
  • FIG. 4 is a flowchart illustrating a process of interworking with an image analysis system based on compressed image syntax according to the present invention.
  • FIG. 5 is a flowchart illustrating an example of a process of detecting effective motion from a compressed image in the present invention.
  • FIG. 6 is a diagram illustrating an example of a result of applying an effective motion region detection process according to the present invention to a CCTV compressed image.
  • FIG. 7 is a flowchart illustrating an example of a process of detecting a boundary region for a moving object region in the present invention.
  • FIG. 8 is a diagram illustrating an example of a result of applying a boundary region detection process to the CCTV image of FIG. 6.
  • FIG. 9 is a diagram illustrating an example of a result of arranging a moving object region through interpolation with respect to the CCTV image of FIG. 8.
  • FIG. 10 is a diagram illustrating an example in which a unique ID is assigned to a moving object region in the present invention.
  • FIG. 11 is a diagram illustrating an example of deriving a thumbnail image for a moving object region in the present invention.
  • FIG. 12 is a diagram illustrating an example in which location information and size information are identified for a moving object region in the present invention.
  • FIG. 3 is a diagram illustrating a concept in which an image analysis system and an object region identification apparatus interoperate in the present invention.
  • the present invention is a technique for effectively processing a compressed image transmitted from a CCTV camera 100.
  • CCTV surveillance systems collect high-quality captured images from hundreds to tens of thousands of CCTV cameras 100, compressed with complex image compression algorithms (e.g., H.264 AVC, H.265 HEVC).
  • the processing burden on the image analysis server is very high, and the maximum number of CCTV channels that one server can accommodate is usually only 16.
  • the present invention increases the overall processing performance by linking the object region identification apparatus 200 to the general image analysis system 300.
  • the image analysis system 300 analyzes the contents of the image in the manner used in the related art, recognizes objects, and identifies events (e.g., an offender loitering, climbing over a wall, a crime, a fight, etc.) according to the result of analyzing the behavior of the objects.
  • the image analysis system 300 in the present invention is not limited to operating according to conventional image analysis technology, although it may do so.
  • the object region identification apparatus 200 extracts, based on the syntax of the compressed image (e.g., motion vector, coding type), the regions in which meaningful movement exists in the image, that is, moving object regions, and then adopts a structure in which the thumbnail images, location information, and the like are analyzed and processed in conjunction with the image analysis system 300. Through this, the amount of data the image analysis system 300 must handle to recognize objects, analyze their behavior, and identify events is significantly reduced, improving overall performance.
  • the object region identification apparatus 200 parses the bitstream of the compressed image without having to decode it, and obtains syntax information, for example motion vectors, for each image block, that is, for macroblocks and subblocks.
  • the moving object regions are quickly extracted from the motion vector and coding type information.
  • the moving object region thus obtained does not accurately reflect the boundary of the moving object, but its processing is about 20 times faster than full image analysis while still showing a certain level of reliability as to the existence of significant movement.
  • the object area identification apparatus 200 quickly filters most of the compressed images based on the syntax, thereby reducing the burden on the image analysis system 300 and increasing the overall processing performance.
  • since the moving object region extracted by the object region identification apparatus 200 is merely a lump of image blocks estimated to contain a moving object, there is a limit to what can be determined from it alone: the characteristics of movement in the image are distinguished, but the contents of the image are not judged. Accordingly, the moving object region identification information and the thumbnail image or position information of the moving object region acquired by the object region identification apparatus 200 are transmitted to the image analysis system 300.
  • the image analysis system 300 performs image analysis on the thumbnail images derived from the series of frame images constituting the compressed image, for example classifying objects by judging the image contents of the moving object regions and identifying events.
  • when the compressed image is transmitted as described above, the object region identification apparatus 200 quickly picks out the image chunks that appear important because something is moving in them, and the image analysis system 300 analyzes the contents of the selected image chunks to classify the objects (e.g., people, cars, animals, etc.) and identify events.
  • the object region identification apparatus 200 receives the image analysis result, that is, the object classification result and the event identification result, from the image analysis system 300, and provides it so that the monitoring personnel can utilize the image analysis result.
  • the object region identification apparatus 200 does not need to decode the compressed image in the process of extracting the moving object region.
  • this does not mean that an apparatus or software to which the present invention is applied must never perform the operation of decoding the compressed image; the scope of the present invention is not limited in that respect.
  • an operation of partially or entirely decoding the compressed image may also be performed.
  • the image analysis system 300 may be a system for performing a conventional image analysis process, but the scope of the present invention is not limited thereto.
  • FIG. 4 is a flowchart illustrating a process of interworking with an image analysis system based on compressed image syntax according to the present invention.
  • Step S100: Effective motion, that is, motion whose meaning can substantially be recognized, is detected from the compressed image based on its motion vectors, and the image regions in which effective motion is detected are set as moving object regions.
  • data of a compressed image is parsed according to video compression standards such as H.264 AVC and H.265 HEVC to obtain a motion vector and a coding type for a coding unit.
  • the size of the coding unit is generally between 64×64 and 4×4 pixels and may be set flexibly.
  • the motion vectors are accumulated over a predetermined time period (for example, 500 msec) for each image block, and it is checked whether the motion vector accumulation value exceeds a predetermined first threshold (for example, 20 pixels). If such an image block is found, it is considered that effective motion has been found in that image block, and the block is marked as a moving object region. On the other hand, even if a motion vector is generated, if the cumulative value over the predetermined time does not exceed the first threshold, the image change is regarded as insignificant and ignored.
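The accumulate-and-threshold logic of step S100 can be sketched as follows. This is a minimal illustrative Python sketch, not code from the patent: the (x, y) block keys, the per-block list of vectors, and the Euclidean-magnitude accumulation are assumptions, and 20 pixels is simply the example threshold mentioned above.

```python
def detect_effective_motion(mv_samples, threshold_px=20):
    """mv_samples: dict mapping block index (x, y) -> list of (dx, dy)
    motion vectors observed inside the accumulation window (e.g. 500 ms).
    Returns the set of blocks marked as moving object regions."""
    marked = set()
    for block, vectors in mv_samples.items():
        # Accumulate motion magnitude over the window for this block.
        accumulated = sum((dx * dx + dy * dy) ** 0.5 for dx, dy in vectors)
        if accumulated > threshold_px:
            marked.add(block)  # effective motion found -> moving object region
        # else: cumulative motion too small over the window -> ignored as noise
    return marked
```

With this sketch, a block that moved 5 pixels per sample over five samples (25 accumulated) would be marked, while one that moved 1 pixel per sample over three samples (3 accumulated) would be ignored.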
  • Step S200: The extent of the boundary region around each moving object region detected in S100 is determined based on the motion vector and the coding type. A plurality of adjacent image blocks centered on each image block marked as a moving object region are inspected; if a block's motion vector exceeds a second threshold (for example, 0) or its coding type is intra picture, that image block is also marked as a moving object region. Through this process, the corresponding image block effectively forms a lump together with the moving object region detected in S100.
  • that is, when a certain amount of motion is found in a block in the vicinity of a moving object region where effective motion was detected, the block is marked as a moving object region because it is likely to form one mass with that region.
  • for an intra picture, no motion vector exists, so a determination based on a motion vector is impossible. Accordingly, an intra-coded block located adjacent to an image block already detected as a moving object region is estimated to form one mass together with the previously extracted moving object region.
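The boundary-expansion stage (S200) described above might be sketched like this. The 8-connected neighbourhood, the dictionary-based inputs, and the function name are illustrative assumptions; the second threshold defaults to the example value 0 given in the text.

```python
def expand_boundary(marked, mv_value, coding_type, second_threshold=0):
    """marked: set of (x, y) blocks from step S100.
    mv_value: dict block -> motion-vector magnitude in the current frame.
    coding_type: dict block -> 'intra' or 'inter'.
    Returns the marked set expanded with qualifying neighbour blocks."""
    additions = set()
    for (x, y) in marked:
        # Inspect the 8-connected neighbours around each marked block.
        for nx, ny in [(x + dx, y + dy) for dx in (-1, 0, 1)
                       for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]:
            if (nx, ny) in marked:
                continue
            # Mark if the neighbour shows motion above the second threshold...
            if mv_value.get((nx, ny), 0) > second_threshold:
                additions.add((nx, ny))
            # ...or if it is intra-coded, so no motion vector is available.
            elif coding_type.get((nx, ny)) == 'intra':
                additions.add((nx, ny))
    return marked | additions
```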
  • Step S300: Interpolation is applied to the moving object regions detected in S100 and S200 to clean up fragmentation of the moving object regions.
  • since the moving-object determination is made in units of image blocks, even for what is actually one moving object (for example, a person), some image blocks in the middle may remain unmarked, and a single object may appear divided into several moving object regions. Accordingly, if one or a few unmarked image blocks are surrounded by image blocks marked as moving object regions, they are additionally marked as moving object regions. In this way, a moving object region divided into several pieces can be merged into one. The effect of such interpolation is clearly seen when comparing FIG. 8 and FIG. 9.
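The interpolation step can be illustrated with its simplest case: a single unmarked block whose four in-grid neighbours are all marked. The grid-dimension parameters and the restriction to one-block holes are simplifying assumptions (the patent speaks of "a predetermined number or less" of enclosed blocks).

```python
def fill_single_block_holes(marked, grid_w, grid_h):
    """Sketch of step S300: mark any unmarked block whose four 4-connected
    neighbours all lie inside the grid and are all already marked, thereby
    merging fragmented moving object regions."""
    filled = set(marked)
    for x in range(grid_w):
        for y in range(grid_h):
            if (x, y) in filled:
                continue
            neighbours = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
            in_grid = [n for n in neighbours
                       if 0 <= n[0] < grid_w and 0 <= n[1] < grid_h]
            # Only fully enclosed single-block holes are filled.
            if len(in_grid) == 4 and all(n in marked for n in in_grid):
                filled.add((x, y))  # hole inside an object -> mark it
    return filled
```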
  • through steps S100 to S300, moving object regions are quickly extracted from each frame image based on the syntax (motion vector, coding type) of the compressed image.
  • however, the moving object region derived through this process is merely a mass of image blocks in which something appears to be moving in each frame image; it carries no concept of an object that is consistently recognized as the frames of the compressed image progress.
  • Step S400: Unique IDs are managed for the moving object regions extracted from the compressed image based on its syntax.
  • the moving object region is derived from each image frame constituting the compressed image.
  • at this stage the image content is not analyzed; chunks of the image that appear to be moving are merely extracted from each image frame. Accordingly, by assigning and managing a unique ID for each moving object region, object-like attributes are created for it. Through this, a moving object region can be treated as an object rather than a mere region, and the movement of a specific object can be interpreted across a series of frame images in the compressed image.
  • unique ID management of moving object regions handles the following three cases: (S410) a moving object region that was assigned a unique ID in a previous frame is identified again in the current frame image; (S420) a moving object region that has no unique ID because it was not identified in the previous frame is identified in the current frame image and is newly assigned a unique ID; and (S430) a moving object region that was assigned a unique ID in the previous frame but has disappeared from the current frame image is identified, and its allocated unique ID is revoked.
  • since image blocks are marked as moving object regions without checking the contents of the original image, it cannot be strictly confirmed that the chunks of a moving object region in the preceding and following image frames are actually the same object. That is, because the contents of the moving object region are unknown, a change such as a cat being replaced by a dog at the same point between the front and rear frames could not be identified. However, considering that the time interval between frames is very short and that the objects observed by a CCTV camera move at normal speeds, this possibility can be excluded in practice.
  • accordingly, the present invention estimates that two chunks represent the same object when the ratio or number of image blocks overlapping between the moving object regions of the front and rear frames is equal to or greater than a predetermined threshold. With this approach, even without knowing the contents of the image, it is possible to determine whether a previously identified moving object region has moved, whether a new moving object region has appeared, or whether an existing moving object region has disappeared. This judgment is lower in accuracy than the prior art, but it greatly increases data processing speed, which is advantageous in practical applications.
  • in step S410, the previously allocated unique ID is kept allocated to the corresponding moving object region.
  • this identification may be marked in the unique ID management database.
  • in step S420, when a moving object region that was not identified in the previous frame is identified in the current frame image, a new unique ID is assigned to the corresponding moving object region. This means that a new moving object has appeared in the image. FIG. 10 illustrates an example in which unique IDs are allocated to three moving object regions derived from a CCTV image.
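The three unique-ID cases (S410 to S430) can be sketched as an overlap-based matching pass over the regions of consecutive frames. The 0.5 overlap ratio, the greedy one-to-one matching, and the representation of regions as block sets are illustrative assumptions, not values fixed by the patent.

```python
def update_unique_ids(prev_regions, curr_regions, next_id, overlap_ratio=0.5):
    """prev_regions: dict uid -> set of (x, y) blocks in the previous frame.
    curr_regions: list of block sets detected in the current frame.
    Returns (dict uid -> blocks for the current frame, next unused id)."""
    assigned = {}
    used_prev = set()
    for region in curr_regions:
        match = None
        for uid, prev in prev_regions.items():
            if uid in used_prev:
                continue
            overlap = len(region & prev)
            # S410: treat as the same object if enough blocks overlap.
            if overlap >= overlap_ratio * min(len(region), len(prev)):
                match = uid
                break
        if match is not None:
            assigned[match] = region     # keep the pre-assigned unique ID
            used_prev.add(match)
        else:
            assigned[next_id] = region   # S420: newly appeared object
            next_id += 1
    # S430: IDs absent from the current frame are implicitly revoked,
    # because they simply do not appear in `assigned`.
    return assigned, next_id
```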
  • Step S500: Next, a thumbnail image and coordinate information (e.g., location information and size information) are derived for each moving object region.
  • FIG. 11 is a diagram illustrating an example in which thumbnail images are derived for the three moving object regions of a CCTV captured image in the present invention, and FIG. 12 is a diagram illustrating an example in which position information and size information are identified as coordinate information for these moving object regions.
  • the object region identification apparatus 200 may be configured to have a function of decoding a compressed image or selectively decoding a portion of the compressed image. Meanwhile, when the object region identification apparatus 200 transmits the location information to the image analysis system 300, the image analysis system 300 may be configured to obtain a thumbnail image of the moving object region therefrom.
  • however, since this would require substantial changes to the internal process of the image analysis system 300 compared with the prior art, it is judged not to be a very preferable approach; rather, having the object region identification apparatus 200 generate the thumbnail image is more preferable.
  • the position information of a moving object region means the position where the moving object region is disposed within the corresponding frame image. The upper-left coordinate of the rectangle optimally surrounding the moving object region may be used, or the center point of that rectangle may also be used as the location information.
  • the size information may use a rectangular size that optimally surrounds the moving object region as shown in FIG. 12.
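Deriving coordinate information as described here reduces to computing the tightest enclosing rectangle of a block set. A minimal sketch, assuming 16-pixel macroblocks and (x, y) block indices (both illustrative assumptions):

```python
def region_coordinates(region, block_px=16):
    """Sketch of step S500: derive position (top-left corner of the tightest
    enclosing rectangle, in pixels) and size (width, height in pixels) for a
    moving object region given as a set of (x, y) block indices."""
    xs = [x for x, _ in region]
    ys = [y for _, y in region]
    left, top = min(xs) * block_px, min(ys) * block_px
    width = (max(xs) - min(xs) + 1) * block_px
    height = (max(ys) - min(ys) + 1) * block_px
    return (left, top), (width, height)
```

The returned top-left coordinate corresponds to the position information and the (width, height) pair to the size information of FIG. 12.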
  • the object region identification apparatus 200 performs unique ID management on the moving object regions, and through this the moving object regions in the compressed image come to carry the concept of an object rather than simply that of a region. Therefore, the image analysis system 300 may treat the series of moving object region identification information provided by the object region identification apparatus 200 under the concept of objects.
  • the image analysis system 300 sorts the plurality of moving object region identification information items (thumbnail images, coordinate information) transmitted from the object region identification apparatus 200 on the basis of the unique ID and performs image analysis in units of unique IDs. Through such image analysis, the contents of each moving object region can be recognized per unique ID, thereby obtaining an object classification result as to what the object is (e.g., a person, car, animal, etc.) and an event identification result as to how the object is acting in the image (e.g., an offender loitering, climbing over a wall, a crime, a fight, etc.). In this case, the image analysis system 300 does not perform image analysis on the entire compressed image, but only on the series of moving object regions derived from it by the object region identification apparatus 200, so its processing burden is significantly lower than in the prior art.
  • the object region identification apparatus 200 extracts the regions in which something is moving in the image using the syntax information of the compressed image, for example, the motion vector and coding type.
  • this process is conceptually characterized in that it does not recognize the moving object by interpreting the contents of the image, but extracts lumps of image blocks assumed to contain a moving object without knowing those contents.
  • FIG. 5 is a flowchart illustrating an example of a process of detecting effective motion from a compressed image in the present invention, and FIG. 6 is a diagram illustrating an example of the result of applying the effective motion region detection process according to the present invention to a CCTV compressed image.
  • the process of FIG. 5 corresponds to step S100 in FIG. 4.
  • Step S110: First, the coding unit of the compressed image is parsed to obtain the motion vector and coding type.
  • a video decoding apparatus performs parsing (header parsing) and motion vector operations on a stream of compressed video according to a video compression standard such as H.264 AVC and H.265 HEVC. Through this process, the motion vector and coding type are parsed for the coding unit of the compressed image.
  • Step S120: A motion vector cumulative value is acquired over a preset time (for example, 500 ms) for each of the plurality of image blocks constituting the compressed image.
  • the intention of this step is to detect whether there are effective movements that are practically recognizable in the compressed image, such as driving cars, running people, or fighting crowds. Shaking leaves, ghost artifacts that appear momentarily, and shadows that change slightly due to light reflection are moving, but are practically meaningless and should not be detected.
  • a motion vector cumulative value is obtained by accumulating a motion vector in units of one or more image blocks for a predetermined time period (for example, 500 msec).
  • the image block is used as a concept including a macroblock and a subblock.
  • Steps S130 and S140: The motion vector cumulative values of the plurality of image blocks are compared with a preset first threshold (for example, 20 pixels), and each image block whose cumulative value exceeds the first threshold is marked as a moving object region.
  • when an image block whose motion vector accumulation value exceeds the threshold is found as described above, it is considered that significant movement, that is, effective motion, has been found in that image block, and the block is marked as a moving object region. For example, the aim is to detect object movements worth the attention of the monitoring personnel, roughly to the extent of a person running. Conversely, even if motion vectors are generated, when the cumulative value over the predetermined time is small enough not to exceed the first threshold, the change in the image is regarded as small and insignificant and is ignored in the detection step.
  • FIG. 6 is an example illustrating a result of detecting an effective motion region from a CCTV compressed image through the process of FIG. 5.
  • an image block having a motion vector accumulation value equal to or greater than a first threshold is marked as a moving object area and displayed as a bold line area.
  • the sidewalk blocks, the road, and the shadowed parts are not displayed as moving object regions, while the walking people and the driving cars are displayed as moving object regions.
  • FIG. 7 is a flowchart illustrating an example of a process of detecting the boundary region of a moving object region in the present invention, and FIG. 8 is a diagram illustrating an example of the result of further applying the boundary region detection process of FIG. 7 to the CCTV image of FIG. 6.
  • the process of FIG. 7 corresponds to step S200 in FIG. 4.
  • In FIG. 6, marking is not performed completely on the objects actually moving in the image; only parts of them are marked. In other words, looking at a walking person or a moving car, one finds that not the whole object but only some of its blocks are marked. Moreover, although there is actually only one moving object, it is often marked as if it were a plurality of moving object regions. This means that the moving object region criterion adopted in step S100 above is useful for filtering out static regions but is a very strict condition. Therefore, it is necessary to detect the boundary of each moving object by examining the surroundings of the moving object regions.
  • Step S210: first, a plurality of image blocks adjacent to the image blocks marked as moving object regions in the preceding step S100 are identified. In the present specification, these are referred to as 'neighbor blocks'. These neighbor blocks are portions that were not marked as moving object regions in S100; the process of FIG. 7 examines them further to determine whether any of them should be included in the boundary of a moving object region.
  • Steps S220 and S230: the motion vector value of each of the plurality of neighbor blocks is compared with a preset second threshold (e.g., 0), and each neighbor block whose motion vector value exceeds the second threshold is marked as a moving object region. If a block is located adjacent to a region in which practically meaningful effective motion has been found, and some amount of motion is found in the block itself, then owing to the characteristics of captured images the block is likely to form one body with the adjacent moving object region. Therefore, such neighbor blocks are also marked as moving object regions.
  • Step S240: in addition, each neighbor block among the plurality of neighbor blocks whose coding type is intra picture is marked as a moving object region.
  • For an intra picture, no motion vector exists, so it is fundamentally impossible to determine from the motion vector whether motion exists in the corresponding neighbor block. In this case, it is safer to treat an intra-coded block located adjacent to an image block already detected as a moving object region as part of the previously extracted moving object region.
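Steps S210 to S240 can be sketched as a single neighbor-block sweep. This is an illustrative sketch under assumed data shapes (sets of block coordinates, per-block motion magnitudes, coding-type labels); only the decision rules come from the text: a neighbor is additionally marked when its motion vector exceeds the second threshold (e.g., 0) or when its coding type is intra picture.

```python
# Sketch of the boundary-expansion step: examine the 8-neighbours of every
# marked block and additionally mark a neighbour when it has non-zero motion
# or when it is intra-coded (no motion vector available to judge by).

SECOND_THRESHOLD = 0.0  # example value from the text

def expand_boundary(marked, mv_mag, coding_type):
    """marked: set of (x, y) blocks from the first pass.
    mv_mag: dict block -> motion-vector magnitude in the current frame.
    coding_type: dict block -> 'intra' or 'inter'."""
    added = set()
    for (x, y) in marked:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nb = (x + dx, y + dy)
                if nb in marked or (dx == 0 and dy == 0):
                    continue
                if mv_mag.get(nb, 0.0) > SECOND_THRESHOLD:
                    added.add(nb)   # some motion next to a moving region
                elif coding_type.get(nb) == 'intra':
                    added.add(nb)   # no MV to judge by: keep it, to be safe
    return marked | added

marked = {(5, 5)}
mv_mag = {(5, 6): 2.0, (5, 4): 0.0}   # (5, 4) shows zero motion
coding_type = {(4, 5): 'intra'}
print(sorted(expand_boundary(marked, mv_mag, coding_type)))
# [(4, 5), (5, 5), (5, 6)] -- the zero-motion inter neighbour stays unmarked
```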
  • FIG. 8 is a diagram visually showing a result of applying a boundary region detection process to a CCTV compressed image.
  • a plurality of image blocks marked as a moving object region through the above process are indicated by a bold line.
  • Compared with FIG. 6, the moving object regions have been extended into the vicinity of the bold-line areas, and comparison with the actual CCTV footage shows that the marked regions are now large enough to cover the moving objects.
  • FIG. 9 is a diagram illustrating an example of the result of arranging the moving object regions through interpolation according to the present invention, for the CCTV image of FIG. 8 to which the boundary region detection process has been applied.
  • Step S300 is a process of tidying up the fragmentation of the moving object regions by applying interpolation to the moving object regions detected in the preceding steps S100 and S200.
  • In FIG. 8, unmarked image blocks are found between the moving object regions indicated by the bold lines. If an unmarked image block lies in the middle, the surrounding regions may be regarded as a plurality of individual moving objects. When a moving object region is fragmented in this way, the result of step S500 may be inaccurate, and the increased number of moving object regions complicates the processing of steps S500 to S700.
  • Accordingly, in the present invention, if one or a few unmarked image blocks are surrounded by a plurality of image blocks marked as moving object regions, they are also marked as moving object regions; this is called interpolation.
  • After interpolation, in contrast to FIG. 8, all of the unmarked image blocks lying between the moving object regions are marked as moving object regions.
  • Through the boundary region detection process and the interpolation process, the moving object regions come to properly reflect the situation of the actual image.
  • If the bold-line areas of FIG. 6 were used as they are, a large number of very small objects would appear to be moving on the screen, which does not correspond to reality.
  • In contrast, the blocks marked as bold-line areas in FIG. 9 are treated as a few moving objects of a certain size, which reflects the actual scene much more closely.
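The interpolation step can be sketched as a hole-filling pass. "Surrounded" is taken here as all four orthogonal neighbours being marked, which is an assumption; the text only says one or a few unmarked blocks surrounded by marked blocks are marked in turn.

```python
# Sketch of interpolation (step S300): an unmarked block surrounded by marked
# moving-object blocks is itself marked, so that one physical object is not
# fragmented into several small regions.

def interpolate(marked):
    filled = set(marked)
    candidates = set()
    for (x, y) in marked:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            candidates.add((x + dx, y + dy))
    for (x, y) in candidates - filled:
        neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        if all(nb in marked for nb in neighbours):
            filled.add((x, y))   # a hole inside one object: mark it too
    return filled

# A ring of marked blocks with a one-block hole in the middle:
ring = {(0, 1), (2, 1), (1, 0), (1, 2)}
print(sorted(interpolate(ring)))
# [(0, 1), (1, 0), (1, 1), (1, 2), (2, 1)] -- hole (1, 1) is filled in
```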
  • The present invention may be embodied in the form of computer readable code on a computer readable nonvolatile recording medium.
  • Such nonvolatile recording media include various types of storage devices, such as hard disks, SSDs, CD-ROMs, NAS, magnetic tape, web disks, and cloud disks. The computer readable code may also be distributed across such storage devices and executed in a distributed manner.
  • In addition, the present invention may be implemented in the form of a computer program stored in a medium in combination with hardware to execute a specific procedure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a technique for improving processing performance for a plurality of compressed images in a CCTV control system, and the like. More specifically, the present invention relates to a technique for improving processing performance for a compressed image by: extracting, by an image analysis system, an area where significant motion exists in an image, that is, a moving object area, on the basis of syntax (e.g.: a motion vector and a coding type) of the compressed image without the need for object identification and behavior recognition via a complicated image processing for the entire compressed image as in the conventional technique; and then analyzing a result of the extraction by interworking with the image analysis system. According to the present invention, it is advantageous that processing performance for a compressed image can be significantly improved by quickly identifying a moving object area on the basis of syntax of the compressed image without the need for performing a complicated image processing for the entire compressed image as in the conventional technique, and then selectively processing only an identified part by interworking with an image analysis system.

Description

Syntax-based Image Analysis System for Compressed Image, and Interworking Processing Method
The present invention relates to a technique for improving the processing performance for a plurality of compressed images in a CCTV control system and the like.
More specifically, the present invention relates to a technique for improving the processing performance for compressed images in which an image analysis system, without needing object identification and behavior recognition through complex image processing of the entire compressed image as in the prior art, extracts the regions where some meaningful motion exists in the image, that is, moving object regions, based on the syntax of the compressed image (e.g., motion vectors and coding types), and then analyzes the extraction results in conjunction with the image analysis system.
Recently, it has become common to build video surveillance systems using CCTV for crime prevention, unlawful-activity monitoring, and securing evidence after the fact. With multiple CCTV cameras installed in each region, the images generated by these cameras are displayed on monitors and stored in storage devices. When a control agent spots a scene in which a crime or accident occurs, he or she responds immediately and appropriately, and, when necessary, searches the video stored in storage to secure evidence after the fact.
However, the number of control personnel is very small compared with the number of installed CCTV cameras. To perform video surveillance effectively with such a limited number of people, simply displaying CCTV images on monitor screens is not enough. It is preferable to detect the movement of objects present in each CCTV image and to display something additional over the corresponding area in real time so that it is noticed effectively. In that case, control personnel need not watch the entire CCTV image with uniform attention but can concentrate on the parts where objects are moving.
Meanwhile, video surveillance systems adopt compressed video for storage efficiency. In particular, as the number of installed CCTV cameras has increased rapidly and high-definition cameras have become the norm, complex high-compression video coding technologies such as H.264 AVC and H.265 HEVC have been adopted. CCTV cameras generate and provide compressed video according to these technical standards, and the video surveillance system in turn decodes that compressed video. To determine moving objects in a CCTV image to which video compression technology has been applied, the conventional approach required decoding the compressed video to obtain the reproduced video, that is, the decompressed original video, and then performing image processing on it.
FIG. 1 is a block diagram illustrating the general configuration of a video decoding apparatus according to the H.264 AVC standard. Referring to FIG. 1, a video decoding apparatus according to H.264 AVC comprises a parser (11), an entropy decoder (12), an inverse transformer (13), a motion vector operator (14), a predictor (15), and a deblocking filter (16). These hardware modules process the compressed video sequentially, decompressing it and restoring the original video data. Here, the parser (11) parses the motion vector and the coding type of each coding unit of the compressed video. Such a coding unit is generally an image block such as a macroblock or a subblock.
FIG. 2 is a flowchart illustrating a conventional process of analyzing a compressed image to extract moving objects and then performing object classification and event identification. Referring to FIG. 2, in the prior art, the compressed video is decoded according to H.264 AVC, H.265 HEVC, or the like (S10), and the frame images of the reproduced video are downscale-resized to small images, e.g., about 320x240 (S20). The reason for this downscale resizing is to somewhat reduce the processing burden of the subsequent steps. Then, after obtaining the differentials of the resized frame images, moving objects are extracted through a complex image analysis process and their coordinates are calculated (S30). Finally, object classification and event identification are performed using the thumbnail images and coordinates of the extracted moving objects (S40).
As described above, the prior art performs compressed-video decoding, downscale resizing, and image analysis in order to extract moving objects. These are processes of very high complexity, and as a result the capacity that a single video analysis server can process simultaneously in a conventional video surveillance system is quite limited. The maximum number of CCTV channels that a high-performance video analysis server can currently cover is typically 16. Since a large number of CCTV cameras are installed, a video surveillance system required many video analysis servers, which caused problems of increased cost and difficulty in securing physical space.
An object of the present invention is to provide a technique for improving the processing performance for a plurality of compressed images in a CCTV control system and the like.
In particular, an object of the present invention is to provide a technique for improving the processing performance for compressed images in which, without needing object identification and behavior recognition through complex image processing of the entire compressed image as in the prior art, the regions where some meaningful motion exists in the image, that is, moving object regions, are extracted based on the syntax of the compressed image (e.g., motion vectors and coding types), and the extraction results are then analyzed in conjunction with an image analysis system.
To achieve the above object, a syntax-based interworking processing method between a compressed image and an image analysis system according to the present invention comprises: a first step of parsing the bitstream of a compressed image to obtain a motion vector and a coding type for each coding unit; a second step of obtaining, for each of the plurality of image blocks constituting the compressed image, a motion vector cumulative value over a preset first time period; a third step of comparing the motion vector cumulative values of the plurality of image blocks with a preset first threshold; a fourth step of marking each image block whose motion vector cumulative value exceeds the first threshold as a moving object region; a fifth step of generating object properties for the moving object regions across the series of frame images constituting the compressed image by assigning and managing a Unique ID for each of the one or more marked moving object regions; a sixth step of deriving a thumbnail image and coordinate information for each moving object region and providing them to the image analysis system; and a seventh step of receiving the object classification results and event identification results for the moving object regions from the image analysis system and managing them linked to the Unique IDs of the moving object regions.
In this case, the present invention may further comprise a step, performed between the sixth step and the seventh step, in which the image analysis system sorts the thumbnail images and coordinate information by Unique ID and performs image analysis on a per-Unique-ID basis to carry out object classification and event identification for the moving object regions.
In the present invention, the fifth step may comprise: a 51st step of determining, for each moving object region, whether a moving object region for the same object exists in the previous frame, based on calculating the degree of overlap of image blocks between moving object regions; a 52nd step of determining, according to the result of the 51st step, whether a Unique ID has already been assigned to each moving object region; a 53rd step of maintaining the previously assigned Unique ID for each moving object region that already has one; a 54th step of newly assigning a Unique ID to each moving object region that has none; and a 55th step of revoking the Unique ID assigned to a moving object region that was assigned a Unique ID in the previous frame but has disappeared from the current frame image.
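The Unique ID lifecycle of the 51st through 55th steps can be sketched as follows. Overlap is simplified here to "any shared image block", and all names and data shapes are illustrative assumptions rather than the claimed implementation.

```python
# Sketch of Unique ID management: match current-frame regions to the previous
# frame by block overlap, keep the ID on a match, assign a fresh ID otherwise,
# and revoke IDs whose regions disappeared.

def track_ids(prev, current_regions):
    """prev: dict Unique ID -> set of blocks from the previous frame.
    current_regions: list of block sets detected in the current frame.
    Returns (dict Unique ID -> blocks for the current frame, revoked IDs)."""
    next_id = max(prev, default=0) + 1
    assigned, used = {}, set()
    for region in current_regions:
        match = next((i for i, blocks in prev.items()
                      if blocks & region and i not in used), None)
        if match is not None:
            assigned[match] = region      # 53rd step: keep the existing ID
            used.add(match)
        else:
            assigned[next_id] = region    # 54th step: newly assign an ID
            next_id += 1
    revoked = set(prev) - used            # 55th step: region disappeared
    return assigned, revoked

prev = {1: {(5, 5), (5, 6)}, 2: {(9, 9)}}
current = [{(5, 6), (5, 7)}, {(0, 0)}]    # object 1 drifted; object 2 left
assigned, revoked = track_ids(prev, current)
print(sorted(assigned), sorted(revoked))  # [1, 3] [2]
```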
The present invention may further comprise, performed between the fourth step and the fifth step: a step (a) of identifying a plurality of image blocks adjacent to each moving object region (hereinafter referred to as 'neighbor blocks'); a step (b) of comparing the motion vector values of the plurality of neighbor blocks with a preset second threshold; a step (c) of additionally marking each neighbor block whose motion vector value exceeds the second threshold as a moving object region; a step (d) of additionally marking, among the plurality of neighbor blocks, each neighbor block whose coding type is intra picture as a moving object region; and a step (e) of performing interpolation on the plurality of moving object regions to additionally mark, as moving object regions, up to a preset number of unmarked image blocks surrounded by moving object regions.
Meanwhile, a computer program according to the present invention is stored in a medium in combination with hardware in order to execute the syntax-based interworking processing method between a compressed image and an image analysis system described above.
According to the present invention, moving object regions are quickly identified from the syntax of the compressed image without performing complex image processing on the entire compressed image as in the prior art, and only the identified parts are selectively processed in conjunction with the image analysis system; this can greatly improve the processing performance for compressed images in a CCTV control system and the like. In particular, according to the present invention, compressed images can be processed with roughly one tenth of the computation required by the prior art, so the number of available channels per server can be increased by approximately a factor of ten without large-scale investment.
FIG. 1 is a block diagram showing the general configuration of a video decoding apparatus.
FIG. 2 is a flowchart illustrating a conventional process of analyzing a compressed image to perform object classification and event identification.
FIG. 3 is a diagram illustrating the concept of the image analysis system and the object region identification apparatus interworking in the present invention.
FIG. 4 is a flowchart illustrating a process of interworking with an image analysis system based on compressed-image syntax according to the present invention.
FIG. 5 is a flowchart illustrating an implementation example of the process of detecting effective motion from a compressed image in the present invention.
FIG. 6 is a diagram illustrating an example of the result of applying the effective motion region detection process according to the present invention to a CCTV compressed image.
FIG. 7 is a flowchart illustrating an implementation example of the process of detecting the boundary region of a moving object region in the present invention.
FIG. 8 is a diagram illustrating an example of the result of applying the boundary region detection process to the CCTV image of FIG. 6.
FIG. 9 is a diagram illustrating an example of the result of arranging the moving object regions through interpolation for the CCTV image of FIG. 8.
FIG. 10 is a diagram illustrating an example in which Unique IDs are assigned to moving object regions in the present invention.
FIG. 11 is a diagram illustrating an example of deriving thumbnail images for moving object regions in the present invention.
FIG. 12 is a diagram illustrating an example in which position information and size information are identified for a moving object region in the present invention.
Hereinafter, the present invention will be described in detail with reference to the drawings.
FIG. 3 is a diagram illustrating the concept of the image analysis system and the object region identification apparatus interworking in the present invention.
Referring to FIG. 3, the present invention is a technique for effectively processing compressed video delivered from CCTV cameras (100). For example, a CCTV control system collects high-definition captured video from anywhere between hundreds and tens of thousands of CCTV cameras (100), compressed with complex video compression algorithms (e.g., H.264 AVC, H.265 HEVC). Conventionally, these compressed videos were processed through the procedure of FIG. 2, so the processing burden on the video analysis server was extremely high, and the maximum number of CCTV channels that one server could accommodate was typically only 16.
The present invention increases the overall processing performance by interworking an object region identification apparatus (200) with a general image analysis system (300). The image analysis system (300) analyzes the content of the video in the conventional manner, recognizes objects, and identifies events (e.g., a suspicious person loitering, wall climbing, crime, fighting) according to the result of analyzing the objects' behavior. Meanwhile, in the present invention the image analysis system (300) is not limited to operating according to conventional image analysis technology, although it may do so.
Conventionally, the image analysis system (300) performed image analysis on the entire compressed video, so its processing capacity was limited. In contrast, the present invention adopts a structure in which the object region identification apparatus (200) extracts the regions where some meaningful motion exists in the video, that is, moving object regions, based on the syntax of the compressed video (e.g., motion vectors and coding types), and then passes their thumbnail images and position information to the image analysis system (300) for analysis. In this way, the amount of data for which the image analysis system (300) must recognize objects, analyze their behavior, and identify events is reduced significantly, increasing the overall performance.
The object region identification apparatus (200) parses the bitstream of the compressed video without needing to decode it, and quickly extracts moving object regions from the syntax information obtained for each image block (i.e., macroblock or subblock), such as the motion vector and coding type information. The moving object regions obtained in this way do not precisely reflect the outlines of the moving objects, but the processing is roughly 20 times faster than image analysis while still providing a certain level of reliability regarding the existence of meaningful motion. By having the object region identification apparatus (200) quickly filter out most of the compressed video based on the syntax in this manner, the burden on the image analysis system (300) is lowered and the overall processing performance is raised.
However, a moving object region extracted by the object region identification apparatus (200) is merely a cluster of image blocks presumed to contain a moving object, so there are limits to what can be judged from it: it was singled out not by judging the content of the video but by examining the properties of the motion within the video. Therefore, the object region identification apparatus (200) delivers to the image analysis system (300) the identification information of the moving object regions together with the thumbnail images and position information of those regions. The image analysis system (300) performs image analysis on the thumbnail images derived from the series of frame images constituting the compressed video, thereby judging the video content inside the moving object regions, classifying the objects, and identifying events, for example, in the conventional manner.
When compressed video is delivered in this way, the object region identification apparatus (200) quickly picks out the image clusters that appear important because something is moving, and the image analysis system (300) properly analyzes the content of those clusters, classifying the objects (e.g., person, car, animal) and identifying events. The object region identification apparatus (200) receives the image analysis results, i.e., the object classification results and event identification results, from the image analysis system (300) and makes them available to the control personnel.
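The hand-off between the two components might look like the following sketch. All field names (`unique_id`, `position`, `object_class`, `events`) are invented for illustration; the text only specifies that thumbnail images and coordinate information go out per moving object region and that classification and event results come back linked to the same Unique ID.

```python
# Sketch of the interworking: per Unique ID, a thumbnail crop and coordinates
# go to the image analysis system, and a classification plus event labels come
# back and are stored under the same ID.

def build_request(uid, thumbnail_bytes, x, y, w, h):
    """Package one moving object region for the image analysis system."""
    return {"unique_id": uid,
            "thumbnail": thumbnail_bytes,
            "position": {"x": x, "y": y, "width": w, "height": h}}

def merge_result(store, result):
    """Attach an analysis result to the record with the same Unique ID."""
    store[result["unique_id"]].update(
        object_class=result["object_class"],   # e.g. person / car / animal
        events=result.get("events", []))       # e.g. loitering, fighting
    return store

store = {7: {"position": {"x": 96, "y": 64, "width": 32, "height": 48}}}
result = {"unique_id": 7, "object_class": "person", "events": ["loitering"]}
print(merge_result(store, result)[7]["object_class"])  # person
```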
Meanwhile, according to the present invention, the object region identification apparatus (200) does not need to decode the compressed video in the process of extracting moving object regions. However, this does not limit the scope of the present invention to apparatuses or software that perform no decoding of the compressed video at all. For example, the compressed video may be decoded partially or entirely in order to obtain thumbnail images of the moving object regions. Likewise, the image analysis system (300) may be a system that performs conventional image analysis processing, but the scope of application of the present invention is not limited thereto.
도 4는 본 발명에 따라 압축영상 신택스 기반으로 영상분석 시스템과 연동 처리하는 프로세스를 나타내는 순서도이다.4 is a flowchart illustrating a process of interworking with an image analysis system based on compressed image syntax according to the present invention.
먼저, 단계 (S100) 내지 단계 (S300)을 통해 객체영역 식별장치(200)가 압축영상의 신택스(모션벡터, 코딩유형)에 기초하여 각 프레임 이미지로부터 이동객체 영역을 추출하는 과정에 대해 기술한다.First, a process of extracting a moving object region from each frame image by the object region identification apparatus 200 based on the syntax (motion vector, coding type) of the compressed image through steps S100 to S300 will be described. .
Step S100: Based on the motion vectors of the compressed image, effective motion that is substantial enough to be considered meaningful is detected, and the image regions in which such effective motion is detected are set as moving object regions.
To this end, the data of the compressed image is parsed according to a video compression standard such as H.264 AVC or H.265 HEVC to obtain a motion vector and a coding type for each coding unit. The size of a coding unit generally ranges from 64x64 down to 4x4 pixels and can be set flexibly.
For each image block, motion vectors are accumulated over a preset period (e.g., 500 msec), and it is checked whether the accumulated motion vector value exceeds a preset first threshold (e.g., 20 pixels). If such an image block is found, effective motion is regarded as having been detected in that block, and the block is marked as a moving object region. Conversely, even if motion vectors occur, when the accumulated value over the period does not exceed the first threshold, the image change is presumed to be insignificant and is ignored.
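The accumulate-and-threshold logic just described can be sketched as follows. This is a minimal illustration, not the patented implementation: the representation of per-frame motion data as dicts keyed by block index, and the function and constant names, are assumptions made for the sketch; only the 500 msec window and 20-pixel threshold come from the text above.

```python
from collections import defaultdict

FIRST_THRESHOLD = 20.0  # preset first threshold, in pixels (from the text)

def accumulate_motion(frames, window):
    """Sum motion-vector magnitudes per block over the last `window` frames.

    `frames` is a sequence of dicts mapping a block index (x, y) to the
    magnitude of its motion vector in that frame.  Returns the set of
    blocks marked as moving object regions.
    """
    acc = defaultdict(float)
    for frame in frames[-window:]:            # only the accumulation window
        for block, magnitude in frame.items():
            acc[block] += magnitude
    # blocks whose accumulated motion exceeds the first threshold
    return {b for b, total in acc.items() if total > FIRST_THRESHOLD}
```

For example, a block that moves 5 pixels in each of five consecutive frames accumulates 25 pixels and is marked, while a shadow jittering by 1 pixel per frame stays below the threshold and is ignored.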
Step S200: For the moving object regions detected in S100, the approximate extent of the boundary region is detected based on the motion vectors and coding types. A plurality of image blocks adjacent to a block marked as a moving object region are examined, and if a block has a motion vector exceeding a second threshold (e.g., 0) or its coding type is intra picture, that block is also marked as a moving object region. Through this process, the block effectively comes to form a single mass with the moving object region detected in S100.
If an image block in the vicinity of a moving object region, where effective motion has been found, itself shows some degree of motion, it is highly likely to form a single mass with that moving object region, so it is marked as a moving object region as well. In the case of an intra picture, no motion vector exists, so a determination based on motion vectors is impossible. Accordingly, an intra picture located adjacent to an image block already detected as a moving object region is provisionally presumed to form a single mass with the previously extracted moving object region.
Step S300: Interpolation is applied to the moving object regions detected in S100 and S200 to clean up fragmentation of the moving object regions. Because the preceding steps judged moving-object status block by block, an object that is in fact a single moving object (e.g., a person) may contain image blocks in its interior that were not marked as moving object regions, causing it to appear split into several moving object regions. Accordingly, if one or a few unmarked image blocks are surrounded by a plurality of image blocks marked as moving object regions, those blocks are additionally marked as moving object regions. In this way, a moving object region that has been split into several pieces can be merged into one; the effect of this interpolation becomes clear when FIG. 8 and FIG. 9 are compared.
As described above, moving object regions were rapidly extracted from each frame image based on the syntax of the compressed image (motion vectors, coding types) through steps S100 to S300. However, a moving object region derived through this process is merely a chunk of image blocks that appears to contain some motion within an individual frame image; it carries no concept of an object that is consistently tracked as the frames of the compressed image progress.
Next, the process in which the object region identification apparatus 200 manages an identification code (Unique ID) for each previously extracted moving object region through step S400, thereby imbuing the moving object regions with the concept of an object, is described.
Step S400: Unique IDs are managed for the moving object regions extracted from the compressed image based on its syntax. A moving object region was derived for each image frame constituting the compressed image, but this did not involve analyzing the image content and judging it to be an object; it merely extracted a chunk of the image that appears to contain motion within that frame. Accordingly, by allocating and managing a Unique ID for each moving object region, object attributes are created for the region. This makes it possible to handle a moving object region not simply as a region but as an object, and to interpret the movement of a specific object across a series of frame images in the compressed image.
Unique ID management for moving object regions is handled in the following three cases: a moving object region that was assigned a Unique ID in a previous frame is identified in the current frame image in its ID-assigned state (S410); a moving object region that has never been identified in a previous frame is identified in the current frame image in an ID-unassigned state and is newly assigned a Unique ID (S420); and a moving object region that was assigned a Unique ID in a previous frame has disappeared from the current frame image, so the assigned Unique ID is revoked (S430).
However, across the series of frame images constituting the compressed image, it must be possible to determine whether a chunk of image blocks marked as a moving object region in the previous frame relates to the same object as a chunk in the current frame. Only then can it be determined whether a Unique ID was previously assigned to the moving object region being handled in the current frame.
In the present invention, only whether each image block belongs to a moving object region is checked, without interpreting the content of the original image, so it cannot be confirmed whether the chunks of a moving object region in consecutive frames are actually the same. That is, because the content of the moving object region is not examined, such a change could not be identified if, for example, a cat were replaced by a dog at the same position between consecutive frames. However, considering that the time interval between frames is very short and that the objects observed by a CCTV camera move at ordinary speeds, the possibility of such an event can be excluded.
Accordingly, the present invention presumes that moving object region chunks in consecutive frames whose overlapping image blocks exceed a certain threshold in ratio or number are the same moving object region. With this approach, even without knowing the image content, it can be determined whether a previously identified moving object region has moved, a new moving object region has been newly discovered, or an existing moving object region has disappeared. Although this determination is less accurate than the prior art, it can dramatically increase the data processing speed, which is advantageous in practical applications.
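The overlap test between chunks of consecutive frames can be sketched as below. This is a hedged illustration only: representing a chunk as a set of block indices, measuring overlap against the smaller chunk, and the 0.5 ratio threshold are all assumptions; the patent itself only specifies "a certain threshold on the ratio or number of overlapping blocks".

```python
def same_region(prev_blocks, curr_blocks, min_ratio=0.5):
    """Presume two chunks are the same moving object region when the
    overlapping blocks are a sufficient fraction of the smaller chunk."""
    if not prev_blocks or not curr_blocks:
        return False
    overlap = len(prev_blocks & curr_blocks)   # blocks shared by both frames
    return overlap / min(len(prev_blocks), len(curr_blocks)) >= min_ratio
```

A chunk that shifts by one block between frames still shares most of its blocks with its previous position and is matched, whereas a chunk appearing at an unrelated position is treated as a different region.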
In step S410, when a moving object region that was assigned a Unique ID in a previous frame is identified in the current frame image, the previously allocated Unique ID remains assigned to that moving object region. Depending on the implementation, the fact of this identification may be marked in the Unique ID management database.
In step S420, when a moving object region in an ID-unassigned state, never identified in a previous frame, is newly discovered in the current frame image, a Unique ID is newly allocated to that moving object region. This corresponds to the situation in which a new moving object has appeared in the image. FIG. 10 shows an example in which Unique IDs have been allocated to three moving object regions derived from CCTV footage.
In step S430, when a moving object region that was assigned a Unique ID in a previous frame of the compressed image has disappeared from the current frame image, the Unique ID that was allocated for that moving object region in step S420 with respect to the previous frame and maintained in step S410 is revoked. In other words, a moving object that had previously been discovered and managed has disappeared from the image.
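The three cases S410 to S430 amount to a small per-frame ID-management loop, which might be sketched as follows. The counter-based ID scheme, the dict data layout, and the injected `same_region` comparison function are assumptions for illustration; the patent does not prescribe a concrete data structure.

```python
import itertools

def update_ids(assigned, current_regions, same_region):
    """One frame of Unique ID management.

    `assigned` maps Unique ID -> block set from the previous frame;
    `current_regions` is the list of block sets in the current frame;
    `same_region(a, b)` decides whether two chunks are the same object.
    Returns the updated mapping and the set of revoked IDs.
    """
    counter = itertools.count(max(assigned, default=0) + 1)
    updated, revoked = {}, set()
    unmatched = list(current_regions)
    for uid, prev in assigned.items():
        match = next((r for r in unmatched if same_region(prev, r)), None)
        if match is not None:                 # S410: keep the existing ID
            updated[uid] = match
            unmatched.remove(match)
        else:                                 # S430: region disappeared, revoke
            revoked.add(uid)
    for region in unmatched:                  # S420: newly discovered region
        updated[next(counter)] = region
    return updated, revoked
```

Run once per frame, this keeps IDs stable for regions that persist, issues fresh IDs for newly appearing regions, and revokes the IDs of regions that vanish.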
Next, the process in which the object region identification apparatus 200 and the image analysis system 300 perform object classification and event identification by interworking on the moving object regions through steps S500 to S900 is described.
Step S500: Next, a thumbnail image and coordinate information (e.g., position information and size information) are derived for each moving object region. FIG. 11 is a diagram illustrating an example in which thumbnail images have been derived for three moving object regions of a CCTV image in the present invention, and FIG. 12 is a diagram illustrating an example in which position information and size information have been identified as coordinate information for those moving object regions.
To obtain the thumbnail images of FIG. 11, the object region identification apparatus 200 may be configured with a function of decoding the compressed image, or of selectively decoding part of it. Alternatively, the object region identification apparatus 200 may transmit the position information to the image analysis system 300, which then obtains the thumbnail images of the moving object regions from it. However, this would require extensive changes to the internal processes of the image analysis system 300 compared with the conventional art, so it is judged not to be a particularly desirable approach. The approach in which the object region identification apparatus 200 generates the thumbnail images is preferable.
Meanwhile, the position information of a moving object region indicates where the moving object region is placed within the image. As shown in FIG. 12, the top-left coordinates of the rectangle that optimally encloses the moving object region may be used as the position information, or the center coordinates of that rectangle may be used instead. The size information may be the size of the rectangle that optimally encloses the moving object region, as shown in FIG. 12.
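Deriving the position and size information from a marked chunk reduces to computing the rectangle that optimally encloses it. The sketch below works in block coordinates; the 16-pixel block size and the function names are illustrative assumptions, since the actual coding-unit size varies.

```python
def bounding_rect(blocks, block_size=16):
    """Return (left, top, width, height) in pixels for the rectangle
    optimally enclosing a chunk of (x, y) block indices."""
    xs = [x for x, _ in blocks]
    ys = [y for _, y in blocks]
    left, top = min(xs) * block_size, min(ys) * block_size
    width = (max(xs) - min(xs) + 1) * block_size
    height = (max(ys) - min(ys) + 1) * block_size
    return left, top, width, height

def center(rect):
    """Center coordinates, usable as the alternative position information."""
    left, top, width, height = rect
    return left + width // 2, top + height // 2
```

Either the `(left, top)` corner or the `center` of the rectangle can then serve as the position information, with `(width, height)` as the size information.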
Steps S600 and S700: Next, the object region identification apparatus 200 provides the coordinate information and thumbnail images of the moving object regions derived from the compressed image to the image analysis system 300, and the image analysis system 300 performs image analysis on the moving object regions on a Unique ID basis to carry out object classification and event identification.
Through the preceding step S400, the object region identification apparatus 200 performed Unique ID management for the moving object regions, thereby giving a moving object region in the compressed image the concept of an object rather than simply that of a region. Therefore, the image analysis system 300 can handle the series of moving object region identification information provided by the object region identification apparatus 200 under the concept of an object.
The image analysis system 300 sorts the plurality of moving object region identification information items (thumbnail images, coordinate information) delivered from the object region identification apparatus 200 by Unique ID and performs image analysis on a per-Unique-ID basis. Through such image analysis, the content of the moving object region can be recognized for each Unique ID, yielding an object classification result regarding what the object is (e.g., person, car, animal) and an event identification result regarding what the object is doing in the image (e.g., a suspicious person loitering, wall-climbing, crime, fighting). At this time, the image analysis system 300 does not perform image analysis on the entire compressed image but only on the series of moving object regions that the object region identification apparatus 200 has derived from it, so the processing burden is remarkably lower than in the prior art.
Steps S800 and S900: Next, the object region identification apparatus 200 receives the object classification and event identification results for the moving object regions from the image analysis system 300, and links and manages the Unique IDs of the moving object regions with those object classification and event identification results so that monitoring personnel can make use of them.
Hereinafter, with reference to FIGS. 5 to 9, the process in which the object region identification apparatus 200 extracts the regions containing some meaningful motion in the image, i.e., the moving object regions, using the syntax information of the compressed image such as motion vectors and coding types, without decoding the compressed image to analyze its content, is examined. As described later, this process is conceptually distinctive in that it does not recognize moving objects by interpreting the content of the image, but rather extracts chunks of image blocks presumed to contain a moving object without knowing that content.
FIG. 5 is a flowchart illustrating an implementation example of the process of detecting effective motion from a compressed image in the present invention, and FIG. 6 is a diagram illustrating an example of the result of applying the effective motion region detection process according to the present invention to a CCTV compressed image. The process of FIG. 5 corresponds to step S100 in FIG. 4.
Step S110: First, the coding units of the compressed image are parsed to obtain motion vectors and coding types. Referring to FIG. 1, a video decoding apparatus performs syntax analysis (header parsing) and motion vector computation on a compressed image stream according to a video compression standard such as H.264 AVC or H.265 HEVC. Through this process, the motion vector and coding type are parsed out for each coding unit of the compressed image.
Step S120: A motion vector accumulation value over a preset time (e.g., 500 ms) is obtained for each of the plurality of image blocks constituting the compressed image.
This step is intended to detect effective motion in the compressed image that is substantial enough to be considered meaningful, such as a moving car, a running person, or a crowd fighting one another, if present. Swaying leaves, briefly appearing ghosts, and shadows that shift slightly with reflected light do involve motion but are substantively meaningless objects, and should therefore not be detected.
To this end, motion vectors are accumulated in units of one or more image blocks over a preset period (e.g., 500 msec) to obtain the motion vector accumulation value. Here, the term image block is used as a concept encompassing both macroblocks and subblocks.
Steps S130 and S140: For the plurality of image blocks, the motion vector accumulation value is compared with a preset first threshold (e.g., 20 pixels), and image blocks whose motion vector accumulation value exceeds the first threshold are marked as moving object regions.
If an image block with a motion vector accumulation value above this level is found, some meaningful motion, i.e., effective motion, is regarded as having been detected in that block, and the block is marked as a moving object region. The intent is to selectively detect object motion substantial enough to merit the attention of monitoring personnel, for example a person running. Conversely, even if motion vectors occur, when the accumulated value over the period is too small to exceed the first threshold, the change in the image is presumed to be minor and insignificant and is ignored at the detection stage.
FIG. 6 is an example visually showing the result of detecting effective motion regions from a CCTV compressed image through the process of FIG. 5. In FIG. 6, image blocks with motion vector accumulation values at or above the first threshold have been marked as moving object regions and are displayed as bold-line areas. Looking at FIG. 6, the pavement blocks, road, and shadowed areas are not displayed as moving object regions, whereas walking people and moving cars are.
FIG. 7 is a flowchart illustrating an implementation example of the process of detecting the boundary region of a moving object region in the present invention, and FIG. 8 is a diagram illustrating an example of the result of further applying the boundary region detection process according to FIG. 7 to the CCTV image of FIG. 6, on which the effective motion region detection process had been performed. The process of FIG. 7 corresponds to step S200 in FIG. 4.
Looking back at FIG. 6, it can be seen that the objects actually moving in the image (the moving objects) were not fully marked; only parts of them were. That is, examining a walking person or a moving car reveals that not the whole object but only some of its blocks were marked. In addition, many instances can be found in which what is actually a single moving object is marked as if it were multiple moving object regions. This means that the moving object region criterion adopted in S100, while useful for filtering out ordinary regions, was a fairly strict condition. Therefore, a process of detecting the boundary of the moving object by examining the surroundings of each moving object region is needed.
Step S210: First, a plurality of image blocks adjacent to an image block marked as a moving object region by the preceding S100 are identified. For convenience, these are called 'neighbor blocks' in this specification. These neighbor blocks are portions that were not marked as moving object regions by S100; the process of FIG. 7 examines them further to determine whether any of them deserve to be included in the boundary of the moving object region.
Steps S220 and S230: For the plurality of neighbor blocks, the motion vector value is compared with a preset second threshold (e.g., 0), and neighbor blocks whose motion vector value exceeds the second threshold are marked as moving object regions. If an image block is located adjacent to a moving object region in which substantively meaningful effective motion was recognized, and some degree of motion is found in the block itself, then, given the characteristics of captured footage, it is highly likely to form a single mass with that adjacent moving object region. Therefore, such neighbor blocks are also marked as moving object regions.
Step S240: In addition, among the plurality of neighbor blocks, those whose coding type is intra picture are marked as moving object regions. In the case of an intra picture, no motion vector exists, so it is fundamentally impossible to determine based on motion vectors whether motion exists in that neighbor block. In this case, for an intra picture located adjacent to an image block already detected as a moving object region, it is safer to simply keep the previously extracted moving object region setting as it is.
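Steps S210 to S240 can be sketched as a single pass over the neighbors of each marked block. The 4-neighborhood, the dict/set data layout, and the names used are illustrative assumptions; only the second threshold of 0 and the intra-picture rule come from the description above.

```python
SECOND_THRESHOLD = 0.0  # second threshold from the text: any nonzero motion counts

def expand_boundary(marked, motion, intra):
    """Grow the moving object regions into qualifying neighbor blocks.

    `marked` is the set of (x, y) blocks from step S100, `motion` maps
    blocks to motion-vector magnitude, and `intra` is the set of blocks
    whose coding type is intra picture (no motion vector available).
    """
    grown = set(marked)
    for x, y in marked:
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in grown:
                continue
            # S220/S230: neighbor with motion above the second threshold
            if motion.get(nb, 0.0) > SECOND_THRESHOLD:
                grown.add(nb)
            # S240: intra-coded neighbor, presumed part of the same mass
            elif nb in intra:
                grown.add(nb)
    return grown
```

The result corresponds to the bold-line regions of FIG. 8: the S100 seed blocks plus the adjacent blocks that either move slightly or are intra-coded.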
FIG. 8 is a diagram visually showing the result of applying the process up to boundary region detection to the CCTV compressed image; the many image blocks marked as moving object regions through the above process are indicated with bold lines. Looking at FIG. 8, the moving object regions have been extended further into the vicinity of the regions shown with bold lines in FIG. 6, to the point that, compared with the footage captured by the CCTV, they now cover the moving objects almost entirely.
FIG. 9 is a diagram illustrating an example of the result of cleaning up the moving object regions through interpolation according to the present invention, for the CCTV image to which the boundary region detection process of FIG. 8 was applied.
Step S300 is the process of applying interpolation to the moving object regions detected in S100 and S200 to clean up their fragmentation. Looking at FIG. 8, unmarked image blocks are found here and there between the moving object regions indicated with bold lines. When unmarked image blocks exist in the middle like this, the regions may be treated as if they were multiple individual moving objects. If the moving object regions are fragmented in this way, the result of step S500 may become inaccurate, and the number of moving object regions increases, complicating the processes of steps S500 to S700.
Accordingly, in the present invention, if one or a few unmarked image blocks are surrounded by a plurality of image blocks marked as moving object regions, they are marked as moving object regions; this is called interpolation. Comparing FIG. 9 with FIG. 8, all of the unmarked image blocks that existed between the moving object regions have been marked as moving object regions. This makes it possible to derive moving object detection results that are more intuitive and accurate for monitoring personnel to consult.
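The interpolation described here is essentially a hole-filling pass. The sketch below marks any unmarked block whose four direct neighbors are all marked, which is one plausible reading of "surrounded" offered only as an assumption; the patent leaves the exact surroundedness criterion open.

```python
def interpolate(marked):
    """Fill single-block holes fully surrounded by marked blocks."""
    filled = set(marked)
    # candidate holes: unmarked blocks directly adjacent to marked blocks
    candidates = {
        (x + dx, y + dy)
        for x, y in marked
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
    } - filled
    for x, y in candidates:
        neighbors = ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
        if all(nb in filled for nb in neighbors):
            filled.add((x, y))   # the surrounded hole joins the region
    return filled
```

Applied to a ring of marked blocks, the pass fills the interior gap so that the chunk is handled as one moving object region rather than several fragments.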
Comparing FIG. 6 and FIG. 9 shows that, through the boundary region detection and interpolation processes, the moving object regions come to reflect the actual situation in the image properly. If judgments were made from the chunks marked as bold-line regions in FIG. 6, the scene would be treated as if many very small objects were moving on the screen, which does not match reality. In contrast, if judgments are made from the chunks marked as bold-line regions in FIG. 9, the scene is treated as containing a few moving objects of some volume, which reflects the actual scene closely.
Meanwhile, the present invention can be embodied in the form of computer-readable code on a computer-readable nonvolatile recording medium. Various types of storage devices exist as such nonvolatile recording media, for example hard disks, SSDs, CD-ROMs, NAS, magnetic tape, web disks, and cloud disks, and embodiments in which the code is distributed across, stored on, and executed by multiple networked storage devices can also be implemented. In addition, the present invention may be embodied in the form of a computer program stored on a medium in combination with hardware in order to execute a specific procedure.

Claims (7)

  1. A first step of parsing a bitstream of a compressed image to obtain a motion vector and a coding type for each coding unit;
    a second step of obtaining, for each of a plurality of image blocks constituting the compressed image, a motion vector cumulative value over a preset first time period;
    a third step of comparing the motion vector cumulative values of the plurality of image blocks against a preset first threshold;
    a fourth step of marking each image block whose motion vector cumulative value exceeds the first threshold as a moving object region;
    a fifth step of generating object attributes for the moving object regions across the series of frame images constituting the compressed image by allocating and managing a Unique ID for each of the one or more marked moving object regions;
    a sixth step of deriving a thumbnail image and coordinate information for each moving object region and providing them to an image analysis system;
    a seventh step of receiving, from the image analysis system, an object classification result and an event identification result for each moving object region and managing them in association with the Unique ID of that moving object region;
    the foregoing steps constituting a syntax-based method of interworking processing with an image analysis system for a compressed image.
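Steps 2 through 4 above (accumulating per-block motion vectors over the first time period and thresholding the accumulated values) can be sketched roughly as follows. The per-frame data layout and the L1 magnitude metric are assumptions for illustration only, not the claimed procedure.

```python
from collections import defaultdict

def mark_moving_blocks(frames_mv, threshold):
    """frames_mv: list of dicts {block_index: (mvx, mvy)}, one dict per frame
    in the first time window. Returns the set of block indices whose
    accumulated motion vector magnitude exceeds the first threshold."""
    acc = defaultdict(float)
    for mv in frames_mv:
        for block, (mvx, mvy) in mv.items():
            acc[block] += abs(mvx) + abs(mvy)   # L1 magnitude: an assumption
    return {block for block, total in acc.items() if total > threshold}
```

With two frames where block 0 accumulates a magnitude of 6 and block 1 only 1, a threshold of 4 marks block 0 alone as a moving object region.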
  2. The method according to claim 1, further comprising, performed between the sixth step and the seventh step:
    a step in which the image analysis system sorts the thumbnail images and coordinate information by Unique ID and performs image analysis on a per-Unique-ID basis to carry out object classification and event identification for the moving object regions.
  3. The method according to claim 1, wherein the fifth step comprises:
    a step 51 of determining, for each moving object region, whether a moving object region for the same object exists in the previous frame, based on computing the degree of overlap of image blocks between moving object regions;
    a step 52 of determining, according to the result of step 51, whether a Unique ID has already been assigned to each moving object region;
    a step 53 of maintaining the previously assigned Unique ID for each moving object region found in step 52 to be in the Unique-ID-assigned state;
    a step 54 of newly assigning a Unique ID to each moving object region found in step 52 to be in the Unique-ID-unassigned state;
    a step 55 of, when a moving object region that was assigned a Unique ID in the previous frame but has disappeared from the current frame image is identified, revoking the Unique ID that had been assigned to the disappeared moving object region.
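The Unique ID lifecycle of steps 51 through 55 can be sketched as follows: match each current region to a previous-frame region by block overlap, keep the ID on a match, issue a new ID otherwise, and revoke the IDs of regions that disappeared. The overlap rule (any shared image block counts as the same object) and the global ID counter are illustrative assumptions.

```python
import itertools

_next_id = itertools.count(1)   # hypothetical Unique ID source

def update_ids(prev_regions, curr_regions):
    """prev_regions: dict {unique_id: set of block indices} from the previous
    frame; curr_regions: list of block-index sets in the current frame.
    Returns (assigned, revoked): {unique_id: region} and the set of revoked IDs."""
    assigned, used = {}, set()
    for region in curr_regions:
        match = next((uid for uid, blocks in prev_regions.items()
                      if uid not in used and region & blocks), None)
        if match is not None:               # step 53: keep the existing ID
            assigned[match] = region
            used.add(match)
        else:                               # step 54: assign a new ID
            assigned[next(_next_id)] = region
    revoked = set(prev_regions) - used      # step 55: revoke vanished IDs
    return assigned, revoked
```

A region overlapping a previous one inherits its ID; a region with no overlap receives a fresh ID; a previous ID with no surviving region is revoked.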
  4. The method according to claim 3, further comprising, performed between the fourth step and the fifth step:
    a step (a) of identifying a plurality of image blocks adjacent to each moving object region (hereinafter referred to as 'neighbor blocks');
    a step (b) of comparing the motion vector values obtained in the first step for the plurality of neighbor blocks against a preset second threshold;
    a step (c) of additionally marking as a moving object region each neighbor block, among the plurality of neighbor blocks, whose motion vector value exceeds the second threshold as a result of the comparison in step (b).
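Steps (a) through (c) can be sketched as a single boundary-growing pass: collect the blocks adjacent to already-marked blocks and additionally mark any neighbor whose motion vector magnitude exceeds the second threshold. The 4-neighborhood and the scalar magnitude lookup are assumptions for illustration.

```python
def grow_boundary(marked, mv_mag, threshold2):
    """marked: set of (row, col) blocks already marked as moving object;
    mv_mag: dict mapping (row, col) -> motion vector magnitude for the
    current frame. Returns the marked set grown by qualifying neighbors."""
    added = set()
    for r, c in marked:
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            # step (c): neighbor exceeds the second threshold -> mark it too
            if nb not in marked and mv_mag.get(nb, 0.0) > threshold2:
                added.add(nb)
    return marked | added
```

A neighbor with magnitude 5.0 above a second threshold of 1.0 is added, while a neighbor with magnitude 0.5 is not.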
  5. The method according to claim 4, further comprising, performed after step (c):
    a step (d) of additionally marking as a moving object region each neighbor block, among the plurality of neighbor blocks, whose coding type is intra picture.
  6. The method according to claim 5, further comprising, performed after step (d):
    a step (e) of performing interpolation on the plurality of moving object regions to additionally mark as moving object regions any unmarked image blocks, up to a preset number, that are surrounded by moving object regions.
  7. A computer program stored in a medium, combined with hardware, for executing the syntax-based method of interworking processing with an image analysis system for a compressed image according to any one of claims 1 to 6.
PCT/KR2019/009374 2018-07-30 2019-07-29 Syntax-based image analysis system for compressed image, and interworking processing method WO2020027513A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0088411 2018-07-30
KR1020180088411A KR102090785B1 (en) 2018-07-30 2018-07-30 syntax-based method of providing inter-operative processing with video analysis system of compressed video

Publications (1)

Publication Number Publication Date
WO2020027513A1 true WO2020027513A1 (en) 2020-02-06

Family

ID=69231920

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/009374 WO2020027513A1 (en) 2018-07-30 2019-07-29 Syntax-based image analysis system for compressed image, and interworking processing method

Country Status (2)

Country Link
KR (1) KR102090785B1 (en)
WO (1) WO2020027513A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102264252B1 (en) * 2021-01-18 2021-06-14 보은전자방송통신(주) Method for detecting moving objects in compressed image and video surveillance system thereof
KR102343029B1 (en) * 2021-11-11 2021-12-24 이노뎁 주식회사 method of processing compressed video by use of branching by motion vector
KR102459813B1 (en) * 2022-02-17 2022-10-27 코디오 주식회사 video processing method of periodic quality compensation by image switching
KR20230165696A (en) 2022-05-27 2023-12-05 주식회사 다누시스 System For Detecting Objects Through Optical Flow
KR20240059841A (en) 2022-10-28 2024-05-08 (주)한국플랫폼서비스기술 Object tracking method and apparatus for efficient video object detection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000295600A (en) * 1999-04-08 2000-10-20 Toshiba Corp Monitor system
JP2002262296A (en) * 2001-02-28 2002-09-13 Mitsubishi Electric Corp Mobile object detector, and image supervisory system
KR100883632B1 (en) * 2008-08-13 2009-02-12 주식회사 일리시스 System and method for intelligent video surveillance using high-resolution video cameras
KR20100010734A (en) * 2008-07-23 2010-02-02 한국철도기술연구원 Monitoring system in railway station stereo camera and thermal camera and method thereof
US9215467B2 (en) * 2008-11-17 2015-12-15 Checkvideo Llc Analytics-modulated coding of surveillance video


Also Published As

Publication number Publication date
KR20200013340A (en) 2020-02-07
KR102090785B1 (en) 2020-03-18

Similar Documents

Publication Publication Date Title
WO2020027513A1 (en) Syntax-based image analysis system for compressed image, and interworking processing method
WO2019124635A1 (en) Syntax-based method for sensing object intrusion in compressed video
KR102187376B1 (en) syntax-based method of providing selective video surveillance by use of deep-learning image analysis
WO2018030658A1 (en) Method for detecting, through reconstruction image processing, moving object from stored cctv image
US10410059B2 (en) Cloud platform with multi camera synchronization
WO2020171388A2 (en) Method of identifying abnormal motion object in compressed image using motion vector trajectory and pattern
WO2020027512A1 (en) Method for syntax-based object tracking control for compressed image by ptz camera
WO2019039661A1 (en) Method for syntax-based extraction of moving object region of compressed video
WO2012137994A1 (en) Image recognition device and image-monitoring method therefor
US9256789B2 (en) Estimating motion of an event captured using a digital video camera
WO2020027511A1 (en) Method for generating syntax-based heat-map for compressed image
WO2016064107A1 (en) Pan/tilt/zoom camera based video playing method and apparatus
WO2019124634A1 (en) Syntax-based method for object tracking in compressed video
KR102061915B1 (en) syntax-based method of providing object classification for compressed video
WO2019124636A1 (en) Syntax-based method for sensing wrong-way driving on road in compressed video
WO2019124632A1 (en) Syntax-based method for sensing loitering object in compressed video
WO2019124633A1 (en) Syntax-based method for sensing wall-climbing object in compressed video
KR102179077B1 (en) syntax-based method of providing object classification in compressed video by use of neural network which is learned by cooperation with an external commercial classifier
KR102178952B1 (en) method of providing object classification for compressed video by use of syntax-based MRPN-CNN
WO2022231053A1 (en) Multi-resolution image processing apparatus and method capable of recognizing plurality of dynamic objects
WO2011043499A1 (en) Context change recognition method and image processing device using same
CN109886234B (en) Target detection method, device, system, electronic equipment and storage medium
KR102153093B1 (en) syntax-based method of extracting region of moving object out of compressed video with context consideration
JP4736171B2 (en) Monitoring signal communication method and monitoring signal communication apparatus
KR102343029B1 (en) method of processing compressed video by use of branching by motion vector

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19844244

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19844244

Country of ref document: EP

Kind code of ref document: A1