WO2020027511A1 - Method for generating a syntax-based heat map for a compressed image - Google Patents

Method for generating a syntax-based heat map for a compressed image

Info

Publication number
WO2020027511A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
moving object
compressed image
object region
motion vector
Prior art date
Application number
PCT/KR2019/009372
Other languages
English (en)
Korean (ko)
Inventor
이현우
정승훈
이성진
Original Assignee
이노뎁 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 이노뎁 주식회사
Publication of WO2020027511A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • G06T11/206Drawing of charts or graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present invention generally relates to techniques for efficiently generating heat maps from compressed images such as H.264 AVC and H.265 HEVC.
  • More particularly, the present invention relates to a technology that, for a compressed image generated by, for example, a CCTV camera, can generate a heat map with a small amount of computation by using syntax elements (e.g., motion vectors, coding types) obtained by parsing the compressed image data to extract the region in which meaningful movement exists in the image, that is, the moving object region, and by accumulating the trajectory of the moving object region, instead of generating the heat map through complex image processing.
  • Customer interest data can be collected directly, through interviews, membership cards, or sales staff, or indirectly, through CCTV cameras, sensors, and smartphone apps.
  • A heat map, a compound of 'heat' and 'map', represents information as a heat-like color distribution.
  • Such a heat map may express people's movements or interests as color gradations over a camera image. People's movements are accumulated over a certain unit of time, and colors are displayed according to the accumulated amount. In general, areas where many movements accumulate are shown in red and areas where few movements accumulate are shown in blue, which is consistent with the intuitive feeling of temperature.
  • For example, heat maps generated from images taken by store CCTV cameras allow store managers to intuitively identify which products customers are interested in and which they are not. Based on this information, it is possible to decide whether to rearrange the goods in the store, change the pricing policy, or set up a sales event policy in consideration of the actual purchase rate.
  • heat maps obtained from images taken by alleyway CCTV cameras can be used to identify the paths that people are moving along the alleys.
  • Referring to FIG. 1, a video decoding apparatus is configured to include a parser 11, an entropy decoder 12, an inverse transformer 13, a motion vector operator 14, a predictor 15, and a deblocking filter 16.
  • These hardware modules sequentially process the data of the compressed image to decompress the compressed image and restore the original image data.
  • the parser 11 parses the motion vector and the coding type for the coding unit of the compressed image.
  • Such a coding unit is generally an image block such as a macroblock or a subblock.
  • FIG. 2 is a flowchart illustrating a process of generating a heat map from a compressed image in a conventional image analysis solution.
  • First, the compressed image is decoded according to a video standard such as H.264 AVC or H.265 HEVC to obtain a reconstructed image (S10), and the frame images constituting the reconstructed image are downscaled to a small size, for example 320x240 (S20).
  • The purpose of this downscale resizing is to somewhat reduce the processing burden of the subsequent steps.
  • Next, the moving objects present in the image are extracted through image analysis, and the coordinates of the moving objects are obtained (S30).
  • the heat map is generated by accumulating the trajectories of the moving objects through image analysis of a series of frame images over time (S40).
  • As described above, in the conventional approach a moving object must be extracted in order to generate a heat map.
  • To this end, compressed image decoding, downscale resizing, and image analysis are performed. These are computationally very heavy processes, and therefore, in a conventional video surveillance system, the number of channels that a single video analysis server can process simultaneously is quite limited.
  • The number of CCTV channels that even a high-performance video analytics server can cover is typically at most about 20. Therefore, generating heat maps for the compressed images produced by CCTV cameras installed at many locations required a plurality of image analysis servers, which increased cost and made it difficult to secure physical space.
  • An object of the present invention is to provide a technique for effectively generating heat maps from compressed images such as H.264 AVC and H.265 HEVC.
  • More specifically, an object of the present invention is to provide a technology that, for a compressed image generated by, for example, a CCTV camera, can generate a heat map with a small amount of computation by using syntax elements (e.g., motion vectors, coding types) obtained by parsing the compressed image data to extract the region in which meaningful movement exists in the image, that is, the moving object region, and by accumulating the trajectory of the moving object region, rather than through complex image processing.
  • To achieve the above object, the syntax-based heat map generation method for a compressed image according to the present invention comprises: a first step of parsing the bitstream of the compressed image to obtain a motion vector and a coding type for each coding unit; a second step of obtaining a motion vector cumulative value over a predetermined time for each of the plurality of image blocks constituting the compressed image; a third step of comparing the motion vector cumulative values of the plurality of image blocks with a preset first threshold; a fourth step of marking image blocks whose motion vector cumulative value exceeds the first threshold as moving object regions; and a fifth step of generating a heat map for the compressed image by accumulating the moving object regions over a series of image frames of the compressed image.
  • In the present invention, the fifth step may include: identifying the plurality of moving object regions marked as above in the series of image frames constituting the compressed image; calculating a representative position for each of the plurality of moving object regions; calculating hit data by accumulating the calculated representative positions; and generating the heat map for the compressed image based on the hit data.
  • The fifth step may further include: newly issuing and assigning a unique ID to any moving object region, among the plurality of moving object regions identified above in the series of image frames constituting the compressed image, that has not yet been assigned one; and reinforcing the hit data by accumulating the movement trajectory for each unique ID.
  • In the present invention, the heat map generation method may further include: (a) identifying a plurality of image blocks adjacent to the moving object regions (hereinafter referred to as 'neighbor blocks'); (b) comparing the motion vector values of the plurality of neighbor blocks with a preset second threshold; (c) additionally marking neighbor blocks whose motion vector value exceeds the second threshold as moving object regions; (d) additionally marking neighbor blocks whose coding type is intra picture as moving object regions; and (e) performing interpolation on the plurality of moving object regions so as to additionally mark, as moving object regions, a predetermined number or fewer of unmarked image blocks surrounded by the moving object regions.
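For illustration only, the Python skeleton below shows one way the five steps summarized above could be wired together. All helper names (parse_frames, mark_moving_blocks, expand_boundary, interpolate_holes, render_heat_map) and the 12-frame window are hypothetical assumptions introduced by this sketch, not part of the patent; the helpers are sketched step by step alongside the corresponding passages later in this document.

```python
# Hedged skeleton of the five-step method; all helpers are hypothetical stand-ins
# defined in later sketches, not the patent's reference implementation.

def generate_heat_map(bitstream_path, window_size=12):
    # window_size of 12 frames roughly corresponds to 500 msec at 24 fps (assumption)
    hit_data = {}
    window = []
    for frame_syntax in parse_frames(bitstream_path):      # first step: parse syntax
        window.append(frame_syntax)
        if len(window) > window_size:                      # keep a sliding time window
            window.pop(0)
        moving = mark_moving_blocks(window)                # second to fourth steps
        moving = expand_boundary(moving, frame_syntax)     # boundary steps (a)-(d)
        moving = interpolate_holes(moving)                 # interpolation step (e)
        for block in moving:                               # fifth step: accumulate regions
            hit_data[block] = hit_data.get(block, 0) + 1
    return render_heat_map(hit_data)
```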
  • The computer program according to the present invention is stored in a medium, in combination with hardware, in order to execute the syntax-based heat map generation method for a compressed image as described above.
  • According to the present invention, a heat map can be efficiently generated from a compressed CCTV image without complex processing such as decoding, downscale resizing, difference image acquisition, image analysis, and the like.
  • FIG. 1 is a block diagram showing a general configuration of a video decoding apparatus.
  • FIG. 2 is a flowchart illustrating a process of generating a heat map from a compressed image in the prior art.
  • FIG. 3 is a flowchart illustrating an entire process of a syntax based heatmap generation process for a compressed image according to the present invention.
  • FIG. 4 is a flowchart illustrating an embodiment of a process of detecting effective motion from a compressed image in the present invention.
  • FIG. 5 is a diagram illustrating an example of a result of applying an effective motion region detection process according to the present invention to a CCTV compressed image.
  • FIG. 6 is a flowchart illustrating an example of a process of detecting a boundary region for a moving object region in the present invention.
  • FIG. 7 is a diagram illustrating an example of a result of applying the boundary region detection process according to the present invention to the CCTV image of FIG. 5.
  • FIG. 8 is a diagram illustrating an example of a result of arranging a moving object region through interpolation with respect to the CCTV image of FIG. 7.
  • FIG. 9 is a flowchart illustrating an embodiment of a process of generating a heat map from a moving object region detected in a compressed image according to the present invention.
  • FIG. 10 is a diagram illustrating an example in which unique IDs are assigned to moving object regions in the present invention.
  • FIG. 11 is a diagram illustrating an example in which center coordinates are set for moving object regions in the present invention.
  • The present invention may preferably be performed by an image analysis server in a system that handles compressed images, for example a CCTV video surveillance system or a CCTV video analysis system.
  • Conceptually, the present invention parses the bitstream of a compressed image without having to decode it, and quickly extracts the moving object region using the syntax information of each image block, that is, each macroblock and subblock, preferably the motion vector and coding type information.
  • As shown in the drawings attached to this specification, the moving object region obtained in this way does not accurately reflect the boundary of the moving object, but it exhibits a reasonable level of reliability while the processing speed is very high.
  • the present invention generates a heat map for the space by accumulating the information of the moving object region thus obtained.
  • the moving object region can be extracted and the heat map can be generated without decoding the compressed image.
  • This does not mean, however, that an apparatus or software to which the present invention is applied must not perform any decoding of the compressed image; the scope of the present invention is not limited in this respect.
  • Step S100: First, effective motion that is practically recognizable is detected from the compressed image based on its motion vectors, and the image regions in which the effective motion is detected are set as moving object regions.
  • the motion vector and coding type of a coding unit of a compressed image are parsed according to a video compression standard such as H.264 AVC and H.265 HEVC.
  • The size of a coding unit is generally in the range of about 64x64 to 4x4 pixels and may be set flexibly.
  • Then, the motion vectors are accumulated over a predetermined time period (for example, 500 msec) for each image block, and it is checked whether the motion vector cumulative value exceeds a preset first threshold (for example, 20). If such an image block is found, effective motion is considered to have been found in that image block and it is marked as a moving object region. Conversely, even if motion vectors are generated, if the cumulative value over the predetermined time does not exceed the first threshold, the image change is regarded as insignificant and ignored.
  • Step S200: Next, how far the boundary of the moving object region detected in S100 extends is determined based on the motion vector and the coding type. The image blocks adjacent to a block marked as a moving object region are inspected, and if a neighboring block has a motion vector above a second threshold (for example, 0) or its coding type is intra picture, that image block is also marked as a moving object region. Through this process, the additionally marked image blocks effectively form a single lump together with the moving object region detected in S100.
  • That is, if a block in the vicinity of a moving object region in which effective motion was found itself shows a certain amount of motion, it is marked as a moving object region because it is likely to form one mass with the adjacent moving object region.
  • For an intra-coded block, no motion vector exists, so a determination based on a motion vector is impossible. Accordingly, an intra-coded block located adjacent to an image block already detected as a moving object region is assumed to form one mass with the previously extracted moving object region.
  • Step S300: Interpolation is applied to the moving object regions detected in S100 and S200 to clean up fragmentation of the moving object regions.
  • Because the determination of the moving object region is made in units of image blocks, even a single actual moving object (for example, a person) may contain image blocks in its middle that are not marked as the moving object region, so that one object appears divided into several regions. Accordingly, if one or a few unmarked image blocks are surrounded by a plurality of image blocks marked as the moving object region, they are additionally marked as the moving object region. In this way, a moving object region that was divided into several pieces can be merged into one. The effect of this interpolation is clearly visible when comparing FIG. 7 and FIG. 8.
  • Step S400: Through the above process, the moving object regions are quickly extracted from each frame image constituting the compressed image based on the syntax (motion vector, coding type) of its coding units.
  • In step S400, a heat map of the corresponding image is generated using this extraction result of the moving object regions.
  • the present invention accumulates the extraction result of the moving object region over a series of image frames to estimate how frequently the moving object is found for each region in the image and thereby generates a heat map.
  • For this purpose, either the detections of the moving object regions themselves may be accumulated, or the movement trajectories of the moving object regions may be accumulated.
  • a detailed process of generating a heat map from the extraction result of the moving object region will be described later in detail with reference to FIG. 9.
  • FIG. 4 is a flowchart illustrating an example of a process of detecting effective motion from a compressed image in the present invention
  • FIG. 5 is a diagram illustrating an example of a result of applying the effective motion region detection process according to the present invention to a CCTV compressed image.
  • the process of FIG. 4 corresponds to step S100 in FIG. 3.
  • Step S110: First, the coding units of the compressed image are parsed to obtain motion vectors and coding types.
  • In general, a video decoding apparatus performs header parsing and motion vector operations on the stream of compressed video according to a video compression standard such as H.264 AVC or H.265 HEVC. Through this process, the motion vector and coding type are parsed for each coding unit of the compressed image.
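The parsing front end itself depends on the decoder used, so it is only stubbed here. The sketch below shows the data this step is assumed to produce per coding unit (block position, motion vector, coding type); both `CodingUnitSyntax` and `parse_frames` are hypothetical names introduced for illustration, and a real system would plug in the header-parsing and motion-vector-operation stages shown in FIG. 1.

```python
# Hypothetical data model for the syntax elements used by the method; the parser
# below is a placeholder, not a real H.264/HEVC bitstream parser.

from dataclasses import dataclass
from typing import Iterator, List, Tuple

@dataclass
class CodingUnitSyntax:
    block: Tuple[int, int]               # (block_x, block_y) position in block units
    motion_vector: Tuple[float, float]   # (mv_x, mv_y); zero for intra-coded blocks
    coding_type: str                     # "inter" or "intra"

def parse_frames(bitstream_path: str) -> Iterator[List[CodingUnitSyntax]]:
    """Yield the parsed coding-unit syntax of each frame of the compressed image.

    Placeholder only: attach a real bitstream parser here (the header parsing and
    motion vector operation steps of a standard decoder).
    """
    raise NotImplementedError("attach a real H.264/HEVC syntax parser")
```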
  • Step S120: A motion vector cumulative value over a preset time (for example, 500 msec) is acquired for each of the plurality of image blocks constituting the compressed image.
  • The intention of this step is to detect effective movements that are practically recognizable in the compressed image, such as moving cars, running people, and fighting crowds. Swaying leaves, momentary ghost artifacts, and shadows that change slightly due to light reflection are moving, but they are practically meaningless and should not be detected.
  • a motion vector cumulative value is obtained by accumulating a motion vector in units of one or more image blocks for a predetermined time period (for example, 500 msec).
  • In this specification, the term image block is used as a concept including both macroblocks and subblocks.
  • Steps S130 and S140: The motion vector cumulative values of the plurality of image blocks are compared with a preset first threshold (e.g., 20), and image blocks whose motion vector cumulative value exceeds the first threshold are marked as moving object regions.
  • If an image block with a sufficiently large motion vector cumulative value is found as described above, meaningful movement, that is, effective movement, is considered to have been found in that image block, and it is marked as a moving object region.
  • The purpose is to selectively detect movements that are worth the attention of surveillance personnel, such as a running person.
  • Conversely, if the cumulative value over the predetermined time is small enough not to exceed the first threshold, the change in the image is assumed to be small and insignificant and is ignored in this detection step.
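Assuming the `CodingUnitSyntax` records sketched earlier, steps S120 to S140 could look roughly as follows. The |mv_x|+|mv_y| magnitude measure is an assumption of this sketch; the 500 msec window and the example threshold of 20 are only the example values given in the description.

```python
from collections import defaultdict

def mark_moving_blocks(window_frames, first_threshold=20.0):
    """Steps S120-S140 sketch: window_frames is a list of parsed frames covering ~500 msec."""
    accum = defaultdict(float)
    for frame_syntax in window_frames:
        for cu in frame_syntax:
            mvx, mvy = cu.motion_vector
            accum[cu.block] += abs(mvx) + abs(mvy)   # accumulate motion per image block
    # Blocks whose cumulative motion exceeds the first threshold are marked as
    # moving object regions; everything else is ignored as insignificant change.
    return {block for block, total in accum.items() if total > first_threshold}
```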
  • FIG. 5 is an example illustrating a result of detecting an effective motion region from a CCTV compressed image through the process of FIG. 4.
  • an image block having a motion vector accumulation value equal to or greater than a first threshold value is marked as a moving object area and displayed as a bold line area.
  • the sidewalk block, the road, and the shadowed part are not displayed as the moving object area, while the walking people or the driving car are displayed as the moving object area.
  • FIG. 6 is a flowchart illustrating an example of a process of detecting the boundary region of a moving object region in the present invention, and FIG. 7 is a diagram illustrating an example of the result of further applying this boundary region detection process to the CCTV image of FIG. 5.
  • the process of FIG. 6 corresponds to step S200 in FIG. 3.
  • Referring to FIG. 5, the moving objects are not fully marked; only portions of them are marked. In other words, for a walking person or a moving car, not the whole object but only some of its blocks are marked. It is also found that several moving object regions are marked for a single moving object. This means that the criterion for the moving object region adopted in S100 is very useful for filtering out the background regions but is quite strict. Therefore, it is necessary to detect the boundary of the moving object by examining the surroundings of the moving object regions.
  • Step S210: First, the image blocks adjacent to the image blocks marked as moving object regions in the preceding step S100 are identified; in this specification these are referred to as 'neighbor blocks'. The neighbor blocks are blocks that were not marked as moving object regions in S100, and the process of FIG. 6 examines them further to determine whether any of them should be included in the boundary of the moving object region.
  • Steps S220 and S230: The motion vector values of the plurality of neighbor blocks are compared with a preset second threshold (e.g., 0), and neighbor blocks whose motion vector value exceeds the second threshold are marked as moving object regions. If a block located adjacent to a moving object region in which practically meaningful effective motion was found shows a certain amount of motion itself, it is likely, given the characteristics of captured images, to form one lump with the adjacent moving object region. Therefore, such neighbor blocks are also marked as moving object regions.
  • Step S240: In addition, neighbor blocks whose coding type is intra picture are also marked as moving object regions.
  • Since an intra-coded block has no motion vector, it is fundamentally impossible to determine from a motion vector whether motion exists in such a neighbor block. In this case, it is safer to assume that an intra-coded block located adjacent to an image block already detected as a moving object region belongs to the previously extracted moving object region.
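Under the same assumptions as the earlier sketches, steps S210 to S240 could be written as follows; the 8-neighborhood and the string comparison against "intra" are illustrative choices of this sketch, not requirements of the description.

```python
def expand_boundary(moving, frame_syntax, second_threshold=0.0):
    """Steps S210-S240 sketch: extend marked regions into qualifying neighbor blocks."""
    syntax_by_block = {cu.block: cu for cu in frame_syntax}   # index current frame syntax
    expanded = set(moving)
    for bx, by in moving:
        for neighbor in ((bx - 1, by), (bx + 1, by), (bx, by - 1), (bx, by + 1),
                         (bx - 1, by - 1), (bx - 1, by + 1), (bx + 1, by - 1), (bx + 1, by + 1)):
            cu = syntax_by_block.get(neighbor)
            if cu is None or neighbor in expanded:
                continue
            mvx, mvy = cu.motion_vector
            has_motion = abs(mvx) + abs(mvy) > second_threshold   # steps S220-S230
            if has_motion or cu.coding_type == "intra":           # step S240
                expanded.add(neighbor)                            # joins the same lump
    return expanded
```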
  • FIG. 7 is a diagram illustrating a result of applying a boundary region detection process to a CCTV compressed image.
  • a plurality of image blocks marked as a moving object region are displayed as an area of a bold line.
  • Comparing FIG. 7 with FIG. 5, the moving object regions indicated by the bold-line areas have been further extended around the regions marked in FIG. 5, and, compared against the image captured by the CCTV camera, they are now sufficient to cover the entire moving objects.
  • FIG. 8 is a diagram illustrating an example of the result of cleaning up the moving object regions through interpolation according to the present invention, applied to the CCTV image of FIG. 7 to which the boundary region detection process has been applied.
  • Step S300 is a process of cleaning up the fragmentation of the moving object regions by applying interpolation to the moving object regions detected in the preceding steps S100 and S200.
  • Referring to FIG. 7, unmarked image blocks are found between the moving object regions indicated by the bold lines. If unmarked image blocks remain in the middle, the regions may be treated as several individual moving objects. If the moving object region is fragmented in this way, the result of step S400 may become inaccurate, and the number of moving object regions increases, complicating the processing of step S400.
  • Accordingly, an unmarked image block surrounded by marked moving object regions is additionally marked as a moving object region through interpolation.
  • Referring to FIG. 8, in contrast to FIG. 7, all of the unmarked image blocks lying between the moving object regions have been marked as moving object regions through interpolation. In this way, the fragments are bundled together and treated as a single moving object.
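Step S300 can be sketched as a simple hole-filling pass over the block grid. The grid size (120x68, roughly a 1920x1080 frame with 16x16 blocks) and the "at least three marked 4-neighbors" rule are assumptions standing in for the "predetermined number or fewer surrounded blocks" criterion of the description.

```python
def interpolate_holes(moving, grid_w=120, grid_h=68):
    """Step S300 sketch: fill unmarked blocks that are (almost) surrounded by marked blocks."""
    filled = set(moving)
    for bx in range(grid_w):
        for by in range(grid_h):
            if (bx, by) in filled:
                continue
            neighbors = ((bx - 1, by), (bx + 1, by), (bx, by - 1), (bx, by + 1))
            if sum(1 for n in neighbors if n in filled) >= 3:
                filled.add((bx, by))   # isolated hole inside a moving object region
    return filled
```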
  • FIG. 9 is a flowchart illustrating an example of a process of generating a heat map from a moving object region detected in a compressed image according to the present invention, and corresponds to step S400 of FIG. 3.
  • the present invention extracts a moving object region based on syntax information directly obtained from a compressed image.
  • Accordingly, the prior-art process of decoding the compressed image and then acquiring and analyzing difference images of the original image is unnecessary, and according to the inventors' tests the processing speed is improved by up to 20 times.
  • this approach has the disadvantage of poor precision.
  • the process of generating the heat map also reflects these structural features.
  • Steps S410 and S420: A plurality of moving object regions are identified in the series of image frames constituting the compressed image. For example, if the heat map is to be generated for 10 minutes of compressed video captured at 24 frames per second, a number of moving object regions, such as those indicated by the bold lines in FIG. 8, are identified in a total of 14,400 frame images through the process of steps S100 to S300 of FIG. 3.
  • Representative coordinates are then calculated for the plurality of moving object regions identified in this manner, each of which represents a position within the frame image.
  • For example, as shown in FIG. 11, the center coordinates (cx1, cy1), (cx2, cy2), (cx3, cy3) of a virtual best-fit rectangle surrounding each moving object region may be used.
  • Step S430: Then, by accumulating the representative coordinates derived from the series of image frames, for example in image block units, hit data reflecting how frequently moving objects appear at each location in the space is calculated.
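A sketch of steps S410 to S430 under the same block-set representation used above: chunks are found by a 4-connected flood fill, the representative coordinate is the center of each chunk's bounding rectangle (as in FIG. 11), and the hit data counts how often a representative lands on each block. The flood-fill grouping is an assumption of this sketch.

```python
from collections import defaultdict, deque

def connected_chunks(moving):
    """Split the set of marked blocks into 4-connected chunks."""
    remaining, chunks = set(moving), []
    while remaining:
        seed = remaining.pop()
        queue, chunk = deque([seed]), {seed}
        while queue:
            bx, by = queue.popleft()
            for n in ((bx - 1, by), (bx + 1, by), (bx, by - 1), (bx, by + 1)):
                if n in remaining:
                    remaining.remove(n)
                    chunk.add(n)
                    queue.append(n)
        chunks.append(chunk)
    return chunks

def representative(chunk):
    """Center of the bounding rectangle of a chunk, in block coordinates."""
    xs = [b[0] for b in chunk]
    ys = [b[1] for b in chunk]
    return ((min(xs) + max(xs)) // 2, (min(ys) + max(ys)) // 2)

def accumulate_hits(per_frame_moving_sets):
    """Step S430 sketch: accumulate representative coordinates over all frames."""
    hit_data = defaultdict(int)
    for moving in per_frame_moving_sets:
        for chunk in connected_chunks(moving):
            hit_data[representative(chunk)] += 1
    return hit_data
```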
  • Alternatively, the hit data calculation of step S430 may be omitted, the hit data may instead be calculated from the start through steps S440 to S470, and the heat map may be generated from that.
  • Steps S440 and S450: A unique ID is assigned to each of the moving object regions in the series of image frames constituting the compressed image, so that a moving object region is treated as an 'object' rather than a mere region.
  • When a moving object region that was assigned a unique ID in step S440 disappears while passing through the series of image frames, the unique ID assigned in step S440 is revoked for that moving object region (S450). In other words, the moving object that was previously found and tracked has disappeared from the image.
  • Through steps S440 and S450, the moving object region is regarded as an object, and the movement trajectory of that object is tracked as it moves over the series of frame images of the compressed image.
  • To perform steps S440 and S450, it must be possible to determine whether the chunks of interconnected image blocks marked as moving object regions are the same across consecutive image frames, because only then can it be determined whether a unique ID has already been assigned to the moving object region currently being handled.
  • In the present invention, image blocks are marked as moving object regions without inspecting the original image content, so it cannot be confirmed whether the chunks of moving object regions in the preceding and following image frames are really the same object. That is, since the contents of the image are not known, a change such as a cat being replaced by a dog at the same spot between consecutive frames cannot be identified. However, given that the time interval between frames is very short and that the objects observed by a CCTV camera move at ordinary speeds, such a case is unlikely to happen.
  • Accordingly, the present invention estimates that the chunks of moving object regions in the preceding and following frames are the same object when the ratio or number of image blocks overlapping between them is equal to or greater than a predetermined threshold. With this approach, it is possible to determine whether a specific moving object region has moved, whether a moving object region is new, or whether an existing moving object region has disappeared, even without knowing the contents of the original image. This judgment is less accurate than the prior art, but it greatly increases the data processing speed, which is advantageous in practical applications.
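The overlap test described above could be sketched as follows; the 0.5 overlap ratio, the "largest overlap wins" matching rule, and the global ID counter are assumptions of this sketch.

```python
import itertools

_id_counter = itertools.count(1)   # issues new unique IDs (assumption: global counter)

def assign_unique_ids(prev_tracked, current_chunks, overlap_ratio=0.5):
    """Steps S440-S450 sketch. prev_tracked: {unique_id: set_of_blocks} from the previous frame."""
    tracked = {}
    for chunk in current_chunks:
        best_id, best_ratio = None, 0.0
        for uid, prev_blocks in prev_tracked.items():
            ratio = len(chunk & prev_blocks) / max(len(chunk), 1)   # block overlap ratio
            if ratio > best_ratio:
                best_id, best_ratio = uid, ratio
        if best_id is not None and best_ratio >= overlap_ratio and best_id not in tracked:
            tracked[best_id] = chunk               # same object: ID carried over
        else:
            tracked[next(_id_counter)] = chunk     # new object: issue a new unique ID
    # IDs present in prev_tracked but absent from tracked are implicitly revoked (S450).
    return tracked
```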
  • Step S460: The representative coordinates calculated in step S420 for the plurality of moving object regions are arranged by unique ID, so that a sequence of the representative coordinates at which each unique ID appears across the series of image frames is obtained. This corresponds to a movement trajectory indicating how each moving object, represented by its unique ID, has moved through the series of image frames.
  • Step S470: Then, the hit data is reinforced by accumulating the movement trajectories of the unique IDs, for example in image block units. The hit data calculated in step S430 reflects only how frequently the moving object regions appear, so it is fast to compute but does not reflect the characteristics of the objects' movement trajectories. The hit data obtained in step S470 is relatively slower to compute, but has the advantage of reflecting the movement lines of objects in the corresponding space.
  • Step S480: A heat map image is generated for the compressed image based on the hit data calculated in the above process.
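Finally, steps S460 to S480 could be sketched as follows, reusing representative() from the earlier sketch; the blue-to-red color ramp and the one-pixel-per-block output are illustrative choices, not the patent's rendering method.

```python
from collections import defaultdict

def accumulate_trajectories(per_frame_tracked):
    """Steps S460-S470 sketch. per_frame_tracked: list of {unique_id: set_of_blocks}, one per frame."""
    trajectories = defaultdict(list)                      # step S460: coordinates per unique ID
    for tracked in per_frame_tracked:
        for uid, chunk in tracked.items():
            trajectories[uid].append(representative(chunk))
    hit_data = defaultdict(int)                           # step S470: accumulate trajectories
    for coords in trajectories.values():
        for block in coords:
            hit_data[block] += 1
    return hit_data

def render_heat_map(hit_data, grid_w=120, grid_h=68):
    """Step S480 sketch: map normalized hit counts to an RGB image, one pixel per block."""
    peak = max(hit_data.values(), default=1)
    image = [[(0, 0, 0) for _ in range(grid_w)] for _ in range(grid_h)]
    for (bx, by), count in hit_data.items():
        if 0 <= bx < grid_w and 0 <= by < grid_h:
            t = count / peak
            image[by][bx] = (int(255 * t), 0, int(255 * (1 - t)))   # blue (low) -> red (high)
    return image
```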
  • the present invention may be embodied in the form of computer readable codes on a computer readable nonvolatile recording medium.
  • Such nonvolatile recording media include various types of storage devices, such as hard disks, SSDs, CD-ROMs, NAS, magnetic tapes, web disks, and cloud disks. The present invention may also be implemented in a form in which the computer readable codes are distributed over network-connected computer systems and stored and executed in a distributed manner.
  • the present invention may be implemented in the form of a computer program stored in a medium in combination with hardware to execute a specific procedure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention generally relates to a technology for efficiently generating a heat map from a compressed image such as H.264 AVC or H.265 HEVC. More particularly, the present invention relates to a technology that can generate a heat map with a small number of operations by extracting a region having meaningful movement in an image, that is, a moving object region, using syntax (e.g., a motion vector, a coding type) obtained by parsing compressed image data and accumulating a trajectory of the moving object region, instead of generating a heat map through complex image processing, as in the prior art, for a compressed image generated by, for example, a CCTV camera. According to the present invention, a heat map can advantageously be generated efficiently from a compressed CCTV image without complex processing such as decoding, downscale resizing, difference image acquisition, image analysis, or the like. In particular, it is possible to generate a heat map with about 1/10 of the amount of operations of the prior art, so there is an advantage in that the number of analysis channels available on an image analysis server can be increased by about 10 times or more.
PCT/KR2019/009372 2018-07-30 2019-07-29 Procédé de génération d'une carte thermique basée sur la syntaxe pour une image compressée WO2020027511A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2018-0088394 2018-07-30
KR1020180088394A KR102042397B1 (ko) 2018-07-30 2018-07-30 압축영상에 대한 신택스 기반의 히트맵 생성 방법

Publications (1)

Publication Number Publication Date
WO2020027511A1 true WO2020027511A1 (fr) 2020-02-06

Family

ID=68542075

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/009372 WO2020027511A1 (fr) 2018-07-30 2019-07-29 Procédé de génération d'une carte thermique basée sur la syntaxe pour une image compressée

Country Status (2)

Country Link
KR (1) KR102042397B1 (fr)
WO (1) WO2020027511A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102238124B1 (ko) 2019-11-29 2021-04-08 주식회사 다누시스 역주행 객체 검출 시스템
KR102190486B1 (ko) 2020-04-29 2020-12-11 주식회사 다누시스 광학흐름을 응용한 선별관제 보행자 이상행동 검출시스템

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100343780B1 (ko) * 2000-07-31 2002-07-20 한국전자통신연구원 압축 비디오의 압축 영역에서의 카메라 움직임 검출 및분할방법
US20140133703A1 (en) * 2012-11-11 2014-05-15 Samsung Electronics Co. Ltd. Video object tracking using multi-path trajectory analysis
KR20170072131A (ko) * 2015-12-16 2017-06-26 파나소닉 아이피 매니지먼트 가부시키가이샤 사람 검지 시스템
KR101798768B1 (ko) * 2016-06-07 2017-12-12 주식회사 에스원 이벤트 감지기반 영상 저장 장치 및 방법
US20180114067A1 (en) * 2016-10-26 2018-04-26 Samsung Sds Co., Ltd. Apparatus and method for extracting objects in view point of moving vehicle

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150080863A (ko) * 2014-01-02 2015-07-10 삼성테크윈 주식회사 히트맵 제공 장치 및 방법
KR102085035B1 (ko) * 2014-09-29 2020-03-05 에스케이 텔레콤주식회사 객체 인식을 위한 객체 후보영역 설정방법 및 장치
KR102126370B1 (ko) * 2014-11-24 2020-07-09 한국전자통신연구원 동작 인식 장치 및 방법
KR20160093809A (ko) * 2015-01-29 2016-08-09 한국전자통신연구원 프레임 영상과 모션 벡터에 기초하는 객체 검출 방법 및 장치
KR101640572B1 (ko) * 2015-11-26 2016-07-18 이노뎁 주식회사 효율적인 코딩 유닛 설정을 수행하는 영상 처리 장치 및 영상 처리 방법
KR101874639B1 (ko) * 2016-09-09 2018-07-04 이노뎁 주식회사 모션 센서를 이용한 엘리베이터용 감시 카메라 장치
KR101949676B1 (ko) * 2017-12-20 2019-02-19 이노뎁 주식회사 압축영상에 대한 신택스 기반의 객체 침입 감지 방법

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100343780B1 (ko) * 2000-07-31 2002-07-20 한국전자통신연구원 압축 비디오의 압축 영역에서의 카메라 움직임 검출 및분할방법
US20140133703A1 (en) * 2012-11-11 2014-05-15 Samsung Electronics Co. Ltd. Video object tracking using multi-path trajectory analysis
KR20170072131A (ko) * 2015-12-16 2017-06-26 파나소닉 아이피 매니지먼트 가부시키가이샤 사람 검지 시스템
KR101798768B1 (ko) * 2016-06-07 2017-12-12 주식회사 에스원 이벤트 감지기반 영상 저장 장치 및 방법
US20180114067A1 (en) * 2016-10-26 2018-04-26 Samsung Sds Co., Ltd. Apparatus and method for extracting objects in view point of moving vehicle

Also Published As

Publication number Publication date
KR102042397B1 (ko) 2019-11-08

Similar Documents

Publication Publication Date Title
WO2020027513A1 (fr) Système d'analyse d'image basé sur la syntaxe pour image compressée, et procédé de traitement d'interfonctionnement
WO2019124635A1 (fr) Procédé orienté syntaxe de détection d'une intrusion d'objet dans une vidéo comprimée
KR102187376B1 (ko) 딥러닝 이미지 분석과 연동하는 신택스 기반의 선별 관제 제공 방법
US20120275524A1 (en) Systems and methods for processing shadows in compressed video images
WO2018135922A1 (fr) Procédé et système de suivi en temps réel d'un objet d'intérêt dans un environnement multi-caméras
WO2020027511A1 (fr) Procédé de génération d'une carte thermique basée sur la syntaxe pour une image compressée
WO2016201683A1 (fr) Plateforme dématérialisée à synchronisation de caméras multiples
US20170213086A1 (en) Cloud platform with multi camera synchronization
WO2016064107A1 (fr) Procédé et appareil de lecture vidéo sur la base d'une caméra à fonctions de panoramique/d'inclinaison/de zoom
WO2019039661A1 (fr) Procédé d'extraction basée sur la syntaxe d'une région d'objet mobile d'une vidéo compressée
KR102061915B1 (ko) 압축영상에 대한 신택스 기반의 객체 분류 방법
WO2020027512A1 (fr) Procédé de commande de suivi d'objet basé sur syntaxe destiné à une image comprimée par un appareil photo ptz
KR102015082B1 (ko) 압축영상에 대한 신택스 기반의 객체 추적 방법
KR102015084B1 (ko) 압축영상에 대한 신택스 기반의 도로 역주행 감지 방법
KR101064946B1 (ko) 다중 영상 분석 기반의 객체추출 장치 및 그 방법
WO2019124633A1 (fr) Procédé orienté syntaxe de détection d'objet d'escalade de mur dans une vidéo comprimée
WO2019124632A1 (fr) Procédé orienté syntaxe de détection d'objet de flânerie dans une vidéo comprimée
KR102179077B1 (ko) 상용분류기 외부 연동 학습형 신경망을 이용한 압축영상에 대한 신택스 기반의 객체 분류 방법
KR102178952B1 (ko) 압축영상에 대한 신택스 기반의 mrpn-cnn을 이용한 객체 분류 방법
KR102343029B1 (ko) 모션벡터 기반 분기처리를 이용한 압축영상의 영상분석 처리 방법
JPH1115981A (ja) 広域監視装置及び広域監視システム
KR102153093B1 (ko) 컨텍스트를 고려한 압축영상에 대한 신택스 기반의 이동객체 영역 추출 방법
KR102585167B1 (ko) 압축영상에 대한 신택스 기반의 동일인 분석 방법
KR102509981B1 (ko) 가상 세계를 이용한 비디오 핑거프린팅 학습 방법 및 이를 이용한 시스템
KR200491642Y1 (ko) 신택스 기반 객체 ROI 압축을 이용한 PoE 카메라 연동형 트랜스코더 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19845472

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19845472

Country of ref document: EP

Kind code of ref document: A1