WO2012137994A1 - Image recognition device and image-monitoring method therefor - Google Patents

Image recognition device and image-monitoring method therefor

Info

Publication number
WO2012137994A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
monitoring
camera
unit
cameras
Prior art date
Application number
PCT/KR2011/002341
Other languages
French (fr)
Korean (ko)
Inventor
정의석
김영석
Original Assignee
(주)아이티엑스시큐리티
Priority date
Filing date
Publication date
Application filed by (주)아이티엑스시큐리티 filed Critical (주)아이티엑스시큐리티
Publication of WO2012137994A1 publication Critical patent/WO2012137994A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • The present invention relates to an image recognition device that divides a surveillance zone into a plurality of logical unit surveillance zones and performs video surveillance with a plurality of sensors and a plurality of cameras, and to a video surveillance method therefor.
  • An image recognition device is a device that recognizes and analyzes objects in an image by using various image processing techniques beyond simply storing an image signal input through a camera.
  • Conventional camera-based video surveillance apparatuses include CCTV systems using a video cassette recorder (VCR) and digital video recorders (DVRs) that store images on a hard disk; such devices, however, merely store images.
  • The applicant has previously proposed an algorithm that, beyond simply recognizing an object captured in a camera image of the surveillance zone, tracks the object's movement in a logical way.
  • An object of the present invention is to provide a video recognition apparatus for performing video surveillance with a plurality of sensors and a plurality of cameras by dividing a surveillance zone into a plurality of logical unit surveillance zones and a video surveillance method thereof.
  • The image recognition device includes a plurality of cameras that respectively photograph a plurality of unit surveillance zones into which the entire surveillance zone is divided, a plurality of sensors installed on the outer boundary of the entire surveillance zone, and an image controller that receives images from the plurality of cameras and is connected to an external system.
  • the image controller includes an image receiver, an event generator, an image processor, a monitor, and an external device linking unit.
  • An image monitoring method using the image controller includes: establishing a logical connection relationship among the plurality of cameras based on the moving paths of a moving object that follow from the spatial connection of the plurality of unit monitoring zones; detecting the moving object with one of the plurality of sensors installed on the outside of the entire monitoring zone and generating a monitoring event; in response to the monitoring event, taking the camera logically connected to the sensor that detected the moving object as the primary tracking camera, and recognizing and tracking the moving object in the image input from that camera; and, at the time the moving object is recognized, providing the image in which it was recognized to the external system.
  • In one embodiment, when the object is no longer recognized during tracking, the method further includes: re-acquiring the object through an object recognition process on the image input from a secondary tracking camera that is logically connected to the primary tracking camera; and, at the time the object is re-recognized by the secondary tracking camera, providing the image of the secondary tracking camera to the external system.
  • In another embodiment, when the object is not recognized during tracking and a plurality of cameras are logically connected to the primary tracking camera, the video surveillance method selects the secondary tracking camera by performing moving-object recognition on the images input from all of the logically connected cameras.
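The secondary-camera selection described above can be sketched in code. This is an illustrative reading only; the function and identifier names are the editor's assumptions, not part of the patent:

```python
def select_secondary_camera(primary, camera_links, recognizes):
    """When the object is lost on the primary tracking camera, run object
    recognition on every logically connected camera and promote the first
    one whose image shows the moving object again."""
    for candidate in sorted(camera_links.get(primary, ())):
        if recognizes(candidate):   # object recognition on that camera's feed
            return candidate        # this camera becomes the secondary tracker
    return None                     # object not yet re-acquired

# Hypothetical topology: camera C1 is logically linked to C2 and C3,
# and the object reappears only on C3's feed.
links = {"C1": {"C2", "C3"}}
print(select_secondary_camera("C1", links, lambda cam: cam == "C3"))  # C3
```

Note that only the logically connected cameras are probed, which is what keeps the image-processing load bounded.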
  • The image recognition apparatus of the present invention can divide the monitoring zone into a plurality of unit monitoring zones using a plurality of sensors and cameras, and provide the monitoring results to an external system while performing video surveillance.
  • Because the cameras and sensors are logically interconnected, once a monitored object is captured, the image recognition device achieves the same effect as if an operator were selecting and recording the appropriate camera along the object's moving path.
  • FIG. 1 is a block diagram of an image recognition device according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating an example in which sensors and cameras are installed in a surveillance zone.
  • FIG. 3 is a flowchart provided to explain an image monitoring method according to an embodiment of the present invention.
  • FIG. 4 shows an example of an image provided as a result of monitoring.
  • the image recognition device 100 of the present invention is connected to an external system 30 through a network 10.
  • the external system 30 is a device capable of receiving a video surveillance result provided in the form of an image from the image recognition device 100 and displaying the result to the user.
  • the external system 30 may be a mobile phone, a smart phone, or a separately designed security system.
  • the network 10 may be configured in various ways according to the type of the external system 30.
  • For example, when the external system 30 is a portable terminal or a security system, the network 10 may be a mobile communication network, the wired or wireless Internet, and the like.
  • the image recognition apparatus 100 may provide the external system 30 with an image surveillance result through image processing on an analog or digital image signal provided by the plurality of cameras 101a to 101e installed in the surveillance space.
  • The image recognition device 100 includes a plurality of cameras 101a to 101e installed in the surveillance space, a plurality of sensors 103a to 103c, and an image controller 110 that processes the images captured by the cameras 101a to 101e.
  • The cameras 101a to 101e may be analog cameras that generate analog video signals at a predetermined frame rate, digital cameras that generate digital video signals, or IP (Internet Protocol) cameras that can be connected to the Internet.
  • Referring to FIG. 2, the cameras 101a to 101e are installed so as to divide the surveillance zone and photograph each divided space, and each of the cameras 101a to 101e is logically connected to at least one other camera, as described below.
  • The sensors 103a to 103c are installed along the outer periphery of the surveillance zone, as shown in FIG. 2, to detect an intruder entering the surveillance zone; examples include infrared detectors and magnet detectors.
  • the sensors 103a through 103c are also logically interconnected with at least one of the cameras 101a through 101e.
  • The image controller 110 stores and manages the images captured by the cameras 101a to 101e. When necessary, it performs image processing on those images and, when a predetermined condition is met, provides the resulting information to the external system 30. To this end, the image controller 110 includes an image receiver 111, a storage medium 113, a device interface unit 115, and a controller 130.
  • The image receiving unit 111 receives and decodes the analog video signals provided by the cameras 101a to 101e and either displays them through a display unit (not shown) or converts them into digital video signals, stores them in the storage medium 113, and provides them to the controller 130.
  • the storage medium 113 stores the digital video signal provided by the image receiver 111.
  • the device interface unit 115 is a wired or wireless interface for connecting to the network 10.
  • the device interface unit 115 is connected to the external system 30 through the network 10, and transmits various monitoring results provided by the controller 130 to the external system 30.
  • the controller 130 controls the overall operation of the image controller 110.
  • The image controller 110 of the present invention may perform various other functions, for example storing and searching each camera's images; however, since such functions are not characteristic of the present invention, their description is omitted, and the description below focuses on the interworking function with the external system 30 that the present invention proposes.
  • For the intelligent image tracking function characteristic of the present invention, the controller 130 includes an event generating unit 131, an image processing unit 133, a monitoring unit 135, a recording and playback processing unit 137, and an external device interworking unit 139.
  • the event generator 131 reads the sensing result of detecting the intruder from the sensors 103a to 103c to generate a monitoring event, and provides the generated monitoring event to the monitoring unit 135.
  • The image processor 133 recognizes a moving object by performing object recognition on the image input from the camera designated by the monitoring unit 135 among the cameras in operation.
  • the image processor 133 provides the monitoring unit 135 with detection information of the corresponding object.
  • the detection information includes at least information on a time when a new object appeared in the image of the requested camera and a time when the new object disappeared.
  • Object recognition and tracking of the image processor 133 may use various known methods. For example, general object recognition may be obtained from a subtraction operation of an image of a previous frame and an image of a new frame, and tracking of the detected object may also use a known algorithm.
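The frame-subtraction approach mentioned above can be illustrated with a minimal sketch. The thresholds and names below are illustrative assumptions; a real implementation would operate on decoded camera frames:

```python
def detect_moving_object(prev_frame, new_frame, threshold=30, min_pixels=50):
    """Compare two grayscale frames (lists of rows of 0-255 ints) and
    report whether enough pixels changed to suggest a moving object.
    This is the subtraction of a previous frame from a new frame."""
    changed = sum(
        1
        for prev_row, new_row in zip(prev_frame, new_frame)
        for p, n in zip(prev_row, new_row)
        if abs(n - p) > threshold
    )
    return changed >= min_pixels

# Hypothetical 100x100 black frame in which a 20x20 bright object appears.
prev = [[0] * 100 for _ in range(100)]
new = [row[:] for row in prev]
for r in range(40, 60):
    for c in range(40, 60):
        new[r][c] = 255

print(detect_moving_object(prev, new))   # True  (400 pixels changed)
print(detect_moving_object(prev, prev))  # False (no change)
```

Production systems would typically use a maintained background model rather than a single previous frame, but the principle is the same.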
  • the monitoring unit 135 performs a monitoring algorithm for a monitoring target area with information on a logical connection relationship between each sensor and a camera (or a logical connection relationship between each monitoring area) to be described below.
  • Based on the monitoring algorithm, the monitoring unit 135 requests the image processing unit 133 to recognize the intruder (object) through analysis of the image input from a specific camera and, based on the recognition result provided by the image processing unit 133, generates the monitoring result as a log file and stores it in the storage medium 113.
  • The recording/playback processing unit 137 controls the basic recording and playback of the video signals handled by the image recognition device 100; in addition, at the request of the monitoring unit 135, the image input from a specific camera may, from that point in time on, be stored as a separate image file or be assigned a separate tag or metadata.
  • the external device interlocking unit 139 provides the monitoring result of the monitoring unit 135 to the external system 30 through the device interface unit 115.
  • The monitoring result provided by the external device interworking unit 139 may include a camera still image at the moment the object is recognized, the corresponding camera position information, and the time at which the object was detected in the image.
  • The monitoring result may be provided in the form of a single image in which the camera position information and the detection time information are recorded as part of the camera still image.
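The monitoring result described above amounts to a still image bundled with camera position and detection-time metadata. A minimal sketch of such a payload follows; the field names and formatting are the editor's assumptions, not part of the patent:

```python
from datetime import datetime

def build_monitoring_result(still_image, camera_id, camera_position, detected_at):
    """Bundle the still image with camera position and detection time,
    as the external-device interworking unit would before transmission."""
    caption = f"{camera_position} @ {detected_at.strftime('%Y-%m-%d %H:%M:%S')}"
    return {
        "image": still_image,    # raw still frame at recognition time
        "camera": camera_id,
        "caption": caption,      # text recorded as part of / sent with the image
    }

result = build_monitoring_result(b"...jpeg bytes...", "C1", "Camera 1, Room L1",
                                 datetime(2011, 4, 1, 22, 15, 0))
print(result["caption"])  # Camera 1, Room L1 @ 2011-04-01 22:15:00
```

In the patent the caption is burned into the still image itself (see FIG. 4); here it is kept as a separate field for simplicity.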
  • Referring to FIG. 2, the surveillance zone is divided into unit monitoring zones L1 to L5, each covered by one of the cameras 101a to 101e.
  • The first to third sensors 103a to 103c are disposed along the outer periphery of the surveillance zone, and each of them is logically connected to at least one of the first to fifth cameras 101a to 101e.
  • For example, the first sensor 103a is a magnet detector installed on the window of the first unit monitoring zone L1, the second sensor 103b is a magnet detector installed on the window of the second unit monitoring zone L2, and the third sensor 103c is an infrared detector installed on a window of the third unit monitoring zone L3.
  • Each unit monitoring zone, camera, and sensor is placed in a logical connection relationship with the others through moving-path analysis based on the purpose of the surveillance, the spatial characteristics of the surveillance zone, and the characteristics of the object to be monitored (e.g., an intruder).
  • Assuming the object to be monitored is a human, an intruder entering the first unit monitoring zone L1 must pass through the third unit monitoring zone L3, and an intruder who appears in the first unit monitoring zone L1 cannot appear in the fourth unit monitoring zone L4 at the next moment. Therefore, the first unit monitoring zone L1 is logically connected to the second unit monitoring zone L2 and the third unit monitoring zone L3.
  • In the example of FIG. 2, the first sensor 103a is logically connected to the first camera 101a, the second sensor 103b is logically connected to the second camera 101b, and the third sensor 103c is logically connected to the third camera 101c and the fourth camera 101d.
  • For example, the path fifth unit monitoring zone L5 → third unit monitoring zone L3 → first unit monitoring zone L1 is a possible connection, but fifth unit monitoring zone L5 → fourth unit monitoring zone L4 → first unit monitoring zone L1 is impossible, because there is no connection between the fourth unit monitoring zone L4 and the first unit monitoring zone L1.
  • Likewise, third sensor 103c → fourth camera 101d → fifth camera 101e is a possible connection, but third sensor 103c → second camera 101b → third camera 101c is impossible, because there is no connection between the third sensor 103c and the second camera 101b.
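The connection examples above can be encoded as a small adjacency graph. The identifiers below (S1–S3 for sensors 103a–103c, C1–C5 for cameras 101a–101e) and the exact edge set are the editor's assumptions inferred from the examples in the text:

```python
# Hypothetical encoding of the FIG. 2 topology described in the text.
SENSOR_TO_CAMERAS = {"S1": {"C1"}, "S2": {"C2"}, "S3": {"C3", "C4"}}
CAMERA_LINKS = {          # undirected logical connections between cameras
    "C1": {"C2", "C3"},
    "C2": {"C1"},
    "C3": {"C1", "C5"},
    "C4": {"C5"},
    "C5": {"C3", "C4"},
}

def is_valid_path(path):
    """A path is valid when its first hop is sensor -> camera and every
    later hop follows a logical camera-to-camera connection."""
    head, *cams = path
    if cams[0] not in SENSOR_TO_CAMERAS.get(head, set()):
        return False
    return all(b in CAMERA_LINKS[a] for a, b in zip(cams, cams[1:]))

print(is_valid_path(["S3", "C4", "C5"]))  # True:  S3 links to C4, C4 to C5
print(is_valid_path(["S3", "C2", "C3"]))  # False: S3 is not linked to C2
```

The two printed cases correspond directly to the possible and impossible connections given in the paragraph above.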
  • Referring to FIG. 3, the image monitoring method of the image recognition device 100 of the present invention will now be described, centering on the monitoring unit 135 and the image processing unit 133.
  • the event generator 131 reads the first to third sensors 103a to 103c and generates a monitoring event when an intruder is detected (S301).
  • When a surveillance event is generated by the event generator 131, the monitor 135 requests the image processor 133 to recognize and track an object in the image of the camera logically connected to that event (S303).
  • Suppose, as in the example of FIG. 2, that the monitoring event was generated by the first sensor 103a and that the object p enters the first unit monitoring zone L1 and moves toward the third unit monitoring zone L3. Then the event generator 131 generates a surveillance event for the first sensor 103a, and the monitor 135 requests the image processor 133 to recognize and track the object in the image of the first camera 101a.
  • the image processing unit 133 starts object recognition and tracking of an image input from a camera requested to be monitored (in the example of FIG. 2, the first camera).
  • Object recognition, extraction, and tracking of the image processor 133 may be processed by a method known in the image processing field as described above.
  • When a moving object is extracted during image analysis, the image processor 133 provides object recognition information to the external device interworking unit 139 and the monitoring unit 135, and starts tracking the extracted object.
  • the object recognition information may include object recognition time and object recognized camera information, and if necessary, may include a corresponding still image.
  • The external device interworking unit 139, having received the object recognition information from the image processing unit 133, provides the image processed by the image processing unit 133 to the external system 30, together with the camera location information and the time at which the intruder was detected. In the example of FIG. 2, since the intruder is first recognized in the image of the first camera 101a, the monitoring result includes a single image captured by the first camera 101a, the position of the first camera, and the time at which the intruder was identified in that image.
  • FIG. 4 shows an example of such an image provided, as a monitoring result, to a cellular phone, which is one kind of external system 30.
  • the position of the first camera and the recognition time are recorded in the image provided to the external system 30.
  • Suppose that, after the image processing unit 133 starts object recognition and tracking, the intruder moves out into the third unit monitoring zone L3, which is a corridor. The object then disappears from the image input from the first camera 101a, and the image processor 133 ends the tracking of the object and provides the end time information to the monitoring unit 135.
  • When notified by the image processing unit 133 that object tracking for the requested camera image has ended, the monitoring unit 135 selects, from among the cameras (or unit monitoring zones) logically connected to the camera that was tracking, the camera with which to continue tracking. To this end, the monitoring unit 135 requests the image processing unit 133 to track all of the logically connected cameras for a predetermined time. In the example, since the cameras logically connected to the first camera 101a are the second camera 101b and the third camera 101c, the monitoring unit 135 requests the image processor 133 to monitor the second camera 101b and the third camera 101c for the predetermined time.
  • The predetermined time may be determined in consideration of the moving capability of the object and the spatial characteristics of the surveillance zone.
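One plausible way to derive that predetermined time from the two factors just mentioned is sketched below. This is purely illustrative; the patent does not fix a formula, and the speed and margin values are invented:

```python
def handoff_window_seconds(zone_gap_m, object_speed_mps=2.5, margin=2.0):
    """Estimate how long to watch the connected cameras: the time a person
    (assumed walking/running speed in m/s) needs to cross into the next
    zone, padded by a safety margin."""
    return zone_gap_m / object_speed_mps * margin

print(handoff_window_seconds(5.0))  # 4.0 seconds for a 5 m corridor gap
```

A larger margin trades extra image-processing time for a lower chance of missing a slow-moving object.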
  • At the request of the monitor 135, the image processor 133 monitors the images of the requested cameras for the set time to see whether a new object appears, and provides the result to the monitor 135. As the object p moves from the first unit surveillance zone L1 to the third unit surveillance zone L3, the image processor 133 notifies the monitoring unit 135 that a new object has been recognized in the third unit surveillance zone.
  • Naturally, the second unit monitoring zone L2 or the third unit monitoring zone L3 may already contain a previously moving object. Even in this case, the image processor 133 extracts only the newly appearing object from the image of the unit monitoring zone and provides the result to the monitoring unit 135.
  • Depending on the embodiment, step S311 may be omitted.
  • The monitoring unit 135 determines, based on the result provided by the image processing unit 133 in step S311, whether there is a camera in which the object being tracked has been found.
  • If there is such a camera, the monitoring unit 135 selects the unit monitoring zone with which to continue tracking (the third unit monitoring zone in the example of FIG. 2), requests the external device interworking unit 139 to issue the monitoring result notification of step S307 for that zone, and requests the image processing unit 133 to perform the same object recognition and tracking as in step S309.
  • Accordingly, the external device interworking unit 139 provides to the external system 30, as a monitoring result, the image of the third camera 101c at the moment the object p was first captured by it.
  • the first camera 101a becomes the primary tracking camera and the third camera 101c becomes the secondary tracking camera.
  • Through this process, the external system 30 can check in real time which path the intruder is taking through the surveillance zone.
  • Steps S307 to S313 are repeated until no connected monitoring zone is selected for continued tracking because the moving object is no longer recognized.
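Steps S301 to S313 can be summarized as an event-driven loop. The following sketch is the editor's illustration only: recognition is stubbed out, and all names and the hop limit are invented:

```python
def track_intruder(start_sensor, sensor_to_cameras, camera_links,
                   recognize, notify, max_hops=10):
    """Event-driven loop: start from the camera tied to the triggered
    sensor (S303), notify the external system on each recognition (S307),
    and hand off to a logically connected camera (S309-S313) until the
    object is no longer seen anywhere."""
    current = next(iter(sensor_to_cameras[start_sensor]))
    hops = []
    for _ in range(max_hops):
        if not recognize(current):          # object lost and not re-acquired
            break
        notify(current)                     # monitoring result to external system
        hops.append(current)
        # Probe only the logically connected cameras for the reappearing object.
        nxt = next((c for c in sorted(camera_links.get(current, ()))
                    if c not in hops and recognize(c)), None)
        if nxt is None:
            break
        current = nxt
    return hops

# Hypothetical run mirroring FIG. 2: the intruder moves C1 -> C3 -> C5.
seen = {"C1", "C3", "C5"}
path = track_intruder("S1", {"S1": {"C1"}},
                      {"C1": {"C2", "C3"}, "C3": {"C1", "C5"}, "C5": {"C3", "C4"}},
                      recognize=lambda cam: cam in seen,
                      notify=lambda cam: None)
print(path)  # ['C1', 'C3', 'C5']
```

The returned hop list is effectively the intruder's path through the unit monitoring zones, which is what the external system receives step by step.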

Abstract

The present invention relates to an image recognition device and to an image-monitoring method therefor. The image recognition device of the present invention performs an image-monitoring operation using a plurality of sensors and a plurality of cameras after dividing a monitoring area into a plurality of logical monitoring areas, and provides a monitoring result to an external system while tracing and monitoring the movement of an object recognized in a monitoring target area through an image interpretation of an image signal.

Description

Image recognition device and video surveillance method therefor
The present invention relates to an image recognition device that divides a surveillance zone into a plurality of logical unit surveillance zones and performs video surveillance with a plurality of sensors and a plurality of cameras, and to a video surveillance method therefor.

An image recognition device is a device that recognizes and analyzes objects in an image by using various image processing techniques, beyond simply storing an image signal input through a camera.
Conventional camera-based video surveillance apparatuses include CCTV systems using a video cassette recorder (VCR) and digital video recorders (DVRs) that store images on a hard disk; such devices, however, were merely devices for storing images.
Research on analyzing images and extracting and recognizing regions of interest or objects within them has been conducted for a long time. The applicant has already filed patent application No. 2010-0040469 on a method for object-tracking surveillance based on image recognition.

In that application, the applicant proposed an algorithm that goes beyond simply recognizing an object captured in a camera image of the surveillance zone and tracks the object's movement in a logical way.
An object of the present invention is to provide an image recognition apparatus that performs video surveillance with a plurality of sensors and a plurality of cameras by dividing a surveillance zone into a plurality of logical unit surveillance zones, and a video surveillance method therefor.

To achieve the above object, the image recognition device according to the present invention includes a plurality of cameras that respectively photograph a plurality of unit surveillance zones into which the entire surveillance zone is divided, a plurality of sensors installed on the outer boundary of the entire surveillance zone, and an image controller that receives images from the plurality of cameras and is connected to an external system.

The image controller includes an image receiver, an event generator, an image processor, a monitor, and an external device interworking unit.

An image monitoring method using the image controller includes: establishing a logical connection relationship among the plurality of cameras based on the moving paths of a moving object that follow from the spatial connection of the plurality of unit monitoring zones; detecting the moving object with one of the plurality of sensors installed on the outside of the entire monitoring zone and generating a monitoring event; in response to the monitoring event, taking the camera logically connected to the sensor that detected the moving object as the primary tracking camera, and recognizing and tracking the moving object in the image input from that camera; and, at the time the moving object is recognized, providing the image in which it was recognized to the external system.
According to an embodiment, when the object is no longer recognized during tracking, the method may further include: re-acquiring the object through an object recognition process on the image input from a secondary tracking camera that is logically connected to the primary tracking camera; and, at the time the object is re-recognized by the secondary tracking camera, providing the image of the secondary tracking camera to the external system.

According to another embodiment, when the object is not recognized during tracking and a plurality of cameras are logically connected to the primary tracking camera, the video surveillance method selects the secondary tracking camera by performing moving-object recognition on the images input from all of the logically connected cameras.
The image recognition apparatus of the present invention can divide the monitoring zone into a plurality of unit monitoring zones using a plurality of sensors and cameras, and provide the monitoring results to an external system while performing video surveillance.

Because the cameras and sensors are logically interconnected, once a monitored object is captured, the image recognition device achieves the same effect as if an operator were selecting and recording the appropriate camera along the object's moving path.

In addition, since such image analysis and tracking need not be performed on all of the video signals input from all of the installed cameras, problems such as a drop in overall system speed due to image processing do not arise.
FIG. 1 is a block diagram of an image recognition device according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating an example in which sensors and cameras are installed in a surveillance zone.
FIG. 3 is a flowchart provided to explain an image monitoring method according to an embodiment of the present invention.
FIG. 4 shows an example of an image provided as a monitoring result.
Hereinafter, the present invention will be described in more detail with reference to the accompanying drawings.
Referring to FIG. 1, the image recognition device 100 of the present invention is connected to an external system 30 through a network 10.

The external system 30 is a device capable of receiving a video surveillance result, provided in the form of an image from the image recognition device 100, and displaying it to the user; it may be a mobile phone, a smartphone, or a separately designed security system.

The network 10 may be configured in various ways according to the type of the external system 30. For example, when the external system 30 is a portable terminal or a security system, the network 10 may be a mobile communication network, the wired or wireless Internet, and the like.

The image recognition apparatus 100 may provide the external system 30 with video surveillance results obtained through image processing of the analog or digital video signals provided by the plurality of cameras 101a to 101e installed in the surveillance space.
The image recognition device 100 includes a plurality of cameras 101a to 101e installed in the surveillance space, a plurality of sensors 103a to 103c, and an image controller 110 that processes the images captured by the cameras 101a to 101e.

The cameras 101a to 101e may be analog cameras that generate analog video signals at a predetermined frame rate, digital cameras that generate digital video signals, or IP (Internet Protocol) cameras that can be connected to the Internet.

Referring to FIG. 2, the cameras 101a to 101e are installed so as to divide the surveillance zone and photograph each divided space, and each of the cameras 101a to 101e is logically connected to at least one other camera, as described below.

The sensors 103a to 103c are installed along the outer periphery of the surveillance zone, as shown in FIG. 2, to detect an intruder entering the surveillance zone; examples include infrared detectors and magnet detectors. The sensors 103a to 103c are also logically connected to at least one of the cameras 101a to 101e.
The image controller 110 stores and manages the images captured by the cameras 101a to 101e. When necessary, the image controller 110 performs image processing on the camera images and, when a predetermined condition is met, provides the resulting information to the external system 30. To this end, the image controller 110 includes an image receiver 111, a storage medium 113, a device interface 115, and a controller 130.
The image receiver 111 receives and decodes the analog video signals provided by the cameras 101a to 101e and either displays them through a display unit (not shown) or converts them into digital video signals, stores them in the storage medium 113, and provides them to the controller 130.
The storage medium 113 stores the digital video signals provided by the image receiver 111.
The device interface 115 is a wired or wireless interface for connecting to the network 10. The device interface 115 is connected to the external system 30 through the network 10 and transmits the various monitoring results provided by the controller 130 to the external system 30.
The controller 130 controls the overall operation of the image controller 110. Although the image controller 110 may perform various other functions, such as storing and searching the images of each camera, these are not characteristic of the present invention; their description is therefore omitted, and the following description focuses on the interworking function with the external system 30 proposed by the present invention.
For the characteristic intelligent video tracking function of the present invention, the controller 130 includes an event generator 131, an image processor 133, a monitoring unit 135, a recording/playback processor 137, and an external device interworking unit 139.
The event generator 131 reads the sensing results of the sensors 103a to 103c, generates a surveillance event when an intruder is detected, and provides the generated surveillance event to the monitoring unit 135.
The image processor 133 performs object recognition on the video input from whichever operating camera the monitoring unit 135 has requested, and recognizes moving objects. When a moving object is detected, the image processor 133 provides detection information for the object to the monitoring unit 135. The detection information includes at least the time at which a new object appeared in the requested camera's video and the time at which that object disappeared.
The object recognition and tracking process of the image processor 133 may use any of various known methods. For example, a moving object can generally be detected by subtracting the image of the previous frame from the image of the new frame, and the detected object can then be tracked using known algorithms.
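The frame-subtraction idea mentioned above can be sketched as follows. This is a minimal illustration and not the patent's actual implementation; the intensity threshold and the tiny frame size are assumptions chosen for the example.

```python
def detect_motion(prev_frame, new_frame, threshold=30):
    """Return the set of (row, col) pixels whose intensity changed by more
    than `threshold` between two grayscale frames (lists of lists)."""
    changed = set()
    for r, (prev_row, new_row) in enumerate(zip(prev_frame, new_frame)):
        for c, (p, n) in enumerate(zip(prev_row, new_row)):
            if abs(n - p) > threshold:
                changed.add((r, c))
    return changed

# A static background with one bright "object" appearing in the new frame.
prev_frame = [[10] * 4 for _ in range(4)]
new_frame = [row[:] for row in prev_frame]
new_frame[1][2] = 200  # the moving object
print(detect_motion(prev_frame, new_frame))  # → {(1, 2)}
```

In practice this per-pixel subtraction would be done with an image library over full-resolution frames, followed by blob grouping; the principle of comparing consecutive frames is the same.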
The monitoring unit 135 holds information on the logical connection relationships between each sensor and camera (or between the surveillance zones), described below, and executes the monitoring algorithm for the area under surveillance.
Based on the monitoring algorithm, the monitoring unit 135 requests the image processor 133 to recognize an intruder (object) by analyzing the video input from a specific camera, generates a log file of the monitoring results based on the recognition results provided by the image processor 133, and stores it in the storage medium 113.
The recording/playback processor 137 controls the basic video recording and playback operations of the image recognition apparatus 100 and, at the request of the monitoring unit 135, manages the video input from a specific camera so that it is stored as a separate video file from the requested time, or assigns a separate tag or metadata to it.
The external device interworking unit 139 provides the monitoring results of the monitoring unit 135 to the external system 30 through the device interface 115. The monitoring results provided by the external device interworking unit 139 may include a still image from the camera at the moment the object was recognized, the position of that camera, and the time at which the object was detected in the video.
In some embodiments, when the external device interworking unit 139 provides the monitoring results to a device with limited system resources, such as a mobile terminal, the camera position and detection time may be rendered into the camera still image itself, so that the monitoring results are delivered as a single image.
Before describing the operation of the image recognition apparatus 100, the following explains the arrangement of the security surveillance cameras based on an analysis of the paths a moving object can take through the spatially connected surveillance zones, their logical connections, and the monitoring algorithm.
FIG. 2 shows the entire area under surveillance: a first unit surveillance zone L1 covered by the first camera 101a, a second unit surveillance zone L2 covered by the second camera 101b, a third unit surveillance zone L3 covered by the third camera 101c, a fourth unit surveillance zone L4 covered by the fourth camera 101d, and a fifth unit surveillance zone L5 covered by the fifth camera 101e. In other words, the unit surveillance zones L1 to L5 are delimited by the cameras 101a to 101e.
In addition, first to third sensors 103a to 103c are arranged along the perimeter of the surveillance area, and each of the sensors 103a to 103c is logically connected with at least one of the first to fifth cameras 101a to 101e.
In the example of FIG. 2, the first sensor 103a is a magnetic detector installed on a window of the first unit surveillance zone L1, the second sensor 103b is a magnetic detector installed on a window of the second unit surveillance zone L2, and the third sensor 103c is an infrared detector installed on a window of the third unit surveillance zone L3.
Each surveillance zone, camera, and sensor is placed in a logical connection relationship with the others through an analysis of movement paths based on the purpose of the surveillance, the spatial characteristics of the surveillance area, and the characteristics of the object being monitored (e.g., an intruder). For example, when the monitored object is a person, an intruder who enters the first unit surveillance zone L1 must pass through the third unit surveillance zone L3, and an intruder who appears in L1 cannot appear in the fourth unit surveillance zone L4 at the next moment. Accordingly, the first unit surveillance zone L1 is logically connected to the second unit surveillance zone L2 and the third unit surveillance zone L3.
By the same movement path analysis, the first sensor 103a is logically connected to the first camera 101a, and the second sensor 103b is logically connected to the second camera 101b. The third sensor 103c is logically connected to the third camera 101c and the fourth camera 101d.
Accordingly, the logical connection relationships among the cameras 101a to 101e and the sensors 103a to 103c of FIG. 2, that is, the logical connections between the unit surveillance zones, are shown in Table 1 below.
Table 1

Component       Logically connected cameras
First camera    Second and third cameras
Second camera   First and third cameras
Third camera    First, second, and fifth cameras
Fourth camera   Fifth camera
Fifth camera    Third and fourth cameras
First sensor    First camera
Second sensor   Second camera
Third sensor    Third and fourth cameras
According to Table 1, the path fifth unit surveillance zone L5 → third unit surveillance zone L3 → first unit surveillance zone L1 is possible, but the path L5 → fourth unit surveillance zone L4 → L1 is impossible, since there is no connection between the fourth unit surveillance zone L4 and the first unit surveillance zone L1.
Likewise, the sequence third sensor 103c → fourth camera 101d → fifth camera 101e is possible, but the sequence third sensor 103c → second camera 101b → third camera 101c is impossible, since there is no connection between the third sensor 103c and the second camera 101b.
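Table 1 and the path checks above amount to an adjacency map plus a pairwise-connectivity test, which can be sketched as follows. The identifiers (`cam1`, `sensor3`, etc.) are illustrative names, not taken from the patent.

```python
# Logical connections from Table 1: each camera or sensor maps to the
# cameras it is logically connected with.
CONNECTIONS = {
    "cam1": {"cam2", "cam3"},
    "cam2": {"cam1", "cam3"},
    "cam3": {"cam1", "cam2", "cam5"},
    "cam4": {"cam5"},
    "cam5": {"cam3", "cam4"},
    "sensor1": {"cam1"},
    "sensor2": {"cam2"},
    "sensor3": {"cam3", "cam4"},
}

def is_possible_path(path):
    """A path is possible only if every consecutive pair is logically connected."""
    return all(b in CONNECTIONS[a] for a, b in zip(path, path[1:]))

print(is_possible_path(["cam5", "cam3", "cam1"]))     # → True
print(is_possible_path(["cam5", "cam4", "cam1"]))     # → False (no cam4 → cam1 link)
print(is_possible_path(["sensor3", "cam4", "cam5"]))  # → True
print(is_possible_path(["sensor3", "cam2", "cam3"]))  # → False (no sensor3 → cam2 link)
```

The four checks reproduce the possible and impossible examples given in the text.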
The video surveillance method of the image recognition apparatus 100 of the present invention, centered on the monitoring unit 135 and the image processor 133, is described below with reference to FIG. 3.
<Starting surveillance of the zone logically connected to a surveillance event: steps S301 and S303>
The event generator 131 reads the first to third sensors 103a to 103c and generates a surveillance event when an intruder is detected (S301).
When a surveillance event is generated by the event generator 131, the monitoring unit 135 requests the image processor 133 to recognize and track objects in the video of the camera logically connected to that surveillance event (S303).
For convenience of explanation, it is assumed below that the surveillance event was generated by the first sensor 103a, and that after the event the object p entered the first unit surveillance zone L1 and then moved on to the third unit surveillance zone L3. Accordingly, the event generator 131 generates a surveillance event for the first sensor 103a, and the monitoring unit 135 requests the image processor 133 to recognize and track objects in the video of the first camera 101a.
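Steps S301 and S303 amount to a sensor-to-camera dispatch: when a sensor fires, the monitoring unit requests tracking on the camera(s) logically connected to that sensor. A minimal sketch follows; the class and method names are assumptions of this example, not the patent's.

```python
SENSOR_TO_CAMERAS = {
    "sensor1": ["cam1"],
    "sensor2": ["cam2"],
    "sensor3": ["cam3", "cam4"],
}

class MonitoringUnit:
    def __init__(self):
        self.tracking_requests = []  # cameras asked to recognize/track objects

    def on_surveillance_event(self, sensor_id):
        # The sensor's logically connected cameras become the primary
        # tracking cameras (step S303).
        cameras = SENSOR_TO_CAMERAS.get(sensor_id, [])
        self.tracking_requests.extend(cameras)
        return cameras

unit = MonitoringUnit()
print(unit.on_surveillance_event("sensor1"))  # → ['cam1']
print(unit.on_surveillance_event("sensor3"))  # → ['cam3', 'cam4']
```

In the worked example of the text, the event from the first sensor leads to tracking on the first camera only.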
<Video surveillance (object recognition and tracking) of the zone under surveillance: step S305>
At the request of the monitoring unit 135, the image processor 133 starts object recognition and tracking on the video input from the requested camera (the first camera in the example of FIG. 2). As described above, the object recognition, extraction, and tracking performed by the image processor 133 may use methods known in the image processing field.
When a moving object is extracted during video analysis, the image processor 133 provides object recognition information to the external device interworking unit 139 and the monitoring unit 135, and starts tracking the extracted object. The object recognition information includes the recognition time and the identity of the recognizing camera, and may include the corresponding still image if necessary.
<Notification of monitoring results: step S307>
Having received the object recognition information from the image processor 133, the external device interworking unit 139 provides the processed image to the external system 30, together with the position of the camera and the time at which the intruder was detected. In the example of FIG. 2, the intruder is recognized in the video of the first camera 101a, so the monitoring results include one image captured by the first camera 101a, the position of the first camera, and the time at which the intruder was recognized in its video.
FIG. 4 shows an example of a single image provided, as a monitoring result, to a mobile phone serving as the external system 30. The position of the first camera and the recognition time are recorded in the image.
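The monitoring result of step S307 bundles a still image with the camera position and the detection time, and for resource-limited terminals it may be flattened into a single annotated image. A sketch of both delivery forms; all field names are illustrative, and burning the caption into the pixels is only simulated here by a caption string.

```python
def build_monitoring_result(still_image, camera_position, detected_at, flatten=False):
    """Package a monitoring result. With flatten=True the metadata is meant
    to be rendered into the image itself, as suggested for terminals with
    limited system resources; here that is simulated with a caption field."""
    if flatten:
        caption = f"{camera_position} @ {detected_at}"
        return {"image": still_image, "caption": caption}
    return {
        "image": still_image,
        "camera_position": camera_position,
        "detected_at": detected_at,
    }

result = build_monitoring_result("frame_0042.jpg", "Camera 1 (zone L1)",
                                 "2011-04-04 21:15:03", flatten=True)
print(result["caption"])  # → Camera 1 (zone L1) @ 2011-04-04 21:15:03
```

A real implementation would draw the caption onto the still image with an imaging library before transmission.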
<Object tracking and its termination: step S309>
When the object disappears from the video of the camera under surveillance after some time, because it has moved and left the unit surveillance zone, the image processor 133 notifies the monitoring unit 135 that object tracking has ended, together with the end time.
In the example of FIG. 3, the image processor 133 starts object recognition and tracking when the intruder enters the first unit surveillance zone L1; when the intruder leaves for the third unit surveillance zone L3, which is a corridor, the object disappears from the video input from the first camera 101a and the image processor 133 ends its tracking of the object.
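Step S309 ends tracking when the object leaves the camera's field of view. A simple way to sketch this is to scan per-frame presence flags and report the first frame at which a previously seen object is gone; indexing time by frame number is an assumption of this example.

```python
def find_tracking_end(presence_by_frame):
    """Given a list of booleans (was the object visible in frame i?), return
    the index of the first frame, after the object has appeared, in which it
    is no longer visible, or None if it never disappears."""
    appeared = False
    for i, present in enumerate(presence_by_frame):
        if present:
            appeared = True
        elif appeared:
            return i  # tracking ends here; reported with the end time
    return None

# Object enters at frame 2 and leaves the zone after frame 5.
print(find_tracking_end([False, False, True, True, True, True, False, False]))  # → 6
```

The returned index stands in for the end-time information that the image processor reports to the monitoring unit.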
<Selecting the zone to track next from among the logically connected zones: step S311>
When notified by the image processor 133 that object tracking has ended for the requested camera, the monitoring unit 135 selects, from among the cameras (or unit surveillance zones) logically connected to the camera that was being tracked, the camera (unit surveillance zone) in which tracking should continue. To this end, the monitoring unit 135 requests the image processor 133 to watch all of the logically connected cameras for a predetermined time. In the example, the cameras logically connected to the first camera 101a are the second camera 101b and the third camera 101c, so the monitoring unit 135 requests the image processor 133 to watch the second camera 101b and the third camera 101c for the predetermined time. The predetermined time may be determined in consideration of the object's mobility, the spatial characteristics of the surveillance zones, and the like.
At the request of the monitoring unit 135, the image processor 133 watches the videos of the requested cameras for the set time to determine whether a new object appears, and provides the result to the monitoring unit 135. As the object p moves from the first unit surveillance zone L1 to the third unit surveillance zone L3, the image processor 133 notifies the monitoring unit 135 that a new object has been recognized in the third zone.
In some cases, an already-moving object may be present in the second unit surveillance zone L2 or the third unit surveillance zone L3. Even then, the image processor 133 extracts only objects that newly appear in the video of the corresponding zone and provides the result to the monitoring unit 135.
If only one unit surveillance zone is logically connected to the zone in which tracking began in step S301, step S311 may be skipped.
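Step S311 can be sketched as polling the logically connected cameras and selecting whichever reports a newly appeared object, with the single-connection shortcut noted above. The detector callback stands in for the image processor; the names are assumptions of this example.

```python
def select_next_camera(current_camera, connections, new_object_detector):
    """Ask the image processor (modelled as a callback) whether a new object
    appears on each camera logically connected to `current_camera`; return
    the first camera that reports one, or None if none does."""
    candidates = connections.get(current_camera, [])
    if len(candidates) == 1:
        return candidates[0]  # only one connected zone: no selection needed (cf. S311)
    for cam in candidates:
        if new_object_detector(cam):
            return cam
    return None

connections = {"cam1": ["cam2", "cam3"], "cam4": ["cam5"]}
detections = {"cam3"}  # the image processor saw a new object only on cam3
print(select_next_camera("cam1", connections, lambda c: c in detections))  # → cam3
print(select_next_camera("cam4", connections, lambda c: c in detections))  # → cam5
```

In the worked example, tracking on the first camera hands over to the third camera, matching the object's move from zone L1 to zone L3.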
<Tracking surveillance of the logically connected unit surveillance zone: step S313>
Based on the result provided by the image processor 133 in step S311, the monitoring unit 135 determines whether there is a camera in which the tracked object has been found.
If such a camera exists (the third camera in the example of FIG. 2), the monitoring unit 135 selects the unit surveillance zone in which to continue tracking (the third unit surveillance zone in the example of FIG. 2), requests the external device interworking unit 139 to report the monitoring results for that zone as in step S307, and requests the image processor 133 to perform the same object recognition and tracking as in step S309.
Since the object p has moved from the first unit surveillance zone L1 to the third unit surveillance zone L3, the external device interworking unit 139 provides the external system 30, as a monitoring result, with the image from the third camera 101c in which the object p was first captured.
In the example of FIG. 2, the first camera 101a becomes the primary tracking camera and the third camera 101c becomes the secondary tracking camera. By repeating this process, the external system 30 can confirm in real time which path the intruder took into the surveillance area and along which path the intruder is currently moving.
Steps S307 to S313 are repeated until no moving object is recognized and therefore no connected surveillance zone can be selected for continued tracking.
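Putting steps S307 to S313 together, the surveillance becomes a loop that hands tracking from camera to camera along the logical connections until no connected camera picks up the object. A condensed sketch under the same assumed identifiers as the earlier examples:

```python
def track_intruder(start_camera, connections, sightings):
    """Follow an object from camera to camera along logical connections.
    `sightings` maps each camera to the set of connected cameras on which the
    object is seen next (a stand-in for the image processor's results).
    Returns the ordered list of tracking cameras."""
    path = [start_camera]
    current = start_camera
    while True:
        next_cam = None
        for cam in connections.get(current, []):
            if cam in sightings.get(current, set()):
                next_cam = cam
                break
        if next_cam is None:
            return path  # no connected zone recognized the object: stop (cf. S313)
        path.append(next_cam)
        current = next_cam

connections = {"cam1": ["cam2", "cam3"],
               "cam3": ["cam1", "cam2", "cam5"],
               "cam5": ["cam3", "cam4"]}
sightings = {"cam1": {"cam3"}, "cam3": {"cam5"}}  # object moved L1 → L3 → L5
print(track_intruder("cam1", connections, sightings))  # → ['cam1', 'cam3', 'cam5']
```

The returned camera sequence is exactly the movement-path information that, per the text, the external system can confirm in real time.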
While preferred embodiments of the present invention have been shown and described above, the present invention is not limited to the specific embodiments described. Various modifications may be made by those of ordinary skill in the art to which the invention pertains without departing from the gist of the invention claimed in the claims, and such modifications should not be understood separately from the technical spirit or prospect of the present invention.

Claims (5)

  1. A video surveillance method of an image recognition apparatus having a plurality of cameras that respectively photograph a plurality of unit surveillance zones into which an entire surveillance area is divided, the method comprising:
    establishing logical connection relationships among the plurality of cameras based on the movement paths of a moving object according to the spatial connections of the plurality of unit surveillance zones;
    detecting the moving object by one of a plurality of sensors installed outside the entire surveillance area and generating a surveillance event;
    in response to the surveillance event, recognizing and tracking a moving object in video input from a primary tracking camera, the primary tracking camera being the camera, among the plurality of cameras, logically connected to the sensor that detected the moving object; and
    at the time the moving object is recognized, providing the image in which the moving object was recognized to an external system.
  2. The method of claim 1, further comprising:
    when the object is no longer recognized during the tracking, tracking the object again through an object recognition process on video input from a secondary tracking camera, among the plurality of cameras, that is in a logical connection relationship with the primary tracking camera; and
    at the time the object is recognized again by the secondary tracking camera, providing the video of the secondary tracking camera to the external system.
  3. The method of claim 2, wherein,
    when the object is no longer recognized during the tracking and a plurality of cameras are in a logical connection relationship with the primary tracking camera, the secondary tracking camera is selected by recognizing a moving object in the videos input from all of the logically connected cameras.
  4. An image recognition apparatus comprising: a plurality of cameras that respectively photograph a plurality of unit surveillance zones into which an entire surveillance area is divided;
    a plurality of sensors installed on the outer boundary of the entire surveillance area; and
    an image controller that receives video from the plurality of cameras and is connected to an external system,
    wherein the image controller comprises:
    an image receiver that receives the video captured by the plurality of cameras;
    an event generator that detects, using the sensors, a moving object intruding into the entire surveillance area and generates a surveillance event;
    an image processor that processes the video input from the image receiver to recognize and track a moving object;
    a monitoring unit that establishes the logical connection relationships among the plurality of cameras based on the movement paths of a moving object according to the spatial connections of the plurality of unit surveillance zones, and that, in response to the surveillance event, requests the image processor to perform video tracking on a primary tracking camera, the primary tracking camera being the camera, among the plurality of cameras, logically connected to the sensor that detected the moving object; and
    an external device interworking unit that, at the time the moving object is recognized, provides the image in which the moving object was recognized to the external system.
  5. The apparatus of claim 4, wherein
    the monitoring unit, when the object is no longer recognized during the tracking, requests the image processor to continue tracking with the cameras in a logical connection relationship with the primary tracking camera, and
    the external device interworking unit provides the video of a secondary tracking camera to the external system at the time the object is recognized again by the secondary tracking camera.
PCT/KR2011/002341 2011-04-04 2011-04-05 Image recognition device and image-monitoring method therefor WO2012137994A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2011-0030673 2011-04-04
KR1020110030673A KR101212082B1 (en) 2011-04-04 2011-04-04 Image Recognition Apparatus and Vison Monitoring Method thereof

Publications (1)

Publication Number Publication Date
WO2012137994A1 true WO2012137994A1 (en) 2012-10-11

Family

ID=46969368

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2011/002341 WO2012137994A1 (en) 2011-04-04 2011-04-05 Image recognition device and image-monitoring method therefor

Country Status (2)

Country Link
KR (1) KR101212082B1 (en)
WO (1) WO2012137994A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105744140A (en) * 2014-12-10 2016-07-06 信泰光学(深圳)有限公司 Device and method for recording occurrence number of objects to be detected
WO2017034114A1 (en) * 2015-08-24 2017-03-02 Lg Electronics Inc. Mobile terminal and method of controlling the same

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101467663B1 (en) * 2013-01-30 2014-12-01 주식회사 엘지씨엔에스 Method and system of providing display in display monitoring system
KR101593187B1 (en) 2014-07-22 2016-02-11 주식회사 에스원 Device and method surveiling innormal behavior using 3d image information
KR102183904B1 (en) 2014-10-14 2020-11-27 한화테크윈 주식회사 Method and Apparatus for surveillance using location-tracking imaging devices
KR20180020374A (en) * 2016-08-18 2018-02-28 한화테크윈 주식회사 The System, Apparatus And MethodFor Searching Event
KR102128108B1 (en) * 2019-11-25 2020-06-29 고병진 Apparatus for detecting camera

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060027955A (en) * 2004-09-24 2006-03-29 주식회사 유니썬코리아 A watch system for fence by logical division
KR20060124232A (en) * 2005-05-31 2006-12-05 손용식 Irruption sensing and crime prevention system using mobile handset camera and built-in infrared sensor or exterior infrared sensor of mobile handset
KR20100116829A (en) * 2009-04-23 2010-11-02 (주)플렛디스 An apparatus of dection for moving from cctv camera and that of using method


Also Published As

Publication number Publication date
KR20120113014A (en) 2012-10-12
KR101212082B1 (en) 2012-12-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 11862895
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 11862895
    Country of ref document: EP
    Kind code of ref document: A1