WO2017047687A1 - Monitoring system - Google Patents

Monitoring system Download PDF

Info

Publication number
WO2017047687A1
WO2017047687A1 (PCT/JP2016/077236)
Authority
WO
WIPO (PCT)
Prior art keywords
video
speed
camera
monitoring
abnormality
Prior art date
Application number
PCT/JP2016/077236
Other languages
French (fr)
Japanese (ja)
Inventor
佑一郎 小宮
佐々 敦
山口 宗明
雅俊 近藤
Original Assignee
株式会社日立国際電気
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立国際電気
Priority to JP2017539963A (patent JP6584024B2)
Publication of WO2017047687A1

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems
    • G08B13/194: Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196: Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B25/00: Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/04: Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/052: Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present invention relates to a monitoring system that detects an abnormality occurring on a road based on the movement of a vehicle from a video of a monitoring camera installed on the road.
  • a monitoring system is required to have a function of automatically detecting an abnormality occurring in the field from a video of a monitoring camera and issuing an alarm. Prompt confirmation of abnormalities leads to prompt response after an accident and prevention of serious accidents.
  • Such a function is particularly effective when a large number of surveillance cameras are used, such as in a large-scale facility, and the supervisor cannot check all the images simultaneously.
  • Expressways are one type of facility requiring an automatic detection function. Abnormalities on an expressway include road damage, obstacles, and emergency stops due to vehicle breakdowns, any of which can lead to a serious accident. Techniques for automatically detecting such abnormal situations from the video of surveillance cameras installed on an expressway have therefore been considered.
  • As techniques related to the present invention, in-vehicle systems are known that detect the movement of a preceding vehicle avoiding an obstacle or the like (see, for example, Patent Documents 1 and 2).
  • an object of the present invention is to provide a monitoring system that performs systematic image processing on a number of surveillance camera images and realizes abnormality detection with limited computational resources.
  • A monitoring system according to one aspect comprises monitoring cameras each having a function of detecting a speed abnormality of a vehicle in its video, and a central control unit that determines a priority order from the speed abnormality information sent by the monitoring cameras and the positional relationship between them, and analyzes the video of high-priority monitoring cameras.
  • The settings of a monitoring camera can be switched dynamically depending on whether it is a target of video analysis and whether an abnormality has been detected.
  • an abnormality in the monitoring area can be efficiently detected from the video of the monitoring camera with a small amount of calculation.
  • FIG. 1 is a block diagram illustrating an example of the logical configuration of the monitoring system 1 according to the first embodiment.
  • FIG. 2 is a schematic state transition diagram of the operation of the monitoring system 1.
  • FIG. 3 is a schematic diagram explaining the principle of determining a traffic-jam start point.
  • FIG. 4 is a functional block diagram of the surveillance camera 2.
  • FIG. 5 is a schematic diagram showing the principle of calculating the actual speed.
  • FIG. 6 is a flowchart of speed calculation by the speed analysis unit 22.
  • FIG. 7 is a flowchart of obstacle alarm issuance performed by the system control unit 7.
  • The monitoring system of the embodiments estimates the presence of road damage, obstacles, and the like (including broken-down vehicles) from the trajectory (path) of vehicles traveling on the road, or from the vehicle speed and passing frequency obtained in the process of acquiring that trajectory, and reports it as an abnormality.
  • In addition, an operation that directly detects the obstacle causing the abnormality from the video is performed in cooperation with this estimation.
  • FIG. 1 is a block diagram showing an example of a logical configuration of the monitoring system 1 according to the first embodiment of the present invention.
  • The monitoring system 1 of this example comprises an arbitrary number N of monitoring camera devices 2-1 to 2-N arranged at predetermined intervals along an expressway, and a central processing system 3 aggregated at a base such as a traffic control center.
  • The central processing system 3 includes a video analysis unit 4, a video display unit 5, a video recording unit 6, and a system control unit 7 that controls them.
  • The monitoring camera devices 2-1 to 2-N are, for example, PTZ (pan, tilt, zoom) cameras equipped with a motorized pan head and a motorized zoom.
  • a high-sensitivity color camera is used as the camera body.
  • the pan, tilt, and zoom to be applied are given as camera control information.
  • the surveillance camera 2 encodes and outputs a video signal obtained by the camera body as a surveillance camera video, and measures the speed of the vehicle in the video by a known technique such as Non-Patent Document 1.
  • the measured speed is output as speed abnormality detection information at least when the speed is abnormal.
  • the monitoring camera video can be subjected to processing such as shake correction, gradation correction, and super-resolution before being encoded in order to facilitate processing in the video analysis unit 4.
  • the video analysis unit 4 analyzes the monitoring camera video received from the monitoring camera 2 and outputs the analysis result.
  • The video analysis unit 4 only needs to process at most M channels (M is an integer smaller than N) designated by the system control unit 7, and need not have the capability to process the videos from all N monitoring cameras 2 simultaneously in real time.
  • The video analysis unit 4 has vehicle-recognition capability: it accumulates or learns normal vehicle travel trajectories in advance, generates the trajectory of each vehicle recognized in a given video, and identifies the occurrence of a trajectory that is not normal.
  • For example, the trajectory of a vehicle overtaking a stopped vehicle, a trajectory avoiding a fallen object on the road, or the trajectories of a plurality of vehicles that decelerate and accelerate before and after the same point are detected as abnormal.
  • Position information of the fallen object or the point in question is output. Further, in response to a request from the system control unit 7, the event (object) itself causing the abnormal travel trajectory is also detected.
  • the video display unit 5 displays the monitoring camera video received from the monitoring camera 2 according to the display control information.
  • The video display unit 5 is, for example, a multi-display in which L (L is an integer smaller than N) flat-panel displays are arranged vertically and horizontally; L channels of video are input and displayed together with surveillance camera information (shooting location name) and the like.
  • In accordance with commands from the system control unit 7, the video display unit 5 preferentially displays video in which an abnormality has been detected or for which an alarm has been issued.
  • the video recording unit 6 records the surveillance camera video received from the surveillance camera 2 on a hard disk or the like according to the recording control information.
  • the video recording unit 6 may change the video recording method based on the result of abnormality detection. For example, in the case of recording video after compressing it, the video of the surveillance camera in which an abnormality has been detected is saved at a reduced compression rate.
  • The system control unit 7 has a function of issuing an alarm based on the speed abnormality detection information from the N connected monitoring cameras 2 and on the analysis results of the video analysis unit 4, and of notifying the video display unit 5. As in general CCTV systems, it also generates display control information for displaying the videos of the monitoring cameras 2 on the video display unit 5 while cyclically switching among them, and generates recording control information that controls the recording rate of the video recording unit 6.
  • the system control unit 7 can be implemented as a part of a process control system (also called a distributed control system (DCS)) that performs traffic control.
  • The traffic control system mainly manages traffic based on traffic counters and on traffic volumes measured by ETC 2.0.
  • FIG. 2 shows a schematic state transition diagram of the operation of the monitoring system 1.
  • the monitoring system 1 of this example generally has four states: steady state, congestion cause search, obstacle search, and obstacle observation.
  • The steady state is a state in which there is neither traffic jam nor obstacle; the N monitoring cameras 2 are used like traffic counters to monitor whether a speed abnormality has occurred.
  • In addition, the video is analyzed while the surveillance cameras 2 are selected cyclically, attempting to detect vehicle avoidance (a travel trajectory that avoids an obstacle) or the obstacle itself.
  • When a speed abnormality is detected, the state transitions to the congestion-cause search state; when avoidance is detected, to the obstacle search state; and when an obstacle is detected, to the obstacle observation state.
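The four operating states and the triggering events just described can be sketched as a small transition table. This is a minimal illustration only: the state and event names below are paraphrases of the text, not identifiers from the patent.

```python
# Minimal sketch of the monitoring system's four operating states and
# the transitions described in the text. Names are illustrative.
STEADY = "steady"
CONGESTION_SEARCH = "congestion_cause_search"
OBSTACLE_SEARCH = "obstacle_search"
OBSTACLE_OBSERVATION = "obstacle_observation"

TRANSITIONS = {
    (STEADY, "speed_abnormality"): CONGESTION_SEARCH,
    (STEADY, "avoidance_detected"): OBSTACLE_SEARCH,
    (STEADY, "obstacle_detected"): OBSTACLE_OBSERVATION,
    (CONGESTION_SEARCH, "avoidance_detected"): OBSTACLE_SEARCH,
    (CONGESTION_SEARCH, "known_cause"): STEADY,
    (OBSTACLE_SEARCH, "obstacle_detected"): OBSTACLE_OBSERVATION,
    (OBSTACLE_OBSERVATION, "obstacle_moved"): CONGESTION_SEARCH,
    (OBSTACLE_OBSERVATION, "obstacle_removed"): STEADY,
}

def next_state(state, event):
    """Return the next state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

A lookup table like this makes the schematic state transition diagram of FIG. 2 directly testable.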
  • The congestion-cause search state is a state in which the start point and cause of the congestion are searched for: the start point is estimated from the shooting points of the videos in which the speed abnormality was detected and from their changes, and detection of vehicles showing avoidance at that point is attempted.
  • At places other than the start point, speed abnormalities continue to be monitored as in the steady state. As shown in FIG. 3, if there is a section where speed abnormalities (low speed) continue, it is estimated to be a traffic-jam section, and the traffic-jam start point can be estimated to lie at the end of that section on the vehicle-traveling-direction side (that is, between the last camera detecting a speed abnormality and the adjacent camera detecting none).
  • Likewise, if the speed of traveling vehicles suddenly decreases at a certain camera, or reaches a low speed that can be regarded as a traffic jam, and there is no speed abnormality before or after it, that point can be estimated to be the start of a newly formed traffic jam. If the estimated start point is a location where natural traffic jams frequently occur, or if another known cause exists, the state returns to the steady state. Otherwise, high-magnification video near the start point is acquired cyclically by PTZ control and an attempt is made to detect a vehicle showing avoidance; if one is detected, the state transitions to the obstacle search state and an alarm (obstacle alarm) is issued.
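The boundary rule above (the jam start point lies between the last camera reporting a speed abnormality and the adjacent downstream camera reporting none) can be sketched as follows. The convention that cameras are indexed in the driving direction is an assumption made here for illustration.

```python
def estimate_jam_head(anomaly_flags):
    """Given per-camera speed-abnormality flags ordered in the driving
    direction, return (i, i + 1) index pairs such that camera i reports
    an abnormality and camera i + 1 does not. The traffic-jam start
    point is estimated to lie between those two cameras."""
    boundaries = []
    for i in range(len(anomaly_flags) - 1):
        if anomaly_flags[i] and not anomaly_flags[i + 1]:
            boundaries.append((i, i + 1))
    return boundaries
```

For example, flags `[False, True, True, False, False]` yield one boundary between cameras 2 and 3, matching the FIG. 3 reasoning.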
  • In the obstacle search state, the video of the avoidance detection point is analyzed to attempt to detect the obstacle itself.
  • Monitoring of speed abnormalities continues at points other than the detection point.
  • For this detection, it is necessary to prepare in advance an image of the scene with no obstacle present (a background image).
  • When the obstacle is detected, an alarm is issued as appropriate and the state transitions to the obstacle observation state.
  • The obstacle observation state is a state in which the detected obstacle is continuously monitored. When the obstacle is caught by a vehicle and disappears from the spot, the state transitions to the congestion-cause search state, and a search is made for a traffic jam caused by the obstacle being dropped again on the road further along the driving direction. When removal of the obstacle by a worker dispatched to the site is completed, the state transitions to the steady state.
  • FIG. 4 shows a functional block diagram of the surveillance camera 2 of the present embodiment.
  • the surveillance camera 2 includes an imaging unit 21, a speed analysis unit 22, an image quality enhancement processing unit 23, and a camera control unit 24.
  • The monitoring camera 2 of this example calculates the speed of subjects from the captured video, detects speed abnormalities, and notifies the central processing system 3 of them as speed abnormality detection information.
  • the imaging unit 21 includes a color imaging element, and outputs a subject image formed on the imaging surface as an imaging unit output video.
  • the speed analysis unit 22 calculates and outputs the actual speed of the subject (vehicle) in the input video.
  • the high image quality processing unit 23 performs high image quality processing such as super-resolution and contrast correction on the input video and outputs the result.
  • the speed analysis unit 22 and the image quality improvement processing unit 23 can be realized by software operating on a common processor. Alternatively, it can be realized in such a manner that a multiplier or the like is exclusively used by hardware such as a common FPGA (Field-Programmable Gate Array).
  • the camera control unit 24 controls operations of the PTZ, the speed analysis unit 22, and the image quality improvement processing unit 23 based on the camera control information from the central processing system 3.
  • The monitoring camera 2 measures whether or not a speed abnormality has occurred within a certain period and reports the result as speed abnormality detection information.
  • For example, two thresholds TL and TH are set for the speed and its temporal change, and the speed is judged abnormal when the vehicle speed is at or below TL or at or above TH.
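The two-threshold test might look like the following minimal sketch. The TL and TH values are placeholders chosen for illustration, since the text does not specify numbers.

```python
def is_speed_abnormal(speed_kmh, tl=20.0, th=140.0):
    """Judge a measured vehicle speed abnormal when it is at or below
    the low threshold TL (possible congestion or stop) or at or above
    the high threshold TH (overspeed). Threshold values illustrative."""
    return speed_kmh <= tl or speed_kmh >= th
```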
  • the camera control unit 24 controls the speed analysis unit and the image quality improvement processing unit in accordance with a request from the central processing unit. For example, only one function may be operated according to the request.
  • In that case, the imaging unit output video is output as the monitoring camera video as it is.
  • the electric camera platform 25 changes the imaging direction of the imaging unit 21 in accordance with the camera control information received via the camera control unit 24.
  • the electric zoom lens 26 changes the imaging range (zoom magnification) of the imaging unit 21 according to the received camera control information.
  • The speed analysis unit 22, the image quality improvement processing unit 23, and the camera control unit 24 may be housed in the casing of the imaging unit 21.
  • Fig. 5 shows the principle of calculating the actual speed in the real world.
  • a speed detection range that is a range in which the speed is calculated in the video is determined.
  • the actual distance of the speed detection range can be obtained from the height and angle at which the camera is installed. Alternatively, it may be set using a reference distance in the video such as a white line on the road.
  • the speed of the vehicle can be obtained, for example, by measuring how many frames it takes for the vehicle to pass through the speed detection range set above.
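Under the principle of FIG. 5, the actual speed follows from the real-world length of the detection range, the frame rate, and the number of frames the vehicle needs to cross it. A minimal sketch (function name and units are illustrative):

```python
def actual_speed_kmh(range_m, frames_to_pass, fps):
    """Speed = real-world length of the speed detection range divided
    by the crossing time (frames / fps), converted to km/h."""
    seconds = frames_to_pass / fps
    return range_m / seconds * 3.6
```

For instance, a 10 m detection range crossed in 12 frames at 30 fps corresponds to 25 m/s, i.e. 90 km/h.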
  • FIG. 6 shows a flow of speed calculation by the speed analysis unit 22.
  • In step S11, one frame of the imaging unit output video is captured.
  • In step S12, a moving-object region is detected in the input frame.
  • the moving object region can be obtained by a known method, for example, a difference in pixel value between frames. If it is determined in S13 that a moving object exists, a feature point is detected in the moving object region in S14 by, for example, Harris corner detection or SUSAN operator.
  • In S15, the moving object is tracked by associating the feature points of the moving-object regions in the time domain.
  • A trajectory can be obtained using well-known techniques such as mean shift, particle filters, or Kalman filters. Tracking can be performed in units of moving-object regions, of feature points, or of clusters of feature points that are close in distance and similar in motion; in addition to differences in feature-point positions, differences in the direction and type of corners (edges) may be considered when matching.
  • When certain feature points are always detected at the same position, they may be learned and masked from detection.
  • A train of vehicles closely spaced in a traffic jam may be detected as a single moving-object region in S12, but this can be handled by appropriately grouping the feature points.
  • From the tracking result, the number of frames required to pass through the speed detection range is then measured.
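As a deliberately simplified stand-in for the trackers named above (mean shift, particle filters, Kalman filters), the feature-point association step can be illustrated by greedy nearest-neighbor matching between consecutive frames:

```python
def match_points(prev_pts, cur_pts, max_dist=20.0):
    """Associate feature points between consecutive frames by greedy
    nearest-neighbor matching within max_dist pixels. Returns (i, j)
    index pairs (previous index, current index). A simplification of
    the trackers named in the text, for illustration only."""
    pairs = []
    used = set()
    for i, (px, py) in enumerate(prev_pts):
        best_j, best_d = None, max_dist
        for j, (cx, cy) in enumerate(cur_pts):
            if j in used:
                continue
            d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs
```

Chaining such matches over frames yields the per-vehicle trajectories from which the passing frame count is measured.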
  • FIG. 7 shows a flow of obstacle alarm issuing performed by the system control unit 7.
  • In step S31, speed abnormality information is collected from each monitoring camera 2.
  • The speed abnormality information indicates whether the speed of the subjects of each monitoring camera went outside the set range within a certain period. Since speed abnormality information cannot be obtained from a surveillance camera 2 that is already a video-analysis target and operating in high-image-quality mode, the trajectory information obtained in the analysis by the video analysis unit 4 can be used instead.
  • The speed calculation can be realized by image analysis with a low computational cost compared with video analysis using recognition, learning, and the like. In the vicinity of a monitoring camera where a speed abnormality is detected, some abnormality, such as a traffic jam or vehicles braking, may have occurred.
  • In step S32, the videos of the surveillance cameras 2 to be analyzed by the video analysis unit 4 are selected.
  • The system control unit 7 gives priority to the video of a monitoring camera 2 in which a speed abnormality was detected and transfers it to the video analysis unit 4. If the cameras in which speed abnormalities were detected are adjacent to each other, only the head (downstream) monitoring camera 2 has its priority raised, as shown in FIG. 3. This is intended to prioritize the camera corresponding to the head of the traffic jam, because multiple consecutive cameras along the road are expected to detect speed abnormalities over the extent of the jammed section. When there are more high-priority cameras 2 than M, analysis is performed giving priority to those with the longest period from their last analysis to the current time.
  • The videos of cameras geographically adjacent to the high-priority cameras and of other surveillance cameras may also be analyzed, again giving priority to those with the longest period since their last analysis. The history of selection as a video-analysis target is retained for a certain period.
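One possible reading of the selection rule in S32 (raise the priority only of the head camera of each adjacent anomalous run, break ties by time since last analysis, and cap at M channels) can be sketched as follows. The data layout is an assumption made for illustration.

```python
def select_cameras(anomaly_flags, last_analyzed, m):
    """anomaly_flags[i]: speed abnormality at camera i (indexed in the
    driving direction); last_analyzed[i]: time elapsed since camera i
    was last analyzed. Returns up to m camera indices: heads of
    anomalous runs first, then remaining anomalous cameras, each group
    ordered oldest-analysis first. Illustrative reading of S32."""
    n = len(anomaly_flags)
    heads = [i for i in range(n)
             if anomaly_flags[i] and (i + 1 == n or not anomaly_flags[i + 1])]
    rest = [i for i in range(n) if anomaly_flags[i] and i not in heads]
    heads.sort(key=lambda i: -last_analyzed[i])
    rest.sort(key=lambda i: -last_analyzed[i])
    return (heads + rest)[:m]
```

With flags `[True, True, False, True]`, the heads are cameras 1 and 3; with M = 2 both head cameras are selected ahead of camera 0.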
  • the system control unit 7 transmits the information to the target monitoring camera 2 in S33.
  • A monitoring camera 2 selected for analysis stops the operation of its speed abnormality detection function and uses the freed computational resources for processing that improves the image quality of the video. Further, when the surveillance camera 2 compresses its video for output, the bit rate may be raised only during analysis to improve image quality.
  • the video analysis unit 4 continues to analyze the video from the monitoring camera 2 that is the analysis target, and sequentially returns the analysis results to the system control unit 7.
  • The system control unit 7 determines whether the received (latest) analysis result indicates an abnormality such as avoidance. Since analysis results may include false detections, results may be retained for a certain past period and the judgment made based on them.
  • In S36, the system control unit 7 searches the history for cameras that have been video-analysis targets more than once in the past fixed period. In S37, based on the search in S36, it determines whether there is a camera that has been analyzed frequently. If there is such a camera, the process proceeds to the same processing (alarm issuance) as when an abnormality is determined in S35; if not, the process returns to S31.
  • the system control unit 7 issues an alarm in S38.
  • The alarm is issued by transmitting the video of the camera in which the abnormality was detected, together with information such as position and time, to the video display device.
  • The information is also transmitted to the monitoring camera 2 in which the abnormality was detected. The surveillance camera 2 can thereby stop its speed abnormality detection function, if running, and use the computational resources for processing that improves the image quality of the video.
  • The system control unit 7 also transmits, to the corresponding monitoring camera 2, camera control information for clearly capturing at higher magnification the position of the abnormality output by the video analysis unit 4.
  • the video analysis unit 4 is instructed to detect the cause of the abnormality itself.
  • the video analysis unit 4 is also provided with information on the shooting direction and range changed by the camera control information.
  • the video analysis unit 4 may perform road damage detection, obstacle recognition (detection), vehicle emergency stop detection, and the like using the above-described recognition / learning technique or the like.
  • As preprocessing, contrast correction, fluctuation correction, super-resolution processing, or the like may be performed.
  • the number of passing vehicles per predetermined time may be measured, and the traffic jam start point may be searched from the change.
  • the video analysis unit 4 may detect the boundary line of the traveling lane drawn on the road surface by Hough transform or the like, and track the moving object using the detected line. For example, it is possible to reduce the amount of processing by excluding movements greatly different from the direction of the traveling lane from the candidates, and it is possible to determine the apparent size of the vehicle and the degree of avoidance based on the width of the lane. Also, when an obstacle is detected from the video, an alarm with a different severity can be issued based on the size and the positional relationship with the travel lane. For example, when an obstacle initially detected in the roadside zone moves to the traveling lane, an alarm with a higher severity is issued again.
  • the manner of sharing image processing between the monitoring camera 2 and the video analysis unit 4 includes various modes other than those described in this example.
  • the speed analysis unit 22 of the monitoring camera 2 may always calculate the trajectory of the moving area (vehicle) and pass the data to the video analysis unit 4 to perform the subsequent processing.
  • The trajectory data is converted from screen coordinates to a global coordinate system, normalized into passage points independent of speed, and averaged (accumulated); avoidance can then be determined from the difference relative to the averaged data.
  • a known technique such as Patent Document 3 can be used for coordinate system conversion.
  • A certain number of trajectory data items (passage-point data; for example, for 10 vehicles) is required for the averaging, and the video analysis unit 4 holds the averaged data for each shooting angle of view of each monitoring camera 2.
  • A well-known algorithm for detecting non-stationary data (for example, a one-class SVM or singular spectrum analysis) can also be used to detect avoidance.
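The trajectory-averaging scheme described above can be sketched as: resample each trajectory to fixed passage points (removing the speed dependence), average a reference set of normal trajectories, and flag a trajectory whose mean deviation from the average exceeds a threshold. All parameter values here are illustrative.

```python
def resample(traj, k=5):
    """Resample a trajectory [(x, y), ...] to k evenly spaced passage
    points by linear interpolation over the point index, removing the
    dependence on vehicle speed."""
    pts = []
    for t in range(k):
        pos = t * (len(traj) - 1) / (k - 1)
        i = int(pos)
        f = pos - i
        x0, y0 = traj[i]
        x1, y1 = traj[min(i + 1, len(traj) - 1)]
        pts.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
    return pts

def is_avoidance(new_traj, normal_trajs, k=5, thresh=1.5):
    """Flag avoidance when the mean distance of the new trajectory's
    passage points from the averaged normal passage points exceeds
    thresh (in road-coordinate units). Threshold is illustrative."""
    refs = [resample(t, k) for t in normal_trajs]
    avg = [(sum(p[i][0] for p in refs) / len(refs),
            sum(p[i][1] for p in refs) / len(refs)) for i in range(k)]
    new = resample(new_traj, k)
    dev = sum(((nx - ax) ** 2 + (ny - ay) ** 2) ** 0.5
              for (nx, ny), (ax, ay) in zip(new, avg)) / k
    return dev > thresh
```

A one-class SVM or singular spectrum analysis, as named in the text, would replace the simple mean-deviation test here with a learned model of normal trajectories.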
  • 1: monitoring system; 2: monitoring camera device; 3: central processing system (center); 4: video analysis unit; 5: video display unit; 6: video recording unit; 7: system control unit; 21: imaging unit; 22: speed analysis unit; 23: image quality enhancement processing unit; 24: camera control unit; 25: motorized pan head; 26: motorized zoom lens.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

A system is disclosed for detecting a fallen obstacle from among a large number of camera videos with a low processing load. A plurality of cameras installed along the roadside regularly determine the trajectory of vehicles traveling on the road, or the vehicle speed and passing frequency obtained while acquiring the trajectory, and notify a center when there is an abnormality. Based on the notification, the center narrows the cameras to be analyzed down to the vicinity of a congestion start point, performs advanced analysis of vehicle trajectories and the like, and detects an unsteady state from the analyzed data. A camera chosen for analysis gives priority to image-quality-enhancement processing over trajectory or vehicle-speed processing, in order to deliver clearer video to the center.

Description

Monitoring system
 The present invention relates to a monitoring system that detects abnormalities occurring on a road, based on vehicle movement, from the video of surveillance cameras installed along the road.
 In recent years, monitoring systems have been required to automatically detect an abnormality occurring in the field from surveillance camera video and issue an alarm. Prompt confirmation of an abnormality leads to a rapid response after an accident and to the prevention of serious accidents. Such a function is particularly effective when a large number of surveillance cameras are used, as in a large-scale facility, and the operator cannot check all the videos simultaneously.
 Expressways are one type of facility requiring an automatic detection function. Abnormalities on an expressway include road damage, obstacles, and emergency stops due to vehicle breakdowns, any of which can lead to a serious accident. Techniques have therefore been considered for automatically detecting such abnormal situations from the video of surveillance cameras installed on the expressway.
 As techniques related to the present invention, in-vehicle systems are known that detect the movement of a preceding vehicle avoiding an obstacle or the like (see, for example, Patent Documents 1 and 2).
Patent documents: JP 2005-145153 A; JP 2003-276538 A; JP 2001-319218 A; JP 2000-057474 A; JP 2014-157522 A; JP 4-67299 A.
 However, abnormality detection requires video analysis from various viewpoints. In particular, cameras installed outdoors, where lighting conditions change, require correction processing, so the video analysis involves complex processing. Performing these processes on all surveillance camera videos would require an enormous amount of computation, raising equipment costs.
 In view of the above problems, an object of the present invention is to provide a monitoring system that distributes image processing systematically across a large number of surveillance camera videos and achieves abnormality detection with limited computational resources.
 A monitoring system according to one aspect of the present invention comprises monitoring cameras each having a function of detecting a speed abnormality of a vehicle in its video, and a central control unit that determines a priority order from the speed abnormality information sent by the monitoring cameras and the positional relationship between them, and analyzes the video of high-priority monitoring cameras.
 The settings of a monitoring camera can be switched dynamically depending on whether it is a target of video analysis and whether an abnormality has been detected.
 According to the present invention, abnormalities in the monitored area can be detected efficiently from surveillance camera video with a small amount of computation.
 FIG. 1 is a block diagram illustrating an example of the logical configuration of the monitoring system 1 of the first embodiment. FIG. 2 is a schematic state transition diagram of the operation of the monitoring system 1. FIG. 3 is a schematic diagram explaining the principle of determining a traffic-jam start point. FIG. 4 is a functional block diagram of the surveillance camera 2. FIG. 5 is a schematic diagram showing the principle of calculating the actual speed. FIG. 6 is a flowchart of speed calculation by the speed analysis unit 22. FIG. 7 is a flowchart of obstacle alarm issuance performed by the system control unit 7.
The monitoring system according to an embodiment of the present invention estimates, based on the trajectories (paths) of vehicles traveling on a road, or on the vehicle speeds and passing frequencies obtained in the course of acquiring those trajectories, the presence of road damage, obstacles, and the like (including broken-down vehicles), and reports it as an abnormality. In addition, an operation of detecting the obstacle or other cause of the abnormality directly from the video is performed in cooperation with this estimation.
FIG. 1 is a block diagram showing an example of the logical configuration of the monitoring system 1 according to the first embodiment of the present invention.
The monitoring system 1 of this example comprises an arbitrary number N of surveillance camera devices 2-1 to 2-N arranged at predetermined intervals along an expressway, and a central processing system 3 consolidated at a site such as a traffic control center.
The central processing system 3 comprises a video analysis unit 4, a video display unit 5, a video recording unit 6, and a system control unit 7 that controls them.
The surveillance camera devices 2-1 to 2-N (hereinafter represented by a single surveillance camera 2 when they need not be distinguished) are, for example, PTZ (pan, tilt, zoom) cameras equipped with an electric pan head and an electric zoom; in this example, a high-sensitivity color camera is used as the camera body. The pan, tilt, and zoom to be applied are given as camera control information. The surveillance camera 2 encodes the video signal obtained by the camera body and outputs it as surveillance camera video, and also measures the speed of vehicles in the video by a known technique such as that of Non-Patent Document 1. The measured speed is output as speed abnormality detection information, at least when the speed is abnormal. To facilitate processing in the video analysis unit 4, the surveillance camera video may be subjected to processing such as shake correction, gradation correction, and super-resolution before being encoded.
The video analysis unit 4 analyzes the surveillance camera video received from the surveillance cameras 2 and outputs the analysis results. The video analysis unit 4 only needs to be able to process at most M channels (M being an integer smaller than N) designated by the system control unit 7; it need not have the capacity to process the video from all N surveillance cameras 2 simultaneously in real time. The video analysis unit 4 has the ability to recognize vehicles, accumulates or learns in advance the travel trajectories of vehicles under normal conditions, generates the trajectories of vehicles recognized in a given video, and identifies the occurrence of abnormal travel trajectories. For example, the trajectory of a vehicle overtaking a stopped vehicle, a trajectory avoiding a fallen object on the road, or the trajectories of multiple vehicles decelerating and accelerating before and after the same point are detected as abnormal, and the position information of the stopped vehicle, the fallen object, or the point is output. Further, in response to a request from the system control unit 7, the unit also detects the event (object) itself that caused the abnormal travel trajectory.
The video display unit 5 displays the surveillance camera video received from the surveillance cameras 2 in accordance with display control information. For example, L (L being an integer smaller than N) flat panel displays are arranged in rows and columns, L channels of video are input, and the video is output together with surveillance camera information (shooting location name) and the like. Following commands from the system control unit 7, the video display unit 5 preferentially displays video in which an abnormality has been detected, and prominently presents issued alarms.
The video recording unit 6 records the surveillance camera video received from the surveillance cameras 2 on a hard disk or the like in accordance with recording control information. In general, transmitting and recording the video of all N surveillance cameras 2 simultaneously at full rate would be too costly, so the video is thinned out temporally, spatially, and geographically before recording. The video recording unit 6 may change the recording method based on the result of abnormality detection; for example, when video is recorded in compressed form, the video of a surveillance camera in which an abnormality was detected is saved at a lower compression ratio.
The system control unit 7 has a function of issuing an alarm based on the speed abnormality detection information from the N connected surveillance cameras 2 and the analysis results of the video analysis unit 4, and of notifying the video display unit. As in general CCTV systems, it also generates display control information for displaying the video of the many surveillance cameras 2 on the video display unit 5 while switching among them cyclically, and generates recording control information that controls the recording rate and other settings of the video recording unit 6. The system control unit 7 can be implemented as part of a process control system (also called a distributed control system, DCS) that performs traffic control. Such a traffic control system mainly manages traffic volumes measured by traffic counters, ETC 2.0, and the like.
FIG. 2 shows a schematic state transition diagram of the operation of the monitoring system 1.
Broadly, the monitoring system 1 of this example has four states: steady state, congestion cause search, obstacle search, and obstacle observation. In a traffic control system managing a long route, these state transitions can take place locally with respect to geography.
The steady state is a state with neither congestion nor obstacles; the N surveillance cameras 2 are used like traffic counters to monitor whether any speed abnormality has occurred. In addition, the surveillance cameras 2 are selected cyclically and their video is analyzed, also attempting to detect vehicle avoidance behavior (travel trajectories that swerve around an obstacle) or obstacles themselves. When a speed abnormality is detected, the system transitions to the congestion cause search state; when avoidance is detected, to the obstacle search state; and when an obstacle is detected, to the obstacle observation state.
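The four states and the transitions just described can be sketched as a small table-driven state machine. This is an illustrative model only; the state and event names below are labels chosen for this sketch, not terms from the specification.

```python
# Illustrative state machine for the monitoring system of Fig. 2.
# States: steady, jam_cause_search, obstacle_search, obstacle_observation.
# Event names (speed_anomaly, avoidance_detected, ...) are hypothetical labels.

TRANSITIONS = {
    ("steady", "speed_anomaly"): "jam_cause_search",
    ("steady", "avoidance_detected"): "obstacle_search",
    ("steady", "obstacle_detected"): "obstacle_observation",
    ("jam_cause_search", "known_cause"): "steady",
    ("jam_cause_search", "avoidance_detected"): "obstacle_search",
    ("obstacle_search", "obstacle_detected"): "obstacle_observation",
    ("obstacle_observation", "obstacle_gone"): "jam_cause_search",
    ("obstacle_observation", "obstacle_removed"): "steady",
}

def next_state(state, event):
    """Return the next state; stay in the current state for unhandled events."""
    return TRANSITIONS.get((state, event), state)
```

A table-driven form keeps the geographically local transitions mentioned above cheap: one such tiny table can be evaluated independently per road section.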
The congestion cause search state is a state in which the start point and cause of congestion are searched for: the start point of the congestion is estimated from the shooting locations of the video in which the speed abnormality was detected and from their change over time, and detection of vehicles showing avoidance behavior is attempted at that location. At the same time, speed abnormality monitoring continues at locations other than the start point, just as in the steady state. As shown in FIG. 3, if there is a section in which speed abnormalities (low speed) are continuous, it is estimated to be a congested section, and the start point of the congestion can be estimated to lie at the end of that section on the travel-direction side (that is, between the last camera with a speed abnormality and the first camera without one). Also, when the speed of traveling vehicles drops sharply at a certain camera, or reaches a low speed that can be regarded as congestion, and there is no speed abnormality before or after it, that location can be estimated to be the start point of newly arising congestion.
If the estimated start point of the congestion is a location where natural congestion frequently occurs, or some other known cause is identified, the system transitions back to the steady state. Otherwise, high-magnification video around the start point is acquired cyclically by PTZ control and detection of vehicles showing avoidance behavior is attempted; on detection, the system transitions to the obstacle search state and issues an alarm (obstacle alarm).
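The FIG. 3 estimation can be sketched in a few lines, assuming (for illustration) that each camera reports only a boolean speed-abnormality flag and that the cameras are indexed in the direction of travel:

```python
# Sketch of the jam-start estimation of Fig. 3: the jam start is assumed to
# lie between the most downstream camera of a contiguous anomalous run and
# the first normal camera after it.

def jam_head_indices(anomalous):
    """Given per-camera anomaly flags ordered in the travel direction,
    return the index of the downstream end of each contiguous anomalous run
    (the camera nearest the estimated jam start point)."""
    heads = []
    for i, flag in enumerate(anomalous):
        nxt = anomalous[i + 1] if i + 1 < len(anomalous) else False
        if flag and not nxt:  # the anomalous run ends here
            heads.append(i)
    return heads
```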
In the obstacle search state, the video at the point where avoidance was detected is analyzed to try to detect the obstacle itself. At the same time, speed abnormality monitoring continues at locations other than the detection point. When the background subtraction method is used for obstacle detection, an image taken when no obstacle is present (a background image) must be prepared in advance, so cyclic acquisition of high-magnification video is performed periodically even in the steady state. When an obstacle is found, an alarm is issued as appropriate and the system transitions to the obstacle observation state.
The obstacle observation state is a state in which the discovered obstacle continues to be monitored. If the obstacle is caught by a vehicle and disappears from the spot, the system transitions to the congestion cause search state and will search for congestion caused by the obstacle being dropped onto the road again further ahead in the travel direction. When removal of the obstacle by workers dispatched to the site is completed, the system transitions back to the steady state.
FIG. 4 shows a functional block diagram of the surveillance camera 2 of this embodiment. The surveillance camera 2 comprises an imaging unit 21, a speed analysis unit 22, an image quality enhancement unit 23, and a camera control unit 24.
The surveillance camera 2 of this example calculates the speed of subjects from the captured video, detects speed abnormalities, and notifies the central processing system of them as speed abnormality detection information.
The imaging unit 21 has a color image sensor and outputs the subject image formed on its imaging surface as imaging unit output video.
The speed analysis unit 22 calculates and outputs the actual speed of the subjects (vehicles) in the input video.
The image quality enhancement unit 23 applies enhancement processing such as super-resolution and contrast correction to the input video and outputs the result. The speed analysis unit 22 and the image quality enhancement unit 23 can be realized as software running on a common processor, or in hardware such as a common FPGA (Field-Programmable Gate Array), in a manner in which multipliers and other resources are used exclusively by one unit at a time.
The camera control unit 24 controls the operation of the PTZ mechanism, the speed analysis unit 22, and the image quality enhancement unit 23 based on the camera control information from the central processing system 3. When a request for speed abnormality detection information is received from the central processing system 3, the surveillance camera 2 measures whether a speed abnormality has occurred within a certain period and responds with speed abnormality detection information. A speed abnormality is determined, for example, by setting two thresholds TL and TH on the speed or its temporal change, and declaring an abnormality when the vehicle speed or the like is at or below TL, or at or above TH.
The camera control unit 24 also controls the speed analysis unit and the image quality enhancement unit in accordance with requests from the central processing system; for example, only one of the two functions may be operated as requested. When the image quality enhancement unit is operating, its output is output as the surveillance camera video; otherwise, the imaging unit output video is output as the surveillance camera video.
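A minimal sketch of the two-threshold test just described follows. The specification only states that a value at or below TL, or at or above TH, is abnormal; the numeric threshold values here are assumptions for illustration.

```python
# Two-threshold speed abnormality test, aggregated over a measurement period
# as the camera does before responding with speed abnormality detection
# information. TL/TH values are illustrative, not from the specification.

TL = 20.0   # km/h: speeds at or below this suggest congestion
TH = 140.0  # km/h: speeds at or above this suggest a speeding anomaly

def is_speed_abnormal(speed_kmh, tl=TL, th=TH):
    return speed_kmh <= tl or speed_kmh >= th

def any_abnormal(speeds):
    """True if any measured speed in the period was abnormal."""
    return any(is_speed_abnormal(v) for v in speeds)
```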
The electric pan head 25 changes the imaging direction of the imaging unit 21 in accordance with camera control information received via the camera control unit 24.
The electric zoom lens 26 changes the imaging range (zoom magnification) of the imaging unit 21 in accordance with the received camera control information.
The speed analysis unit 22, the image quality enhancement unit 23, and the camera control unit 24 may be provided inside the case of the imaging unit 21.
FIG. 5 shows the principle of calculating the actual real-world speed. First, a speed detection range, the range within the video over which the speed is calculated, is determined. The actual distance of the speed detection range can be obtained from the height and angle at which the camera is installed, or it may be set using a reference distance visible in the video, such as the white lines on the road. The speed of a vehicle can then be obtained, for example, by measuring how many frames the vehicle takes to pass through the speed detection range set above.
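As a worked illustration of this principle (with assumed numbers), the speed follows directly from the real-world length of the detection range and the measured frame count:

```python
# Speed from frames-to-cross, per the principle of Fig. 5.
# range_m: real-world length of the speed detection range in meters
# frames:  frames the vehicle took to cross it; fps: video frame rate

def speed_kmh(range_m, frames, fps):
    """Speed over the detection range: distance / elapsed time, in km/h."""
    seconds = frames / fps
    return (range_m / seconds) * 3.6  # m/s -> km/h

# e.g. a 50 m range crossed in 60 frames at 30 fps -> 2 s -> 25 m/s -> 90 km/h
```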
FIG. 6 shows the flow of speed calculation by the speed analysis unit 22.
First, in S11, one frame of the imaging unit output video is captured.
Next, in S12, moving object regions are detected in the input frame. A moving object region can be obtained by a known method, for example from differences in pixel values between frames.
If it is determined in S13 that a moving object is present, then in S14 feature points are detected within the moving object region, for example by Harris corner detection or the SUSAN operator.
Next, in S15, the moving object is tracked by associating the feature points of the moving object regions over time. Known techniques such as mean shift, particle filters, and Kalman filters can be used to associate the feature point groups of each moving object region between frames, so that a continuous trajectory can be obtained even when a vehicle is temporarily hidden by a roadside pillar or the like. Tracking can be performed per moving object region, per feature point, or per cluster of feature points that are close together and move similarly; besides the positional difference of the feature points being matched, differences in the direction and type of corners (edges) may also be taken into account if available. When there are feature points that are sporadically detected always at the same position, those positions may be learned so that their first detection is masked. A line of vehicles packed closely together in congestion may be detected in S12 as a single merged moving object region, but the feature point groups can be handled by grouping them appropriately.
Finally, in S16, the number of frames required to pass through the speed detection range is measured from the tracking result.
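Two steps of this flow can be sketched very simply: the S12 frame differencing and the final frame count. The feature-point detection and matching of S14-S15 (Harris corners plus a mean shift, particle, or Kalman tracker) are abstracted away here behind a per-frame tracked position; this is a didactic sketch, not the patented pipeline.

```python
# S12 sketch: crude motion mask by per-pixel absolute frame difference.
# Frames are plain 2D lists of grayscale values for illustration.

def moving_pixels(prev, cur, thresh=10):
    """True where the pixel changed by more than `thresh` between frames."""
    return [[abs(a - b) > thresh for a, b in zip(row_p, row_c)]
            for row_p, row_c in zip(prev, cur)]

# Final-step sketch: count frames a tracked object spends inside the
# speed detection range. `positions` holds one tracked image coordinate
# per frame (None = object not observed in that frame).

def frames_in_range(positions, range_start, range_end):
    return sum(1 for p in positions
               if p is not None and range_start <= p < range_end)
```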
FIG. 7 shows the flow of obstacle alarm issuance performed by the system control unit 7.
First, in S31, speed abnormality information is collected from each surveillance camera 2. The speed abnormality information indicates whether the speed of the subjects at each surveillance camera went outside the set range within a certain period. Speed abnormality information cannot be obtained from a surveillance camera 2 that is already a target of video analysis and operating in image quality enhancement mode, so the trajectory information and the like obtained in the analysis by the video analysis unit 4 can be used instead. Speed calculation can be realized by image analysis at a much lower computational cost than video analysis using recognition, learning, and the like. In the vicinity of a surveillance camera where a speed abnormality was detected, some kind of abnormality may have occurred, such as congestion or vehicles braking.
Next, in S32, the video of the surveillance cameras 2 to be analyzed by the video analysis unit 4 is selected.
The system control unit 7 passes the video of surveillance cameras 2 in which a speed abnormality was detected to the video analysis unit 4 with priority. If the cameras in which a speed abnormality was detected are positionally adjacent, the priority of only the leading (most downstream) surveillance camera 2 is raised, as in FIG. 3. Because multiple consecutive cameras along the road are expected to detect speed abnormalities depending on the extent of the congested section, this is intended to give particular priority to the camera corresponding to the head of the congestion. When there are more than M high-priority cameras 2, those with the longest period since their last analysis are analyzed first. When there are M or fewer high-priority cameras, the remaining computational resources may be used to analyze the video of cameras geographically adjacent to the high-priority cameras and of the other surveillance cameras, again giving priority to those with the longest period since their last analysis. The history of selection as a video analysis target is retained for a certain period.
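The S32 selection policy might be sketched as follows, under the assumption that each camera carries an anomaly flag and a last-analyzed timestamp; the exact tie-breaking details are illustrative.

```python
# Sketch of the S32 policy: only the most downstream camera of each
# contiguous anomalous run is promoted; ties are broken by longest time
# since last analysis; at most m streams go to the video analysis unit.

def select_cameras(anomalous, last_analyzed, m):
    """anomalous:     per-camera flags ordered in the travel direction
    last_analyzed: per-camera timestamp of the last analysis (smaller = older)
    Returns indices of up to m cameras to hand to the analysis unit."""
    n = len(anomalous)
    heads = [i for i in range(n)
             if anomalous[i] and (i + 1 >= n or not anomalous[i + 1])]
    heads.sort(key=lambda i: last_analyzed[i])  # oldest-analyzed first
    if len(heads) >= m:
        return heads[:m]
    # Fill remaining capacity with the other cameras, oldest-analyzed first.
    rest = sorted((i for i in range(n) if i not in heads),
                  key=lambda i: last_analyzed[i])
    return heads + rest[:m - len(heads)]
```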
When the surveillance cameras for video analysis have been determined, in S33 the system control unit 7 conveys that information to the target surveillance cameras 2. During that time, a surveillance camera 2 under analysis stops the operation of its speed abnormality detection function and uses those computational resources for image quality enhancement of the video. In addition, when the surveillance camera 2 compresses the video before output, the bit rate may be raised only during analysis to improve image quality.
In S34, the video analysis unit 4 continues to analyze the video from the surveillance cameras 2 under analysis and returns the analysis results to the system control unit 7 one after another.
In S35, the system control unit 7 determines whether the received analysis result (the latest one) indicates an abnormality such as avoidance behavior. Since analysis results can include false detections, the results may also be retained for a certain past period and the determination made based on them.
If no abnormality is confirmed, then in S36 the system control unit 7 searches the history for cameras that have been targets of video analysis multiple times within a certain past period.
In S37, based on the search in S36, it determines whether there is a camera that has frequently been an analysis target. If there is such a camera, processing proceeds to the same step (alarm issuance) as when an abnormality is determined in S35. If there is no such camera, processing returns to S31.
On the other hand, if an abnormality is confirmed in S35, or a relevant camera is found in S37, the system control unit 7 issues an alarm in S38.
The alarm is issued by transmitting the video of the camera in which the abnormality was detected, together with information such as its position and time, to the video display unit.
In S39, the information is conveyed to the surveillance camera 2 in which the abnormality was detected. Thus, on the surveillance camera 2 side, the speed abnormality detection function, if running, can be stopped and those computational resources used for image quality enhancement of the video. When the position information of the abnormality output by the video analysis unit 4 is available, the system control unit 7 also transmits, to the relevant surveillance camera 2, camera control information that causes that position to be captured clearly at higher magnification, and instructs the video analysis unit 4 to detect the cause of the abnormality itself. At that time, as needed, the video analysis unit 4 is also provided with information on the shooting direction and range changed by that camera control information. After S39, processing returns to S31.
The scope of the present invention can include, but is not limited to, the configurations of the embodiments described above.
For example, the video analysis unit 4 may perform road damage detection, obstacle recognition (detection), detection of emergency stops of vehicles, and the like using the recognition and learning techniques described above. It may also perform contrast correction, atmospheric shimmer correction, shake correction, super-resolution processing, and the like as preprocessing, in order to analyze video that was not enhanced on the surveillance camera 2 side.
Instead of measuring vehicle speeds, the number of vehicles passing per predetermined time may be measured, and the congestion start point searched for from changes in that count.
The video analysis unit 4 may also detect the boundary lines of the travel lanes drawn on the road surface by Hough transform or the like, and use them to track moving objects. For example, motions differing greatly from the direction of the travel lane can be excluded from the candidates to reduce the processing load, and the apparent size of a vehicle and the degree of avoidance can be judged with the lane width as a reference. Further, when an obstacle is detected in the video, alarms of different severity can be issued based on its size and its positional relationship to the travel lanes; for example, when an obstacle initially detected on the road shoulder moves into a travel lane, an alarm of higher severity is reissued.
The manner in which image processing is divided between the surveillance camera 2 and the video analysis unit 4 can also take various forms other than that described in this example. For example, the speed analysis unit 22 of the surveillance camera 2 may continuously compute the trajectories of moving regions (vehicles) and pass that data to the video analysis unit 4 for the subsequent processing. The video analysis unit 4 can convert the trajectory data from screen coordinates into a global coordinate system, standardize it into speed-independent passing points and the like, average (accumulate) it, and judge avoidance behavior from the degree of difference from the averaged data. Known techniques such as that of Patent Document 3 can be used for the coordinate system conversion. A certain minimum number of trajectory data items (passing point data) is needed for averaging (for example, ten vehicles' worth), and the video analysis unit 4 is assumed to hold averaged data for each shooting angle of view of each surveillance camera 2. Known algorithms for detecting non-stationary data such as avoidance behavior (for example, one-class SVM or singular spectrum analysis) can also be used.
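The averaged-trajectory comparison could be sketched as below. The representation is an assumption made for this sketch: each trajectory is reduced to lateral offsets sampled at fixed checkpoints along the lane, and a new trajectory is flagged when any checkpoint deviates from the average by more than a margin.

```python
# Illustrative averaged-trajectory comparison. A trajectory is a list of
# lateral offsets (e.g. meters from lane center) at fixed checkpoints.
# Both the representation and the margin test are assumptions for this sketch.

def mean_trajectory(trajectories):
    """Element-wise mean of equally sampled checkpoint offsets."""
    n = len(trajectories)
    return [sum(t[k] for t in trajectories) / n
            for k in range(len(trajectories[0]))]

def is_avoidance(traj, avg, margin):
    """Flag if any checkpoint deviates from the average by more than margin."""
    return any(abs(a - b) > margin for a, b in zip(traj, avg))
```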
1 monitoring system, 2 surveillance camera device, 3 central processing system (center), 4 video analysis unit, 5 video display unit, 6 video recording unit, 7 system control unit, 21 imaging unit, 22 speed analysis unit, 23 image quality enhancement unit, 24 camera control unit, 25 electric pan head, 26 electric zoom lens.

Claims (6)

1. A monitoring system that detects an abnormality from the video of a plurality of cameras and issues an alarm, comprising: a plurality of surveillance cameras each having a function of detecting an abnormality in the speed or passing count of subjects; and a central processing apparatus that performs video analysis by prioritizing the video of the plurality of surveillance cameras based on the abnormalities detected by the surveillance cameras,
   wherein the central processing apparatus switches the settings of the plurality of surveillance cameras depending on whether a camera is a target of the video analysis or on whether an abnormality has been detected by the video analysis.
2. The monitoring system according to claim 1, wherein each of the plurality of surveillance cameras comprises:
   an imaging unit that outputs video;
   an electric pan head that changes the imaging direction or range of the imaging unit;
   a speed analyzer that receives the video and calculates and outputs the actual speed of the subjects in the video;
   an image quality enhancement processor that applies enhancement processing to the video and outputs it; and
   a camera controller that, in accordance with requests from the central processing apparatus, controls the speed analyzer and the image quality enhancement processor so that only one of them operates at a time,
   wherein the camera controller outputs, to the central processing apparatus, speed abnormality detection information indicating whether the speed of a subject went outside a set range within a certain period when the speed from the speed analyzer is abnormal, and operates the image quality enhancement processor when notified by the central processing apparatus that the camera has become an analysis target and when notified that an abnormality has been detected by the video analysis, and operates the speed analyzer otherwise.
  3. The monitoring system according to claim 1 or 2, wherein the central processing unit comprises:
     a video analyzer that analyzes, using recognition or learning, the video received from the monitoring cameras selected as targets of the video analysis;
     a video display that preferentially displays video in which an abnormality has been detected;
     a video recorder that records the video received from the monitoring cameras; and
     a system controller that, based on the speed-abnormality detection information collected from the plurality of monitoring cameras, selects no more than a predetermined number of the monitoring cameras as targets of the video analysis, issues an alarm based on the result of the video analyzer analyzing the selected video, and controls the display on the video display and the recording by the video recorder.
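The selection step in this claim, picking at most a predetermined number of cameras for central analysis based on collected speed-abnormality information, can be sketched as follows. The function name `select_analysis_targets` and the idea of ranking by a numeric anomaly score are illustrative assumptions, not part of the patent:

```python
def select_analysis_targets(anomaly_info: dict, max_targets: int) -> list:
    """Pick at most `max_targets` camera IDs for central video analysis.

    `anomaly_info` maps camera ID -> anomaly score (0 means no anomaly
    reported); cameras with higher scores are analyzed first. The scoring
    scheme itself is an assumption for illustration.
    """
    flagged = [(cam_id, score) for cam_id, score in anomaly_info.items()
               if score > 0]
    # Strongest anomalies first, then truncate to the allowed budget.
    flagged.sort(key=lambda item: item[1], reverse=True)
    return [cam_id for cam_id, _ in flagged[:max_targets]]
```

With four cameras and a budget of two, only the two highest-scoring anomalous cameras would be handed to the video analyzer.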
  4. The monitoring system according to claim 2, wherein the speed analyzer detects a moving-object region based on differences in pixel values between frames, detects feature points within the moving-object region, and tracks the subject by associating the feature points in the time domain, and measures the actual speed based on the number of frames the tracked subject requires to pass through a speed detection range.
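The frame-count-based speed measurement in this claim reduces to simple arithmetic once the real-world length of the detection range and the camera frame rate are known. A minimal sketch, with the function name `estimate_speed` as an assumption:

```python
def estimate_speed(distance_m: float, frames_to_cross: int,
                   fps: float) -> float:
    """Return the subject's speed in m/s, given the real-world length of
    the speed detection range and the number of video frames the tracked
    subject needed to cross it."""
    if frames_to_cross <= 0:
        raise ValueError("subject must take at least one frame to cross")
    elapsed_s = frames_to_cross / fps
    return distance_m / elapsed_s
```

For example, a vehicle crossing a 20 m detection range in 30 frames of 30 fps video took 1 s, giving 20 m/s (72 km/h).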
  5. The monitoring system according to claim 3, wherein the subject is a vehicle traveling on a road; the video analyzer has the capability to recognize vehicles, has accumulated or learned normal vehicle trajectories in advance, tracks vehicles recognized in the given video, and detects the occurrence of abnormal trajectories; and the system controller causes the video analyzer to preferentially analyze the video of the monitoring cameras that have output the speed-abnormality detection, raises the priority of the camera at the head in the vehicles' direction of travel on the road when the cameras that have output the speed-abnormality detection are consecutive along the road and causes the video analyzer to analyze its video, and, when the video analyzer detects an abnormal trajectory, performs the setting switch so as to change the imaging direction or range of the corresponding monitoring camera toward the estimated position of the event that caused the abnormal trajectory.
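The prioritization rule in this claim, favoring the frontmost camera of a consecutive run of speed anomalies, can be sketched as follows. The encoding (cameras indexed in road order, `travel_direction=+1` meaning vehicles move toward higher indices) is an assumption for illustration:

```python
def prioritize_cameras(anomaly_flags: list, travel_direction: int = 1) -> list:
    """Given per-camera speed-anomaly flags, with cameras ordered along
    the road, return camera indices to analyze. Within each consecutive
    run of anomalous cameras, the head camera in the direction of travel
    comes first."""
    flagged = [i for i, f in enumerate(anomaly_flags) if f]
    if not flagged:
        return []
    # Group flagged cameras into runs of consecutive indices.
    runs, run = [], [flagged[0]]
    for i in flagged[1:]:
        if i == run[-1] + 1:
            run.append(i)
        else:
            runs.append(run)
            run = [i]
    runs.append(run)
    order = []
    for run in runs:
        # The head camera in the travel direction gets top priority: it is
        # closest to the likely cause of the disturbance downstream.
        head = run[-1] if travel_direction > 0 else run[0]
        order.append(head)
        order.extend(i for i in run if i != head)
    return order
```

Intuitively, if several consecutive cameras report slow traffic, the event causing it (an accident, an obstacle) most likely lies near the front of the queue, so the leading camera's video is the most informative to analyze first.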
  6. A road video monitoring method for observing an abnormality detected by the video analysis, using the monitoring system according to claim 1.
PCT/JP2016/077236 2015-09-17 2016-09-15 Monitoring system WO2017047687A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017539963A JP6584024B2 (en) 2015-09-17 2016-09-15 Monitoring system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-184124 2015-09-17
JP2015184124 2015-09-17

Publications (1)

Publication Number Publication Date
WO2017047687A1 true WO2017047687A1 (en) 2017-03-23

Family

ID=58288964

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/077236 WO2017047687A1 (en) 2015-09-17 2016-09-15 Monitoring system

Country Status (2)

Country Link
JP (1) JP6584024B2 (en)
WO (1) WO2017047687A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190080142A (en) * 2017-12-28 2019-07-08 (주) 에스비네트워크 A system for displaying accident situation event in tunnel to panorama image and a method of displaying the same
JP2020198497A (en) * 2019-05-31 2020-12-10 株式会社東芝 Traffic video processing apparatus
WO2021020304A1 (en) * 2019-07-29 2021-02-04 京セラ株式会社 Base station, roadside device, traffic communication system, traffic management method, and teaching data generation method
JP2021022221A (en) * 2019-07-29 2021-02-18 京セラ株式会社 Base station, traffic communication system, and traffic management method
CN112637492A (en) * 2020-12-19 2021-04-09 中建浩运有限公司 Intelligent entity exhibition system
JP2021077072A (en) * 2019-11-08 2021-05-20 富士通株式会社 Information processing program, information processing method, and information processor
WO2021171338A1 (en) * 2020-02-25 2021-09-02 日本電信電話株式会社 Movement object following device, movement object following method, movement object following system, learning device, and program
WO2021186854A1 (en) * 2020-03-19 2021-09-23 日本電気株式会社 Data processing device, transmission device, data processing method, and program
JP2021175033A (en) * 2020-04-21 2021-11-01 株式会社日立製作所 Event analysis system and event analysis method
CN113744550A (en) * 2020-05-15 2021-12-03 丰田自动车株式会社 Information processing apparatus and information processing system
WO2022024208A1 (en) * 2020-07-28 2022-02-03 日本電気株式会社 Traffic monitoring device, traffic monitoring system, traffic monitoring method, and program
WO2022074969A1 (en) * 2020-10-05 2022-04-14 株式会社デンソー Information processing device, vehicle control device, and road information distribution method
WO2022269980A1 (en) * 2021-06-24 2022-12-29 日立Astemo株式会社 External environment recognition device and external environment recognition method

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN113923449B (en) * 2021-12-15 2022-03-15 深圳市光网视科技有限公司 Method and system for prejudging failure of security monitoring terminal

Citations (12)

Publication number Priority date Publication date Assignee Title
JPH03118696A (en) * 1989-09-29 1991-05-21 Omron Corp Detector for abnormally traveling vehicle
JPH10307987A (en) * 1997-05-02 1998-11-17 Mitsubishi Heavy Ind Ltd Traffic flow measurement instrument
JPH1139589A (en) * 1997-07-18 1999-02-12 Fuji Electric Co Ltd Traffic monitoring device and method
JPH1196494A (en) * 1997-09-22 1999-04-09 Hitachi Ltd Method for monitoring traffic flow and device therefor
JPH11288495A (en) * 1998-04-02 1999-10-19 Sumitomo Electric Ind Ltd Traffic abnormality detecting device
JP2002083394A (en) * 2000-06-30 2002-03-22 Sumitomo Electric Ind Ltd Device and method for detecting abnormality in traffic flow
JP2002190090A (en) * 2000-10-13 2002-07-05 Sumitomo Electric Ind Ltd Traffic flow abnormal condition detector and its method (spectrum)
JP2005073218A (en) * 2003-08-07 2005-03-17 Matsushita Electric Ind Co Ltd Image processing apparatus
JP2009284371A (en) * 2008-05-26 2009-12-03 Meidensha Corp Video monitoring device
JP2011071920A (en) * 2009-09-28 2011-04-07 Hitachi Kokusai Electric Inc Remote video monitoring system
JP2012234377A (en) * 2011-05-02 2012-11-29 Mitsubishi Electric Corp Video monitoring system
JP2014192713A (en) * 2013-03-27 2014-10-06 Hitachi Industry & Control Solutions Ltd Monitoring system

Cited By (22)

Publication number Priority date Publication date Assignee Title
KR102011119B1 (en) * 2017-12-28 2019-10-21 (주)에스비네트워크 A system for displaying accident situation event in tunnel to panorama image and a method of displaying the same
KR20190080142A (en) * 2017-12-28 2019-07-08 (주) 에스비네트워크 A system for displaying accident situation event in tunnel to panorama image and a method of displaying the same
JP7326029B2 (en) 2019-05-31 2023-08-15 株式会社東芝 Traffic image processor
JP2020198497A (en) * 2019-05-31 2020-12-10 株式会社東芝 Traffic video processing apparatus
WO2021020304A1 (en) * 2019-07-29 2021-02-04 京セラ株式会社 Base station, roadside device, traffic communication system, traffic management method, and teaching data generation method
JP2021022221A (en) * 2019-07-29 2021-02-18 京セラ株式会社 Base station, traffic communication system, and traffic management method
JP7401217B2 (en) 2019-07-29 2023-12-19 京セラ株式会社 Base station, traffic communication system, and traffic management method
JP2021077072A (en) * 2019-11-08 2021-05-20 富士通株式会社 Information processing program, information processing method, and information processor
JPWO2021171338A1 (en) * 2020-02-25 2021-09-02
WO2021171338A1 (en) * 2020-02-25 2021-09-02 日本電信電話株式会社 Movement object following device, movement object following method, movement object following system, learning device, and program
US11983891B2 (en) 2020-02-25 2024-05-14 Nippon Telegraph And Telephone Corporation Moving target tracking device, moving target tracking method, moving target tracking system, learning device, and program
JP7255745B2 (en) 2020-02-25 2023-04-11 日本電信電話株式会社 Moving object tracking device, moving object tracking method, moving object tracking system, learning device, and program
WO2021186854A1 (en) * 2020-03-19 2021-09-23 日本電気株式会社 Data processing device, transmission device, data processing method, and program
JP2021175033A (en) * 2020-04-21 2021-11-01 株式会社日立製作所 Event analysis system and event analysis method
JP7440332B2 (en) 2020-04-21 2024-02-28 株式会社日立製作所 Event analysis system and method
CN113744550A (en) * 2020-05-15 2021-12-03 丰田自动车株式会社 Information processing apparatus and information processing system
WO2022024208A1 (en) * 2020-07-28 2022-02-03 日本電気株式会社 Traffic monitoring device, traffic monitoring system, traffic monitoring method, and program
JPWO2022024208A1 (en) * 2020-07-28 2022-02-03
JPWO2022074969A1 (en) * 2020-10-05 2022-04-14
WO2022074969A1 (en) * 2020-10-05 2022-04-14 株式会社デンソー Information processing device, vehicle control device, and road information distribution method
CN112637492A (en) * 2020-12-19 2021-04-09 中建浩运有限公司 Intelligent entity exhibition system
WO2022269980A1 (en) * 2021-06-24 2022-12-29 日立Astemo株式会社 External environment recognition device and external environment recognition method

Also Published As

Publication number Publication date
JP6584024B2 (en) 2019-10-02
JPWO2017047687A1 (en) 2018-08-02

Similar Documents

Publication Publication Date Title
JP6584024B2 (en) Monitoring system
KR101942491B1 (en) Hybrid ai cctv mediation module device consisting of road traffic situation monitoring and real time traffic information analysis
US20180204335A1 (en) System for tracking object, and camera assembly therefor
JP4984575B2 (en) Intruder detection device by image processing
KR102105162B1 (en) A smart overspeeding vehicle oversee apparatus for analyzing vehicle speed, vehicle location and traffic volume using radar, for detecting vehicles that violate the rules, and for storing information on them as videos and images, a smart traffic signal violation vehicle oversee apparatus for the same, and a smart city solution apparatus for the same
KR101496390B1 (en) System for Vehicle Number Detection
KR101187908B1 (en) Integrated surveillance system diplaying a plural of event images and method using the same
KR20200103194A (en) Method and system for abnormal situation monitoring based on video
KR100690279B1 (en) Multipurpose video image detection system
CN112241974A (en) Traffic accident detection method, processing method, system and storage medium
KR100871833B1 (en) Camera apparatus for auto tracking
Prommool et al. Vision-based automatic vehicle counting system using motion estimation with Taylor series approximation
KR102434154B1 (en) Method for tracking multi target in traffic image-monitoring-system
JP3426068B2 (en) TV camera switching monitoring method
CN112906428B (en) Image detection region acquisition method and space use condition judgment method
KR20180068462A (en) Traffic Light Control System and Method
KR101859329B1 (en) System of crackdown on illegal parking
JPH0676195A (en) Abnormal event detector
JP4675217B2 (en) Tracking type monitoring system
JPH0460880A (en) Moving body discrimination and analysis controlling system
Heo et al. Autonomous reckless driving detection using deep learning on embedded GPUs
JP2018192844A (en) Monitoring device, monitoring system, monitoring program, and storage medium
KR20230020184A (en) Video analysis device using fixed camera and moving camera
JP4209826B2 (en) Traffic flow monitoring device
JP4697761B2 (en) Queue detection method and queue detection apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16846558

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017539963

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16846558

Country of ref document: EP

Kind code of ref document: A1