JPS61200410A - Detection of moving body - Google Patents

Detection of moving body

Info

Publication number
JPS61200410A
Authority
JP
Japan
Prior art keywords
image
moving object
moving body
input
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP60038765A
Other languages
Japanese (ja)
Other versions
JPH0658218B2 (en)
Inventor
Hiroyuki Fukuoka
福岡 広之
Tadaaki Mishima
三島 忠明
Masahito Suzuki
優人 鈴木
Morio Kanezaki
金崎 守男
Miyahiko Orita
折田 三弥彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP60038765A priority Critical patent/JPH0658218B2/en
Publication of JPS61200410A publication Critical patent/JPS61200410A/en
Publication of JPH0658218B2 publication Critical patent/JPH0658218B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Abstract

PURPOSE: To enable both daytime and night-time monitoring by photographing the moving object from the rear and capturing the image at the optimum timing after examining the density change within an independent region at the lower part of the screen.
CONSTITUTION: The images processed by the image processing processor 22 fall into two classes. The first is used to detect whether a moving object 100 has entered the imaging field of view; the second is an image of the whole moving object 100. The second image is captured by a timing signal derived from the first image, and the full-screen image is stored in the image memory 24. To determine from the first image whether the moving object 100 has entered, either the image is binarized by the binarization circuit 23 with a preset threshold and the binary frequency distribution of the output image is obtained, or the preprocessed result is fed to the frequency distribution accumulation processor 25, which directly obtains the density frequency distribution, and the decision is made from those values. By photographing from the rear and further processing the density data within the small region, an image at the optimum timing can always be obtained, day or night.

Description

[Detailed Description of the Invention]

[Field of Industrial Application]

The present invention relates to a method of capturing a moving object as a two-dimensional image with an ITV (industrial television) camera, applying various kinds of image processing to that image, and detecting the number plate and other features of the moving object.

[Background of the Invention]

In general, there is a growing need to recognize the number plate, shape, and so on of moving objects such as vehicles with an ITV camera and an image processing device rather than by human operators.

Conventionally, as shown in Fig. 4, the optimum capture timing was determined with special sensors such as an infrared sensor 40, a magnetic sensor 41, and an ultrasonic sensor 42. Because these sensors had the economic and practical drawback of requiring installation at the site, the applicant devised a method, described in Japanese Patent Application No. 59-98428, that makes the decision and captures the image from image data alone.

That method, however, photographs the moving object from the front or from above. When 24-hour monitoring is required, the headlights cause halation at night, so a good image cannot be captured and the optimum capture timing may be missed.

[Purpose of the Invention]

The purpose of the present invention is to provide a moving-object detection method that, by processing images taken of the rear of a moving object such as a vehicle, overcomes the weakness of the conventional front-photographing approach by making night-time image capture possible, and that enables real-time, high-speed recognition both day and night.

[Summary of the Invention]

In the present invention, the rear of the moving object is photographed so that night-time monitoring can be handled. To detect a moving object entering from the bottom of the imaging field of view, the density change within an independent region at the lower part of the screen is examined, and the image is captured at the optimum timing; this makes monitoring both day and night feasible.

[Embodiments of the Invention]

Embodiments of the present invention are described below with reference to the drawings. Fig. 1 shows the system configuration of the present invention, and Fig. 2 shows the hardware configuration according to the present invention.

As shown in Fig. 1, the ITV camera 10 is installed at a height H, facing perpendicular to the travel path; 100 denotes the moving object and 200 the image processing device. If the viewing angle of the ITV camera 10 is 2φ, and the line of sight to the center point O of that field of view on the road surface makes an angle θ with the road surface, the following relations hold.

L0 = H / sin θ
L1 = H / tan θ
L2 = H / tan(θ + φ)
L3 = H / tan(θ - φ)
L4 = L3 - L2
L5 = L3 - L1

As a concrete example, take the angle θ = 30° between the ITV camera 10 and the road surface, a camera height H = 5 m, and D = 20 m. Since the speed of an actual vehicle is of course not constant, a travel speed of V = 70 km/h is assumed here. Then L0 = 10.0 m, L1 = 8.7 m, L2 = 5.7 m, L3 = 14.8 m, L4 = 9.0 m, L5 = 6.0 m, and φ = 11.3°. The field of view in the vertical direction of the screen therefore corresponds to L4 m of road, and a vehicle passing through it requires a time of T = L4 / V ≈ 128 × 10⁻³ s. Furthermore, when photographing with an ITV camera (non-interlaced mode), about 16 ms is needed to obtain one image. In other words, while a vehicle passes at the above speed, the picture advances by about 1/8 of the screen between one image and the next.
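The geometry can be checked numerically. The following is a minimal sketch, not part of the patent, that reproduces the worked example from H, θ and φ; the function name view_distances and the exact definitions of L4 and L5 as differences of the ground distances are my own assumptions, chosen to be consistent with the numbers quoted above.

```python
import math

def view_distances(H, theta_deg, phi_deg):
    """Road-surface distances for a camera at height H, looking down at an
    angle theta to the road, with half view angle phi (geometry of Fig. 1)."""
    th = math.radians(theta_deg)
    ph = math.radians(phi_deg)
    L0 = H / math.sin(th)        # slant distance to the centre point O
    L1 = H / math.tan(th)        # ground distance to O
    L2 = H / math.tan(th + ph)   # ground distance to the near edge of the view
    L3 = H / math.tan(th - ph)   # ground distance to the far edge of the view
    L4 = L3 - L2                 # road length covered by the screen (vertical direction)
    L5 = L3 - L1                 # road length from O to the far edge
    return L0, L1, L2, L3, L4, L5

# Worked example from the description: theta = 30 deg, H = 5 m, phi = 11.3 deg
print([round(v, 1) for v in view_distances(5.0, 30.0, 11.3)])
# -> [10.0, 8.7, 5.7, 14.8, 9.1, 6.1], matching L0..L5 to within rounding
```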

This means that, for example, when the vehicle height is h m, eight images are obtained before the rear end of the vehicle has passed through the screen.

Therefore, by capturing the images one by one, binarizing the gray-level data from the ITV camera, and taking the frequency distribution of "1" pixels in the binarized image, and by performing this series of operations at high speed, it becomes possible to determine online whether a moving object has entered the imaging field of view.

As one configuration for realizing such processing, an image processing apparatus such as that shown in Fig. 2 can be considered.

As shown in the figure, the A/D converter 20 takes in the video signal read out from the ITV camera 10 and converts it to digital form. The selector 21 normally selects the output of the A/D converter 20.

The image processing processor 22 takes in the output of the selector 21 and performs preprocessing.

This preprocessing consists of noise removal and spatial differentiation. The images handled by the image processing processor 22 fall into two classes. The first is an image used to detect whether the moving object 100 has entered the imaging field of view. The second is an image of the whole moving object, obtained at the moment the entire moving object 100 is within the field of view. This second image is captured by a timing signal derived from the first image, and the full-screen image is stored in the image memory 24.

Input determination for the moving object 100 using the first image can be performed in several ways. In one, after preprocessing such as noise removal in the image processing processor 22, the image is binarized by the binarization circuit 23 with a preset threshold, the binary frequency distribution of the output image (the number of "0" and "1" pixels) is obtained, and the decision is made from those values. In another, the preprocessed result is fed directly to the frequency distribution accumulation processor 25, which obtains the density frequency distribution; the average density level, density variance, and so on are then computed, and the same kind of decision is made from those values.

Next, a concrete example of the processing is described with reference to Figs. 3 and 4. As shown in Fig. 3, starting from a state in which the moving object 100 is not present in the screen, when it enters from below at a speed of N km/h, the average density level, the density variance, the area after binarization, and so on change, according to its color and brightness, from their values when nothing has entered. In Fig. 3, 40 is an infrared sensor, 41 a magnetic sensor, and 42 an ultrasonic sensor. In terms of Fig. 4, the density data of the region 50, an independent region at the lower part of the screen shown in state (a), changes as the moving object enters. This change is used to detect that the moving object has entered the imaging field of view (state (b)); the state then changes through (c) to (d), and the moment at which density data equal to that of state (a) is again detected in the same region 50 is determined to be the optimum image capture timing. Here, 101 denotes the front part of the moving object, 102 its middle part, and 103 its rear part.
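To make the decision quantities concrete, the sketch below is a Python illustration of my own, not the patent's hardware: for a detection region such as region 50 it computes the binary pixel counts after thresholding together with the density mean and variance, and flags entry when these deviate from the empty-road reference. The names region_features, has_entered and the tolerance values are assumptions.

```python
import numpy as np

def region_features(frame, region, threshold):
    """Statistics of the independent detection region (e.g. region 50).

    frame     : 2-D uint8 array of gray-level (density) data from the camera
    region    : (row0, row1, col0, col1) bounds of the region in the frame
    threshold : preset binarization threshold
    """
    r0, r1, c0, c1 = region
    roi = frame[r0:r1, c0:c1].astype(np.float64)
    binary = roi >= threshold
    ones = int(binary.sum())             # number of "1" pixels
    zeros = int(binary.size - ones)      # number of "0" pixels
    return ones, zeros, float(roi.mean()), float(roi.var())

def has_entered(features, reference, tol=(200, 10.0)):
    """True when the region statistics deviate from the empty-road reference,
    i.e. the front of a moving object has reached the region."""
    ones, _, mean_level, _ = features
    ref_ones, _, ref_mean, _ = reference
    return abs(ones - ref_ones) > tol[0] or abs(mean_level - ref_mean) > tol[1]
```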

By using this method, detection of vehicles and other moving objects at night, which is difficult with front photographing, can be handled adequately, and the moving object can be detected even more reliably.

Next, the processing flowchart is explained with reference to Fig. 5.

First, an image is input by the image input subroutine. When the density frequency distribution of that image rises above a threshold, this is judged to mean that a moving object has entered; the moment the density frequency distribution subsequently falls back below the threshold, the image is judged to be the optimum one, is captured, and is then passed on to the subsequent processing.
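Expressed as a small state machine, the flowchart logic might look like the following sketch; grab_frame stands in for the image input subroutine, region_features and has_entered are the illustrative functions from the earlier sketch, and all names and thresholds are assumptions of this illustration rather than the patent's implementation.

```python
def wait_and_capture(grab_frame, region, threshold, ref, tol=(200, 10.0)):
    """Capture the frame at the optimum timing.

    grab_frame : callable returning the next gray-level frame (about 16 ms apart)
    ref        : region_features(...) of the empty road, used as the reference
    """
    entered = False
    while True:
        frame = grab_frame()
        feats = region_features(frame, region, threshold)
        if not entered:
            # Region statistics moved away from the empty-road reference:
            # the front of the moving object has entered the field of view.
            entered = has_entered(feats, ref, tol)
        elif not has_entered(feats, ref, tol):
            # Region looks empty again: the rear of the object has entered,
            # so this frame contains the whole object -> optimum capture.
            return frame
```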

Next, another embodiment is described. To speed up the processing, the optimum image capture timing must be known as early as possible. However, the raster scan of current TV cameras, as shown in Fig. 6, proceeds line by line from a start point 60 at the upper left of the screen to an end point 61 at the lower right. To obtain the information in a small region at the bottom of the screen, one must therefore wait about 16 ms after the start of scanning (non-interlaced mode). Consequently, with hardware such as that shown in Fig. 9, image capture takes 1/60 s and the input determination of the moving object 100 from that image takes a further time t0; that is, the entry of a moving object can be judged only once every two frames, and a fast-moving object may not be followed. If, however, the camera body is mounted upside down, for example, so that the raster scan proceeds from the side from which the moving object enters, as in Fig. 7, and only the necessary part is processed by applying a window, as in Fig. 8, then as shown in Fig. 10 the hardware needs only a time t1 for image capture and a time t2 for the input determination. A judgment can thus be made once every 1/60 s, that is, in real time at the video rate.
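The patent achieves this early decision in hardware by mounting the camera upside down; a rough software analogue, offered purely as an assumed illustration, is to process only the entry-side window of each frame and to flip the stored image back before display or character recognition.

```python
import numpy as np

def entry_side_window(inverted_frame, window_rows=32):
    """With the camera inverted, the entry side is scanned first; the decision
    can be made from just the first rows instead of waiting for a full frame."""
    return inverted_frame[:window_rows, :]

def restore_orientation(inverted_frame):
    """Flip the stored upside-down image so it displays normally on a monitor."""
    return np.flipud(inverted_frame)
```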

In this case the image is upside down, but for recognition of characters, numerals, and the like, this can be handled by modifying the recognition procedure in advance. Also, by storing the image data in memory in such a way that it is displayed correctly when shown on a monitor or the like, the data can always be processed normally.

[Effects of the Invention]

According to the present invention, image capture at night, which was considered difficult because of the headlights of vehicles and other moving objects, becomes possible by photographing from the rear and additionally processing the density data of a small region, so that an image can always be obtained at the optimum timing, day or night. As a result, the target region can be extracted in a shorter time than with the conventional front-photographing approach, shortening the overall processing time and raising the recognition rate for the number and shape of vehicles and other moving objects.

[Brief Description of the Drawings]

Fig. 1 is a diagram explaining the system configuration of the present invention; Fig. 2 is a diagram showing the hardware configuration according to the present invention; Figs. 3 and 4 are diagrams showing the state of a moving object entering; Fig. 5 is a flowchart showing the processing of the present invention; Figs. 6, 7, and 8 show examples of raster scanning for explaining another embodiment of the present invention; and Figs. 9 and 10 are hardware time charts for explaining that other embodiment. 10: ITV camera; 100: moving object; 200: image processing device; 50: specific target region for processing.

Claims (1)

[Claims]

1. A method of detecting a moving object in an image processing device that determines, from image data alone, that a moving object has entered the imaging field of view, characterized in that an independent region is provided in the screen on the side from which the moving object enters; the moment the image data of that region changes is taken to mean that the front part of the moving object has entered the imaging field of view; and the moment the image data of that region returns to the state in which no moving object is present is taken to mean that the rear part of the moving object has entered the imaging field of view, at which moment an image is captured as the optimum timing.

2. A method of detecting a moving object according to claim 1, characterized in that the images are captured from a TV camera, and the TV camera is inverted so that the raster scan proceeds from the side from which the moving object enters, whereby the optimum image capture timing can be detected early.
JP60038765A 1985-03-01 1985-03-01 Rear detection device for moving body Expired - Lifetime JPH0658218B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP60038765A JPH0658218B2 (en) 1985-03-01 1985-03-01 Rear detection device for moving body

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP60038765A JPH0658218B2 (en) 1985-03-01 1985-03-01 Rear detection device for moving body

Publications (2)

Publication Number Publication Date
JPS61200410A true JPS61200410A (en) 1986-09-05
JPH0658218B2 JPH0658218B2 (en) 1994-08-03

Family

ID=12534378

Family Applications (1)

Application Number Title Priority Date Filing Date
JP60038765A Expired - Lifetime JPH0658218B2 (en) 1985-03-01 1985-03-01 Rear detection device for moving body

Country Status (1)

Country Link
JP (1) JPH0658218B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05205054A (en) * 1992-01-29 1993-08-13 Kyosan Electric Mfg Co Ltd Processing device and processing method for fetching image
JP2020148674A (en) * 2019-03-14 2020-09-17 Kddi株式会社 Vehicle detector, vehicle detection method, and program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5287318A (en) * 1976-01-14 1977-07-21 Matsushita Electric Ind Co Ltd Monitoring equipment
JPS5466019A (en) * 1977-11-07 1979-05-28 Fuji Electric Co Ltd Detection system for moving object
JPS55112585A (en) * 1979-02-22 1980-08-30 Konishiroku Photo Ind Co Ltd Photo detection unit of moving object


Also Published As

Publication number Publication date
JPH0658218B2 (en) 1994-08-03

Similar Documents

Publication Publication Date Title
JP4984915B2 (en) Imaging apparatus, imaging system, and imaging method
CN100574376C (en) Camera head, camera system and image capture method
KR101999993B1 (en) Automatic traffic enforcement system using radar and camera
US20100283845A1 (en) Vehicle periphery monitoring device, vehicle, vehicle periphery monitoring program, and vehicle periphery monitoring method
US20180114078A1 (en) Vehicle detection device, vehicle detection system, and vehicle detection method
CN104380341A (en) Object detection device for area around vehicle
JP4798576B2 (en) Attachment detection device
JP2004312402A (en) System and apparatus for road monitoring
JPH1144533A (en) Preceding vehicle detector
JP2000306097A (en) Road area decision device
JP2000011157A (en) Image pickup device
JPH08172620A (en) Image input means for vehicle
JPS61200410A (en) Detection of moving body
JP2001043383A (en) Image monitoring system
JPH05189694A (en) Vehicle detector
JPH09322153A (en) Automatic monitor
JPH0757200A (en) Method and device for recognizing travel course
JPH06274788A (en) Number plate reading device
JPH0520593A (en) Travelling lane recognizing device and precedence automobile recognizing device
JP2000081322A (en) Slip angle measuring method and apparatus
CN111914833B (en) Moving vehicle license plate recognition system and method
JPH09142208A (en) Monitor device for vehicular peripheral circumference
JPH0738226B2 (en) Separation method and apparatus used for moving objects
JP2000268173A (en) Method for processing object recognition image
JPH0752480B2 (en) Vehicle detection device