JPS6314281A - Picture fetching method - Google Patents

Picture fetching method

Info

Publication number
JPS6314281A
Authority
JP
Japan
Prior art keywords
image
recognition
view
label
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP61156061A
Other languages
Japanese (ja)
Other versions
JP2511886B2 (en)
Inventor
Masao Takato
高藤 政雄
Tadaaki Mishima
三島 忠明
Morio Kanezaki
金崎 守男
Hideo Oota
太田 秀夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP61156061A priority Critical patent/JP2511886B2/en
Publication of JPS6314281A publication Critical patent/JPS6314281A/en
Application granted granted Critical
Publication of JP2511886B2 publication Critical patent/JP2511886B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Abstract

PURPOSE: To process recognition of a recognition target at high speed by setting the fields of view of image sensors to overlap, so that the entire recognition target always exists in at least one of the plural images captured by the image sensors. CONSTITUTION: The image of a label 3, which is affixed to the upper face of an object 2 on a conveyor line 1 and contains a character string, is captured by cameras 4 and 4' equipped with high-speed shutters, and the label number is recognized by an image processing device 6. A signal based on the recognition result is sent to a controller 7, which controls destination guide bars 8 and 8' on the basis of this signal. The overlap l between the field of view 20 of camera 4 and the field of view 21 of camera 4' is made longer than the diagonal of a label as it appears when labels 30-34 are captured as images. Thus, recognition of the recognition target is processed at high speed.

Description

DETAILED DESCRIPTION OF THE INVENTION [Field of Industrial Application] The present invention relates to an image capturing method, and more particularly to a method of setting the fields of view of image sensors that is suitable for capturing images which facilitate recognition.

[Conventional Technology]

Conventionally, image capture using a plurality of image sensors has been discussed in Automation, Vol. 27, No. 2, p. 79.

[Problems to Be Solved by the Invention]

The above prior art gives no consideration to where the fields of view of the plural cameras (image sensors) are set. As a result, a recognition target, such as a label affixed at an arbitrary position on the top surface of an object moving on a belt conveyor or a character string written there, may be captured straddling the fields of view of the plural cameras, which complicates the recognition method and lengthens the processing time.

An object of the present invention is to make it possible to recognize a recognition target simply and at high speed when a plurality of images must be captured, either by a plurality of image sensors or continuously by a single image sensor.

[Means for Solving the Problems]

The above object is achieved by setting the fields of view of the image sensors to overlap, so that the entire recognition target is present in at least one of the captured images. This applies both to the fields of view of a plurality of cameras (image sensors) and, when a single image sensor captures images continuously, to the fields of view of that sensor's previous and current captures.

[Operation]

By overlapping the fields of view of a plurality of image sensors, or of successive images from a single image sensor, so that the entire recognition target exists in at least one image, an image containing the whole target can be extracted from the captured images and the target recognized from that single image. This eliminates the need either to recognize a target that straddles several images after compositing them, or to merge recognition results obtained separately from each image; processing therefore becomes simpler and erroneous recognition decreases.
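The selection step described here, picking from several overlapping captures the one image that contains the whole target, can be sketched as follows. This is an illustrative reconstruction rather than the patent's actual implementation; the frame size and the bounding-box format are assumptions.

```python
def fully_inside(box, frame_w, frame_h):
    """Return True if a bounding box (x, y, w, h) lies entirely
    within a frame of size frame_w x frame_h."""
    x, y, w, h = box
    return x >= 0 and y >= 0 and x + w <= frame_w and y + h <= frame_h

def pick_complete_view(detections, frame_w, frame_h):
    """detections maps a frame index to the target's bounding box in
    that frame (or None if the target was not seen there).  Return the
    index of the first frame that contains the entire target."""
    for idx, box in detections.items():
        if box is not None and fully_inside(box, frame_w, frame_h):
            return idx
    return None  # cannot happen if the overlap rule is respected

# A label straddles frame 0 (its box is clipped at the right edge)
# but is wholly visible in the overlapping frame 1.
views = {0: (600, 100, 80, 50), 1: (40, 100, 80, 50)}
print(pick_complete_view(views, 640, 480))  # -> 1
```

Because the overlap is sized to exceed the target's diagonal, this search is guaranteed to find at least one fully contained instance, which is exactly why no compositing or result merging is needed.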

[Embodiment]

An embodiment of the present invention will now be described with reference to Fig. 1, which is an overall configuration diagram of a sorting system. A label 3 containing a character string, affixed to the top surface of an object 2 on a conveyor line 1, is captured by cameras 4 and 4' equipped with high-speed shutters, and the label number is recognized by an image processing device 6. A signal based on the recognition result is sent to a control device 7, which, on the basis of that signal, controls the destination guide bars 8 and 8'.

Guided by the destination guide bars 8 and 8', the objects 2, 2', 2'' pass along the guide plates 9 and 9' and are collected by destination. Reference numeral 5 denotes a lighting device. The conveyor line 1 is assumed to be moving in the direction of the arrow.

The method of setting the fields of view of the image sensors, which is the core of the present invention, will now be explained with reference to Figs. 2 to 5. Reference numeral 20 denotes the field of view of camera 4, and 21 the field of view of camera 4'. The object is assumed to approach from the direction of the arrow, and 30 to 34 are labels containing character strings. The overlap l between the fields of view 20 and 21 is set larger than the diagonal of a label as captured in the image; if possible, it is desirable to set it about one to two pixels longer than the diagonal. With this setting, every label falls entirely within field of view 20 or 21. For example, labels 30, 32, and 33 are captured whole in field of view 20, and labels 30, 31, and 34 in field of view 21. When images are taken in from a plurality of cameras (image sensors), one method, shown in Fig. 3, uses a single A/D converter 36 together with a camera switcher 37 to switch cameras and load the images into an image memory 39. Fig. 4 shows examples of camera field-of-view settings for three cameras or for one camera; the arrows in Figs. 4(a) to (e) indicate the direction of object movement. Fig. 4(a) shows the case of Fig. 3(a), in which each camera has its own A/D converter, while Fig. 4(b) shows the case of Fig. 3(b), in which one A/D converter is shared through a camera switcher.

Here, l denotes the overlap of the camera fields of view, and d denotes the shift in field of view caused by the delay in capture timing introduced by the camera switcher 37. The numbers (1), (2), (3) in the figure indicate the order in which the cameras are activated. Figs. 4(a) and (b) show the case where, because of the relationship between the recognition target and the camera resolution, a plurality of cameras is arranged perpendicular to the direction of object movement. Fig. 4(c) shows the case where the camera resolution is sufficient but, because the conveyor line moves at high speed, the fields of view of a plurality of cameras are overlapped in the direction of object movement; the overlap l is as described above. Fig. 4(d) shows, for the case where a single camera continuously captures and processes images, the field of view 50 of the previous capture and the field of view 51 of the current capture. The overlap m between the two fields of view is determined by the relationship between the speed of conveyor line 1 and the recognition time, but must be larger than the diagonal of the label. Fig. 4(e) shows a combination of Figs. 4(a) and (d).
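The sizing rules for the two overlaps can be made concrete with a small numeric sketch. For two side-by-side cameras, the overlap l should exceed the label diagonal by a pixel or two; for the single-camera case of Fig. 4(d), the scene advances by (line speed x cycle time) between captures, so the overlap m is whatever remains of the view width, and it must exceed the label diagonal. The specific numbers below (label size, view width, line speed, cycle time) are illustrative assumptions, not values from the patent.

```python
import math

def min_two_camera_overlap(label_w, label_h, margin_px=2):
    """Overlap l for two adjacent fields of view: the label diagonal
    plus a small margin (the text suggests 1-2 pixels)."""
    return math.hypot(label_w, label_h) + margin_px

def single_camera_overlap(view_width, line_speed, cycle_time):
    """Fig. 4(d): with one camera the scene advances by
    line_speed * cycle_time pixels between captures, so consecutive
    views overlap by the remainder of the view width."""
    return view_width - line_speed * cycle_time

label_diag = math.hypot(120, 80)            # label appears as 120 x 80 px
l = min_two_camera_overlap(120, 80)
m = single_camera_overlap(640, 1500, 0.2)   # 640 px view, 1500 px/s, 0.2 s cycle
print(round(l, 1), m, m > label_diag)       # -> 146.2 340.0 True
```

If the final check fails (m not larger than the label diagonal), the conveyor is too fast for the chosen camera and recognition time, and either the view must widen or the cycle must shorten.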

Fig. 4(f) shows the case in which Figs. 4(a) and (c) are combined.

With the settings described above, there always exists at least one image that contains the entire label, including the character string to be recognized.

From the plural images, the image containing the entire label is extracted; the label is extracted from that image, the character string is then extracted from the label, and number recognition is performed. Since these steps are already well known, their explanation is omitted here.

The above explanation took as its example the case where a character string is written on a label and the label itself can be detected. When the character string, rather than a label, can be detected directly, for instance when no usable label is present and the character string is written directly on the object, the overlap l of the fields of view is determined using the diagonal of the circumscribed rectangle of the character string instead of the diagonal of the label.
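The substitution just described, using the diagonal of the character string's circumscribed rectangle in place of the label diagonal, can be sketched as follows. The individual character boxes are made-up example values.

```python
import math

def circumscribed_diagonal(char_boxes):
    """char_boxes: (x, y, w, h) boxes of the individual characters.
    Returns the diagonal of the smallest axis-aligned rectangle
    enclosing all of them -- the quantity that replaces the label
    diagonal when sizing the field-of-view overlap."""
    xs = [x for x, y, w, h in char_boxes] + [x + w for x, y, w, h in char_boxes]
    ys = [y for x, y, w, h in char_boxes] + [y + h for x, y, w, h in char_boxes]
    return math.hypot(max(xs) - min(xs), max(ys) - min(ys))

# Three characters in a row, each 20 px wide and 30 px tall, 5 px apart:
# the enclosing rectangle is 70 x 30 px.
boxes = [(0, 0, 20, 30), (25, 0, 20, 30), (50, 0, 20, 30)]
print(round(circumscribed_diagonal(boxes), 2))  # -> 76.16
```

The overlap is then set a pixel or two larger than this value, exactly as for the label case.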

This embodiment has described the case where the target is moving, but it goes without saying that the same idea applies when the target is stationary; in particular, Figs. 4(a), (c), and (f) remain applicable, and for a stationary target Fig. 4(b) reduces to Fig. 4(a). Also, although a shutter-equipped camera was used as the image sensor in this embodiment, the same holds for an ordinary ITV camera or for a one-dimensional sensor. With a one-dimensional sensor, however, only the fields of view in the direction perpendicular to the movement of the target or sensor overlap.

Incidentally, when a plurality of cameras is used for image input because of the relationship between the recognition target and the camera resolution, one conceivable method, unlike the present invention, is to set the overlap α of the camera fields of view to zero, as shown in Fig. 5. With this method, however, it is very difficult to adjust the cameras so that the overlap is exactly zero, and maintenance is troublesome. Moreover, even if the overlap could be made zero, it would then be necessary either to composite and process the two images containing a recognition target that straddles the fields of view of the two cameras (which requires an image processing device with an image memory capable of compositing two images and the functions to process them), or to process the two images separately and then merge the recognition results.

[Effects of the Invention]

According to the present invention, when a target must be recognized from images captured by a plurality of image sensors, or from images captured continuously by a single image sensor, the entire recognition target can be captured in at least one image, so recognition processing is simplified and high-speed recognition becomes possible. In setting the camera fields of view, it suffices that the plurality of image sensors covers the range in which the recognition target can appear and that the overlap between fields of view is larger than the maximum length of the recognition target (for example, the diagonal of a label affixed to the object); setting up and maintaining the fields of view is therefore easy. Furthermore, as described above, the large-scale image memory for compositing images from a plurality of image sensors, and the image processing functions for handling them, become unnecessary.

[Brief Description of the Drawings]

Fig. 1 is an overall configuration diagram of a sorting system; Fig. 2 is an explanatory diagram of the method of setting the fields of view of the image sensors; Fig. 3 shows examples of image sensor connections; Fig. 4 shows examples of image sensor field-of-view settings; and Fig. 5 shows a field-of-view setting example outside the present invention. 1: conveyor; 2: object; 3: label; 4, 4': cameras; 5: lighting device; 6: image processing device; 7: control device; 8, 8': destination guide bars.

Claims (1)

[Claims] 1. In an apparatus or system that recognizes a target by processing images captured by image sensors, an image capturing method characterized in that image capture is performed with the fields of view of the image sensors set to overlap, so that the entire recognition target is always present in at least one of a plurality of images captured by a plurality of image sensors, or in at least one of a plurality of images captured continuously by a single image sensor.
JP61156061A 1986-07-04 1986-07-04 Image processing system Expired - Lifetime JP2511886B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP61156061A JP2511886B2 (en) 1986-07-04 1986-07-04 Image processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP61156061A JP2511886B2 (en) 1986-07-04 1986-07-04 Image processing system

Publications (2)

Publication Number Publication Date
JPS6314281A true JPS6314281A (en) 1988-01-21
JP2511886B2 JP2511886B2 (en) 1996-07-03

Family

ID=15619449

Family Applications (1)

Application Number Title Priority Date Filing Date
JP61156061A Expired - Lifetime JP2511886B2 (en) 1986-07-04 1986-07-04 Image processing system

Country Status (1)

Country Link
JP (1) JP2511886B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH047239A (en) * 1990-04-24 1992-01-10 Mita Ind Co Ltd Conveyer
JPH09297813A (en) * 1996-04-30 1997-11-18 Mitsubishi Heavy Ind Ltd Container number recognition device
JP2001312715A (en) * 2000-04-28 2001-11-09 Denso Corp Optical information reader

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS542630A (en) * 1977-06-08 1979-01-10 Fujitsu Ltd Character read system
JPS5440037A (en) * 1977-09-06 1979-03-28 Toshiba Corp Optical character reader
JPS59214986A (en) * 1983-05-20 1984-12-04 Hitachi Ltd Pattern processing system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS542630A (en) * 1977-06-08 1979-01-10 Fujitsu Ltd Character read system
JPS5440037A (en) * 1977-09-06 1979-03-28 Toshiba Corp Optical character reader
JPS59214986A (en) * 1983-05-20 1984-12-04 Hitachi Ltd Pattern processing system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH047239A (en) * 1990-04-24 1992-01-10 Mita Ind Co Ltd Conveyer
JPH09297813A (en) * 1996-04-30 1997-11-18 Mitsubishi Heavy Ind Ltd Container number recognition device
JP2001312715A (en) * 2000-04-28 2001-11-09 Denso Corp Optical information reader

Also Published As

Publication number Publication date
JP2511886B2 (en) 1996-07-03

Similar Documents

Publication Publication Date Title
EP1148316A1 (en) Information reader
JPH0695008B2 (en) Monitoring device
JPS6278979A (en) Picture processor
JPS6314281A (en) Picture fetching method
JPH0146002B2 (en)
JP2001189925A (en) Mobile object monitoring system
US5379236A (en) Moving object tracking method
JPH043803B2 (en)
JPH05173644A (en) Three-dimensional body recording device
JPS63241415A (en) Star sensor
JP2558772B2 (en) Moving object identification device
JPH07325906A (en) Detecting and tracking device for moving body
JPS61286986A (en) Fruit recognizing device
JPH08315112A (en) Image input device
JPS62196775A (en) Shape discriminating method
JPS6280779A (en) Character reader
JPS63167978A (en) Image processor
JPS6162982A (en) Music staff detector
JPH07140222A (en) Method for tracking target
JPS6365347A (en) Singular point detecting method by image processing
JPH03199898A (en) Picture processing device for missile
JPH02187883A (en) Document reader
JPS59158483A (en) Character reading method
JPH1188756A (en) Camera for image processing
JPS62115977A (en) Television camera device