JPS62264390A - Visual sense recognizing device for supervisory robot - Google Patents

Visual sense recognizing device for supervisory robot

Info

Publication number
JPS62264390A
JPS62264390A JP61109174A JP10917486A
Authority
JP
Japan
Prior art keywords
line drawing
point
observation target
image
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP61109174A
Other languages
Japanese (ja)
Inventor
Atsushi Kuno
敦司 久野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Omron Corp
Original Assignee
Omron Tateisi Electronics Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Omron Tateisi Electronics Co filed Critical Omron Tateisi Electronics Co
Priority to JP61109174A priority Critical patent/JPS62264390A/en
Publication of JPS62264390A publication Critical patent/JPS62264390A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Burglar Alarm Systems (AREA)
  • Emergency Alarm Devices (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To judge clearly whether a human being has entered a prohibited area, even in a high-temperature environment, by adopting an image recognition scheme based on comparing a line drawing of the observation target with a model line drawing. CONSTITUTION: Prior to monitoring, a grayscale image of the observation target in its normal state is obtained by an imaging unit, and a line drawing of that image is created by a line drawing creation unit 2 and filed as a model line drawing. During monitoring, a grayscale image of the observation target is obtained in the same way and converted into a line drawing by the line drawing creation unit 2; this is compared with the model line drawing, and the line drawing portions not contained in the model line drawing are extracted. Next, a stereoscopic viewing unit 1 views stereoscopically each point on the observation target belonging to the line drawing portions extracted by the comparison unit 3 and measures its three-dimensional coordinates, and an area discrimination unit 4 determines whether those coordinates fall within the data area specifying the prohibited area.

Description

[Detailed Description of the Invention] <Industrial Application Field> This invention relates to surveillance robots that patrol inside a building to detect intruders and suspicious objects, such as security robots, and in particular to a visual recognition device used as the vision of this type of surveillance robot.

<Prior Art> As a conventional visual recognition device of this type, a structure has been proposed in which sensors such as ultrasonic sensors, microwave sensors, proximity switches, and infrared sensors are arranged as appropriate around the head or torso (Nikkei Mechanical, September 1985 issue). In that device, the ultrasonic sensors, microwave sensors, and proximity switches serve to detect suspicious objects, while the infrared sensors serve to detect intruders.

<Problems to Be Solved by the Invention> Because the above infrared sensor distinguishes a human being from the objects making up the surrounding environment by temperature difference, its use becomes difficult in environments where the ambient temperature (air temperature) is high. Moreover, even if a person can be detected, it is impossible to determine whether that person is an intruder, in other words, whether the person's position lies inside the prohibited area.

This invention abandons the conventional infrared-sensor approach and instead adopts a new image recognition scheme based on comparing a line drawing of the observation target with a model line drawing. Its object is to provide a visual recognition device for a surveillance robot that can be used even in high-temperature environments and that can clearly determine whether a person has entered the prohibited area.

<Means for Solving the Problems> The structure of this invention that achieves the above object is explained with reference to FIGS. 1 and 2, which correspond to one embodiment. In this invention, a visual recognition device for a surveillance robot is composed of: an imaging unit, such as a television camera 31, that images the observation target and generates a grayscale image of it; a line drawing creation unit 2 that processes the grayscale image into a line drawing of the observation target; a comparison unit 3 that compares the line drawing of the observation target with a model line drawing and extracts the line drawing portions not contained in the model line drawing; a stereoscopic viewing unit 1 that views stereoscopically each point of the observation target belonging to the line drawing portions extracted by the comparison unit and measures its three-dimensional coordinates; and an area discrimination unit 4 that determines whether the measured three-dimensional coordinates of each point fall within a predetermined prohibited area.
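
Read as software, the five units above form a straightforward pipeline. The sketch below is a toy illustration under assumed interfaces; every name in it (line_drawing, monitor, measure_3d, in_prohibited) is a hypothetical stand-in for the hardware units of FIG. 1, not the patent's implementation.

```python
# Toy sketch of the claimed pipeline (hypothetical names and interfaces).

def line_drawing(image, thresh=50):
    """Crude line-drawing stand-in: the set of pixels where the grayscale
    image changes sharply to the right or downward."""
    pts = set()
    for y in range(len(image) - 1):
        for x in range(len(image[0]) - 1):
            if (abs(image[y][x] - image[y][x + 1]) > thresh or
                    abs(image[y][x] - image[y + 1][x]) > thresh):
                pts.add((x, y))
    return pts

def monitor(observed_image, model_pts, measure_3d, in_prohibited):
    """Comparison unit 3 reduced to a set difference; units 1 and 4 are
    passed in as callables because they are hardware in the patent."""
    new_pts = line_drawing(observed_image) - model_pts   # new feature points
    return any(in_prohibited(measure_3d(q)) for q in new_pts)
```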

<Function> First, prior to monitoring, the imaging unit obtains a grayscale image of the observation target in its normal state (specifically, the scenery of the surveillance area including the prohibited area); a line drawing of this grayscale image is created by the line drawing creation unit 2 and filed in advance as the model line drawing.

During monitoring, a grayscale image of the observation target is obtained by the imaging unit in the same way and converted into a line drawing by the line drawing creation unit 2. This line drawing (called the "observation line drawing") is then compared with the model line drawing in the comparison unit 3, and the line drawing portions not contained in the model line drawing are extracted. Next, the stereoscopic viewing unit 1 views stereoscopically each point on the observation target belonging to the line drawing portions extracted by the comparison unit 3 and measures its three-dimensional coordinates, and the area discrimination unit 4 determines whether those three-dimensional coordinates fall within the predetermined prohibited area, that is, the data area defining the prohibited area.
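
In practice the two line drawings never register pixel-perfectly, so a direct set difference would flag noise as intrusions. The sketch below adds a small matching tolerance; this refinement is an assumption, since the patent does not specify the matching rule.

```python
import numpy as np

def new_feature_points(observed, model, tol=1):
    """observed, model: 2-D uint8 arrays (1 = line pixel).
    Returns (row, col) image points of the observation line drawing that
    have no model line pixel within `tol` pixels."""
    dilated = model.astype(bool)
    for dy in range(-tol, tol + 1):
        for dx in range(-tol, tol + 1):
            # np.roll wraps at the image borders; acceptable for a sketch
            dilated |= np.roll(np.roll(model.astype(bool), dy, axis=0),
                               dx, axis=1)
    return np.argwhere((observed == 1) & ~dilated)
```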

According to this scheme, therefore, a person can be detected even in a high-temperature environment, and it can be clearly determined whether a person has entered the prohibited area.

<Embodiment> FIG. 3 shows a surveillance robot according to one embodiment of this invention.

The illustrated surveillance robot is a mobile security robot that patrols inside a building to detect intruders and suspicious objects, but this invention is not limited to that type; it can also be applied to fixed surveillance robots installed facing a specific surveillance area.

The robot of FIG. 3 has a body 12 provided with a traveling mechanism 11; a torso 13 stands vertically on the body 12, and a head 14 is mounted on the torso 13 so as to be rotatable. A speaker 15 is provided on the torso 13, while a communication antenna 16 and a pair of visual units 17 and 18 are provided on the head 14; the first and second observation systems 34 and 36, described later, are positioned inside the visual units 17 and 18, respectively.

Marks 19 for measuring the robot's current position are provided on the floor over which the robot travels; the robot moves along a predetermined route while reading these marks 19 with a mark reading device (not shown) mounted on the underside of the body 12.

FIG. 1 shows an example of the overall configuration of the visual recognition device incorporated in the above robot.

In the figure, the stereoscopic viewing unit 1 images the observation target (specifically, the scenery of the surveillance area including the prohibited area) to generate its grayscale image, and views stereoscopically each point on the observation target (hereinafter called an "object point") to measure its three-dimensional coordinates.

The line drawing creation unit 2 processes the grayscale image into a line drawing of the observation target. Such line drawings include those obtained from the observation target in its normal state prior to monitoring (called "model line drawings") and those obtained from the observation target during monitoring (called "observation line drawings"); the model line drawings, one for each observation target, are stored in the line drawing file 22.

FIG. 5(1) shows an example of a model line drawing, and FIG. 5(2) an example of an observation line drawing. In the illustrated model line drawing, only the wall portion a bounding the prohibited area appears, whereas in the observation line drawing a car b appears outside the prohibited area and an intruder C appears inside it. In FIG. 5(2), the points composing the car b and the intruder C are new points not contained in the model line drawing (called "new feature points"), and it is the three-dimensional coordinates of the object points corresponding to these new feature points that are measured by the stereoscopic viewing unit 1.

Next, the comparison unit 3 compares the observation line drawing with the model line drawing and extracts the line drawing portions not contained in the model line drawing (in the example of FIG. 5, the portions depicting the car b and the intruder C).

The model management unit 21 is the part that stores the model line drawings in, and retrieves them from, the line drawing file 22.

The area discrimination unit 4 determines whether each object point lies within the prohibited area by comparing the three-dimensional coordinates of the object points measured by the stereoscopic viewing unit 1 with the judgment criterion data that define the prohibited area. The judgment criterion data are set in advance in the judgment criterion setting unit 23.
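
A sketch of this magnitude comparison, assuming the criterion data take the form of axis-aligned coordinate bounds (the patent says only that the coordinates are compared in magnitude with the criterion data, so the box representation is an assumption):

```python
def in_prohibited_area(point, bounds):
    """point: (x, y, z) in meters; bounds: ((xmin, xmax), (ymin, ymax),
    (zmin, zmax)) describing the prohibited zone as a box."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(point, bounds))

# Example: a 2 m-deep strip in front of the wall, full room height.
bounds = ((0.0, 5.0), (0.0, 2.0), (0.0, 3.0))
print(in_prohibited_area((1.2, 0.5, 1.7), bounds))   # True -> raise the alarm
```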

The monitoring data control unit 24 receives information on the robot's current position and orientation, retrieves the model line drawing corresponding to that information via the model management unit 21, and causes the judgment criterion setting unit 23 to output the judgment criterion data corresponding to the input information to the area discrimination unit 4.

FIG. 2 shows a concrete configuration example of the stereoscopic viewing unit 1.

The illustrated example is composed of a first observation system 34 comprising a television camera 31, a half mirror 32, and a laser scanner 33; a second observation system 36 consisting of a PSD camera 35; and a triangulation control unit 37 electrically connected to each of these observation systems 34 and 36.

The television camera 31 includes a lens 38 and an area sensor 39 composed of a two-dimensional CCD or the like; a grayscale image of the observation target 40 is generated on the imaging surface of this two-dimensional area sensor 39. The half mirror 32 is disposed at an intermediate position between the television camera 31 and the observation target 40; it transmits the light coming from the observation target 40 and reflects the laser beam 41 output by the laser scanner 33 toward the observation target 40. The laser scanner 33 scans the laser beam 41 and projects it onto a specified point on the half mirror 32; as described later, the reflected light 44 is thereby directed onto the corresponding object point on the observation target 40, generating a light point P there.

The PSD camera 35 includes a lens 42 and a semiconductor position detector 43 (Position Sensitive Device; hereinafter simply "PSD"); the light point P is imaged on the detection surface of the PSD 43, generating an image point W. The PSD 43 detects the coordinates of the image point W on its detection surface, converts them into an analog quantity such as a voltage, and outputs this to the triangulation control unit 37.
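
For background (the following relation is the standard readout of a lateral-effect PSD and is not stated in the patent): a one-dimensional PSD of active length L_d delivers two electrode currents I_1 and I_2, and the image-point coordinate, measured from the center of the detector, is recovered as

```latex
x = \frac{L_d}{2}\cdot\frac{I_2 - I_1}{I_1 + I_2}
```

A two-dimensional PSD such as detector 43 applies the same relation independently on two axes, which is how it can output the coordinates of the image point W directly as analog quantities.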

The triangulation control unit 37 is a device including a microcomputer. Besides executing various controls and calculations, such as controlling the operation of the laser scanner 33 and calculating the three-dimensional coordinates of object points by triangulation, it sequentially specifies the image points Q on the area sensor 39 based on the new feature point data.

In the first observation system 34, when an image point Q is specified by the triangulation control unit 37, the scanning amount of the laser scanner 33 is controlled so that the laser beam 41 strikes the point on the half mirror 32 corresponding to that image point Q.

The television camera 31, the half mirror 32, and the laser scanner 33 are arranged in a predetermined positional relationship, so that the light 44 reflected at that point is directed onto the object point corresponding to the image point Q and the light point P is generated.

That is, letting O be the center point of the half mirror 32, L the center point of the lens 38, and S the scanning center point of the laser scanner 33, the positional relationship of the television camera 31, the half mirror 32, and the laser scanner 33 is determined so that the line segments OL and OS are mutually orthogonal and of equal length. Further, when x and y coordinate axes are set along the segments OS and OL, the half mirror 32 is held in a posture inclined at 45 degrees to these axes. With this arrangement, if the laser beam 41 is aimed at the point K on the half mirror 32 corresponding to an image point Q on the area sensor 39, that is, the point where the line of sight QL crosses the half mirror 32, the optical path of the light 44 reflected at K coincides with the line of sight QL; the beam therefore reaches the object point on the observation target 40 corresponding to the image point Q, and the light point P is generated. Accordingly, by tabulating which point on the half mirror 32 corresponds to each point on the area sensor 39 and storing this table beforehand in the memory of the triangulation control unit 37, the point K on the half mirror 32 corresponding to an image point Q can be selected automatically whenever that image point Q is specified. If the scanning direction of the laser scanner 33 is then automatically controlled by the triangulation control unit 37 so that the laser beam 41 strikes the selected point K, reflected light 44 whose optical path coincides with the line of sight QL can easily be generated.
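
A table of this kind could be precomputed as in the sketch below. The data structure and numeric ranges are hypothetical; the mirror-point formula follows from the FIG. 4 geometry discussed next (lens center L = (0, l0, 0), sensor plane y = q0, mirror plane y = x).

```python
# Hypothetical software rendering of the sensor-to-mirror table that the
# text says is stored in the memory of the triangulation control unit 37.

def mirror_point(qx, qz, l0=1.0, q0=2.5):
    # Intersect the sight line through Q = (qx, q0, qz) and L = (0, l0, 0)
    # with the mirror plane y = x.
    t = l0 / (q0 - qx - l0)
    kx = -t * qx
    return (kx, kx, -t * qz)          # K = (Kx, Ky, Kz) with Kx == Ky

# One entry per sensor pixel (toy pixel grid), computed once.
table = {(qx, qz): mirror_point(qx, qz)
         for qx in range(-5, 2) for qz in range(-5, 6)}

def aim_scanner(q):
    """q: image point (qx, qz). Returns the mirror point K the laser must hit."""
    return table[q]
```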

FIG. 4 is an explanatory diagram proving that, with the above positional relationship, the optical path of the light 44 reflected at the point K on the half mirror 32 coincides with the line of sight QL.

In the figure, the center point L of the lens 38 of the television camera 31 lies at a distance l0 from the center point O of the half mirror 32 (taken as the coordinate origin) along the y coordinate axis, the imaging surface of the area sensor 39 lies at a distance q0 in the same direction, and the scanning center point S of the laser scanner 33 lies at a distance l0 along the x coordinate axis. The equation of the imaging surface of the area sensor 39 is then y = q0, and the equation of the half mirror 32 is y = x.

In addition to the x and y coordinate axes, assume a z coordinate axis through the center point O of the half mirror 32. The coordinates of the center point L of the lens 38 are then (0, l0, 0) and those of the scanning center point S of the laser scanner 33 are (l0, 0, 0). Further, taking the coordinates of the image point Q on the area sensor 39 as (qx, q0, qz), the equation of the line of sight QL is obtained as given below.

Denoting the spatial coordinates of the intersection K of the line of sight QL with the half mirror 32 by (Kx, Ky, Kz), the coordinates of K likewise follow, as given below.
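
The equations themselves are missing from this text. Reconstructed from the stated geometry (the line of sight passes through Q = (qx, q0, qz) and the lens center L = (0, l0, 0), and the half mirror lies in the plane y = x), they would read approximately:

```latex
% Line of sight QL, with parameter t measured from L toward the scene:
(x,\, y,\, z) = (0,\ \ell_0,\ 0) + t\,(-q_x,\ \ell_0 - q_0,\ -q_z)

% Its intersection K with the mirror plane y = x:
K_x = K_y = \frac{-\ell_0\, q_x}{q_0 - q_x - \ell_0}, \qquad
K_z = \frac{-\ell_0\, q_z}{q_0 - q_x - \ell_0}
```

These are the mirror-point coordinates that the lookup table described above would hold for each image point Q.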

Next, let LK be the vector along the line of sight, KS the vector along the optical path of the laser beam 41, and i and j the unit vectors along the x and y coordinate axes. From the above equations, the following relation is derived:

LK + KS = l0(i - j)

In this relation, (i - j) on the right side corresponds to the normal vector n of the half mirror 32, and it follows that LK and KS lie in a common plane with the normal vector n of the half mirror 32 and are symmetric about that normal vector. Therefore, the laser beam 41 directed from the scanning center point S of the laser scanner 33 onto the half mirror 32 is reflected by the half mirror 32 along the line of sight toward the object point.
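
A quick numerical check of this symmetry claim, as a sketch under the FIG. 4 geometry (not part of the patent):

```python
import numpy as np

# With L = (0, l0, 0), S = (l0, 0, 0) and K anywhere on the mirror plane
# y = x, the vectors LK and KS sum to l0*(i - j) and have equal length,
# i.e. the mirror reflects the beam from S exactly along the sight line.

l0 = 1.0
L = np.array([0.0, l0, 0.0])
S = np.array([l0, 0.0, 0.0])
K = np.array([0.3, 0.3, -0.2])   # any point with Kx == Ky lies on the mirror

LK, KS = K - L, S - K
print(np.allclose(LK + KS, [l0, -l0, 0.0]))                 # True
print(np.isclose(np.linalg.norm(LK), np.linalg.norm(KS)))   # True
```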

Next, the operation of the visual recognition device of the above configuration example will be explained.

First, prior to monitoring, the robot is guided in turn to the predetermined monitoring positions; the observation target at each monitoring position is imaged by the television camera 31 of the stereoscopic viewing unit 1, and its grayscale image is generated on the imaging surface of the area sensor 39. This grayscale image is sent to the line drawing creation unit 2 and converted into a line drawing, which the model management unit 21 stores in the line drawing file 22 as a model line drawing.

During monitoring, the robot patrols along a predetermined route. When it reaches a predetermined monitoring position, the robot is stopped and held in a predetermined posture and orientation, after which the observation target at that monitoring position is imaged by the television camera 31 and a grayscale image is obtained as before. This grayscale image is converted into a line drawing by the line drawing creation unit 2, and the resulting observation line drawing is sent to the comparison unit 3.

Meanwhile, the monitoring data control unit 24 controls the operation of the model management unit 21 so as to retrieve the model line drawing corresponding to the current monitoring position and send it to the comparison unit 3. The comparison unit 3 then compares the observation line drawing with the model line drawing, extracts the line drawing portions not contained in the model line drawing together with the points composing those portions (the "new feature points"), and outputs them to the stereoscopic viewing unit 1.

The stereoscopic viewing unit 1 specifies the image point Q on the area sensor 39 corresponding to each new feature point, obtains the corresponding point K on the half mirror 32 by consulting the table stored in memory in advance, and controls the scanning amount of the laser scanner 33 so that the laser beam 41 strikes that point.

When the laser scanner 33 is then operated to output the laser beam 41, the beam strikes the point K on the half mirror 32 and is reflected toward the observation target 40; the reflected light 44 travels along the line of sight QL to the object point corresponding to the image point Q and generates the light point P there. This light point P is imaged on the position detector 43 of the PSD camera 35, and the coordinates of the resulting image point W are output to the triangulation control unit 37 as an analog quantity.

Next, since the positions of the center points L and R of the lenses 38 and 42 of the television camera 31 and the PSD camera 35 are known, the triangulation control unit 37 derives the equations of the straight lines QL and WR, and then calculates the three-dimensional coordinates of the intersection of these two lines (the light point P) based on the principle of triangulation.
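
A sketch of this computation. With ideal calibration the two lines intersect exactly at P; the midpoint-of-closest-approach form below is a common numerical refinement and an assumption here, as are all names.

```python
import numpy as np

def triangulate(L, q_dir, R, w_dir):
    """L, R: lens center points; q_dir, w_dir: direction vectors of the
    lines QL and WR. Returns the midpoint of the shortest segment joining
    the two lines (their intersection, if they meet exactly)."""
    a, b, c = q_dir @ q_dir, q_dir @ w_dir, w_dir @ w_dir
    d, e = q_dir @ (L - R), w_dir @ (L - R)
    denom = a * c - b * b                  # zero only for parallel lines
    s = (b * e - c * d) / denom            # parameter along QL
    t = (a * e - b * d) / denom            # parameter along WR
    return 0.5 * ((L + s * q_dir) + (R + t * w_dir))

P = triangulate(np.array([0.0, 1.0, 0.0]), np.array([0.0, -1.0, 0.0]),
                np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]))
print(P)   # [0. 0. 0.]: both lines pass through the origin in this toy case
```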

These three-dimensional coordinates are sent to the area discrimination unit 4 and compared in magnitude with the judgment criterion data defining the prohibited area at that monitoring position, to determine whether the light point P lies within the prohibited area.

Thereafter, by repeating the same processing in turn for the remaining new feature points, the three-dimensional coordinates of all the new feature points are obtained, and it can be determined for each one whether the corresponding object point (light point) lies within the prohibited area.

As a result, when it is determined that an object point lies within the prohibited area, the area discrimination unit 4 activates an alarm means (not shown) to sound an alarm from the speaker 15, or activates a communication means (not shown) to output an abnormality signal from the antenna 16.

<Effects of the Invention> Since this invention is configured as described above, there is no risk of it becoming unusable in high-temperature environments, as there is with an infrared sensor, and it can clearly determine whether a person has entered the prohibited area; it thus achieves notable effects that fulfill the object of the invention.

[Brief Description of the Drawings]

FIG. 1 is a block diagram of a visual recognition device according to one embodiment of this invention; FIG. 2 is an explanatory diagram showing a configuration example of the stereoscopic viewing unit; FIG. 3 is an external view of the security robot; FIG. 4 is an explanatory diagram showing the principle of the stereoscopic viewing unit of FIG. 2; and FIG. 5 is an explanatory diagram showing a model line drawing and an observation line drawing.

Claims (4)

[Claims] (1) In a surveillance robot that monitors an observation target and determines the presence or absence of an abnormality, a visual recognition device for the surveillance robot comprising: an imaging unit that images the observation target and generates a grayscale image of it; a line drawing creation unit that processes the grayscale image into a line drawing of the observation target; a comparison unit that compares the line drawing of the observation target with a model line drawing and extracts the line drawing portions not contained in the model line drawing; a stereoscopic viewing unit that views stereoscopically each point on the observation target belonging to the line drawing portions extracted by the comparison unit and measures its three-dimensional coordinates; and an area discrimination unit that determines whether the measured three-dimensional coordinates of each point fall within a predetermined prohibited area.
(2) The visual recognition device for a surveillance robot according to claim 1, wherein the surveillance robot is a security robot formed to be movable so as to patrol inside a building.
(3) The visual recognition device for a surveillance robot according to claim 1, wherein the stereoscopic viewing unit includes the imaging unit in its configuration.
(4) The visual recognition device for a surveillance robot according to claim 1 or claim 3, wherein the stereoscopic viewing unit comprises: an imaging device for generating a grayscale image of the observation target on an imaging surface; a half mirror disposed at an intermediate position between the observation target and the imaging device; a light projecting device that projects light onto a point on the half mirror and directs the light reflected by the half mirror onto a point on the observation target to generate a light point; a position detector that images the light point on a detection surface and determines the coordinates of its image point; and an image processing device that calculates the three-dimensional coordinates of the point on the observation target based on the image point position coordinates on the imaging surface and the image point position coordinates on the detection surface.
JP61109174A 1986-05-12 1986-05-12 Visual sense recognizing device for supervisory robot Pending JPS62264390A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP61109174A JPS62264390A (en) 1986-05-12 1986-05-12 Visual sense recognizing device for supervisory robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP61109174A JPS62264390A (en) 1986-05-12 1986-05-12 Visual sense recognizing device for supervisory robot

Publications (1)

Publication Number Publication Date
JPS62264390A true JPS62264390A (en) 1987-11-17

Family

ID=14503529

Family Applications (1)

Application Number Title Priority Date Filing Date
JP61109174A Pending JPS62264390A (en) 1986-05-12 1986-05-12 Visual sense recognizing device for supervisory robot

Country Status (1)

Country Link
JP (1) JPS62264390A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9696808B2 (en) 2006-07-13 2017-07-04 Northrop Grumman Systems Corporation Hand-gesture recognition method
US8589824B2 (en) 2006-07-13 2013-11-19 Northrop Grumman Systems Corporation Gesture recognition interface system
US8180114B2 (en) 2006-07-13 2012-05-15 Northrop Grumman Systems Corporation Gesture recognition interface system with vertical display
US8234578B2 (en) 2006-07-25 2012-07-31 Northrop Grumman Systems Corporatiom Networked gesture collaboration system
US8432448B2 (en) 2006-08-10 2013-04-30 Northrop Grumman Systems Corporation Stereo camera intrusion detection system
GB2440826A (en) * 2006-08-10 2008-02-13 Northrop Grumman Corp Stereo camera intrusion detection system
GB2440826B (en) * 2006-08-10 2008-10-08 Northrop Grumman Corp Stereo camera intrusion detection system
US8139110B2 (en) 2007-11-01 2012-03-20 Northrop Grumman Systems Corporation Calibration of a gesture recognition interface system
US9377874B2 (en) 2007-11-02 2016-06-28 Northrop Grumman Systems Corporation Gesture recognition light and video image projector
US8345920B2 (en) 2008-06-20 2013-01-01 Northrop Grumman Systems Corporation Gesture recognition interface system with a light-diffusive screen
US8972902B2 (en) 2008-08-22 2015-03-03 Northrop Grumman Systems Corporation Compound gesture recognition
US11409277B2 (en) * 2015-03-12 2022-08-09 Alarm.Com Incorporated Robotic assistance in security monitoring
AU2022201806B2 (en) * 2015-03-12 2024-01-04 Alarm.Com Incorporated Robotic assistance in security monitoring
JP2020112992A (en) * 2019-01-10 2020-07-27 功 村上 Alarm device

Similar Documents

Publication Publication Date Title
US5673082A (en) Light-directed ranging system implementing single camera system for telerobotics applications
US8854594B2 (en) System and method for tracking
US9055226B2 (en) System and method for controlling fixtures based on tracking data
JP5323910B2 (en) Collision prevention apparatus and method for remote control of mobile robot
US7310431B2 (en) Optical methods for remotely measuring objects
KR100773184B1 (en) Autonomously moving robot
JP6562437B1 (en) Monitoring device and monitoring method
WO2018186790A1 (en) Driver assistance system and a method
US20030063776A1 (en) Walking auxiliary for person with impaired vision
US20150009300A9 (en) Divergence ratio distance mapping camera
KR102151815B1 (en) Method and Apparatus for Vehicle Detection Using Lidar Sensor and Camera Convergence
JPS62264390A (en) Visual sense recognizing device for supervisory robot
CN108536142B (en) Industrial robot anti-collision early warning system and method based on digital grating projection
US20240087327A1 (en) Object detection and tracking system
WO2018169467A1 (en) A vehicle with a crane with object detecting device
JP2018173707A (en) Person estimation system and estimation program
KR20170100892A (en) Position Tracking Apparatus
US11009887B2 (en) Systems and methods for remote visual inspection of a closed space
JP6125102B2 (en) Information display system
KR102546045B1 (en) monitering system with LiDAR for a body
JPH0389103A (en) Apparatus for detecting approach of obstacle to overhead power transmission line
JP2016118919A (en) Monitoring device and monitoring method
JP2004069497A (en) Monitoring apparatus and monitoring method
KR102233288B1 (en) Face Identification Heat Detection and Tracking System with multiple cameras
KR101684098B1 (en) Monitoring system with 3-dimensional sensor and image analysis integrated