JPS61207910A - Attitude measuring instrument - Google Patents

Attitude measuring instrument

Info

Publication number
JPS61207910A
Authority
JP
Japan
Prior art keywords
line
information
segments
equation
straight line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP4836085A
Other languages
Japanese (ja)
Other versions
JPH0364806B2 (en)
Inventor
Kazunori Onoguchi
一則 小野口
Hiroshi Hoshino
弘 星野
Yoshinori Kuno
義徳 久野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Institute of Advanced Industrial Science and Technology AIST
Original Assignee
Agency of Industrial Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency of Industrial Science and Technology filed Critical Agency of Industrial Science and Technology
Priority to JP4836085A priority Critical patent/JPS61207910A/en
Publication of JPS61207910A publication Critical patent/JPS61207910A/en
Publication of JPH0364806B2 publication Critical patent/JPH0364806B2/ja
Granted legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/26Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes

Abstract

PURPOSE: To derive the attitude information of an object at high speed by using a straight line connecting plural correlated segments, obtained from distance information on the object, as the linear equation representing the attitude of the object. CONSTITUTION: A laser range finder 1 irradiates the object to be measured with a laser beam and inputs its distance information as x-y-z coordinate data. A segment extracting part 2 cuts out the segments representing the objects (a), (b) and (c) from each line of information, using the continuity of the x-y-z coordinate data or a threshold value that discriminates the objects from the background. A set acquiring part 3 compares the segment information derived for each scanning line with that of the preceding line, and derives groups (sets) of segments detected from the same object across plural scanning lines. For each set of segments thus obtained, a straight-line generating part 4 derives the center-of-gravity coordinates of its segments and calculates the equation of the straight line connecting these coordinates, which is output as information representing the attitude of the object.

Description

【発明の詳細な説明】 〔発明の技術分野〕 本発明はレーザレンジファインダ等の光投影手段を用い
て求められる計測対象物体の距離情報から、該計測対象
物体の姿勢情報を容易に、且つ高速に得ることのできる
姿勢計測装置に関する。
DETAILED DESCRIPTION OF THE INVENTION [Technical Field of the Invention] The present invention relates to an attitude measuring device capable of obtaining, easily and at high speed, attitude information on an object to be measured from distance information on the object obtained by using light projection means such as a laser range finder.

〔発明の技術的背景とその問題点〕[Technical background of the invention and its problems]

物体を掴んで移動させる等の作業を行うロボットでは、
上記把持対象とする物体が置かれている姿勢、つまり物
体の姿勢を正確に把握することが重要である。特
に、ロボットを用いて所謂バラ積み部品をピックアップ
する場合、各部品の姿勢をそれぞれ正確に把握することが
非常に重要である。
For a robot that performs tasks such as grasping and moving objects, it is important to accurately grasp the orientation in which the object to be grasped is placed, that is, the attitude of the object. In particular, when a robot is used to pick up so-called bulk-stacked parts, it is very important to accurately grasp the attitude of each part.

従来、このような物体の姿勢を正確に把握(計測)
する手段として、該物体の各部の距離情報を、その座標
情報として直接的に得ることのできるレーザレンジファ
インダが多く用いられている。
Conventionally, as a means of accurately grasping (measuring) the attitude of such an object, a laser range finder, which can directly obtain distance information on each part of the object as coordinate information, has often been used.

さて従来は、上記物体を形成する全ての座標情報をレー
ザレンジファインダを用いて求め、これらの座標情報の
相関から該物体の姿勢情報を求めている。これ故、これ
らの座標情報から物体の姿勢情報を求めるには膨大な量
の計算処理が必要であった。この為、物体の姿勢情報を
高速に求めてその姿勢を把握することが困難であった。
Conventionally, all the coordinate information forming the object has been obtained with the laser range finder, and the attitude information of the object derived from the correlation of this coordinate information. An enormous amount of computation is therefore required to derive the attitude information from the coordinate information, which has made it difficult to obtain the attitude of an object at high speed.

しかも、物体の姿勢情報を上述したように座標情報の塊
としてボリューム的に取扱うので、これを記述する為の
大容量の記憶装置が必要となり、またその情報からロボ
ットの制御に必要なデータを抽出する為の処理が相当複
雑である等の不具合があった。
Moreover, since the attitude information of the object is handled volumetrically, as a mass of coordinate information, a large-capacity storage device is required to describe it, and the processing needed to extract from it the data required for robot control is quite complicated.

〔発明の目的〕[Purpose of the invention]

本発明はこのような事情を考慮してなされたもので、そ
の目的とするところは、物体の姿勢を簡易に、且つ高速
に把握してロボット制御等の有効に役立てることのでき
る実用性の高い姿勢計測装置を提供することにある。
The present invention has been made in view of these circumstances, and its object is to provide a highly practical attitude measuring device that can grasp the attitude of an object easily and at high speed so that the result can be put to effective use in robot control and the like.

〔発明の概要〕[Summary of the invention]

本発明は、レーザレンジファインダ等からなる光投影手
段を用いて計測対象物体に関する距離情報を所定の走査
間隔のライン情報として順次求め、これらの各ライン情
報中の上記計測対象物体を示す線分の情報を、例えば各
線分の長さと位置の情報として抽出し、これらの線分の
情報の各走査ライン間に亙る繋り関係を、例えば各ライ
ン間に亙って相互に関連する線分の集合として求め、こ
の集合の各線分を結ぶ直線を表現する方程式として前記
計測対象物体の姿勢を求めるようにしたものである。
In the present invention, distance information on an object to be measured is sequentially obtained, by light projection means such as a laser range finder, as line information at predetermined scanning intervals; information on the segments representing the object in each line of information is extracted, for example as the length and position of each segment; the connection relations of these segments across the scanning lines are determined, for example as sets of mutually related segments spanning the lines; and the attitude of the object is obtained as the equation of the straight line connecting the segments of each set.

〔発明の効果〕〔Effect of the invention〕

かくして本発明によれば、対象物体に関する距離情報を
所定の走査間隔で順次入力しながら、そのライン情報中
の対象物体を示す線分を求め、これらの線分の各ライン
に亙る相関関係から相関のある複数の線分を同一の対象
物体を示す線分の集合として捕え、この線分の集合の各
線分を結ぶ直線を上記対象物体の姿勢を直線方程式とし
て表現するので、該対象物体の姿勢情報を高速に求める
ことができる。しかも新たなライン情報が入力される都
度、そのライン情報中から抽出される線分の情報を既に
求められた線分の集合に加えながら前記直線方程式を順
次修正していくことによって、対象物体の姿勢を高精度
に求めることが可能となる。
Thus, according to the present invention, while distance information on a target object is sequentially input at predetermined scanning intervals, the segments representing the object in each line of information are determined, and from the correlation of these segments across the lines, plural correlated segments are captured as a set of segments representing the same object; the straight line connecting the segments of this set expresses the attitude of the object as a linear equation, so the attitude information of the object can be obtained at high speed. Moreover, each time new line information is input, the segment information extracted from it is added to the set of segments already obtained and the linear equation is successively corrected, so that the attitude of the object can be determined with high accuracy.

また対象物体の姿勢を直線方程式として表現しているの
で、小容量の記憶装置を用いて対象物体の姿勢を記述す
ることができ、またその姿勢情報を直接的にロボット制
御等に用いることが可能となる等の実用上多大なる効果
が奏せられる。
Furthermore, since the attitude of the object is expressed as a linear equation, the attitude can be described with a small-capacity storage device, and the attitude information can be used directly for robot control and the like, yielding great practical advantages.

〔発明の実施例〕[Embodiments of the invention]

以下、図面を参照して本発明の一実施例につき説明する
Hereinafter, one embodiment of the present invention will be described with reference to the drawings.

第1図は実施例装置の概略構成図で、第2図および第
3図はその処理概念を示す図、第4図は実施例装置にお
ける処理シーケンスの一例を示す図である。
FIG. 1 is a schematic configuration diagram of the apparatus of the embodiment, FIGS. 2 and 3 are diagrams showing its processing concept, and FIG. 4 is a diagram showing an example of the processing sequence in the apparatus.

尚、この実施例では、円筒(円柱)形状部分を持つこと
の多い各種工業部品を計測対象物体として例示し、これ
をロボットによって把持する場合を例に説明する。
In this embodiment, various industrial parts, which often have cylindrical (columnar) portions, are taken as the objects to be measured, and a case where such a part is grasped by a robot is described as an example.

レーザレンジファインダ1は、レーザ光を計測対象物体
に照射して、その距離情報を、例えば計測対象物体各部
のx−y−z座標データとして入力するものである。こ
こでは所定走査幅のレーザラインを所定の走査間隔で移
動させながら上記計測対象物体の距離情報を1ライン分
づつ入力する形式のレーザレンジファインダ1が用いら
れる。このレーザレンジファインダ1によって、例えば
第2図(a)に示すように複数の物体a、b、cの距離
情報が1走査ライン分づつ順次入力される。尚、レーザ
ビームを所定の走査幅で走査しつつ、その走査ラインを
所定の走査間隔で移動させながら計測対象物体の距離情
報を順次入力する形式のレーザレンジファインダ1を用
いても良い。この場合には、順次入力される計測対象物
体の距離情報を1走査ライン単位づつまとめて処理する
ようにすれば良い。
The laser range finder 1 irradiates an object to be measured with a laser beam and inputs distance information on it, for example as x-y-z coordinate data of each part of the object. Here, a laser range finder 1 of the type that inputs the distance information of the object one line at a time, while moving a laser line of a predetermined scanning width at predetermined scanning intervals, is used. With this laser range finder 1, distance information on a plurality of objects a, b and c is sequentially input one scanning line at a time, as shown for example in FIG. 2(a). A laser range finder 1 of the type that sequentially inputs the distance information while scanning a laser beam over a predetermined scanning width and moving the scanning line at predetermined scanning intervals may also be used; in that case, the sequentially input distance information need only be processed one scanning line at a time.

線分切出し部2は、レーザレンジファインダ1を用いて各走査ライン毎に順次入力されるライン情報中から対象物体a、b、cを示す線分を、その計測対象物体a、b、cの各部を示すx−y−z座標データの連続性、或いは対象物体とバックグラウンドとを区別する為の閾値等を利用して切出している。例えば第2図(b)に示すように走査ライン(K−1)における対象物体a、bの線分をh1、h2として、また走査ラインKにおける対象物体a、bの線分をh3、h4として各ライン情報中からそれぞれ切出している。そして各線分h1、h2、…h4の情報は、例えばその長さの情報、および前記レーザレンジファインダ1による走査領域(図中枠内で示される領域)中における位置情報、例えばx−y−z座標データとしてそれぞれ出力している。

The segment extracting part 2 cuts out the segments representing the target objects a, b and c from the line information sequentially input for each scanning line by the laser range finder 1, using the continuity of the x-y-z coordinate data of each part of the objects a, b and c, or a threshold value for distinguishing the objects from the background. For example, as shown in FIG. 2(b), the segments of the objects a and b on the scanning line (K-1) are cut out as h1 and h2, and those on the scanning line K as h3 and h4, each from its line information. The information on each of the segments h1, h2, ... h4 is then output as, for example, its length and its position within the scanning area of the laser range finder 1 (the area shown by the frame in the figure), for example as x-y-z coordinate data.
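As an illustration of this cutting-out step, the following sketch extracts segments from one line of range samples; the data layout, the background threshold, and the gap tolerance are all assumptions made for the example, not details fixed by the patent.

```python
# Sketch of cutting object segments out of one scan line of range data.
# A scan line is a list of (x, y, z) samples; points whose z (distance)
# reaches a background threshold are treated as background, and
# consecutive foreground points are merged into segments.

def extract_segments(scan_line, background_z=1000.0, max_gap=5.0):
    """Return the segments on one scan line, each a list of (x, y, z) points."""
    segments, current = [], []
    for x, y, z in scan_line:
        if z < background_z:                      # foreground point
            if current and abs(x - current[-1][0]) > max_gap:
                segments.append(current)          # discontinuity in x ends a segment
                current = []
            current.append((x, y, z))
        elif current:                             # background ends a segment
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

# One object spanning samples 3..7 of a 12-sample line.
line = [(i * 1.0, 0.0, 500.0 if 3 <= i <= 7 else 2000.0) for i in range(12)]
segs = extract_segments(line)
print(len(segs))      # → 1
print(len(segs[0]))   # → 5
```

In the patented apparatus the same idea is driven by the continuity of the x-y-z coordinate data or an object/background threshold; only the thresholding variant is sketched here.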

しかして集合取込み部3は、各走査ライン毎に求められ
た線分の情報と、その前の走査ラインで求められた線分
の情報とを比較し、複数の走査ライン間に亙って同じ対
象物体から検出される線分の組(集合)を求めている。
The set acquiring part 3 compares the segment information obtained for each scanning line with the segment information obtained for the preceding scanning line, and derives the groups (sets) of segments detected from the same object across plural scanning lines.

具体的には、各走査ライン毎に切出された各線分の情報を各走査ライン間で比較し、ほぼ同一の位置情報であり、且つほぼ同じ長さの線分を同一の対象物から検出された線分であると判定する。第2図に示す例では、線分h1とh3とが対象物体aに属する1つの集合(組)としてまとめられ、また線分h2とh4とが対象物体bに属する1つの集合(組)としてまとめられる。
Specifically, the information on the segments cut out on each scanning line is compared between the scanning lines, and segments having nearly the same position information and nearly the same length are judged to be segments detected from the same object. In the example shown in FIG. 2, the segments h1 and h3 are grouped into one set belonging to the object a, and the segments h2 and h4 into one set belonging to the object b.

尚、各走査ライン間の線分の情報の比較において、その
情報の値が近い幾つかの線分が存在する場合には、例え
ば|h2|≒|h3|≒|h4|である場合には、距離的に近い線分同士を組(1つの集
合)として選択する。或いは既に求められた線分の集合
の後述するような各線分を結ぶ直線の情報を利用する等
して集合の判定処理が行われる。集合取込み部3は、各
走査ラインの線分情報が順次ライン単位で与えられる都
度、上述したようにその走査ラインで抽出された線分が
既に求められた集合(同一の対象物体に関与する線分の
集合)の中のどれに属するかを判定し、判定された集合
に上記処理対象としている線分を追加している。尚、判
定処理した線分が属する集合が見出だされない場合には
、その線分は新たに出現した対象物体に関するものであ
るとして、これをベースとした線分の集合を新たに作成
する。
When the segment information is compared between scanning lines, if there are several segments whose values are close, for example when |h2| ≒ |h3| ≒ |h4|, the segments that are closest in distance are selected as one set; alternatively, the set is judged by using, for example, information on the straight line connecting the segments of an already obtained set, as described later. Each time the segment information of a scanning line is given, the set acquiring part 3 judges, as described above, to which of the already obtained sets (sets of segments belonging to the same object) each segment extracted on that line belongs, and adds the segment to the judged set. If no set to which a judged segment belongs is found, the segment is regarded as belonging to a newly appearing object, and a new set of segments is created based on it.
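The set-forming step can be sketched as follows; the similarity tolerances and the choice of comparing against a set's most recent segment are illustrative assumptions, not values from the patent.

```python
# Sketch of grouping segments across scan lines: a segment joins an
# existing set when its centroid position and length are close to those
# of the set's most recent segment; otherwise it starts a new set
# (a newly appearing object).

def centroid(seg):
    n = len(seg)
    return tuple(sum(p[k] for p in seg) / n for k in range(3))

def length(seg):
    return abs(seg[-1][0] - seg[0][0])   # extent along the scan direction

def assign(sets, seg, pos_tol=10.0, len_tol=5.0):
    c = centroid(seg)
    for s in sets:
        last = s[-1]
        lc = centroid(last)
        if abs(c[0] - lc[0]) <= pos_tol and abs(length(seg) - length(last)) <= len_tol:
            s.append(seg)                # matches an existing object
            return
    sets.append([seg])                   # no match: new object appears

sets = []
seg_k1 = [(10.0, 0.0, 500.0), (14.0, 0.0, 500.0)]   # scan line K-1
seg_k2 = [(11.0, 1.0, 500.0), (15.0, 1.0, 500.0)]   # scan line K, same object
seg_k2b = [(60.0, 1.0, 500.0), (70.0, 1.0, 500.0)]  # scan line K, new object
for seg in (seg_k1, seg_k2, seg_k2b):
    assign(sets, seg)
print(len(sets), [len(s) for s in sets])   # → 2 [2, 1]
```

The tie-breaking described in the text (preferring the nearest segment when several candidates are similar, or consulting the set's fitted line) would slot into the matching loop above.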

直線生成部4は、このようにして得られた線分の各集合に関し、その集合に属する各線分の長さと位置の情報からその重心(中心)座標をそれぞれ求め、これらの重心座標間を結ぶ直線の方程式を算出している。例えば第2図(c)に示すように前記線分h1、h3の各重心座標を結ぶ直線f1を表す方程式を求め、同様に線分h2、h4の各重心座標を結ぶ直線f2を表す方程式を求めている。これらの直線f1、f2の各方程式が、前記対象物体a、bの姿勢(傾き)を示す情報としてそれぞれ出力され、またロボットの制御情報メモリ等に記述されることになる。
For each set of segments thus obtained, the straight-line generating part 4 determines the center-of-gravity (center) coordinates of each segment from the information on its length and position, and calculates the equation of the straight line connecting these center-of-gravity coordinates. For example, as shown in FIG. 2(c), the equation of the straight line f1 connecting the center-of-gravity coordinates of the segments h1 and h3 is obtained, and similarly the equation of the straight line f2 connecting the center-of-gravity coordinates of the segments h2 and h4. The equations of these straight lines f1 and f2 are output as information indicating the attitudes (inclinations) of the objects a and b, respectively, and are also written into, for example, a robot's control information memory.
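As a worked illustration of connecting two center-of-gravity points, the sketch below computes the centroids of two segments and the line through them; the parametric point-plus-direction representation is an assumption, since the patent does not fix a particular coefficient form.

```python
# Sketch: the attitude line of an object as the 3-D line through the
# center-of-gravity points of two of its segments, in parametric form
# p(t) = p0 + t * d.

def centroid(points):
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(3))

def line_through(seg_a, seg_b):
    p0 = centroid(seg_a)
    p1 = centroid(seg_b)
    d = tuple(b - a for a, b in zip(p0, p1))   # direction vector p0 -> p1
    return p0, d

# Two segments of one object on adjacent scan lines (illustrative data).
h1 = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
h3 = [(1.0, 1.0, 1.0), (3.0, 1.0, 1.0)]
p0, d = line_through(h1, h3)
print(p0)   # → (1.0, 0.0, 0.0)
print(d)    # → (1.0, 1.0, 1.0)
```

The direction vector d is the object's inclination; any equivalent coefficient form of the 3-D line can be stored instead.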

ところで対象物体を表現する線分の集合に属する線分の
情報は、前述したようにレーザレンジファインダ1による
対象物体のライン走査に伴って順次増加する。この為、
姿勢計測の初期時において、上記直線生成部4が1つの
集合をなす2つの線分情報から求めた直線上に、新たに
追加された線分が正確に存在するとは限らない。つまり
、各線分の情報自体がある誤差を含み、しかも隣接ライ
ン間の距離的に殆んど離れない2つの線分の情報から該
2つの線分を結ぶ直線の方程式を算出しているので、直
線方程式の誤差量が比較的大きい。そこで直線修正部5
では、各線分の集合に新たな線分が追加される都度、先
に算出した直線方程式を修正している。
The segment information belonging to the set of segments representing an object increases successively as the laser range finder 1 scans the object line by line, as described above. Therefore, at the initial stage of attitude measurement, a segment newly added to a set does not necessarily lie exactly on the straight line that the straight-line generating part 4 obtained from the information on the set's first two segments. That is, the information on each segment itself contains a certain error, and the equation of the straight line is calculated from two segments separated only by the small distance between adjacent scanning lines, so the error of the linear equation is comparatively large. The straight-line correcting part 5 therefore corrects the previously calculated linear equation each time a new segment is added to a set.

例えば第3図に示すように線分H1、H2の重心点を結ぶ直線F1の方程式を算出した後、上記線分H1、H2の集合に線分H3が追加されたとき、線分H1、H3の重心点を結ぶ直線F2の方程式を算出する。そしてこれらの直線F1、F2の各係数の平均値をそれぞれ求め、これらの各平均値をそれぞれ係数とする直線F3の方程式を求めている。
For example, as shown in FIG. 3, after the equation of the straight line F1 connecting the center-of-gravity points of the segments H1 and H2 has been calculated, when the segment H3 is added to the set of the segments H1 and H2, the equation of the straight line F2 connecting the center-of-gravity points of the segments H1 and H3 is calculated. The averages of the corresponding coefficients of the straight lines F1 and F2 are then obtained, and the equation of a straight line F3 having these averages as its coefficients is determined.
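The coefficient-averaging correction can be illustrated in two dimensions with a slope/intercept form y = ax + b (an assumed representation; the embodiment works with 3-D line equations):

```python
# Sketch of the successive correction: fit a line y = a*x + b through
# two centroid points, then average the coefficients of the old line
# (F1, from H1 and H2) and the new line (F2, from H1 and H3) to get
# the corrected line F3.

def fit(p, q):
    a = (q[1] - p[1]) / (q[0] - p[0])   # slope through two points
    b = p[1] - a * p[0]                 # intercept
    return a, b

def average(l1, l2):
    return tuple((u + v) / 2 for u, v in zip(l1, l2))

H1, H2, H3 = (0.0, 0.0), (1.0, 1.1), (2.0, 1.9)   # noisy centroids
F1 = fit(H1, H2)        # line from the first two segments
F2 = fit(H1, H3)        # line after H3 is added
F3 = average(F1, F2)    # corrected line
print(F1, F2, F3)
```

Averaging pulls the early, error-prone estimate toward later evidence, matching the text's observation that the first line, fitted from two closely spaced segments, carries a comparatively large error.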

この直線F3の方程式を前記直線F1を修正した新たな
方程式、つまりその集合の各線分H1、H2、H3を相
互に結ぶ直線の方程式としている。その後、次の走査ラ
インにおいて、線分H4が上記線分H1,H2,H3の
集合に追加されると、同様にして直線方程式の修正が行
われる。
The equation of this straight line F3 is taken as a new equation correcting the straight line F1, that is, as the equation of the straight line interconnecting the segments H1, H2 and H3 of the set. Thereafter, when the segment H4 is added to the set of the segments H1, H2 and H3 on the next scanning line, the linear equation is corrected in the same manner.

このようにして、認識対象物体に関する線分の情報が各対象物体に対応する線分の集合として順次まとめられ、第2図(d)に示すようにその集合に属する線分を相互に結ぶ直線として各対象物体の姿勢が求められることになる。即ち、この例では線分h1、h3、h6、h9を含む線分の集合が対象物体aを表すものとして抽出され、これらの線分h1、h3、h6、h9を結ぶ直線f1の方程式が該対象物体aの姿勢を示す情報として出力される。また同様にして線分h2、h4、h5、h8を結ぶ直線f2の方程式が対象物体bの姿勢を示す情報として求められ、線分h7、h10を結ぶ直線f3の方程式が対象物体cの姿勢を示す情報として出力される。
In this way, the segment information on the objects to be recognized is successively organized into sets of segments corresponding to the individual objects, and, as shown in FIG. 2(d), the attitude of each object is obtained as the straight line interconnecting the segments belonging to its set. That is, in this example the set of segments including h1, h3, h6 and h9 is extracted as representing the object a, and the equation of the straight line f1 connecting these segments is output as information indicating the attitude of the object a. Similarly, the equation of the straight line f2 connecting the segments h2, h4, h5 and h8 is obtained as information indicating the attitude of the object b, and the equation of the straight line f3 connecting the segments h7 and h10 is output as information indicating the attitude of the object c.

第4図は上述した姿勢計測の処理の制御シーケンスの例
を示すものである。
FIG. 4 shows an example of a control sequence for the above-mentioned attitude measurement process.

この制御シーケンス例に示すように、レーザレンジファインダ1による距離画像情報のライン単位の入力に対応して制御パラメータを設定し、走査ライン上における線分の抽出を行いながらその線分が属する集合を求め、各集合に属する線分を結ぶ直線の方程式を算出し、且つこの直線の方程式を順次修正する。このようにすれば、計測対象領域内に存在する対象物体の各姿勢をそれぞれ直線方程式として求めることが可能となる。
As shown in this example of the control sequence, the control parameters are set in correspondence with the line-by-line input of range image information from the laser range finder 1; while the segments on each scanning line are extracted, the set to which each segment belongs is determined, the equation of the straight line connecting the segments of each set is calculated, and this equation is successively corrected. In this way, the attitude of each object existing in the measured region can be obtained as a linear equation.
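Putting the stages together, the per-line control sequence of FIG. 4 can be sketched as a single loop in simplified 2-D form; all names and tolerances below are illustrative assumptions.

```python
# Sketch of the overall per-scan-line control sequence: on each line,
# assign the extracted segment centroids to sets, and keep each set's
# attitude line (slope/intercept, an assumed representation) corrected
# by coefficient averaging as new lines arrive.

def process(scan_lines, pos_tol=10.0):
    sets = []          # each set: list of centroids (x, y) of one object
    lines = {}         # set index -> (a, b) of the fitted line y = a*x + b
    for y, xs in enumerate(scan_lines):         # xs: centroid x per segment
        for x in xs:
            for i, s in enumerate(sets):
                if abs(x - s[-1][0]) <= pos_tol:
                    s.append((x, y))            # joins an existing object
                    p, q = s[0], s[-1]
                    a = (q[1] - p[1]) / (q[0] - p[0])
                    new = (a, p[1] - a * p[0])
                    old = lines.get(i)
                    lines[i] = new if old is None else tuple(
                        (u + v) / 2 for u, v in zip(old, new))
                    break
            else:
                sets.append([(x, y)])           # newly appearing object
    return sets, lines

# Two objects: both drifting one unit in x per scan line.
sets, lines = process([[0.0, 50.0], [1.0, 51.0], [2.0, 52.0]])
print(len(sets))   # → 2
print(lines)
```

Each object ends up with its own successively corrected line equation, which is the compact attitude description the apparatus would hand to the robot controller.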

以上説明したように本装置によれば、計測対象物体の距
離情報を入力し、その距離情報を走査ライン単位で処理
して、そのライン情報中に含まれる対象物体の情報を線
分情報として抽出し、複数のライン間における上記線分
の繋り関係から計測対象物体を線分の集合として捕え、
その集合に属する線分を結ぶ直線の方程式として前記計
測対象物体の姿勢を計測する。従って、対象物体のライ
ン走査に伴って該対象物体の姿勢情報を前記直線方程式
として求め、更にその直線方程式を新たな線分情報を用
いて順次修正して行くので、高速に対象物体の姿勢情報
を得ることができる。つまり、対象物体の概略的な姿勢
情報をいち早く得、これを順次修正して高精度な姿勢情報
を得ることが可能となる。しかも、線分の集合化と、そ
の集合に属する各線分を結ぶ直線の方程式の算出、およ
びその修正と云う簡易な処理によって上述した姿勢計測
を行い得る。
As explained above, according to this apparatus, distance information on the object to be measured is input and processed in units of scanning lines; the information on the object contained in each line of information is extracted as segment information; the object is captured as a set of segments from the connection relations of these segments across plural lines; and the attitude of the object is measured as the equation of the straight line connecting the segments belonging to the set. Accordingly, as the object is scanned line by line, its attitude information is obtained as the linear equation and then successively corrected with new segment information, so the attitude information can be obtained at high speed. In other words, rough attitude information on the object is obtained quickly and successively corrected into highly accurate attitude information. Moreover, this attitude measurement is achieved by the simple processing of grouping segments, calculating the equation of the straight line connecting the segments of each set, and correcting that equation.

また、対象物体の姿勢を直線の方程式として表現するの
で、これを記憶するに必要な情報は該直線方程式の係数
情報だけとなり、その必要記憶容量の大幅な削減を図る
ことが可能となる。更には姿勢情報を直線方程式として
表現するので、その情報をロボット制御に直接的に用い
ることができる。これ故、従来のように大容量のメモリ
を用いたり、或いは処理に多大な時間を要した等の不具
合がない等の実用上多大なる効果が奏せられる。
Furthermore, since the attitude of the object is expressed as the equation of a straight line, the only information that needs to be stored is the coefficients of that equation, greatly reducing the required storage capacity. In addition, because the attitude information is expressed as a linear equation, it can be used directly for robot control. Consequently, great practical advantages are obtained: there is no need for a large-capacity memory, and the processing does not require the long time it conventionally did.

尚、本発明は上述した実施例に限定されるものではない
。実施例では円筒(円柱)部品を計測対象としたが、所
謂細長い部分形状を持つ部品の全てを同様に計測対象と
することができる。また直線方程式を算出するに際し、
実施例に示した重心に代えて線分の情報のZ座標極大点
の情報を用いるようにしても良い。このようにすれば、
線分の長さに起因する誤差を小さく抑えてその姿勢計測
を高精度に行うことが可能となる。また、線分の方程式
の計算法としては、最小二乗法以外の方法を用いること
も可能である。更に走査ラインのピッチ等は、装置の仕
様に応じて定めれば良いものであり、その他本発明はそ
の要旨を逸脱しない範囲で種々変形して実施することが
できる。
Note that the present invention is not limited to the embodiment described above. In the embodiment, cylindrical (columnar) parts were taken as the objects of measurement, but any part having a so-called elongated portion can likewise be measured. In calculating the linear equation, information on the maximal points of the segments' Z coordinate may be used instead of the centers of gravity shown in the embodiment; in this way, the error due to the lengths of the segments can be kept small and the attitude measured with high accuracy. A method other than the least-squares method can also be used to calculate the equation of the line. Furthermore, the pitch of the scanning lines and the like may be determined according to the specifications of the apparatus, and the present invention can otherwise be carried out with various modifications without departing from its gist.

【図面の簡単な説明】[Brief explanation of the drawing]

図は本発明の一実施例を示すもので、第1図は実施例装
置の概略構成図、第2図(a)〜(d)は実施例装置に
おける姿勢計測処理概念を示す図、第3図は直線方程式
の修正処理を示す図、第4図は実施例装置の処理制御シ
ーケンスを示す図である。 1・・・レーザレンジファインダ、2・・・線分切出し
部、3・・・集合取込み部、4・・・直線生成部、5・
・・直線修正部、a、b、c・・・対象物体、h1、h2、…h10・・・線分、f1、f2、f3・・・直線。 出願人 工業技術院長 等々力 達 第2図
The figures show one embodiment of the present invention: FIG. 1 is a schematic configuration diagram of the apparatus of the embodiment, FIGS. 2(a) to 2(d) are diagrams showing the concept of the attitude measurement processing in the apparatus, FIG. 3 is a diagram showing the correction processing of the linear equation, and FIG. 4 is a diagram showing the processing control sequence of the apparatus. 1: laser range finder; 2: segment extracting part; 3: set acquiring part; 4: straight-line generating part; 5: straight-line correcting part; a, b, c: target objects; h1, h2, ... h10: segments; f1, f2, f3: straight lines. Applicant: 等々力達, Director-General of the Agency of Industrial Science and Technology. FIG. 2

Claims (4)

【特許請求の範囲】[Claims] (1)光投影手段を用いて計測対象物体に関する距離情
報を所定の走査間隔のライン情報として順次入力する手
段と、これらの各ライン情報中の上記計測対象物体を示
す線分の情報を順次抽出する手段と、これらの各線分の
情報の各走査ライン間に亙る繋り関係から前記計測対象
物体の姿勢を直線方程式として求める手段とを具備した
ことを特徴とする姿勢計測装置。
(1) An attitude measuring device comprising: means for sequentially inputting distance information on an object to be measured, obtained by light projection means, as line information at predetermined scanning intervals; means for sequentially extracting, from each line of information, information on the segments representing the object; and means for obtaining the attitude of the object as a linear equation from the connection relations of the segment information across the scanning lines.
(2)直線方程式は、所定走査間隔の各ライン情報から
それぞれ抽出される計測対象物体の線分の長さと位置の
情報から、各ライン間に亙つて相互に関連する線分の集
合を求め、これらの線分の集合を結ぶ直線を表現する方
程式として算出されるものである特許請求の範囲第1項
記載の姿勢計測装置。
(2) The attitude measuring device according to claim 1, wherein the linear equation is calculated as an equation expressing a straight line connecting a set of mutually related segments spanning the lines, the set being obtained from information on the length and position of the segments of the object extracted from each line of information at the predetermined scanning intervals.
(3)線分の集合を結ぶ直線を表現する方程式は、既に
求められた線分の集合に追加された線分の情報を用いて
逐次修正されるものである特許請求の範囲第2項記載の
姿勢計測装置。
(3) The attitude measuring device according to claim 2, wherein the equation expressing the straight line connecting the set of segments is successively corrected by using information on the segments added to the already obtained set of segments.
(4)光投影手段は、レーザレンジファインダからなる
ものである特許請求の範囲第1項記載の姿勢計測装置。
(4) The posture measuring device according to claim 1, wherein the light projection means comprises a laser range finder.
JP4836085A 1985-03-13 1985-03-13 Attitude measuring instrument Granted JPS61207910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP4836085A JPS61207910A (en) 1985-03-13 1985-03-13 Attitude measuring instrument

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP4836085A JPS61207910A (en) 1985-03-13 1985-03-13 Attitude measuring instrument

Publications (2)

Publication Number Publication Date
JPS61207910A true JPS61207910A (en) 1986-09-16
JPH0364806B2 JPH0364806B2 (en) 1991-10-08

Family

ID=12801182

Family Applications (1)

Application Number Title Priority Date Filing Date
JP4836085A Granted JPS61207910A (en) 1985-03-13 1985-03-13 Attitude measuring instrument

Country Status (1)

Country Link
JP (1) JPS61207910A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01319878A (en) * 1988-06-21 1989-12-26 Kubota Ltd Crop line detecting device


Also Published As

Publication number Publication date
JPH0364806B2 (en) 1991-10-08

Similar Documents

Publication Publication Date Title
CN106845515B (en) Robot target identification and pose reconstruction method based on virtual sample deep learning
CN111684474B (en) Arithmetic device, arithmetic method, and recording medium
JP5924862B2 (en) Information processing apparatus, information processing method, and program
JP5627325B2 (en) Position / orientation measuring apparatus, position / orientation measuring method, and program
JP5393318B2 (en) Position and orientation measurement method and apparatus
JP6736257B2 (en) Information processing device, information processing method, and program
JP3880702B2 (en) Optical flow detection apparatus for image and self-position recognition system for moving object
CN110842901B (en) Robot hand-eye calibration method and device based on novel three-dimensional calibration block
JP4234059B2 (en) Camera calibration method and camera calibration apparatus
JP2013186816A (en) Moving image processor, moving image processing method and program for moving image processing
JP2008296330A (en) Robot simulation device
JP2009128191A (en) Object recognition device and robot device
JP2005182834A (en) Method and apparatus for using rotational movement amount of mobile device and computer-readable recording medium for storing computer program
WO2018043524A1 (en) Robot system, robot system control device, and robot system control method
JP5104248B2 (en) Object recognition apparatus and robot apparatus
CN113008195A (en) Three-dimensional curved surface distance measuring method and system based on space point cloud
KR20100062320A (en) Generating method of robot motion data using image data and generating apparatus using the same
JP2017130067A (en) Automatic image processing system for improving position accuracy level of satellite image and method thereof
Bao et al. 3D perception-based collision-free robotic leaf probing for automated indoor plant phenotyping
JPS61207910A (en) Attitude measuring instrument
JPS61133409A (en) Automatic correction system of robot constant
CN116604212A (en) Robot weld joint identification method and system based on area array structured light
JPH07248209A (en) Object position and attitude measuring device and part assembling apparatus loading the device
Bao et al. Robotic 3D plant perception and leaf probing with collision-free motion planning for automated indoor plant phenotyping
Zhang et al. A Static Feature Point Extraction Algorithm for Visual-Inertial SLAM

Legal Events

Date Code Title Description
EXPY Cancellation because of completion of term