JPS61173391A - Method for setting the field of view for image sensor - Google Patents

Method for setting the field of view for image sensor

Info

Publication number
JPS61173391A
JPS61173391A JP1253985A
Authority
JP
Japan
Prior art keywords
image sensor
pocket
view
field
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP1253985A
Other languages
Japanese (ja)
Other versions
JPH0638272B2 (en)
Inventor
Masao Takato
高藤 政雄
Yoshiki Kobayashi
芳樹 小林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Institute of Advanced Industrial Science and Technology AIST
Original Assignee
Agency of Industrial Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency of Industrial Science and Technology filed Critical Agency of Industrial Science and Technology
Priority to JP60012539A priority Critical patent/JPH0638272B2/en
Publication of JPS61173391A publication Critical patent/JPS61173391A/en
Publication of JPH0638272B2 publication Critical patent/JPH0638272B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Landscapes

  • Sewing Machines And Sewing (AREA)
  • Character Input (AREA)
  • Image Input (AREA)
  • Image Processing (AREA)
  • Character Discrimination (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To superimpose objects bearing regular patterns promptly and accurately, without special illumination, by setting the field of view of the image sensor during the superposition of plural objects so that it does not include the contour of the objects.

CONSTITUTION: When the field of view of the image sensor 4 is set inside the contour line of a pocket 2 superposed on a shirt 1, the sensor produces no output for the contour line. In this state, the two-dimensional shift between the stripe patterns is calculated from the binarized image signals of the shirt 1 and the pocket 2 without obstruction by the contour line, and the superposition is adjusted by a robot 6. Special illumination for blurring the contour line is thereby dispensed with, so that objects with regular patterns can be registered promptly and accurately during sewing.

Description

[Detailed Description of the Invention]

[Field of Application of the Invention]

The present invention relates to a method of capturing images, and in particular to a method of setting the field of view of an image sensor so as to obtain optimal image input when superimposing objects that bear regular patterns.

[Background of the Invention]

Conventionally, in apparel manufacturing, pattern matching for operations such as the laminated cutting of patterned fabric and the sewing of pockets has been carried out by human judgment.

In recent years, however, attempts have been made to perform such sewing automatically using image processing technology.

When this image processing technology is used to sew a pocket onto a dress shirt, for example, the pocket is placed on the shirt, the placed state is captured and processed as an image, and the positional deviation of the pocket must then be corrected. This amounts to capturing the geometric features of the pocket and recognizing its position relative to the shirt; in that recognition, however, the contour line of the pocket often gets in the way.

[Purpose of the Invention]

The object of the present invention is to provide an image input method that, in the superposition of two or more objects having the same regular pattern, requires no special illumination measures and shortens the time needed for image recognition processing.

[Summary of the Invention]

To achieve this object, the present invention sets the field of view of the image sensor so that it does not include the contour of the objects, in a superposition process of two or more objects having the same regular pattern carried out using an image sensor for inputting images and an image processing device for processing the images captured by the sensor.

[Embodiments of the Invention]

A method of setting the field of view of an image sensor according to the present invention is explained below using an embodiment.

FIGS. 1 to 5 show an embodiment in which a pocket bearing the same regular pattern as a dress shirt is superimposed on the shirt. FIG. 1 shows the case where a pocket 2 is attached to a dress shirt 1 having a striped (knitted) pattern. Images of the shirt 1 and its pocket 2, illuminated by an illumination device 3, are captured by an image sensor 4 (an ITV camera or the like) and processed by an image processing device 5. The amount of inclination (Δθ) and the amounts of displacement (Δx, Δy) between the patterns of the shirt 1 and the pocket 2 are determined and transmitted as control signals to material handling robots 6, 6′, which rotate and translate the pocket 2 accordingly. Once the patterns of the shirt and the pocket coincide, an x-y-θ table 7 is moved and an automatic sewing machine 8 performs the sewing. While the pocket 2 is being manipulated by the material handling robots 6, 6′, the shirt 1 is fixed on the x-y-θ table 7 by, for example, temporary adhesion or vacuum suction.
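The correction applied by the robots can be pictured as a plane rigid motion. The following Python sketch (not part of the patent, which leaves the robot kinematics to the material handling robots 6, 6′) shows how a measured misalignment (Δθ, Δx, Δy) would be undone geometrically:

```python
import math

def correct_pose(points, dtheta, dx, dy):
    """Undo a measured misalignment (dtheta, dx, dy): rotate each point
    by -dtheta about the origin, then translate by (-dx, -dy). A purely
    geometric sketch; the patent does not specify the robot kinematics."""
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return [(c * x - s * y - dx, s * x + c * y - dy) for (x, y) in points]

# A pocket corner measured 0.5 units too far in x, with no rotation error:
print(correct_pose([(1.0, 0.0)], dtheta=0.0, dx=0.5, dy=0.0))  # [(0.5, 0.0)]
```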

The image sensor 4 recognizes the shirt 1 and the pocket 2 separately, based on the patterns applied to each; the pocket in particular is recognized as follows. As shown in FIG. 2, consider matching a plaid pattern 10, where an upper object 12 (the pocket) is superimposed on a lower object 11 (the shirt). Since the set position of the upper object 12 is taught to the material handling robot 6 in advance, it does not deviate greatly from the target position; only the pattern is slightly misaligned. The field of view of the image sensor 4 is therefore set inside the contour line 14 of the upper object 12, as indicated by the broken line 13. This setting can be made by a human operator.
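One way such a field-of-view setting could be realized in software is sketched below. This is a hypothetical illustration, since the patent allows the setting to be made manually; it simply shrinks the bounding box of the upper object by a safety margin so the resulting window lies strictly inside the contour:

```python
import numpy as np

def roi_inside_contour(mask: np.ndarray, margin: int):
    """Return a rectangular field of view (top, bottom, left, right)
    lying strictly inside the object's contour, obtained by shrinking
    the object's bounding box by `margin` pixels on every side."""
    ys, xs = np.nonzero(mask)               # pixels belonging to the pocket
    top, bottom = ys.min() + margin, ys.max() - margin
    left, right = xs.min() + margin, xs.max() - margin
    if top >= bottom or left >= right:
        raise ValueError("margin too large for this object")
    return int(top), int(bottom), int(left), int(right)

# Toy example: a 20x20 "pocket" occupying rows/columns 2..17.
mask = np.zeros((20, 20), dtype=bool)
mask[2:18, 2:18] = True
print(roi_inside_contour(mask, margin=3))   # (5, 14, 5, 14)
```

In practice the margin would be chosen larger than the maximum expected placement error of the robot, so the contour line can never enter the window.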

If, on the other hand, the field of view of the image sensor 4 were set as shown by the chain line 15, so as to include the contour line 14 of the upper object 12, or even the contour line of the lower object 11, it would become necessary either to devise an illumination method that keeps the contour lines from appearing in the image, or to apply some image processing that removes the contour lines, or that recognizes them and distinguishes them from the pattern.

The need to distinguish contour lines from the pattern is explained below.

FIG. 3 shows the image obtained, with the field of view of the image sensor 4 set inside the contour line 14 as is the essential point of the present invention, by binarizing two images, one of the shirt body 1 alone and one with the pocket 2 placed on it, and combining them with an inter-image operation (logical OR). From this image it is easy to determine, by histogram processing or the like, the pattern displacements between the two objects (denoted Δx, Δx′, Δy and Δy′ in the figure). The smaller of Δx and Δx′ is taken as the displacement in the x direction, and the smaller of Δy and Δy′ as the displacement in the y direction, because selecting the larger displacement would shift the pocket away from its intended position.

In this way, the contour line of the pocket 2 appearing in the image plane can no longer be confused with the patterns of the shirt and the pocket, and the pocket can be positioned relative to the shirt using the pattern alone.

FIG. 4 shows, by contrast, the result of the conventional method, in which the field of view of the image sensor 4 is set outside, or partly overlapping, the contour line 14 of the pocket 2. The two images, the shirt 1 alone and the pocket 2 placed on the shirt 1, are binarized and combined by a logical OR; the resulting image also contains the pattern of the shirt 1, shown by the dotted lines inside the contour line 14. (If, for simplicity, only the image with the pocket 2 placed on the shirt 1 is binarized, the dotted pattern does not appear.) In either case, as the figure shows, the contour line 14 acts as an obstacle, and determining the displacements Δx, Δx′, Δy and Δy′ in the x and y directions becomes a difficult process.

FIG. 5 is a block diagram showing an embodiment of the processing described above. An image sensor 21 recognizes the pattern of the shirt and an image sensor 22 recognizes the pattern of the pocket. Their outputs are fed to binarization processing devices 23 and 24, whose outputs are stored in memories 25 and 26, respectively. The stored data are combined through an OR circuit 27 and written to a memory 28. The data in the memory 28 are output to a CRT 29 and also fed to an arithmetic circuit 30, which computes the Δx and Δy shown in FIG. 3 and drives a robot arm 31 on the basis of the computed values.
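A software analogue of this block diagram might look as follows. The projection-peak comparison standing in for the arithmetic circuit 30 is an assumption, since the patent does not specify how Δx and Δy are computed inside that circuit:

```python
import numpy as np

def binarize(img: np.ndarray, threshold: float) -> np.ndarray:
    """Binarization devices 23/24: threshold a grayscale image."""
    return img > threshold

def registration_command(shirt_img, pocket_img, threshold=128):
    """Software analogue of FIG. 5: binarize both images, combine them
    with a logical OR (OR circuit 27), and derive a correction (dx, dy)
    from projection peaks of the stored images (a stand-in for the
    arithmetic circuit 30)."""
    a = binarize(shirt_img, threshold)   # stored in memory 25
    b = binarize(pocket_img, threshold)  # stored in memory 26
    combined = a | b                     # memory 28; would go to the CRT 29
    dx = int(b.sum(axis=0).argmax()) - int(a.sum(axis=0).argmax())
    dy = int(b.sum(axis=1).argmax()) - int(a.sum(axis=1).argmax())
    return dx, dy                        # drives robot arm 31

# Toy example: one bright patch stands in for a pattern feature.
shirt = np.zeros((10, 10)); shirt[3, 4] = 255
pocket = np.zeros((10, 10)); pocket[5, 7] = 255
print(registration_command(shirt, pocket))  # (3, 2)
```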

[Effects of the Invention]

According to the present invention, (1) no special illumination measures are required, as described above, so illumination costs are reduced; and (2) no processing is needed to remove or ignore the contour lines of the objects, so image recognition processing is faster.

[Brief Description of the Drawings]

FIG. 1 shows an embodiment of the method of setting the field of view of an image sensor according to the present invention, and is a block diagram of an image processing system to which the method is applied. FIG. 2 is an explanatory diagram showing an embodiment of the field-of-view setting method according to the present invention. FIG. 3 is an explanatory diagram showing how the displacement of an object is corrected in the image processing system. FIG. 4 is an explanatory diagram showing an example of a conventional method of setting the field of view of an image sensor. FIG. 5 is a circuit diagram showing an embodiment of the processing circuit in the image processing system.

1: dress shirt; 2: pocket; 3: illumination device; 4: image sensor; 5: image processing device; 6: material handling robot; 7: x-y-θ table; 8: automatic machine (automatic sewing machine); 10: plaid pattern; 11: lower object; 12: upper object; 13: field of view of the image sensor; 14: contour line of the upper object.

Patent applicant: Todoroki, Director-General of the Agency of Industrial Science and Technology

Claims (1)

[Claims] 1. A method of setting the field of view of an image sensor, characterized in that, in a superposition process of two or more objects having the same regular pattern, performed using an image sensor for inputting images and an image processing device for processing the images captured by the sensor, the field of view of the image sensor is set so as not to include the contour of the objects.
JP60012539A 1985-01-28 1985-01-28 Automatic sewing machine Expired - Lifetime JPH0638272B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP60012539A JPH0638272B2 (en) 1985-01-28 1985-01-28 Automatic sewing machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP60012539A JPH0638272B2 (en) 1985-01-28 1985-01-28 Automatic sewing machine

Publications (2)

Publication Number Publication Date
JPS61173391A true JPS61173391A (en) 1986-08-05
JPH0638272B2 JPH0638272B2 (en) 1994-05-18

Family

ID=11808132

Family Applications (1)

Application Number Title Priority Date Filing Date
JP60012539A Expired - Lifetime JPH0638272B2 (en) 1985-01-28 1985-01-28 Automatic sewing machine

Country Status (1)

Country Link
JP (1) JPH0638272B2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8186289B2 (en) 2008-02-28 2012-05-29 Brother Kogyo Kabushiki Kaisha Sewing machine and computer-readable medium storing control program executable on sewing machine
CN105332169A (en) * 2015-09-29 2016-02-17 广东溢达纺织有限公司 Cutting piece stacking locating mechanism and cutting piece stacking locating method
JP2017196207A (en) * 2016-04-28 2017-11-02 Juki株式会社 Sewing system
JP2018068696A (en) * 2016-10-31 2018-05-10 株式会社イノアックコーポレーション Manufacturing method of skin material with stitch
CN109072520A (en) * 2016-04-28 2018-12-21 株式会社松屋R&D Apparatus for sewing and method of sewing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5127737A (en) * 1974-08-31 1976-03-08 Hokushin Electric Works
JPS55125893A (en) * 1979-03-20 1980-09-29 Toshiba Machine Co Ltd Machine for automatically sewing pocket
JPS5632744A (en) * 1979-08-27 1981-04-02 Toshiba Corp Pattern recognizing apparatus
JPS5962982A (en) * 1982-10-02 1984-04-10 Omron Tateisi Electronics Co Collating device of seal impression
JPS601900A (en) * 1983-06-17 1985-01-08 松下電器産業株式会社 Device for mounting electronic part with recognition

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5127737A (en) * 1974-08-31 1976-03-08 Hokushin Electric Works
JPS55125893A (en) * 1979-03-20 1980-09-29 Toshiba Machine Co Ltd Machine for automatically sewing pocket
JPS5632744A (en) * 1979-08-27 1981-04-02 Toshiba Corp Pattern recognizing apparatus
JPS5962982A (en) * 1982-10-02 1984-04-10 Omron Tateisi Electronics Co Collating device of seal impression
JPS601900A (en) * 1983-06-17 1985-01-08 松下電器産業株式会社 Device for mounting electronic part with recognition

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8186289B2 (en) 2008-02-28 2012-05-29 Brother Kogyo Kabushiki Kaisha Sewing machine and computer-readable medium storing control program executable on sewing machine
US8522701B2 (en) 2008-02-28 2013-09-03 Brother Kogyo Kabushiki Kaisha Sewing machine and computer-readable medium storing control program executable on sewing machine
CN105332169A (en) * 2015-09-29 2016-02-17 广东溢达纺织有限公司 Cutting piece stacking locating mechanism and cutting piece stacking locating method
JP2017196207A (en) * 2016-04-28 2017-11-02 Juki株式会社 Sewing system
CN107338582A (en) * 2016-04-28 2017-11-10 Juki株式会社 Sewing system
CN109072520A (en) * 2016-04-28 2018-12-21 株式会社松屋R&D Apparatus for sewing and method of sewing
US10815594B2 (en) 2016-04-28 2020-10-27 Matsuya R&D Co., Ltd. Sewing device and sewing method
JP2018068696A (en) * 2016-10-31 2018-05-10 株式会社イノアックコーポレーション Manufacturing method of skin material with stitch

Also Published As

Publication number Publication date
JPH0638272B2 (en) 1994-05-18

Similar Documents

Publication Publication Date Title
KR920004956B1 (en) Drawing figure recognition apparatus
US6751361B1 (en) Method and apparatus for performing fixturing in a machine vision system
JP2018055429A (en) Object recognition device and object recognition method
JPS61173391A (en) Method for setting the field of view for image sensor
US20200027205A1 (en) Data structure for creating image-processing data and method for creating image-processing data
JP2005116765A5 (en)
JP2001300875A (en) Robot system
JP4073995B2 (en) Electronic component position detection method
JPH0335108A (en) Lead position recognition device
JP3358847B2 (en) Control device for component mounting machine
JPS6265436A (en) Method for controlling wafer position in die bonder
JPS61877A (en) Form recognizer
JP2003156311A (en) Method and apparatus for detection and registration of alignment mark
JP3763229B2 (en) Position detection method by image recognition
JPH091338A (en) Method and device for automatically recognizing weld seam
GB2224865A (en) Fabric piece location and positioning
JP2818423B2 (en) Image processing device and work positioning device for machine tool using the same
JPH0276651A (en) Parts center sensing method
CN108195319A (en) A kind of vision sloped position method with chamfering workpiece
JP2624322B2 (en) Method for detecting the position of a feature of an object
JPH03166073A (en) Detection of work position
JP3611457B2 (en) Ring-shaped object gripping method and alignment apparatus therefor
JP2002216130A (en) Image recognizing method, image recognizing device and parts mounting device
JPH0340183A (en) Corner detector
JPH0661101B2 (en) Position shift measuring device

Legal Events

Date Code Title Description
EXPY Cancellation because of completion of term