JP2023023706A - System for reading engraved character and method for reading engraved character - Google Patents


Info

Publication number
JP2023023706A
JP2023023706A JP2021129477A JP2021129477A JP2023023706A JP 2023023706 A JP2023023706 A JP 2023023706A JP 2021129477 A JP2021129477 A JP 2021129477A JP 2021129477 A JP2021129477 A JP 2021129477A JP 2023023706 A JP2023023706 A JP 2023023706A
Authority
JP
Japan
Prior art keywords
character
stamped
candidate
type
learning model
Prior art date
Legal status
Pending
Application number
JP2021129477A
Other languages
Japanese (ja)
Inventor
一記 箱石
Kazunori Hakoishi
Current Assignee
Toyota Motor East Japan Inc
Original Assignee
Toyota Motor East Japan Inc
Priority date
Filing date
Publication date
Application filed by Toyota Motor East Japan Inc
Priority to JP2021129477A
Publication of JP2023023706A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

To provide a system for reading an engraved character and a method for reading an engraved character capable of reading an engraved character with high accuracy.

SOLUTION: A system 1 for reading an engraved character includes character position identification means 20 that identifies the position of an engraved character, character by character, in a photographic image obtained by imaging means 10, and character type determination means 30 that determines which of the candidate character types the character type is for each single-character engraved character image at a position identified by the character position identification means 20. The character position identification means 20 identifies the position on the basis of a position learning model 21 obtained by learning, by machine learning, a feature of the area in which a single engraved character is located for each candidate character type. The character type determination means 30 determines the character type on the basis of a character type learning model 31 obtained by learning, by machine learning, a feature of the shape of a single engraved character for each candidate character type.

SELECTED DRAWING: Figure 1

Description

The present invention relates to a stamped character reading system and a stamped character reading method for reading stamped characters.

In an automobile engine assembly line, for example, the numerals of a metal assembly instruction stamped on an engine block are read, and the metal number is selected according to those numerals. The numerals have conventionally been read visually, but automatic character reading techniques have been developed in recent years. For example, Patent Document 1 describes a character recognition device that reads a numeral image with a CCD camera from a printed surface on which a stamped numeral string is printed, transfers it to a computer, extracts slit images from the numeral image by moving a slit narrower than the width of each stamped numeral, and recognizes the character type using a neural network. Patent Document 2 describes an accounting processing system that performs character recognition on a scanned image by OCR processing to generate text data.

[Patent Document 1] Japanese Patent No. 2972011
[Patent Document 2] Japanese Patent No. 6528147

However, while these image analysis techniques can recognize printed or handwritten characters whose color differs from that of the background, stamped characters have the same color as the background, so surface gloss, stains, and the like adversely affect the image analysis and the characters are sometimes not recognized as characters at all. In particular, when a character is double-struck or misaligned, attempting to extract the character outline by binarization picks up gloss, stains, and minute irregularities of the surface as noise, as shown in FIG. 4 for example, so the discrimination rate drops and recognizing the characters is difficult.

The present invention has been made in view of such problems, and an object thereof is to provide a stamped character reading system and a stamped character reading method capable of reading stamped characters with high accuracy.

A stamped character reading system of the present invention reads which of a plurality of candidate character types a stamped character to be read is, and includes: imaging means for photographing a stamped surface on which the stamped characters are provided; character position identification means for identifying, character by character, the position of each stamped character in the photographed image obtained by the imaging means; and character type determination means for determining which of the candidate character types each single-character stamped character image at a position identified by the character position identification means is. The character position identification means identifies positions based on a position learning model in which the features of the region where a single stamped character is located have been learned for each candidate character type by machine learning, and the character type determination means makes its determination based on a character type learning model in which the shape features of a single stamped character have been learned for each candidate character type by machine learning.

A stamped character reading method of the present invention reads which of a plurality of candidate character types a stamped character to be read is, and includes: an imaging step of photographing a stamped surface on which the stamped characters are provided; a character position identification step of identifying, character by character, the position of each stamped character in the photographed image obtained in the imaging step; and a character type determination step of determining which of the candidate character types each single-character stamped character image at a position identified in the character position identification step is. In the character position identification step, positions are identified using a position learning model in which the features of the region where a single stamped character is located have been learned for each candidate character type by machine learning, and in the character type determination step, the determination is made using a character type learning model in which the shape features of a single stamped character have been learned for each candidate character type by machine learning.

According to the present invention, the position of each stamped character is identified using a position learning model in which the features of the region where a single stamped character is located have been learned for each candidate character type by machine learning, and the character type of the stamped character image at the identified position is determined to be one of the candidate character types using a character type learning model in which the shape features of a single stamped character have been learned for each candidate character type by machine learning. Features including surface gloss and stains can therefore be detected and learned, and stamped characters can be read with high accuracy.

FIG. 1 is a diagram showing the configuration of a stamped character reading system according to an embodiment of the present invention.
FIG. 2 is a diagram showing an example of the hardware configuration of the character position identification means and the character type determination means.
FIG. 3 is a diagram showing the flow of a stamped character reading method according to an embodiment of the present invention.
FIG. 4 shows an image obtained by binarizing stamped characters.

Hereinafter, an embodiment of the present invention will be described in detail with reference to the drawings.

FIG. 1 shows the configuration of a stamped character reading system 1 according to an embodiment of the present invention. The stamped character reading system 1 reads which of a plurality of candidate character types a stamped character to be read is. A stamped character is a character formed by die stamping. The reading target may be a single character or a plurality of characters. In the present embodiment, the case of reading stamped characters provided on a workpiece M such as an engine block in a production line such as an engine assembly line will be described as an example.

The stamped character reading system 1 includes, for example: imaging means 10 for photographing the stamped surface on which the stamped characters to be read are provided; character position identification means 20 for identifying, character by character, the position of each stamped character in the photographed image obtained by the imaging means 10; and character type determination means 30 for determining which of the candidate character types each single-character stamped character image at a position identified by the character position identification means 20 is.

The imaging means 10 is constituted by a camera such as a CCD camera, for example, and is fixed in place so as to photograph the stamped characters of a workpiece M conveyed along the production line. The imaging range of the imaging means 10 is preferably set, for example, so as to include the entire reading target.

The character position identification means 20 can be constituted by a computer, for example, and is configured to function as the character position identification means 20 by executing a program. The character position identification means 20 is connected to the imaging means 10, for example, and preferably identifies the position of each stamped character, character by character, in the photographed image obtained by the imaging means 10, crops the single-character stamped character image at each identified position, and saves it. The position of a stamped character is identified based on a position learning model 21 in which the features of the region where a single stamped character is located have been learned for each candidate character type by machine learning.

Specifically, the character position identification means 20 has, for example: the position learning model 21, in which the features of the region where a single stamped character is located have been learned for each candidate character type by machine learning; a detection unit 22 that uses the position learning model 21 to detect each stamped character in the photographed image and crop it out as a stamped character image; and a storage unit 23 that saves each stamped character image cropped by the detection unit 22 together with its position information. The character position identification means 20 also preferably has, for example, position learning model generation means 24 for generating the position learning model 21.
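One concrete form this detection step could take is sketched below, assuming the position learning model 21 is trained as a single-class object detector (here a torchvision Faster R-CNN; the patent only specifies a CNN, so the detector choice, the weight file name, and the score threshold are illustrative assumptions, not part of the disclosure):

```python
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Hypothetical sketch: position learning model 21 as a one-class character detector.
# 2 classes = background + "stamped character"; at this stage only the location matters,
# not which candidate character type it is.
position_model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=2)
position_model.load_state_dict(torch.load("position_learning_model_21.pt"))  # assumed file name
position_model.eval()

def detect_and_crop(image_path: str, score_threshold: float = 0.7):
    """Detection unit 22 (sketch): find each stamped character and crop it out,
    keeping its position information (bounding box) for later assembly."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        output = position_model([to_tensor(image)])[0]
    crops = []
    for box, score in zip(output["boxes"], output["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = [int(v) for v in box.tolist()]
        crops.append(((x1, y1, x2, y2), image.crop((x1, y1, x2, y2))))
    return crops  # the storage unit 23 would persist these crops with their boxes
```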

The position learning model 21 can be generated by the position learning model generation means 24. The position learning model generation means 24 is preferably configured, for example, to take as position learning images the regions designated by enclosing each stamped character in a frame or the like in photographed images of stamped surfaces provided with stamped characters of the plurality of candidate character types, to extract the feature values of the position learning images by deep learning, and to learn them as the feature values of a position where a stamped character of any one of the candidate character types is present. For example, when the candidate character types are the seven numerals 0 to 6, it is preferable to use the framed single-character regions of photographed images of stamped surfaces provided with the seven stamped numerals 0 to 6 as position learning images and to have them learned as positions where any single stamped character is present. The stamped characters of the candidate character types may be provided on the same stamped surface and photographed, or provided on different stamped surfaces and photographed. A convolutional neural network (CNN) is preferably used for the deep learning.
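A minimal training sketch for such a position learning model follows, under the assumption that the framed regions are available as bounding-box annotations; the data-loader shape, optimizer settings, and helper names are assumptions and not from the patent:

```python
import torch
import torchvision

# Hypothetical training loop for position learning model 21.
# Each sample: the full photographed image plus one box per stamped character.
# All boxes share label 1 ("some stamped character is here"), regardless of which numeral it is.
def make_target(boxes_xyxy):
    return {
        "boxes": torch.tensor(boxes_xyxy, dtype=torch.float32),
        "labels": torch.ones(len(boxes_xyxy), dtype=torch.int64),
    }

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_epoch(loader):
    """`loader` is assumed to yield (list_of_image_tensors, list_of_box_lists)."""
    model.train()
    for images, box_lists in loader:
        targets = [make_target(b) for b in box_lists]
        loss_dict = model(images, targets)   # detection losses (classification + box regression)
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```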

The position learning model 21 thus detects and learns by itself the feature values, combined over the candidate character types, of a region where a single stamped character is located, including the gloss, stains, and the like of the stamped surface. In other words, the character position identification means 20 is configured so that the positions of stamped characters of the plurality of candidate character types can be identified with a single position learning model 21. The position learning images preferably include the various forms that the stamped characters to be read can take, considering in particular the range of variation of the parts flowing through the relevant production line, for example images of double-struck characters, images of characters whose orientation is rotated (slanted, upside down, and so on), and images of characters with different brightness.

The character type determination means 30 can be constituted by a computer, for example, and is configured to function as the character type determination means 30 by executing a program. The character type determination means 30 is connected to the character position identification means 20, for example, and is configured to determine the character type based on the stamped character image at a position identified by the character position identification means 20 and on a character type learning model 31 in which the shape features of a single stamped character have been learned for each candidate character type by machine learning.

Specifically, the character type determination means 30 has, for example: the character type learning model 31, in which the shape features of a single stamped character have been learned for each candidate character type by machine learning; a determination unit 32 that inputs a stamped character image into the character type learning model 31 and determines, based on the output values thereby obtained, which of the candidate character types the character type is; and a display unit 33, such as a display, that shows the character type determined by the determination unit 32. The character type determination means 30 also preferably has, for example, character type learning model generation means 34 for generating the character type learning model 31.

The character type learning model 31 can be generated by the character type learning model generation means 34. The character type learning model generation means 34 is preferably configured, for example, to take as character type learning images the regions designated by enclosing each stamped character in a frame or the like in photographed images of stamped surfaces provided with stamped characters of the plurality of candidate character types, to extract the feature values of the character type learning images for each candidate character type by deep learning, and to learn those feature values linked to the correct candidate character type. For example, when the candidate character types are the seven numerals 0 to 6, it is preferable to use the framed single-character regions of photographed images of stamped surfaces provided with the seven stamped numerals 0 to 6 as character type learning images and to have them learned linked to the correct candidate character type. The same images as the position learning images are preferably used as the character type learning images. A convolutional neural network is preferably used for the deep learning.
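The character type learning model 31 could, for instance, be a small CNN classifier over the seven candidate numerals; the architecture, input size, and hyperparameters in this sketch are assumptions for illustration only:

```python
import torch
from torch import nn

# Hypothetical character type learning model 31: a small CNN that maps a
# 64x64 grayscale crop of one stamped character to scores over 7 candidate types (0-6).
char_type_model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, 7),            # one score per candidate character type
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(char_type_model.parameters(), lr=1e-3)

def train_step(crops: torch.Tensor, labels: torch.Tensor) -> float:
    """`crops`: (N, 1, 64, 64) cropped character images; `labels`: the correct
    candidate character type (0-6) linked to each crop."""
    logits = char_type_model(crops)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```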

The character type learning model 31 thus detects and learns by itself the shape feature values of the stamped characters for each of the plurality of candidate character types, including the gloss, stains, and the like of the stamped surface. In other words, the character type determination means 30 is configured so that which of the candidate character types a character is can be determined with a single character type learning model 31. Like the position learning images, the character type learning images preferably include the various forms that the stamped characters to be read can take, considering in particular the range of variation of the parts flowing through the relevant production line, for example images of double-struck characters, images of characters whose orientation is rotated (slanted, upside down, and so on), and images of characters with different brightness.
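Brightness and orientation variation of the kind described here can also be approximated synthetically when augmenting the single-character learning images; the transform choices and parameter values below are assumptions (double-struck or misaligned strikes, by contrast, are physical variations that would normally be collected from actual line parts rather than synthesized):

```python
from torchvision import transforms

# Hypothetical augmentation for the character learning images: vary brightness/contrast
# and rotate slightly, approximating the lighting and orientation spread on the line.
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.3),  # different lighting / gloss
    transforms.RandomRotation(degrees=15),                 # slanted characters
    transforms.RandomApply([transforms.RandomRotation(degrees=(180, 180))], p=0.1),  # occasional upside-down
    transforms.ToTensor(),
])
```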

The determination unit 32 is connected, for example, to the storage unit 23, the character type learning model 31, and the display unit 33. The determination unit 32 preferably inputs, for example, a stamped character image saved in the storage unit 23 into the character type learning model 31 and, based on the output values obtained from the character type learning model 31, for example the score value of each candidate character type, determines that the character is the candidate character type with the highest degree of match. When no candidate character type has a high degree of match, or when two or more candidate character types have comparable degrees of match, the character type may be determined to be unknown.
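The decision rule of the determination unit 32 might be sketched as follows; the score and margin thresholds are illustrative assumptions, not values given in the description:

```python
import torch

CANDIDATES = "0123456"  # the seven candidate character types of the example

def judge(logits: torch.Tensor,
          min_score: float = 0.90,
          min_margin: float = 0.10) -> str:
    """Determination unit 32 (sketch): pick the candidate with the highest score,
    or report 'unknown' when no candidate scores highly enough or when the top
    two candidates are too close to call."""
    scores = torch.softmax(logits, dim=-1)   # per-candidate score values
    top2 = torch.topk(scores, k=2)
    best, second = top2.values.tolist()
    if best < min_score or (best - second) < min_margin:
        return "unknown"
    return CANDIDATES[top2.indices[0].item()]
```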

The display unit 33 is preferably configured to display the stamped characters of the stamped surface based, for example, on the character types of the stamped character images determined by the determination unit 32 and on the position information of the stamped character images.

FIG. 2 shows an example of the hardware configuration of the character position identification means 20 and the character type determination means 30. The character position identification means 20 and the character type determination means 30 each have, for example, a CPU (Central Processing Unit) 41, a ROM (Read Only Memory) 42, a RAM (Random Access Memory) 43, an HDD (hard disk drive) 44, and an operation interface (operation I/F) 45. The CPU 41 executes various kinds of processing in accordance with programs recorded in the ROM 42 or programs loaded from the HDD 44 into the RAM 43. The RAM 43 also stores, as appropriate, data and the like that the CPU 41 needs in order to execute the various kinds of processing. Various data are stored in the HDD 44.

The stamped character reading system 1 is used, for example, as follows. FIG. 3 shows the flow of a stamped character reading method using the stamped character reading system 1. First, as a preparation step, the position learning model generation means 24 generates the position learning model 21, and the character type learning model generation means 34 generates the character type learning model 31 (preparation step; step S110).

Specifically, for example, stamped surfaces provided with stamped characters of the plurality of candidate character types are photographed by the imaging means 10, the regions designated by enclosing each stamped character in a frame or the like in the obtained photographed images are taken as position learning images, the feature values of the position learning images are extracted by deep learning, and the position learning model 21 is generated by learning them as the feature values of a position where a stamped character of any one of the candidate character types is present. In addition, for example, the same images as the position learning images are used as character type learning images, the feature values of the character type learning images are extracted for each candidate character type by deep learning, and the character type learning model 31 is generated by learning those feature values linked to the correct candidate character type. For example, when the candidate character types are the seven numerals 0 to 6, the framed single-character regions of photographed images of stamped surfaces provided with the seven stamped numerals 0 to 6 are prepared as the character position learning images and the character type learning images.
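Since the same framed single-character regions serve as both position learning images and character type learning images, one annotation record per character can feed both model generations; the record format below is a hypothetical sketch, not part of the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical annotation record: one per framed character in a photographed image.
# The same records supply boxes for position learning (label ignored) and
# labelled crops for character type learning (crop the box, keep the label).
@dataclass
class CharacterAnnotation:
    image_path: str
    box: Tuple[int, int, int, int]  # frame around one stamped character (x1, y1, x2, y2)
    label: int                      # correct candidate character type, 0-6

def split_for_training(annotations: List[CharacterAnnotation]):
    position_targets = {}   # image_path -> list of boxes (for position learning model 21)
    char_type_samples = []  # (image_path, box, label) (for character type learning model 31)
    for a in annotations:
        position_targets.setdefault(a.image_path, []).append(a.box)
        char_type_samples.append((a.image_path, a.box, a.label))
    return position_targets, char_type_samples
```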

Then, in a production line such as an engine assembly line, the stamped characters provided on a workpiece M such as an engine block are read as follows. First, the imaging means 10 photographs the stamped surface of the workpiece M on which stamped characters, for example the numerals "32232", are provided (imaging step; step S121). Next, for example, the character position identification means 20 identifies the position of each stamped character, character by character, in the photographed image obtained by the imaging means 10, using the position learning model 21 (character position identification step; step S122). Specifically, for example, the detection unit 22 detects each stamped character in the photographed image using the position learning model 21, crops it out as a stamped character image, and saves the cropped stamped character image together with its position information in the storage unit 23.

Subsequently, for example, the character type determination means 30 determines, for each single-character stamped character image at a position identified by the character position identification means 20, which of the candidate character types it is, using the character type learning model 31 (character type determination step; step S123). Specifically, for example, the determination unit 32 inputs the stamped character image into the character type learning model 31, determines from the output values obtained from the character type learning model 31 that the character is the candidate character type with the highest degree of match, and shows the result on the display unit 33. For example, when the score values of the candidate character types obtained from the character type learning model 31 are 0% for "0", 0% for "1", 98% for "2", 1% for "3", 0% for "4", 1% for "5", and 0% for "6", the character is determined to be the numeral 2, which has the highest score value. This is performed for each stamped character image, and the display unit 33 displays the determined character types, based on the position information of the stamped character images, for example as "32232".
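Assembling the displayed string from the per-character determinations and their position information could look like the following sketch, assuming the characters run left to right on the stamped surface (the boxes are illustrative pixel coordinates; only the 98%-for-"2" example follows the description above):

```python
# Hypothetical assembly of the displayed string from per-character results.
# Each entry: (bounding box of the crop, determined character type).
results = [
    ((260, 40, 300, 100), "2"),   # e.g. the crop whose score for "2" was 98%
    ((120, 40, 160, 100), "2"),
    ((50, 40, 90, 100),   "3"),
    ((190, 40, 230, 100), "2"),
    ((330, 40, 370, 100), "3"),
]

# Order by the x coordinate of each box (left to right on the stamped surface)
# and concatenate the determined character types for the display unit 33.
results.sort(key=lambda r: r[0][0])
display_text = "".join(char for _, char in results)
print(display_text)  # -> "32232"
```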

In an engine assembly line, when the stamped characters provided on engine blocks were read both with the stamped character reading system 1 and with OCR, the reading rate by OCR was 98%, whereas the reading rate by the stamped character reading system 1 was 100%.

As described above, according to the present embodiment, the position of each stamped character is identified using the position learning model 21, in which the features of the region where a single stamped character is located have been learned for each candidate character type by machine learning, and the character type of the stamped character image at the identified position is determined to be one of the candidate character types using the character type learning model 31, in which the shape features of a single stamped character have been learned for each candidate character type by machine learning. Features including surface gloss and stains can therefore be detected and learned, and stamped characters can be read with high accuracy.

Although the present invention has been described above with reference to an embodiment, the present invention is not limited to the above embodiment and can be modified in various ways. For example, although each component was described concretely in the above embodiment, the specific structure and shape of each component may differ, not all of the components described above need be provided, and other components may be provided.

In the above embodiment, the case of reading stamped characters provided on an engine block in an engine assembly line was described concretely, but the invention can also be applied to reading stamped characters in other production lines. Furthermore, although numerals were described concretely as the candidate character types in the above embodiment, the invention can also be applied to character types other than numerals.

REFERENCE SIGNS LIST
1: stamped character reading system; 10: imaging means; 20: character position identification means; 21: position learning model; 22: detection unit; 23: storage unit; 24: position learning model generation means; 30: character type determination means; 31: character type learning model; 32: determination unit; 33: display unit; 34: character type learning model generation means; 41: CPU; 42: ROM; 43: RAM; 44: HDD; 45: operation interface

Claims (2)

1. A stamped character reading system that reads which of a plurality of candidate character types a stamped character to be read is, the system comprising:
imaging means for photographing a stamped surface on which the stamped character is provided;
character position identification means for identifying, character by character, the position of the stamped character in a photographed image obtained by the imaging means; and
character type determination means for determining, for each single-character stamped character image at a position identified by the character position identification means in the photographed image obtained by the imaging means, which of the candidate character types the character type is, wherein
the character position identification means identifies the position based on a position learning model in which a feature of a region where a single stamped character is located has been learned for each of the candidate character types by machine learning, and
the character type determination means makes the determination based on a character type learning model in which a feature of the shape of a single stamped character has been learned for each of the candidate character types by machine learning.
2. A stamped character reading method that reads which of a plurality of candidate character types a stamped character to be read is, the method comprising:
an imaging step of photographing a stamped surface on which the stamped character is provided;
a character position identification step of identifying, character by character, the position of the stamped character in a photographed image obtained in the imaging step; and
a character type determination step of determining, for each single-character stamped character image at a position identified in the character position identification step in the photographed image obtained in the imaging step, which of the candidate character types the character type is, wherein
in the character position identification step, the position is identified using a position learning model in which a feature of a region where a single stamped character is located has been learned for each of the candidate character types by machine learning, and
in the character type determination step, the determination is made using a character type learning model in which a feature of the shape of a single stamped character has been learned for each of the candidate character types by machine learning.
JP2021129477A (priority date 2021-08-06, filing date 2021-08-06): System for reading engraved character and method for reading engraved character; published as JP2023023706A; legal status: Pending

Priority Applications (1)

Application Number: JP2021129477A; Priority Date: 2021-08-06; Filing Date: 2021-08-06; Title: System for reading engraved character and method for reading engraved character

Applications Claiming Priority (1)

Application Number: JP2021129477A; Priority Date: 2021-08-06; Filing Date: 2021-08-06; Title: System for reading engraved character and method for reading engraved character

Publications (1)

Publication Number: JP2023023706A; Publication Date: 2023-02-16

Family

ID=85204189

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2021129477A Pending JP2023023706A (en) 2021-08-06 2021-08-06 System for reading engraved character and method for reading engraved character

Country Status (1)

Country Link
JP (1) JP2023023706A (en)
