JP2017215859A - Character string recognition device, method and program - Google Patents

Character string recognition device, method and program

Info

Publication number
JP2017215859A
Authority
JP
Japan
Prior art keywords
character string
character
image
recognition
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2016110270A
Other languages
Japanese (ja)
Other versions
JP6611346B2 (en)
Inventor
Xin Hao Liu (新豪 劉)
Takahito Kawanishi (川西 隆仁)
Xiaomeng Wu (小萌 武)
Kunio Kashino (柏野 邦夫)
Kaoru Hiramatsu (薫 平松)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP2016110270A priority Critical patent/JP6611346B2/en
Publication of JP2017215859A publication Critical patent/JP2017215859A/en
Application granted granted Critical
Publication of JP6611346B2 publication Critical patent/JP6611346B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Character Discrimination (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a character string recognition device capable of accurately recognizing the character string represented by a character string image.
SOLUTION: A character recognition unit 22 scans a window over the character string image to extract partial images, and inputs each extracted partial image into a CNN with a Network-in-Network structure to compute a CNN score matrix representing the score of each character for each partial image. A character string candidate recognition unit 26 recognizes character string candidates represented by the character string image on the basis of the CNN score matrix.
SELECTED DRAWING: Figure 1

Description

The present invention relates to a character string recognition device, method, and program, and more particularly to a character string recognition device, method, and program for recognizing a character string represented by a character string image.

HOG features have traditionally been used as the conventional technique for character detection and classification. Techniques that learn character-specific intermediate features, and methods that represent attributes in subspaces, have also been proposed.

More recently, features based on DNNs have also been proposed; for example, an unsupervised CNN feature (Non-Patent Document 1).

[Non-Patent Document 1]: Coates et al., "Text Detection and Character Recognition in Scene Images with Unsupervised Feature Learning", ICCV 2011

An object of the present invention is to provide a character string recognition device, method, and program that accurately recognize the character string represented by a character string image.

To achieve the above object, a character string recognition device according to the present invention recognizes a character string represented by a character string image, and comprises a character recognition unit that scans a window for extracting partial images over the character string image, inputs each partial image extracted by the window into a pre-trained CNN (Convolutional Neural Network) having a Network in Network structure for recognizing characters, and obtains a CNN score matrix representing the score of each character for each of the partial images.

A character string recognition method according to the present invention is a method in a character string recognition device that recognizes a character string represented by a character string image, in which a character recognition unit scans a window for extracting partial images over the character string image, inputs each partial image extracted by the window into a pre-trained CNN (Convolutional Neural Network) having a Network in Network structure for recognizing characters, and obtains a CNN score matrix representing the score of each character for each of the partial images; and a character string candidate recognition unit recognizes character string candidates represented by the character string image on the basis of the CNN score matrix obtained by the character recognition unit.

A program according to the present invention causes a computer to function as each unit of the character string recognition device described above.

As described above, according to the character string recognition device, method, and program of the present invention, each partial image extracted by a window scanned over the character string image is input into a CNN having a Network in Network structure, and the character string candidates represented by the character string image are recognized on the basis of the resulting CNN score matrix, so that the character string represented by the character string image can be recognized accurately.

Fig. 1 is a block diagram showing the configuration of a character string recognition device according to the first and second embodiments of the present invention.
Fig. 2 shows a CNN with a Network in Network structure.
Fig. 3 shows an RNN.
Fig. 4 illustrates the method of re-verifying scores.
Fig. 5 is a flowchart showing the character string recognition processing routine in the device according to the first embodiment of the present invention.
Fig. 6 shows the processing flow of the character string recognition device according to the second embodiment.
Fig. 7 illustrates the method of recognizing a character string using a search graph.
Fig. 8 is a flowchart showing the character string recognition processing routine in the device according to the second embodiment of the present invention.
Fig. 9 shows experimental results.
Fig. 10 shows the processing flow of a character string recognition device according to another embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.

<Outline of Embodiments of the Present Invention>
Extracting a character region from an input image is called detection. The process of determining which word the detected character region image corresponds to is called recognition. Accordingly, in-image character recognition can be divided into the following three tasks.

1) Detect the location of the character region.

2) Take the extracted character region as input and recognize the characters drawn in it.

3) Handle the end-to-end pipeline from image input to recognition output.

This embodiment addresses task 2) above. It is characterized by the following three points.

1) A high-performance character classifier based on a CNN.

2) A character string labeling technique based on a WFST that combines a dictionary and a language model.

3) A character string recognition technique based on an RNN sequence classifier.

<First Embodiment>
<System configuration of the character string recognition device>
FIG. 1 is a block diagram showing a character string recognition device 100 according to the first embodiment of the present invention. The character string recognition device 100 is a computer comprising a CPU, a RAM, and a ROM storing a program for executing the character string recognition processing routine described later, and is functionally configured as follows.

As shown in FIG. 1, the character string recognition device 100 according to this embodiment comprises an input unit 10, a computation unit 20, and an output unit 40.

The input unit 10 receives a character string image as input. The character string image is an image obtained by cutting out a character string region from an image, and its size is normalized so that the number of vertical pixels equals a predetermined number. Here, the predetermined number is the same as the size of the window described later.

The computation unit 20 comprises a character recognition unit 22, a character string candidate recognition unit 24, and a character string recognition unit 26.

The character recognition unit 22 scans a window for extracting partial images over the input character string image, inputs each partial image extracted by the window into a pre-trained CNN (Convolutional Neural Network) having a Network in Network structure for recognizing characters, and obtains a CNN score matrix representing the score of each character for each of the partial images.

In this embodiment, the character recognition task uses a CNN with the Network in Network structure shown in FIG. 2: from the partial image extracted by the window at each scanning step, the CNN computes a score for each of, for example, 62 classes, yielding a CNN score matrix in which the 62 class scores are arranged for each scanned window position.

Here, the 62 classes are the 10 digits plus the 26 letters of the alphabet in both upper and lower case.
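The patent gives no code for this sliding-window scoring, so the following is only a minimal sketch of the idea; `cnn_scores` is a stand-in for the trained Network-in-Network CNN, and the step size and dummy scoring are illustrative assumptions.

```python
NUM_CLASSES = 62  # 10 digits + 26 letters in upper and lower case

def cnn_scores(window):
    # Stand-in for the trained Network-in-Network CNN:
    # returns one (dummy) score per character class.
    s = sum(sum(row) for row in window)
    return [((s + k) % 7) / 7.0 for k in range(NUM_CLASSES)]

def score_matrix(image, win_w=32, step=4):
    # Scan a window across the normalized string image; each window
    # position contributes one row of 62 class scores.
    width = len(image[0])
    rows = []
    for x in range(0, width - win_w + 1, step):
        window = [row[x:x + win_w] for row in image]
        rows.append(cnn_scores(window))
    return rows

image = [[0] * 100 for _ in range(32)]  # a 32-pixel-tall string image
M = score_matrix(image)                 # one row of 62 scores per position
```

Each row of `M` then corresponds to one scanning step of the window, matching the CNN score matrix described above.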

The CNN with the Network in Network structure takes a 32×32 grayscale image as input and has three convolutional layers connected to micro networks (see Non-Patent Document 2: M. Lin et al., "Network in Network", ICLR 2014) and one fully connected layer with 128 neurons.

The convolutional layers are connected by small multilayer perceptron networks, and the fully connected layer is a global average pooling layer that directly links classification categories to feature maps.

The CNN is trained in advance on training data.

The character string candidate recognition unit 24 applies an enhancement process to the CNN score matrix obtained by the character recognition unit 22, suppressing all values other than local maxima, and recognizes the character string candidates represented by the character string image from the enhanced CNN score matrix. Specifically, the enhanced CNN score matrix is input into an RNN (Recurrent Neural Network) trained in advance for recognizing character string candidates, and each character string candidate represented by the character string image is obtained.
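The enhancement step can be sketched as follows. This is not the patent's code, only one straightforward reading of "suppress values other than local maxima", applied along the scan axis independently for each character class:

```python
def enhance(M):
    # For each character class (column), keep a score only where it is a
    # local maximum along the scan axis; suppress everything else to 0.
    T = [list(col) for col in zip(*M)]  # class-major view: T[c][t]
    for c, row in enumerate(T):
        kept = []
        for t, v in enumerate(row):
            left = row[t - 1] if t > 0 else float("-inf")
            right = row[t + 1] if t + 1 < len(row) else float("-inf")
            kept.append(v if left <= v >= right else 0.0)
        T[c] = kept
    return [list(col) for col in zip(*T)]  # back to window-major

# Three window positions, two classes: only the per-class peaks survive.
M = [[0.1, 0.9], [0.5, 0.4], [0.2, 0.8]]
E = enhance(M)
```

After this, each class contributes sharp, isolated peaks rather than a smear of responses across neighboring window positions.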

In this embodiment, a bidirectional RNN sequence classifier using the LSTM shown in FIG. 3 is used as the RNN.

The RNN is trained in advance on training data.

The character string recognition unit 26 recognizes the character string represented by the character string image by removing erroneous candidates from the character string candidates recognized by the character string candidate recognition unit 24. In this embodiment, erroneous candidates are removed by re-verifying, for each recognized candidate, the score obtained from the CNN score matrix.

Specifically, based on the CNN score matrix M, a score S(W, M) is computed for each character string candidate W according to the following formula, and re-verification is performed.

Here, W = {c_1, c_2, …, c_N}, and p_i + Δ denotes a position around the center p_i of character c_i (see FIG. 4). B = [−δ, δ] is a parameter indicating the width; δ is set to 5 in the experiments.

Based on the scores S(W, M) computed for the candidates, the character string candidate W with the maximum score S(W, M) is taken as the recognition result of the character string represented by the character string image.
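The patent's exact formula for S(W, M) is not reproduced in this text, so the sketch below is only one plausible reading of the surrounding description: sum, over the characters c_i of W, the best CNN score for c_i within the band B = [−δ, δ] around its center p_i. The function and its inputs are assumptions, not the patented definition.

```python
DELTA = 5  # δ: half-width of the band B = [-δ, δ] used in the experiments

def reverify_score(word, centers, M):
    # word:    character class indices c_1..c_N of candidate W
    # centers: estimated center positions p_1..p_N (row indices of M)
    # M:       CNN score matrix, M[t][c] = score of class c at position t
    total = 0.0
    for c, p in zip(word, centers):
        band = range(max(0, p - DELTA), min(len(M), p + DELTA + 1))
        total += max(M[t][c] for t in band)
    return total
```

Under this reading, the candidate with the largest re-verified score would be kept, matching the selection rule stated above.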

The output unit 40 outputs the recognition result of the character string represented by the character string image.

<Operation of the character string recognition device>
Next, the operation of the character string recognition device 100 according to this embodiment will be described. When a character string image whose size has been normalized so that the number of vertical pixels equals the predetermined number is input to the character string recognition device 100, the device executes the character string recognition processing routine shown in FIG. 5.

First, in step S100, a window is scanned over the input character string image, each partial image extracted by the window is input into the CNN (Convolutional Neural Network) with the Network in Network structure, and a CNN score matrix representing the score of each character for each partial image is obtained.

In step S102, the enhancement process is applied to the CNN score matrix obtained in step S100.

In step S104, the enhanced score matrix is input into the RNN (Recurrent Neural Network) trained in advance for recognizing character string candidates, and each character string candidate represented by the character string image is obtained.

In step S106, based on the CNN score matrix M obtained in step S100, the score S(W, M) is computed and re-verified for each character string candidate W obtained in step S104; the candidate W with the maximum score S(W, M) is output by the output unit 40 as the recognition result of the character string represented by the character string image, and the character string recognition processing routine ends.

As described above, according to the character string recognition device of the first embodiment of the present invention, each partial image extracted by a window scanned over the character string image is input into a CNN with the Network In Network structure, and the character string candidates represented by the character string image are recognized on the basis of the resulting CNN score matrix, so that the character string represented by the character string image can be recognized accurately.

Furthermore, since CNN features are robust to various kinds of noise and deformation, the character string represented by the character string image can be recognized accurately using the CNN score matrix.

In addition, by obtaining character string candidates with an RNN, contextual information can be fully exploited, and candidates can be obtained without presupposing a language model or a vocabulary dictionary.

<Second Embodiment>
<System configuration of the character string recognition device>
Next, a second embodiment will be described. Since the character string recognition device of the second embodiment has the same configuration as that of the first embodiment, the same reference numerals are used and its description is omitted.

The second embodiment differs from the first in that, as shown in FIG. 6, a search graph based on WFSTs (Weighted Finite State Transducers) is created for the character string candidates, and the recognition result of the character string represented by the character string image is obtained from it.

In the second embodiment, the character string recognition unit 26 generates, from the character string candidates recognized by the character string candidate recognition unit 24, a search graph representing the character strings, obtained from a language model and a vocabulary dictionary, that correspond to each candidate, and recognizes the character string represented by the character string image on the basis of the generated search graph.

Specifically, by associating each candidate with the correct character strings in the vocabulary dictionary, a vocabulary-dictionary-based WFST L is generated, representing the character strings obtained from the dictionary that correspond to each candidate; likewise, by associating each candidate with the correct character strings in the language model, a language-model-based WFST G is generated, representing the character strings obtained from the language model that correspond to each candidate. The WFSTs are then composed into a single efficient search graph.

Using the created search graph, the character string with the shortest edit distance to the candidate is found and taken as the recognition result of the character string represented by the character string image.

For example, as shown in FIG. 7, for the candidate "POCHIETL", finding the character string with the shortest edit distance yields the recognition result "POCKET".
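The composed WFST search itself is involved, but the edit-distance criterion behind the "POCHIETL" → "POCKET" example can be illustrated directly. This sketch matches a candidate against a small hypothetical lexicon with plain Levenshtein distance rather than building the WFST graph:

```python
def edit_distance(a, b):
    # Levenshtein distance by dynamic programming, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def closest_word(candidate, lexicon):
    # Return the lexicon entry with the shortest edit distance.
    return min(lexicon, key=lambda w: edit_distance(candidate, w))

# The lexicon here is made up for illustration.
result = closest_word("POCHIETL", ["POCKET", "POCHETTE", "PACKET"])  # "POCKET"
```

The WFST formulation reaches the same kind of answer, but lets the edit costs be weighted by the language model and searched efficiently over a large vocabulary.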

<Operation of the character string recognition device>
Next, the character string recognition processing routine of the second embodiment will be described with reference to FIG. 8. Steps identical to those of the first embodiment are given the same reference numerals and their detailed description is omitted.

First, in step S100, a window is scanned over the input character string image, and each partial image extracted by the window is input into the CNN with the Network in Network structure to obtain the CNN score matrix.

In step S102, the enhancement process is applied to the CNN score matrix obtained in step S100.

In step S104, the enhanced score matrix is input into the RNN, and each character string candidate represented by the character string image is obtained.

In step S206, a search graph combining the vocabulary-dictionary-based WFST L and the language-model-based WFST G is created from the character string candidates W obtained in step S104. In step S208, using the search graph created in step S206, the character string with the shortest edit distance to the candidate is found and output by the output unit 40 as the recognition result of the character string represented by the character string image, and the character string recognition processing routine ends.

As described above, according to the character string recognition device of the second embodiment of the present invention, by creating a search graph based on multiple WFSTs for the character string candidates and obtaining the recognition result of the character string represented by the character string image from it, a character string that takes both the vocabulary dictionary and the language model into account can be obtained efficiently as the recognition result.

<Example>
An evaluation experiment was conducted to verify the effect of the character string recognition method of the second embodiment described above, using the existing ICDAR 2003, SVT-WORD, and IIIT5K data sets. The comparison targets were the methods described in ICCV 2011 (Non-Patent Document 3), BMVC 2012 (Non-Patent Document 4), ICPR 2012 (Non-Patent Document 5), CVPR 2014 (Non-Patent Document 6), ICLR 2014 (Non-Patent Document 7), ECCV 2014 (Non-Patent Document 8), and PAMI 2014 (Non-Patent Document 9).

[Non-Patent Document 3]: Kai Wang, Boris Babenko, and Serge Belongie, "End-to-end scene text recognition," in ICCV. IEEE, 2011, pp. 1457-1464.
[Non-Patent Document 4]: Anand Mishra, Karteek Alahari, and C. V. Jawahar, "Scene text recognition using higher order language priors," in BMVC, 2012.
[Non-Patent Document 5]: Tao Wang, David J. Wu, Andrew Coates, and Andrew Y. Ng, "End-to-end text recognition with convolutional neural networks," in Pattern Recognition (ICPR), 2012 21st International Conference on. IEEE, 2012, pp. 3304-3308.
[Non-Patent Document 6]: Cong Yao, Xiang Bai, Baoguang Shi, and Wenyu Liu, "Strokelets: A learned multi-scale representation for scene text recognition," in Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on. IEEE, 2014, pp. 4042-4049.
[Non-Patent Document 7]: Ouais Alsharif and Joelle Pineau, "End-to-end text recognition with hybrid HMM maxout models," in ICLR, 2014.
[Non-Patent Document 8]: Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman, "Deep features for text spotting," in Computer Vision - ECCV 2014, pp. 512-528. Springer, 2014.
[Non-Patent Document 9]: Jon Almazán, Albert Gordo, Alicia Fornés, and Ernest Valveny, "Word spotting and recognition with embedded attributes," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 36, no. 12, pp. 2552-2566, 2014.

The character string recognition results are shown in the table in FIG. 9. The method of the second embodiment was found to give the best performance on data sets with a lot of noise and deformation.

The present invention is not limited to the embodiments described above, and various modifications and applications are possible without departing from the gist of the invention.

For example, as shown in FIG. 10, character string candidates may be obtained without using the RNN, by interpreting the enhanced CNN score matrix in a manner that respects order consistency.
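The patent does not spell out this RNN-free interpretation, so the sketch below shows only one simple order-consistent reading of the enhanced score matrix: scan left to right, emit the best class wherever a peak clears a threshold, and collapse immediate repeats. The threshold and the repeat-collapsing are assumptions, not the patent's procedure.

```python
def greedy_decode(E, alphabet, threshold=0.5):
    # E: enhanced CNN score matrix (one row of class scores per window
    # position, with non-peaks already suppressed to 0).
    out = []
    for row in E:
        best = max(range(len(row)), key=row.__getitem__)
        if row[best] >= threshold and (not out or out[-1] != alphabet[best]):
            out.append(alphabet[best])
    return "".join(out)

scores = [[0.9, 0.0, 0.0],   # clear 'A' peak
          [0.1, 0.2, 0.1],   # no peak above threshold
          [0.0, 0.8, 0.0],   # 'B' peak
          [0.0, 0.8, 0.0],   # same 'B' peak, collapsed
          [0.0, 0.0, 0.7]]   # 'C' peak
text = greedy_decode(scores, "ABC")  # "ABC"
```

A real system would need something better than naive repeat-collapsing to handle doubled letters; this only illustrates the idea of reading the peaks in scan order.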

The character string recognition device 100 described above contains a computer system internally; if a WWW system is used, the "computer system" also includes the homepage providing environment (or display environment).

Although the embodiments have been described here assuming that the program is installed in advance, the program may also be provided stored on a computer-readable recording medium.

DESCRIPTION OF SYMBOLS
10 Input unit
20 Computation unit
22 Character recognition unit
24 Character string candidate recognition unit
26 Character string recognition unit
40 Output unit
100 Character string recognition device

Claims (8)

文字列画像が表す文字列を認識する文字列認識装置であって、
前記文字列画像に対して部分画像を切り出すための窓を走査して、前記窓で切り出された部分画像の各々を、Network in Network構造であって、かつ、文字を認識するための予め学習されたCNN(Convolutional Neural Network)に入力して、前記部分画像の各々についての各文字のスコアを表すCNNスコア行列を求める文字認識部と、
前記文字認識部によって求められた前記CNNスコア行列に基づいて、前記文字列画像が表す文字列候補を認識する文字列候補認識部と、
を含む文字列認識装置。
A character string recognition device for recognizing a character string represented by a character string image,
A window for cutting out a partial image is scanned with respect to the character string image, and each of the partial images cut out by the window has a Network in Network structure and is learned in advance for recognizing characters. A character recognition unit that inputs a CNN (Convolutional Neural Network) and obtains a CNN score matrix representing a score of each character for each of the partial images;
A character string candidate recognition unit for recognizing a character string candidate represented by the character string image based on the CNN score matrix obtained by the character recognition unit;
A character string recognition device.
前記文字列候補認識部によって認識された文字列候補から、誤った文字列候補を取り除くことにより、前記文字列画像が表す文字列を認識する文字列認識部を更に含む請求項1記載の文字列認識装置。   The character string according to claim 1, further comprising a character string recognition unit that recognizes a character string represented by the character string image by removing an erroneous character string candidate from the character string candidates recognized by the character string candidate recognition unit. Recognition device. 前記文字列認識部は、前記文字列候補認識部によって認識された文字列候補の各々について、前記CNNスコア行列から得られるスコアを再検証することにより、誤った文字列候補を取り除く請求項2記載の文字列認識装置。   The character string recognition unit removes erroneous character string candidates by re-verifying the score obtained from the CNN score matrix for each of the character string candidates recognized by the character string candidate recognition unit. Character string recognition device. 言語モデル又は語彙辞書から得られる、前記文字列候補認識部によって認識された前記文字列候補の各々に対応する文字列を表す探索グラフを生成し、生成した探索グラフに基づいて、前記文字列画像が表す文字列を認識する文字列認識部を更に含む請求項1記載の文字列認識装置。   Generate a search graph representing a character string corresponding to each of the character string candidates recognized by the character string candidate recognition unit, obtained from a language model or a vocabulary dictionary, and based on the generated search graph, the character string image The character string recognition device according to claim 1, further comprising a character string recognition unit that recognizes a character string represented by 前記文字列候補認識部は、前記文字認識部によって求められた前記CNNスコア行列を、文字列候補を認識するための予め学習されたRNN(Recurrent Neural Network)に入力して、前記文字列画像が表す文字列候補を認識する請求項1〜請求項4の何れか1項記載の文字列認識装置。   The character string candidate recognizing unit inputs the CNN score matrix obtained by the character recognizing unit to a previously learned RNN (Recurrent Neural Network) for recognizing a character string candidate, and the character string image is The character string recognition device according to any one of claims 1 to 4, wherein a character string candidate to be represented is recognized. 
A character string recognition method in a character string recognition device for recognizing a character string represented by a character string image, the method comprising:
a character recognition unit scanning a window for cutting out partial images from the character string image, and inputting each of the partial images cut out by the window into a pre-trained CNN (Convolutional Neural Network) that has a Network in Network structure and recognizes characters, to obtain a CNN score matrix representing a score of each character for each of the partial images; and
a character string candidate recognition unit recognizing character string candidates represented by the character string image on the basis of the CNN score matrix obtained by the character recognition unit.
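The window-scanning step of the method claim above can be sketched as below. The pre-trained Network-in-Network CNN is replaced by a toy stand-in classifier (`toy_classifier`), and the window width and stride values are illustrative; only the sliding-window shape of the computation, yielding one row of character scores per window position, reflects the claim.

```python
import numpy as np

def cnn_score_matrix(line_image, window_w, stride, classifier):
    """Slide a fixed-width window across a character-string image and
    classify each crop, producing one row of character scores per window."""
    h, w = line_image.shape
    rows = []
    for x in range(0, w - window_w + 1, stride):
        crop = line_image[:, x:x + window_w]
        rows.append(classifier(crop))  # stand-in for the pre-trained CNN
    return np.array(rows)

def toy_classifier(crop, num_classes=3):
    """Illustration only: 'classifies' a crop by bucketing its mean intensity."""
    scores = np.zeros(num_classes)
    scores[int(crop.mean() * (num_classes - 1e-9))] = 1.0
    return scores
```

For a 10-column image whose rightmost columns are bright, a 4-pixel window with stride 2 produces a 4x3 score matrix: the leftmost window scores class 0 and the rightmost scores class 2.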
The character string recognition method according to claim 6, further comprising a character string recognition unit recognizing the character string represented by the character string image by removing erroneous character string candidates from the character string candidates recognized by the character string candidate recognition unit.
A program for causing a computer to function as each unit constituting the character string recognition device according to any one of claims 1 to 5.
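The search graph over a language model or vocabulary dictionary in the claims above is a weighted-graph construction (the related work listed later uses WFST-based word labeling). As a loose, flat approximation of the same idea, a raw recognized candidate can simply be snapped to the nearest dictionary word by edit distance; the function names and the use of Levenshtein distance here are illustrative simplifications, not the claimed construction.

```python
def edit_distance(a, b):
    """Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution / match
    return dp[-1]

def lexicon_decode(candidate, lexicon):
    """Map a raw recognized string to the closest vocabulary word."""
    return min(lexicon, key=lambda w: edit_distance(candidate, w))
```

For example, a noisy candidate "helo" would be mapped to the dictionary word "hello" rather than "world".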
JP2016110270A 2016-06-01 2016-06-01 Character string recognition apparatus, method, and program Active JP6611346B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016110270A JP6611346B2 (en) 2016-06-01 2016-06-01 Character string recognition apparatus, method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2016110270A JP6611346B2 (en) 2016-06-01 2016-06-01 Character string recognition apparatus, method, and program

Publications (2)

Publication Number Publication Date
JP2017215859A true JP2017215859A (en) 2017-12-07
JP6611346B2 JP6611346B2 (en) 2019-11-27

Family

ID=60575753

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2016110270A Active JP6611346B2 (en) 2016-06-01 2016-06-01 Character string recognition apparatus, method, and program

Country Status (1)

Country Link
JP (1) JP6611346B2 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0830728A (en) * 1994-07-12 1996-02-02 Suzuki Motor Corp Binarization device for image
JP2010072446A (en) * 2008-09-19 2010-04-02 Toyohashi Univ Of Technology Coarticulation feature extraction device, coarticulation feature extraction method and coarticulation feature extraction program
JP2011123494A (en) * 2009-12-14 2011-06-23 Intel Corp Method and system to traverse graph-based network
JP2012160000A (en) * 2011-01-31 2012-08-23 Internatl Business Mach Corp <Ibm> Association method of table of content and headline, association device, and association program
WO2014200736A1 (en) * 2013-06-09 2014-12-18 Apple Inc. Managing real - time handwriting recognition
JP2015184691A (en) * 2014-03-20 2015-10-22 富士ゼロックス株式会社 Image processor and image processing program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Network in network", arXiv:1312.4400v3, JPN6019028323, 4 March 2014 (2014-03-04), pages 1 - 10, ISSN: 0004082554 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108427953A (en) * 2018-02-26 2018-08-21 北京易达图灵科技有限公司 A kind of character recognition method and device
KR101951595B1 (en) * 2018-05-18 2019-02-22 한양대학교 산학협력단 Vehicle trajectory prediction system and method based on modular recurrent neural network architecture
JP2020047213A (en) * 2018-09-21 2020-03-26 富士ゼロックス株式会社 Character string recognition device and character string recognition program
JP7172351B2 (en) 2018-09-21 2022-11-16 富士フイルムビジネスイノベーション株式会社 Character string recognition device and character string recognition program
KR20200044179A (en) * 2018-10-05 2020-04-29 주식회사 한글과컴퓨터 Apparatus and method for recognizing character
KR102235506B1 (en) * 2018-10-05 2021-04-02 주식회사 한글과컴퓨터 Apparatus and method for recognizing character
US11423728B2 (en) 2018-10-24 2022-08-23 Fujitsu Frontech Limited Banknote inspection device, banknote inspection method, and banknote inspection program product
CN112840383B (en) * 2018-10-24 2024-03-08 富士通先端科技株式会社 Banknote checking device, banknote checking method, and banknote checking program
JPWO2021010276A1 (en) * 2019-07-17 2021-01-21
JP7343585B2 (en) 2019-07-17 2023-09-12 富士フイルム富山化学株式会社 Identification support system, identification support client, identification support server, and identification support method
CN111507353A (en) * 2020-04-17 2020-08-07 新分享科技服务(深圳)有限公司 Chinese field detection method and system based on character recognition
CN111507353B (en) * 2020-04-17 2023-10-03 新分享科技服务(深圳)有限公司 Chinese field detection method and system based on character recognition

Also Published As

Publication number Publication date
JP6611346B2 (en) 2019-11-27

Similar Documents

Publication Publication Date Title
JP6611346B2 (en) Character string recognition apparatus, method, and program
EP3620956B1 (en) Learning method, learning device for detecting lane through classification of lane candidate pixels and testing method, testing device using the same
JP6678778B2 (en) Method for detecting an object in an image and object detection system
TWI651662B (en) Image annotation method, electronic device and non-transitory computer readable storage medium
US8306327B2 (en) Adaptive partial character recognition
US7756335B2 (en) Handwriting recognition using a graph of segmentation candidates and dictionary search
JP4905931B2 (en) Human body region extraction method, apparatus, and program
WO2020164278A1 (en) Image processing method and device, electronic equipment and readable storage medium
IL273446A (en) Method and system for image content recognition
US11449706B2 (en) Information processing method and information processing system
CN107292349A (en) The zero sample classification method based on encyclopaedic knowledge semantically enhancement, device
JP2018206252A (en) Image processing system, evaluation model construction method, image processing method, and program
CN109033321B (en) Image and natural language feature extraction and keyword-based language indication image segmentation method
Liu et al. Scene text recognition with high performance CNN classifier and efficient word inference
Liu et al. Scene text recognition with CNN classifier and WFST-based word labeling
JP6393495B2 (en) Image processing apparatus and object recognition method
Sirigineedi et al. Deep Learning Approaches for Autonomous Driving to Detect Traffic Signs
CN115205806A (en) Method and device for generating target detection model and automatic driving vehicle
JP6235368B2 (en) Pattern recognition device, pattern recognition method and program
CN113378707A (en) Object identification method and device
JP6511942B2 (en) INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING PROGRAM
JP2020119291A (en) Information processing device and program
Hassan et al. SCAN: Sequence-character Aware Network for Text Recognition.
US20240070516A1 (en) Machine learning context based confidence calibration
Ali et al. Iris Identification enhancement based deep learning

Legal Events

Date Code Title Description
A80 Written request to apply exceptions to lack of novelty of invention

Free format text: JAPANESE INTERMEDIATE CODE: A80

Effective date: 20160622

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20180531

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20190730

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20190930

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20191023

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20191028

R150 Certificate of patent or registration of utility model

Ref document number: 6611346

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150