JP2010191767A - Device and method for recognizing character - Google Patents


Info

Publication number
JP2010191767A
Authority
JP
Japan
Prior art keywords
character
pixel
sign
image data
character recognition
Prior art date
Legal status
Granted
Application number
JP2009036455A
Other languages
Japanese (ja)
Other versions
JP5010627B2 (en)
Inventor
Kiichi Sugimoto
喜一 杉本
Kenta Nakao
健太 中尾
Mayumi Saito
真由美 斎藤
Takuma Okazaki
拓馬 岡崎
Current Assignee
Mitsubishi Heavy Industries Ltd
Original Assignee
Mitsubishi Heavy Industries Ltd
Priority date
Filing date
Publication date
Application filed by Mitsubishi Heavy Industries Ltd
Priority to JP2009036455A
Publication of JP2010191767A
Application granted
Publication of JP5010627B2
Status: Active

Landscapes

  • Character Input (AREA)

Abstract

PROBLEM TO BE SOLVED: To improve character recognition accuracy by identifying the connection points of characters in both the horizontal and vertical directions and correctly separating those characters.

SOLUTION: A character recognition method includes the steps of: extracting the contours of a character string from image data using a differential filter; calculating, for each of the pixels forming the vertical and horizontal contours among the extracted contours, the sign of the differential value given by the differential filter; extracting first regions centered on a pixel of interest chosen from the pixels forming the vertical and horizontal contours; comparing the signs of the pixels contained in each first region with the sign of the pixel of interest to count the pixels with the same sign and those with the opposite sign; judging the lightness or darkness of each first region from these counts; and taking the pixels whose sign contradicts this judgment within the first region as the division points of the character string.

COPYRIGHT: (C)2010, JPO&INPIT

Description

The present invention relates to a character recognition device that recognizes a character string such as a vehicle registration number, and more particularly to a character recognition device capable of segmenting characters with high accuracy even when the spacing between adjacent characters is narrow.

Devices that automatically recognize the contents written on a vehicle license plate, such as the vehicle number, are known.
To recognize the contents of a license plate, a character recognition method based on image processing is used. In general, such a method first captures the target character image, then emphasizes the character strokes to separate the characters from the background (binarization), and extracts the regions that can be judged to be characters (character segmentation). Within each segmented region, shape features of the character (for example, directional features of its contour) are extracted, and the character is recognized by pattern recognition.

License plates differ from country to country in shape, size, and the character strings they carry. On some plates, for example, characters are so close together that they appear connected, or a character lies so close to a peripheral structure, such as a bolt fastening the plate to the vehicle, that the character and the structure appear joined. In such cases a conventional character recognition device cannot segment the characters correctly, and a correct recognition result cannot be obtained.
Patent Document 1 therefore discloses a technique that uses the contour profile of binarized image data to maintain recognition accuracy even when the text to be recognized is a document containing a mixture of Japanese and English.

JP 2001-22885 A

With the technique described in Patent Document 1, however, when characters or peripheral structures lie close together in the vertical direction of the character string to be recognized, so that characters appear connected to each other or to a peripheral structure, it is difficult to identify the connection points. Because the connected characters cannot be separated correctly, character recognition accuracy suffers.

The present invention was made to solve this problem, and its object is to provide a character recognition device that improves recognition accuracy by identifying and correctly separating the connection points of characters in both the horizontal and vertical directions.

To solve the above problem, the present invention employs the following means.

The present invention provides a character recognition device that divides a character string in image data obtained by imaging the string and recognizes the characters it contains individually, the device comprising: contour extraction means for extracting the contours of the character string from the image data using a differential filter; sign calculation means for calculating, for each pixel forming the vertical and horizontal contours among the extracted contours, the sign of the differential value produced by the differential filter; sign-change calculation means for selecting one pixel with a calculated sign as a pixel of interest, sequentially extracting from the image data a first region containing that pixel, determining from the calculated signs whether the first region contains a pixel whose sign is opposite to that of the pixel of interest, and counting the number of extracted first regions that contain such an opposite-sign pixel; and division point determination means for classifying the first regions that contain an opposite-sign pixel into regions whose pixel of interest has a plus sign and regions whose pixel of interest has a minus sign, and taking the smaller of the two groups as the division points of the character string.

According to the present invention, the contours of the character string are extracted from the image data using a differential filter, and the sign of the differential value is calculated for each pixel forming the vertical and horizontal contours, so the luminance gradient can be grasped for both the vertical and the horizontal contours. One pixel with a calculated sign is then selected as the pixel of interest, and first regions containing it are extracted from the image data in sequence: starting from the top row of the image data, one pixel with a calculated sign is selected as the pixel of interest, and a first region referenced to that pixel is extracted. The first region is sized according to the size and stroke width of the characters to be recognized; when the characters are those written on a vehicle license plate, for example, a region of about four to five pixels is preferable. For the pixels contained in the first region, it is determined from the previously calculated signs whether any pixel has a sign opposite to that of the pixel of interest, and the number of extracted first regions containing such a pixel is counted. This processing is applied to every first region extracted sequentially from the entire image. In this way the sign changes relative to the pixel of interest within each first region are known, and from this result the lightness or darkness of the first region can be judged. For this judgment, the first regions containing an opposite-sign pixel are classified into regions whose pixel of interest is plus and regions whose pixel of interest is minus, and the smaller group is taken as the division points of the character string. That is, the number of regions in which the pixel of interest is plus but a minus-sign pixel exists is compared with the number of regions in which the pixel of interest is minus but a plus-sign pixel exists; the larger group corresponds to regions capturing character strokes, and the smaller group corresponds to regions capturing the background, which become the division points of the string.
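The grouping and selection logic described above can be sketched in Python (a simplified, hypothetical illustration of the claimed idea, not the patented implementation; all names are my own):

```python
# Each entry is (sign_of_pixel_of_interest, signs_in_first_region).
def find_division_regions(first_regions):
    """Split the regions that contain an opposite-sign pixel into two groups
    by the sign of the pixel of interest; the smaller group marks the
    division points (background), the larger the character strokes."""
    plus_group, minus_group = [], []
    for idx, (poi_sign, region_signs) in enumerate(first_regions):
        if any(s == -poi_sign for s in region_signs):  # opposite sign present?
            (plus_group if poi_sign > 0 else minus_group).append(idx)
    return plus_group if len(plus_group) < len(minus_group) else minus_group

regions = [
    (+1, [+1, -1, +1]),  # plus pixel of interest, region contains a minus pixel
    (+1, [+1, -1]),      # plus, contains minus
    (-1, [-1, +1]),      # minus, contains plus
    (+1, [+1, +1]),      # no sign change: ignored
]
division = find_division_regions(regions)  # the rarer (minus) group
```

Here two regions are plus-of-interest and one is minus-of-interest, so the single minus region is returned as the division point.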

Because the lightness or darkness of the target pixels is judged from both the vertical and horizontal gradients of the contours in this way, connection points between characters, or between a character and a peripheral structure such as a bolt fastening the license plate, can be identified correctly, and the connected portions can be divided correctly. This improves character recognition accuracy.

The character recognition device may further comprise region selection means for estimating the positions of the division points in advance and defining, in the image data, a second region containing the estimated positions, the division point determination means then determining the division points of the character string within the second region.

Since the positions of the division points are estimated in advance and a second region containing them is defined in the image data, recognition processing need only be executed within the selected region. This shortens the time required for character recognition and also suppresses false detection of division points.
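Restricting the search to a second region might look like the following sketch (hypothetical helper; columns are simply cropped around an estimated division point):

```python
def second_region(image, est_col, half_width):
    """Crop the columns around an estimated division point so that the
    division-point search runs only inside this second region
    (simplified: rows are kept whole, only columns are restricted)."""
    w = len(image[0])
    lo = max(0, est_col - half_width)
    hi = min(w, est_col + half_width + 1)
    return [row[lo:hi] for row in image]

img = [[0, 1, 2, 3, 4, 5]] * 3          # toy 3x6 "image"
crop = second_region(img, est_col=3, half_width=1)
```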

The character recognition device may further comprise voting means for generating a vote distribution, a marginal distribution obtained by projection voting, by voting the division points into a voting space projected onto the horizontal axis; when the vote value at a horizontal position in the distribution is equal to or greater than a predetermined threshold, the division point determination means may take the division points at the vertical positions corresponding to that horizontal position as the true division points.

By voting the division points into a voting space projected onto the horizontal axis, a vote distribution consisting of the projected marginal distribution is generated, and when the vote value at a horizontal position reaches a predetermined threshold, the division points at the vertical positions corresponding to that horizontal position are taken as the true division points. Even when the image data containing the character string has many connected locations, for example where characters touch each other or touch peripheral structures, this prevents divisions from being missed and improves character recognition accuracy.
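The projection voting can be illustrated with a minimal sketch (hypothetical names; each candidate division point casts a vote into its column's bin, and columns whose vote count reaches the threshold yield the true division points):

```python
from collections import defaultdict

def true_division_columns(division_points, threshold):
    """Project candidate division points onto the horizontal axis (one vote
    per column) and keep the columns whose vote count reaches the threshold,
    together with the vertical positions that voted there."""
    votes = defaultdict(list)
    for x, y in division_points:
        votes[x].append(y)                  # vote in this column's bin
    return {x: ys for x, ys in votes.items() if len(ys) >= threshold}

points = [(4, 0), (4, 1), (4, 2), (9, 5)]   # (column, row) candidates
winners = true_division_columns(points, threshold=3)
```

Column 4 collects three votes and survives; the isolated candidate at column 9 is rejected as noise.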

The character recognition device may further comprise storage means in which character information concerning the size of the characters is stored in advance, the division point determination means comparing the character information with the division points and taking the division point closest to the character size indicated by the character information as the true division point.

Since character information concerning character size is stored in advance in the storage means and compared with the candidate division points, and the division point closest to the character size indicated by the character information is taken as the true division point, the division points are determined from both the luminance gradient of the character contours and the character size, improving character recognition accuracy.
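Selecting the division point closest to the stored character size can be sketched as follows (hypothetical names; here the stored "character information" is reduced to an expected character width in pixels):

```python
def closest_to_character_width(division_cols, expected_width):
    """Among candidate division columns, return the adjacent pair whose
    spacing is closest to the stored character width; that pair brackets
    the most plausible character, so its boundaries are the true
    division points."""
    cols = sorted(division_cols)
    return min(
        zip(cols, cols[1:]),
        key=lambda pair: abs((pair[1] - pair[0]) - expected_width),
    )

# Candidates at columns 0, 7, 12, 30; stored character width is ~6 px:
pair = closest_to_character_width([0, 7, 12, 30], expected_width=6)
```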

The present invention also provides a character recognition method for dividing a character string in image data obtained by imaging the string and recognizing its characters individually, comprising the steps of: extracting the contours of the character string from the image data using a differential filter; calculating, for each pixel forming the vertical and horizontal contours among the extracted contours, the sign of the differential value produced by the differential filter; selecting one pixel with a calculated sign as a pixel of interest, sequentially extracting from the image data a first region containing that pixel, determining from the calculated signs whether the first region contains a pixel whose sign is opposite to that of the pixel of interest, and counting the number of extracted first regions that contain such a pixel; and classifying the first regions containing an opposite-sign pixel into regions whose pixel of interest has a plus sign and regions whose pixel of interest has a minus sign, and taking the smaller group as the division points of the character string.

Thus, according to the present invention, the connection points of characters can be identified correctly in both the horizontal and vertical directions, so these connection points can be separated correctly and character recognition accuracy can be improved.

FIG. 1 is a block diagram showing the schematic configuration of a character recognition device according to the first embodiment of the present invention. FIG. 2 is a flowchart showing the character recognition process in the character recognition device according to the first embodiment. FIG. 3 illustrates the character recognition process: (a) the photographed character string; (b) the character string after contour enhancement; (c) the vertical edge image of the character string; (d) the horizontal edge image of the character string. FIG. 4 is an enlarged view of portion A in FIG. 3(c), illustrating the calculation of signs in the character recognition process of the present invention. FIG. 5 is a flowchart showing the character recognition process according to the second embodiment of the present invention. FIG. 6 is a flowchart showing the character recognition process according to the third embodiment of the present invention.

Embodiments of the character recognition device according to the present invention are described below with reference to the drawings.
[First Embodiment]
FIG. 1 is a block diagram showing the schematic configuration of a character recognition device according to the first embodiment of the present invention.

The character recognition device 10 of the present invention comprises a camera 11 that photographs a vehicle license plate to obtain image data, an image input unit 12 that receives the image data captured by the camera 11, a recognition processing unit 13 that executes character recognition processing on the image data supplied to the image input unit 12, and a recognition result output unit 14 that outputs the result produced by the recognition processing unit 13.

The recognition processing unit 13 performs the computations for character recognition and comprises a CPU (central processing unit) 21 that executes the various arithmetic operations involved in character recognition, a ROM (Read Only Memory) 22, a read-only memory storing the character recognition program and the like, a RAM (Random Access Memory) 23, a readable and writable memory serving as the work area of the CPU 21, and a storage device 24 holding the various data needed to execute the character recognition processing based on that program.

The recognition processing unit 13 further comprises a contour extraction unit 25 as the contour extraction means, a sign calculation unit 26 as the sign calculation means, a sign-change calculation unit 27 as the sign-change calculation means, and a division point determination unit 28 as the division point determination means.

The contour extraction unit 25 computes the spatial first derivative of the image data supplied by the image input unit 12 using a first-order differential filter such as the Sobel operator, and extracts the contours of the characters in the image (contour enhancement). It then thins the contours by suppressing non-maximum contour pixels (contour thinning). The sign calculation unit 26 receives the image data after contour extraction and thinning and, for the pixels forming the vertical and horizontal contours, that is, pixels with a horizontal or vertical gradient, obtains the sign (+ or -) of the differential value of the contour. Based on the signs calculated by the sign calculation unit 26, the sign-change calculation unit 27 extracts from the image data local regions, first regions whose extent corresponds to the stroke width of the characters, and computes the changes in the sign of the pixel differential values within each local region. The division point determination unit 28 judges, from the sign changes computed by the sign-change calculation unit 27, whether the characters are light or dark relative to the background color of the license plate, finds the centers of the local regions whose sign change is opposite to the character polarity, and takes these centers as division points. The contour extraction unit 25, sign calculation unit 26, sign-change calculation unit 27, and division point determination unit 28 are all processing units realized by the CPU 21 loading a processing program stored in the ROM 22 into the RAM 23 and executing it; their processing is described below.
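As an illustration of the contour-enhancement and sign-calculation stages, the following Python sketch applies a horizontal Sobel kernel and records the sign of the derivative at each interior pixel (a simplified stand-in for the units described above: all names are my own, and the non-maximum-suppression thinning step is omitted):

```python
def horizontal_sobel_sign(image):
    """Apply a 3x3 horizontal Sobel kernel and return the sign (+1, -1, or 0)
    of the first derivative at each interior pixel; nonzero entries mark
    vertical-contour pixels and their luminance-gradient direction."""
    h, w = len(image), len(image[0])
    kernel = [(-1, 0, 1), (-2, 0, 2), (-1, 0, 1)]
    signs = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            g = sum(kernel[j][i] * image[y - 1 + j][x - 1 + i]
                    for j in range(3) for i in range(3))
            signs[y][x] = (g > 0) - (g < 0)
    return signs

# A bright stroke (9) on a dark background (0), like a character line:
img = [[0, 9, 9, 0, 0]] * 4
sgn = horizontal_sobel_sign(img)
```

The left edge of the stroke yields a plus sign (dark-to-bright) and the right edge a minus sign (bright-to-dark), which is exactly the polarity information the sign calculation unit 26 passes on.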

The character recognition processing of this embodiment is described below with reference to FIGS. 2 and 3. FIG. 2 is a flowchart showing the character recognition process in this embodiment.

As shown in FIG. 2, the license plate image captured by the camera 11 is supplied to the image input unit 12 as the image data shown in FIG. 3(a), and the recognition processing unit 13 performs the processing of steps S101 and S102. In step S101, the contour extraction unit 25 computes the spatial first derivative of the input image data with a differential filter such as a Sobel filter and, as shown in FIG. 3(b), extracts the contours of the characters in the image data (contour enhancement). In step S102, the contour extraction unit 25 thins the contours of the enhanced image data by suppressing non-maximum contour pixels (contour thinning).

That is, as shown in FIG. 3(c), pixels with a horizontal gradient are extracted from the contour-enhanced image data of step S101 and the direction of the contour is computed (vertical edge image generation); the point where the contour luminance value peaks in the direction perpendicular to the contour (the local maximum) is then searched for and extracted as a point on the true contour. Similarly, as shown in FIG. 3(d), pixels with a vertical gradient are extracted and the contour direction is computed (horizontal edge image generation); the luminance peak in the direction perpendicular to the contour is searched for and extracted as a point on the true contour.

Next, in step S103, the sign calculation unit 26 of the recognition processing unit 13 obtains, for the image data after contour enhancement and thinning, the sign of the contour differential value for each pixel with a horizontal or vertical gradient. More specifically, the sign calculation unit 26 examines the gradient direction of each extracted contour pixel: for pixels with a horizontal gradient, that is, vertical contours, it computes the sign of the differential value of the vertical contour given by the differential filter, and likewise, for pixels with a vertical gradient, that is, horizontal contours, it computes the sign of the differential value of the horizontal contour. FIG. 4 is an enlarged view of portion A in FIG. 3(c) and shows an example of this sign calculation.

In step S104, based on the sign of the differential value calculated for each pixel in step S103, the sign-change calculation unit 27 extracts a local region of stroke-width size around the pixel of interest and determines whether the signs of the pixels contained in that region differ from the sign of the pixel of interest. Concretely, the image data whose differential signs were computed in step S103 is first scanned row by row, from the top-left pixel toward the right, to find the pixels for which a differential value has been calculated. Each such pixel in the scanned row is taken in turn as the pixel of interest, and a predetermined local region of stroke-width size extending rightward from it is extracted. The local region is then searched for a pixel whose sign is opposite to that of the pixel of interest; if one exists, the region is recorded as containing a sign change, together with the coordinates of that pixel and the direction of the change (a minus pixel relative to a plus pixel of interest, or a plus pixel relative to a minus pixel of interest). This processing is carried out across the scanned row; when one row is finished, processing moves to the next row, and the procedure is repeated row by row until the entire image data has been processed.

Then, from the computed sign changes, the number of regions containing a pixel whose sign differs from that of the pixel of interest is counted for each direction of change: the number of regions containing a pixel that changes to minus when the pixel of interest is plus, and the number of regions containing a pixel that changes to plus when the pixel of interest is minus.
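The row-by-row scan and per-direction counting of step S104 can be sketched as follows (hypothetical names; a simplified horizontal-only version, with 0 marking pixels that have no computed differential value):

```python
STROKE_WIDTH = 4  # local-region size, roughly the stroke width (4-5 pixels)

def count_sign_change_regions(sign_rows, width=STROKE_WIDTH):
    """Scan each row left to right; for every pixel with a computed sign,
    look up to `width` pixels to the right for an opposite sign, and count
    the sign-change regions per direction (plus-to-minus vs minus-to-plus)."""
    plus_to_minus = minus_to_plus = 0
    for row in sign_rows:
        for x, s in enumerate(row):
            if s == 0:
                continue                        # no differential value here
            window = row[x + 1:x + 1 + width]   # local region to the right
            if -s in window:
                if s > 0:
                    plus_to_minus += 1
                else:
                    minus_to_plus += 1
    return plus_to_minus, minus_to_plus

rows = [[0, 1, 0, 0, -1, 0, 0, 0],
        [0, 1, 0, -1, 0, 1, 0, 0]]
counts = count_sign_change_regions(rows)
```

A vertical scan with a vertical local region would be the symmetric counterpart for the vertical division points mentioned below.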

Here, the character-thickness equivalent is typically a region of about four to five pixels, and the extent of the local region is stored in advance in the storage device 24 as data determined in consideration of the size, thickness, and so on of the characters written on the license plate to be recognized. When calculating horizontal division points, the local region is defined in the horizontal direction and the sign changes are calculated there; when calculating vertical division points, the local region is defined in the vertical direction and the sign changes are calculated there.

In step S105, the division point determination unit 28 determines, from the foregoing calculation results, whether each local region is light or dark relative to the background, that is, whether the local region lies on a character stroke or on the background. Specifically, the number of regions in which a + pixel of interest changes to − is compared with the number of regions in which a − pixel of interest changes to +, and the more numerous kind of local region is judged to be the one capturing a character stroke. For example, if the regions in which a − pixel of interest changes to + outnumber the regions in which a + pixel of interest changes to −, then the former are judged to be character-stroke portions and the latter to be background portions.

Then, in the next step S106, the division point determination unit 28 determines the division points within the local regions. Specifically, based on the step S105 judgment of which local regions lie on character strokes and which on the background, together with the coordinates and sign-change directions of the changing pixels calculated in step S104, the unit extracts all local regions judged to be background portions. The center pixel of each extracted local region is then determined to be a division point.
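The majority decision of step S105 and the center-pixel selection of step S106 can be sketched together as follows; this is an illustrative Python rendering, and the function name, the list-of-tuples input format, and the tie-handling behavior (returning no division points when the two counts are equal) are assumptions, since the patent does not specify the tie case.

```python
def decide_split_points(changes, plus_to_minus, minus_to_plus, window=4):
    """Steps S105/S106 sketch. `changes` is a list of (x, direction)
    records with direction '+to-' or '-to+'; the two counts give how
    many regions fell in each direction. The majority direction is
    judged to capture the character stroke; minority-direction regions
    are judged to be background, and the center pixel of each such
    local region becomes a division point."""
    if plus_to_minus == minus_to_plus:
        return []  # no majority: no decision made (assumed behavior)
    stroke_dir = '+to-' if plus_to_minus > minus_to_plus else '-to+'
    return [x + window // 2            # center of the local region
            for x, direction in changes
            if direction != stroke_dir]
```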

Once the division points are determined, the characters written on the license plate are separated, and character recognition processing is performed on the individual character images cut out from each separated single-character region by processing such as binarization and labeling. When the prescribed character recognition processing has finished for all character regions, the recognition result is output to the recognition result output unit 14 and the character recognition process ends.

Thus, according to this embodiment, the lightness or darkness of the character recognition target pixels is judged from the luminance gradients of both the vertical and horizontal contour lines, so junctions between characters, and junctions between characters and surrounding structures such as the bolts fastening the license plate, can be correctly identified and correctly divided. This improves character recognition accuracy.

Note that the locations where characters may merge with each other or with surrounding structures are determined to some extent by the license plate being recognized. Accordingly, a region selection unit (not shown) estimates the positions of the division points in advance using information such as character size, and designates the region containing the estimated positions as the region in which division points are searched for. In this way the processing described above need only be executed within the designated region, which shortens the time required for character recognition and suppresses false detection of division points. The processing by which the region selection unit determines the search region from the input image data can be performed at any timing; for example, it may precede the light/dark judgment of the pixel of interest, or precede the contour enhancement processing. The search region may be computed and selected each time character recognition is performed, or it may be defined in advance in storage means and read out by the region selection unit. The region selection unit, too, functions as a processing unit realized by the CPU 21 loading a processing program stored in the ROM 22 into the RAM 23 and executing the loaded program.

[Second Embodiment]
Next, a second embodiment of the present invention will be described with reference to FIG. 5.

The character recognition process of this embodiment differs from the first embodiment described above in the following added processing: after the division point calculation of the first embodiment is executed, the division points obtained there are treated as division candidate points, the candidate points are projection-voted onto the horizontal axis (x axis), and for any x coordinate receiving many votes, all y coordinates at that x coordinate are calculated as division points. In the following description, points common to the first embodiment are omitted and only the differences are described.

FIG. 5 is a flowchart showing the character recognition process in this embodiment. As shown in FIG. 5, in steps S201 to S206 the license plate image captured by the camera 11 is input to the image input unit 12 as the image data shown in FIG. 3(a), and the recognition processing unit 13, in the same manner as the character recognition process of the first embodiment, extracts as division candidate points whichever regions are fewer: those in which a + pixel of interest changes to −, or those in which a − pixel of interest changes to +.

In step S207, voting means (not shown) votes the division candidate points calculated in step S206 into a voting space projected in the horizontal direction (x-axis direction), generating a vote distribution consisting of the marginal distribution produced by the projection voting. In step S208, the x coordinates whose vote counts are at or above a predetermined threshold are calculated from the generated vote distribution, and the process proceeds to the next step. As the predetermined threshold, one may use, for example, a value obtained by multiplying the maximum vote count by a predetermined arbitrary ratio when the maximum vote count or the average of all vote counts is at or above a predetermined fixed value, or a value obtained by multiplying a character reference height known from a priori information by a predetermined arbitrary ratio. In step S209, for every x coordinate calculated in step S208, all y coordinates at that x coordinate are determined to be division points, and the routine ends.
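The projection-voting steps S207 to S209 amount to building a histogram of candidate x coordinates and thresholding it. A minimal Python sketch follows; the function name, the (x, y) tuple input format, and the threshold rule (a hypothetical fraction of the maximum vote count, one of the options the text permits) are illustrative assumptions.

```python
from collections import Counter

def column_split_points(candidates, ratio=0.5):
    """Project candidate division points (x, y) onto the x axis (S207),
    threshold the resulting vote distribution (S208), and return the x
    coordinates whose whole column is declared a division line (S209)."""
    votes = Counter(x for x, _y in candidates)  # marginal distribution over x
    if not votes:
        return []
    threshold = max(votes.values()) * ratio     # assumed threshold rule
    return sorted(x for x, n in votes.items() if n >= threshold)
```

Every y coordinate at a returned x would then be treated as a division point, as step S209 prescribes.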

Once the division points are determined, the characters written on the license plate are separated, and character recognition processing is performed on the individual character images cut out from each separated single-character region by processing such as binarization and labeling. When the prescribed character recognition processing has finished for all character regions, the recognition result is output to the recognition result output unit 14 and the character recognition process ends.

Thus, according to this embodiment, the lightness or darkness of the character recognition target pixels is judged from the luminance gradients of both the vertical and horizontal contour lines, so junctions between characters, and junctions between characters and surrounding structures such as the bolts fastening the license plate, can be correctly identified and correctly divided. In particular, the projection voting prevents divisions from being missed even when there are many junctions, for example when characters touch one another or touch surrounding structures exactly, which improves character recognition accuracy.

[Third Embodiment]
Next, a third embodiment of the present invention will be described with reference to FIG. 6.

The character recognition process of this embodiment differs from the first and second embodiments described above in that, after the division point calculation of the first or second embodiment is executed, the division points are further narrowed down using a priori information on the size of the characters to be recognized, character recognition is performed within the narrowed-down regions, and the region with the highest matching degree is fixed as the character division point. Points common to the first and second embodiments are omitted and only the differences are described.

FIG. 6 is a flowchart showing the character recognition process in this embodiment. As shown in FIG. 6, the division points calculated according to the first or second embodiment and the character information, that is, information on the character size, are input to the recognition processing unit 13, and the character recognition process starts. The character information is determined in advance based on the license plate to be recognized and is stored, for example, in the storage device 24.

In step S301, the division candidate points are narrowed down based on the character information input from the storage device 24. That is, the character information is compared with all the division points calculated in the preceding steps, and the division candidate points closest to the character size indicated by the character information are extracted. In the next step S302, character recognition is executed on the individual character images obtained at the extracted division candidate points by processing such as binarization and labeling. In the next step S303, the characters recognized in step S302 are compared with the character information, and the combination of division candidate points with the highest matching degree is calculated.

In step S304, it is determined whether the processing of steps S302 and S303 has been executed for all the division candidate points narrowed down in step S301; if not, the process returns to step S302 and the processing of steps S302 and S303 is repeated. If it is determined in step S304 that steps S302 and S303 have been executed for all the narrowed-down candidate points, the calculated combination of division candidate points is adopted, the character division points are fixed, and the routine ends. After this routine ends, character recognition may be performed again on the individual character images cut out by processing such as binarization and labeling in order to make the recognition result more reliable; in that case, it is preferable to use parameters different from those of the character recognition processing performed earlier.
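The narrowing and matching loop of steps S301 to S304 can be illustrated with a simplified sketch that scores adjacent pairs of candidate x coordinates against an a priori character width and keeps the best-matching pair. This is only one possible reading of the loop; the function name, the `recognize(left, right)` callback returning a (label, score) pair, and the spacing tolerance `tol` are hypothetical, and a full implementation would evaluate combinations of pairs rather than a single pair.

```python
def best_split_pair(candidates, char_width, recognize, tol=2):
    """Among adjacent candidate division points, keep pairs whose
    spacing is within `tol` pixels of the a-priori character width
    (S301), score each retained segment with the recognizer (S302/S303),
    and return the highest-scoring (score, left, right, label) tuple
    after all candidates have been tried (S304)."""
    best = None
    for left, right in zip(candidates, candidates[1:]):
        if abs((right - left) - char_width) > tol:
            continue  # spacing too far from the expected character width
        label, score = recognize(left, right)
        if best is None or score > best[0]:
            best = (score, left, right, label)
    return best
```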

Thus, according to this embodiment, the character division points are determined based not only on information about the luminance gradients of the character contours but also on information about the character size and the character matching results, so the division points are determined more precisely and character recognition accuracy improves.

DESCRIPTION OF SYMBOLS
10 Character recognition device
11 Camera
12 Image input unit
13 Recognition processing unit
14 Recognition result output unit
25 Contour extraction unit
26 Sign calculation unit
27 Sign change calculation unit
28 Division point determination unit

Claims (5)

A character recognition device that divides a character string in image data obtained by imaging the character string and individually recognizes the characters contained in the character string, the device comprising:
contour extraction means for extracting contour lines of the character string from the image data using a differential filter;
sign calculation means for calculating, for each pixel constituting the vertical contour lines and horizontal contour lines among the extracted contour lines, the sign of the differential value given by the differential filter;
sign change calculation means for selecting, as a pixel of interest, one pixel among the pixels for which the sign has been calculated, sequentially extracting from the image data a first region containing the pixel of interest, determining, based on the signs calculated by the sign calculation means, whether the first region contains a pixel whose sign is opposite to the sign of the pixel of interest, and calculating the number of first regions extracted from the image data that are determined to contain an opposite-sign pixel; and
division point determination means for classifying the first regions determined to contain an opposite-sign pixel into regions in which the pixel of interest has a + sign and regions in which the pixel of interest has a − sign, and setting the less numerous class of regions as the division points of the character string.
The character recognition device according to claim 1, further comprising region selection means for estimating the positions of the division points in advance and defining, from the image data, a second region containing the estimated division point positions, wherein the division point determination means determines the division points of the character string within the second region.
The character recognition device according to claim 1 or claim 2, further comprising voting means for generating a vote distribution, consisting of the marginal distribution produced by projection voting, by voting the division points into a voting space projected along the horizontal axis, wherein the division point determination means, when a horizontal vote count in the vote distribution is at or above a predetermined threshold, takes the vertical coordinates corresponding to that horizontal vote count as true division points.
The character recognition device according to any one of claims 1 to 3, further comprising storage means that stores in advance character information on the size of the characters, wherein the division point determination means compares the character information with the division points and takes the division point closest to the character size indicated by the character information as a true division point.
A character recognition method for dividing a character string in image data obtained by imaging the character string and individually recognizing the characters contained in the character string, the method comprising the steps of:
extracting contour lines of the character string from the image data using a differential filter;
calculating, for each pixel constituting the vertical contour lines and horizontal contour lines among the extracted contour lines, the sign of the differential value given by the differential filter;
selecting, as a pixel of interest, one pixel among the pixels for which the sign has been calculated, sequentially extracting from the image data a first region containing the pixel of interest, determining, based on the calculated signs, whether the first region contains a pixel whose sign is opposite to the sign of the pixel of interest, and calculating the number of first regions extracted from the image data that are determined to contain an opposite-sign pixel; and
classifying the first regions determined to contain an opposite-sign pixel into regions in which the pixel of interest has a + sign and regions in which the pixel of interest has a − sign, and setting the less numerous class of regions as the division points of the character string.
JP2009036455A 2009-02-19 2009-02-19 Character recognition device and character recognition method Active JP5010627B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009036455A JP5010627B2 (en) 2009-02-19 2009-02-19 Character recognition device and character recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2009036455A JP5010627B2 (en) 2009-02-19 2009-02-19 Character recognition device and character recognition method

Publications (2)

Publication Number Publication Date
JP2010191767A true JP2010191767A (en) 2010-09-02
JP5010627B2 JP5010627B2 (en) 2012-08-29

Family

ID=42817735

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009036455A Active JP5010627B2 (en) 2009-02-19 2009-02-19 Character recognition device and character recognition method

Country Status (1)

Country Link
JP (1) JP5010627B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012252691A (en) * 2011-05-31 2012-12-20 Fujitsu Ltd Method and device for extracting text stroke image from image
CN112001393A (en) * 2020-07-06 2020-11-27 西安电子科技大学 Specific character recognition FPGA implementation method, system, storage medium and application
CN112749694A (en) * 2021-01-20 2021-05-04 中科云谷科技有限公司 Method and device for identifying image direction and nameplate characters

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200210B (en) * 2014-08-12 2018-11-06 合肥工业大学 A kind of registration number character dividing method based on component

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6224310A (en) * 1985-07-23 1987-02-02 Kubota Ltd Boundary detector for automatic traveling truck
JPH0830727A (en) * 1994-07-15 1996-02-02 Matsushita Electric Works Ltd Binarizing method for character image
JPH08305795A (en) * 1995-04-28 1996-11-22 Nippon Steel Corp Character recognizing method
JPH113421A (en) * 1997-06-11 1999-01-06 Meidensha Corp Method for detecting line segment
JP2001175808A (en) * 1999-12-22 2001-06-29 Fujitsu Ltd Image processor and computer-readable recording medium with recorded image processing program


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012252691A (en) * 2011-05-31 2012-12-20 Fujitsu Ltd Method and device for extracting text stroke image from image
CN112001393A (en) * 2020-07-06 2020-11-27 西安电子科技大学 Specific character recognition FPGA implementation method, system, storage medium and application
CN112001393B (en) * 2020-07-06 2024-02-02 西安电子科技大学 Method, system, storage medium and application for realizing specific character recognition FPGA
CN112749694A (en) * 2021-01-20 2021-05-04 中科云谷科技有限公司 Method and device for identifying image direction and nameplate characters
CN112749694B (en) * 2021-01-20 2024-05-21 中科云谷科技有限公司 Method and device for recognizing image direction and nameplate characters

Also Published As

Publication number Publication date
JP5010627B2 (en) 2012-08-29

Similar Documents

Publication Publication Date Title
CN111028213B (en) Image defect detection method, device, electronic equipment and storage medium
CN108364010B (en) License plate recognition method, device, equipment and computer readable storage medium
CN108986152B (en) Foreign matter detection method and device based on difference image
JP4901676B2 (en) License plate information processing apparatus and license plate information processing method
JP2008217347A (en) License plate recognition device, its control method and computer program
JP2008286725A (en) Person detector and detection method
JP6177541B2 (en) Character recognition device, character recognition method and program
JP5010627B2 (en) Character recognition device and character recognition method
JP2008251029A (en) Character recognition device and license plate recognition system
US20180158203A1 (en) Object detection device and object detection method
JP5100688B2 (en) Object detection apparatus and program
CN114693917A (en) Data enhancement method applied to signboard identification
EP3955207A1 (en) Object detection device
JP2007265292A (en) Road sign database construction device
CN111354038A (en) Anchor object detection method and device, electronic equipment and storage medium
JP5201184B2 (en) Image processing apparatus and program
JP2016053763A (en) Image processor, image processing method and program
JP5439069B2 (en) Character recognition device and character recognition method
US9792675B1 (en) Object recognition using morphologically-processed images
US9582733B2 (en) Image processing device, image processing method, and image processing program
CN113744200B (en) Camera dirt detection method, device and equipment
JP2009025856A (en) Document discrimination program and document discrimination device
JP2008027130A (en) Object recognition apparatus, object recognition means, and program for object recognition
JP2010257252A (en) Image recognition device
JP5397103B2 (en) Face position detection device, face position detection method, and program

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20110914

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20120426

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120508

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120601

R151 Written notification of patent or utility model registration

Ref document number: 5010627

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20150608

Year of fee payment: 3

S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313111

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

S533 Written request for registration of change of name

Free format text: JAPANESE INTERMEDIATE CODE: R313533

R371 Transfer withdrawn

Free format text: JAPANESE INTERMEDIATE CODE: R371

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

S533 Written request for registration of change of name

Free format text: JAPANESE INTERMEDIATE CODE: R313533

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350