JP3755854B2 - Image processing apparatus and two-dimensional line feature amount calculation apparatus - Google Patents


Info

Publication number
JP3755854B2
JP3755854B2 (application JP06273798A)
Authority
JP
Japan
Prior art keywords
pixel
density
interest
dimensional line
line feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP06273798A
Other languages
Japanese (ja)
Other versions
JPH10341336A (en)
Inventor
ルノー ミエル
浩一 橋本
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Riso Kagaku Corp
Original Assignee
Riso Kagaku Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Riso Kagaku Corp filed Critical Riso Kagaku Corp
Priority to JP06273798A priority Critical patent/JP3755854B2/en
Priority to US09/055,468 priority patent/US6167154A/en
Priority to EP98106374A priority patent/EP0871323B1/en
Priority to DE69834976T priority patent/DE69834976T2/en
Publication of JPH10341336A publication Critical patent/JPH10341336A/en
Application granted granted Critical
Publication of JP3755854B2 publication Critical patent/JP3755854B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/40062Discrimination between different image types, e.g. two-tone, continuous tone

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Control Or Security For Electrophotography (AREA)
  • Color, Gradation (AREA)
  • Manufacture Or Reproduction Of Printing Formes (AREA)

Description

[0001]
[Technical Field of the Invention]
The present invention relates to an image processing apparatus that reads a document image, processes the read digital image data, and outputs it as binary data. In particular, it relates to an image processing apparatus used in a plate-making apparatus (digital printer) that perforates heat-sensitive stencil sheets, in an apparatus (digital copier) that forms a latent image on a photosensitive member by electrophotography and transfers it onto paper, or in an apparatus that copies onto thermal paper or the like.
[0002]
[Prior Art]
To obtain the best result when a document in which binary images such as characters and line drawings coexist with gradation images such as photographs is output as binary data by such an image processing apparatus, binary image regions must be binarized to either the maximum or the minimum density with an appropriate single threshold, while gradation image regions must undergo a density conversion to an appropriate density that takes the characteristics of the input/output devices into account. To this end, each part of the image must be judged to be either a binary image region or a gradation image region, and the two kinds of region must be separated.
[0003]
Conventional methods for discriminating binary image regions from gradation image regions include dividing the image into blocks of n x n pixels, extracting features block by block, and discriminating each block from the extraction results (see, for example, JP-A-3-153167; hereinafter "method A"), and extracting features from a pixel of interest and its surrounding pixels and discriminating each pixel from the extraction results (see, for example, JP-A-1-227573; hereinafter "method B").
[0004]
Because method A discriminates block by block, block shapes appear in misclassified portions and at the boundaries between binary and gradation image regions. With method B the effect of misclassification is less conspicuous than with method A, but a density step arises between misclassified and correctly classified portions, which looks unnatural.
[0005]
Moreover, thick line drawings and solid black areas in binary image regions are difficult to distinguish from high-density portions of photographs in gradation image regions. If the discrimination parameters are tuned so that thick lines and solid black are classified as binary images, parts of photographic images are crushed; conversely, if they are tuned so that high-density photographic portions are classified as gradation images, thick lines and solid black come out too light.
[0006]
As a means of mitigating the problems of methods A and B, the applicant therefore proposed a method (JP-A-8-51538; hereinafter "method C") that detects "edge pixels" (pixels with a strong density gradient) and their surroundings, classifies edge pixels and the high-density pixels around them as binary image regions, and, so that the printed result does not look unnatural even when an edge pixel is misjudged (for example, when it is actually a contour pixel of a gradation image region), selects, as the distance between a binary-image-region pixel and the edge pixel grows, an appropriate curve from one or more density conversion curves that interpolate between the gradation-image density conversion curve and the binary-image density conversion curve.
[0007]
In method C, a pixel of interest must satisfy two conditions to be classified as a binary-image-region pixel: it must be close to an edge pixel, and it must be dark. However, for documents that mix binary image regions of differing background and character density (handwritten characters next to a newspaper clipping, or a seal impression in vermilion ink stamped on a document printed by a laser beam printer), possibly together with gradation image regions such as photographs, light characters (especially pencil or vermilion-seal characters) have neither a very strong density gradient nor high density. Such light characters are therefore misclassified as gradation image regions rather than binary image regions, and may consequently disappear from the printed output.
[0008]
[Problems to Be Solved by the Invention]
The present invention was made in view of the above circumstances. Its object is to provide an image processing apparatus that, for documents in which binary image regions of differing character density coexist, possibly together with gradation image regions such as photographs, preserves the tonality of the gradation image regions while outputting an image density signal that moderately strengthens character contrast regardless of character density, thereby solving the problem of light characters fading or disappearing.
[0009]
[Means for Solving the Problems]
The image processing apparatus of the present invention comprises:
first analysis means for analyzing how binary-image-like or gradation-image-like a pixel of interest is;
second analysis means, different from the first analysis means, which, based on reference image data around the pixel of interest, calculates at least one of the degree of peak and the degree of valley of the pixel of interest as a one-dimensional line feature for each of a plurality of different directions passing through the pixel, calculates a two-dimensional line feature from the plurality of results, and thereby analyzes how binary-image-like or gradation-image-like the pixel is; and
density conversion means for converting the density of the pixel of interest based on the analysis results of the first and second analysis means.
[0010]
Here, the "first analysis means" is means that, in combination with the second analysis means, improves the discrimination between binary-image-likeness and gradation-image-likeness. Examples include means that analyze binary-image-likeness or gradation-image-likeness from an image feature along only one arbitrary direction through the pixel of interest, and means that analyze it from image features based on edge information, such as the edge sharpness at the pixel of interest, the distance to the edge pixel nearest the pixel of interest, and line thickness (high-density vertical lines), as described in JP-A-8-51538.
[0011]
The "degree of peak and valley of the pixel of interest" is an index of the density difference between a line and its background (as it appears in the density profile) when the pixel of interest is a line pixel.
[0012]
When the surrounding pixels have density values lower than that of the pixel of interest, that is, when the density profile rises and then falls, this is called a "peak"; when the surrounding pixels have density values higher than that of the pixel of interest, that is, when the profile falls and then rises, it is called a "valley".
[0013]
The second analysis means of the image processing apparatus according to the present invention preferably expresses the above degree as the magnitude of the absolute value of the density difference between the pixel of interest and the reference image around it, determines in advance, from the relation between a predetermined angle and the number of different directions, which rank of absolute value among the one-dimensional line features should be extracted as the two-dimensional line feature, and outputs the corresponding one-dimensional line feature as the two-dimensional line feature of the pixel of interest. In this case it is still more preferable that the degree be expressed as the magnitude of the absolute value of the density difference between the pixel of interest and whichever of the two reference pixel groups flanking the pixel of interest has the smaller density difference from it.
[0014]
Here, the "predetermined angle" means the angle of a corner of a gradation image region, that is, of a part that should not be detected as a line.
[0015]
The two-dimensional line feature is calculated, for example, by equation (1).
[0016]
[Expression 1]
(equation image: Figure 0003755854; in the notation of the description, H(a,b) is the one-dimensional line feature hx(a,b) whose absolute value has the predetermined rank, the second largest in the embodiment below, among the directions x)
[0017]
The density conversion means of the image processing apparatus according to the present invention preferably selects an appropriate density conversion curve from a plurality of different density conversion curves and converts the density accordingly.
[0018]
Here it is still more preferable that the "plurality of different density conversion curves" include several curves that smoothly deform a basic density conversion curve so as to give an overall lighter and/or darker output.
[0019]
The image processing apparatus according to the present invention may, as needed, further comprise binarizing means that binarizes the density-converted pixel of interest by an error diffusion method.
[0020]
The image processing apparatus according to the present invention may also analyze the binary-image-likeness or gradation-image-likeness of the pixel of interest from the two-dimensional line feature alone, without the first analysis means. The two-dimensional line feature calculation apparatus according to the present invention serves this purpose: based on reference image data around the pixel of interest, it calculates at least one of the degree of peak and the degree of valley of the pixel as a one-dimensional line feature for each of a plurality of different directions through the pixel, and calculates the two-dimensional line feature from the plurality of results.
[0021]
[Effects of the Invention]
The image processing apparatus according to the present invention comprises second analysis means (two-dimensional line feature calculation means) that analyzes the binary-image-likeness or gradation-image-likeness of the pixel of interest from a two-dimensional line feature that retains the density-difference information of the one-dimensional line features, and first analysis means that, in combination with the second, improves this discrimination, with density conversion performed according to the analysis results. In addition to the effects of conventional image processing apparatus, still more appropriate density conversion therefore becomes possible.
[0022]
For example, dark lines can be made darker, so that light characters composed of fine lines, and light patterns in gradation image regions (seal impressions and the like), are reproduced properly. Bright lines can also be made brighter, which avoids the filling-in of gaps between character strokes. Furthermore, since dark lines can be darkened and bright lines brightened, halftone photographs composed of fine dots are prevented from becoming uniformly darker or uniformly lighter.
[0023]
As a result, when a document in which binary image regions of different densities coexist, or in which binary and gradation image regions coexist, is plate-made and printed, light characters in the binary image regions neither fade nor disappear, characters written on a dark background are not crushed, tonality is preserved in the gradation image regions, and the print shows no unnatural density steps.
[0024]
Also, since the calculation of the two-dimensional line feature is based mainly on comparison operations, the second analysis means (two-dimensional line feature calculation means) can be built from simple hardware.
[0025]
[Embodiments of the Invention]
Embodiments of the image processing apparatus according to the present invention are described in detail below with reference to the drawings.
[0026]
FIG. 1 is a block diagram showing the basic configuration of an image processing apparatus according to the present invention. The image processing apparatus 100 comprises document reading means (not shown). As shown in FIG. 3, the document reading means reads the document by sequential main and sub scanning, obtaining the image information of the pixel P(a,b), at pixel position b in the main scanning direction and pixel position a in the sub scanning direction, as a digital image signal (density signal) f(a,b).
[0027]
The image processing apparatus 100 comprises: analysis means 110, which analyzes, pixel by pixel, how binary-image-like or gradation-image-like the pixel of interest P is (hereinafter also "the state of the pixel of interest P") for the density signal f(a,b) obtained by the document reading means; and density conversion means 120, consisting of judgment means 122, which judges the state of the pixel of interest P in multiple levels between binary image and gradation image based on the analysis results of the analysis means 110, and adaptive density conversion means 124, which selects a desired density conversion curve from a set of density conversion curves according to the judgment of the state of P by the judgment means 122 and applies density conversion to the density signal f(a,b) of P based on the selected curve. As indicated by the dotted line in the figure, binarizing means 130 that binarizes the density-converted pixel of interest P may further be provided as needed, for example in an image processing apparatus for a plate-making apparatus that perforates heat-sensitive stencil sheets.
[0028]
FIG. 2 is a block diagram showing the analysis means 110 in detail. The analysis means 110 comprises: data storage means 112, which stores the density signal f(a,b); first analysis means 114, which analyzes the binary-image-likeness or gradation-image-likeness of the pixel of interest P using image data from the data storage means 112; and second analysis means 116, which, based on reference image data of pixels around P, calculates the degree of peak and/or valley of P as one-dimensional line features hx for a plurality of different directions through P, and from the plurality of results calculates the density difference H between a line and its background (hereinafter the "two-dimensional line feature") to analyze the binary-image-likeness or gradation-image-likeness of P. The first analysis means 114 consists of: edge sharpness calculation means 114a, which calculates the edge sharpness S at P; strong edge detection means 114b, which calculates a feature E representing whether the edge sharpness S at P exceeds a predetermined threshold, in other words whether a strong edge exists at P; means 114c for calculating the distance from the nearest strong edge pixel (hereinafter "distance calculation means"), which, a pixel at which the strong edge detection means 114b detects a strong edge being called a strong edge pixel, calculates the distance D from P to the nearest strong edge pixel; and high-density vertical line detection means 114d, which calculates a feature K representing whether P is a pixel of a dark vertical line of at most a certain width. The second analysis means 116 is, in other words, two-dimensional line feature calculation means that calculates the density difference H (the two-dimensional line feature) between a line and its background when P is a pixel of a line of at most a certain width. Hereinafter the second analysis means 116 is called the two-dimensional line feature calculation means 116.
[0029]
Next, the operation of the image processing apparatus configured as above is described. The operation of the two-dimensional line feature calculation means 116, which characterizes the apparatus 100, is first described in detail with reference to FIGS. 8 to 12; the overall operation is then described with reference to FIG. 5, which shows the apparatus 100 more concretely.
[0030]
To detect the characteristic of a line that "in every direction except that of the line itself, its density profile shows a density peak (a rise followed by a fall) or a density valley (a fall followed by a rise)," the two-dimensional line feature calculation means 116 calculates, direction by direction, the one-dimensional line feature hx(a,b) defined by equation (2) along straight lines in a plurality of different directions through the pixel of interest P(a,b). It then calculates from all of these per-direction results the two-dimensional line detection feature defined by equation (1), distinguishing pixels that form corners of gradation image regions from pixels that form two-dimensional lines, so that such corners are not mistaken for lines. This rests on the observation that a density profile taken along the direction of the line itself gives hx(a,b) = 0, while profiles in the other directions give large values of hx(a,b); the above "line characteristic" can therefore be detected by adopting, as the two-dimensional line detection feature, the second or later of the hx(a,b) counted from the minimum, and evaluating it.
[0031]
[Expression 1]
(equation image: Figure 0003755854; restated from the description and the worked examples of FIGS. 8 and 9:
(1) H(a,b) = the hx(a,b) whose absolute value is the second largest over the directions x;
(2) hx(a,b) = peak_x(a,b) if |peak_x(a,b)| >= |valley_x(a,b)|, otherwise valley_x(a,b);
(3) peak_x(a,b) = max{0, f(a,b) - max[min of f over S-, min of f over S+]};
(4) valley_x(a,b) = min{0, f(a,b) - min[max of f over S-, max of f over S+]};
where S- and S+ index the reference pixels on either side of P(a,b) along direction x)
[0032]
As is clear from equation (1), a positive/negative two-dimensional line feature H(a,b) means that the pixel of interest P(a,b) belongs to a line darker/brighter than the background. The higher/lower the absolute value of H(a,b), the more binary-image-region-like / gradation-image-region-like the pixel. In regions of nearly flat density, such as gradation image regions, H(a,b) takes a value close to zero, meaning that no line is visible or present. The two-dimensional line feature H(a,b) is described in more detail below.
[0033]
First, the characteristics of the one-dimensional line feature hx(a,b) of the pixel of interest P(a,b) are described. Equation (2) above defines hx(a,b) precisely; the one-dimensional line detection feature makes line detection possible regardless of line width, provided that the line characteristic (a density peak or valley in the line's density profile along every direction but the line's own) is detectable within the reference region. This is explained below with a simple example.
[0034]
FIG. 8 illustrates line detection by the one-dimensional line feature hx(a,b) (pixel-position suffixes are omitted in the figure; likewise in FIGS. 9 to 12) when a line (density value B; B > A) lies within a reference region of a given size surrounding the pixel of interest P(a,b) of a document whose background density value is A, and P(a,b) is a pixel of the line. The size of the reference region can be changed as appropriate for the width of the lines to be detected.
[0035]
As shown in FIG. 8(A), a density profile (FIG. 8(B)) is taken through P(a,b) in the direction x from lower left to upper right (45 degrees), and the one-dimensional line feature hx(a,b) is obtained as follows.
[0036]
(1) Find the maximum and minimum on each side (left and right) of P(a,b).
[0037]
In this example:
left-half maximum = B
left-half minimum = A
right-half maximum = B
right-half minimum = A
(2) Take the maximum of the left and right minima. Here it is A.
[0038]
(3) Take the minimum of the left and right maxima. Here it is B.
[0039]
(4) Subtract the value of step (2) from the density value B of P(a,b) to obtain the peak, peak_x(a,b). A positive value means the line is a "peak", that is, a dark line on a light background, and the value is the darkness of the line (its density difference from the background). Here it is B - A.
[0040]
(5) Subtract the value of step (3) from B to obtain the valley, valley_x(a,b). A negative value means the line is a "valley", that is, a bright line on a dark background, and its magnitude is the line's density difference from the background. Here it is zero.
[0041]
(6) Take whichever of the peak from step (4) and the valley from step (5) has the larger absolute value as the one-dimensional line feature hx(a,b) of P(a,b). Here it is B - A.
[0042]
Step (4) corresponds to equation (3), step (5) to equation (4), and step (6) to equation (2).
[0043]
Thus, when P(a,b) is a pixel of a line, a density peak or valley is detected, hx(a,b) takes a nonzero value, and P(a,b) is identified as part of a line.
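Steps (1) to (6) above can be sketched in Python. This is an illustrative sketch, not the patent's implementation: the function name, the sample profiles, and the density values (A = 50, B = 200) are invented for the example, and the zero-clamping of the peak and valley terms follows the worked examples of FIGS. 8 and 9 rather than the equation image itself.

```python
def line_feature_1d(profile, center):
    """One-dimensional line feature h_x along one direction.

    `profile` is a list of density values sampled along the direction;
    `center` is the index of the pixel of interest within it.
    """
    left = profile[:center]
    right = profile[center + 1:]
    f = profile[center]
    # (1)-(2) maximum of the left-half and right-half minima
    floor = max(min(left), min(right))
    # (3) minimum of the left-half and right-half maxima
    ceil = min(max(left), max(right))
    peak = max(0, f - floor)    # (4) positive for a dark line on a light background
    valley = min(0, f - ceil)   # (5) negative for a bright line on a dark background
    # (6) keep whichever deviation has the larger absolute value
    return peak if abs(peak) >= abs(valley) else valley

# Dark line (B = 200) on a light background (A = 50), as in FIG. 8: h = B - A = 150.
h_line = line_feature_1d([50, 50, 200, 200, 200, 50, 50], 3)
# Step edge, as in FIG. 9: only a rise is seen in the reference region, so h = 0.
h_edge = line_feature_1d([50, 50, 50, 200, 200, 200, 200], 3)
```

A bright line on a dark background gives a negative value of the same magnitude, matching the sign convention stated for equation (1).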
[0044]
FIG. 9 illustrates what value hx(a,b) takes when P(a,b) lies near the border of a gradation image region. Taking the density profile of FIG. 9(B) through P(a,b) in the direction X from lower left to upper right and proceeding as in steps (1) to (6) for FIG. 8 gives hx(a,b) = 0.
[0045]
Thus, when P(a,b) lies near the border of a gradation image region, only a density rise or a density fall, not both, can be observed within the reference region, so hx(a,b) takes a value close to zero (zero in this example) and P(a,b) is identified as not being part of a line, that is, as a gradation-image-region pixel.
[0046]
Next, the characteristics of the two-dimensional line feature of P(a,b) are described. Equation (1) above defines H(a,b) precisely: the one-dimensional line feature hx(a,b) is calculated along four different directions through P(a,b), taken relative to the main scanning direction as
direction 1: vertical (-90 degrees);
direction 2: diagonally down-right (-45 degrees);
direction 3: horizontal (0 degrees);
direction 4: diagonally up-right (+45 degrees);
and, of the four values, the one with the second-largest absolute value is selected.
[0047]
FIG. 10, like FIG. 8, illustrates the values of the respective one-dimensional line features hx(a,b) and of the two-dimensional line feature H(a,b) when a line (density value B; B > A) lies in a reference region of a given size surrounding P(a,b) on a document of background density A and P(a,b) is a pixel of the line. In this example the one-dimensional line feature for direction 1 (the line's own, vertical direction) is zero, but the other one-dimensional line features are B - A, so H(a,b) = B - A.
[0048]
FIG. 11, like FIG. 9, illustrates the values of hx(a,b) and H(a,b) when P(a,b) lies near the border of a gradation image region. Here hx(a,b) is zero in every direction, so H(a,b) is also zero.
[0049]
Thus H(a,b) shows the same characteristics as hx(a,b), so lines can also be detected from H(a,b); in addition, H(a,b) has the property that the area around the border of a gradation image region is not misjudged as a line.
[0050]
FIG. 12 illustrates the values of hx(a,b) and H(a,b) when P(a,b) lies near a corner of a gradation image region. Here the one-dimensional line feature for direction 4 is B - A, but the other hx(a,b) are zero, so H(a,b) is zero. That is, by H(a,b) the area around a corner of a gradation image region is correctly recognized as gradation image region, not as a line.
[0051]
Thus, in addition to the above characteristics, H(a,b) has the property that the area around a corner of a gradation image region is not misjudged as a line: when P(a,b) lies near such a corner, only one direction gives hx(a,b) a large value (direction 4 in the example above), so by equation (1) H(a,b) takes a value close to zero (zero in the example) and P(a,b) is correctly identified as a gradation-image-region pixel.
[0052]
In FIGS. 10 to 12 above, the four directions were chosen as described in consideration of the fact that in typical documents the corners of gradation image regions usually lie roughly parallel to the scanning directions. However, the directions along which one-dimensional line features are calculated, and their number, are not limited to the four above; five or more may be used, and the absolute value selected need not be the second largest, the third largest or later being possible. The rank to select should be decided from the relation between the size and orientation of the corners and the number and directions of the one-dimensional line features, so that corners of gradation image regions are not misrecognized as lines; in short, any choice that achieves this will do.
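The rank selection above can be sketched as follows. The function name and the sample feature vectors (using B - A = 150 as in the figures) are illustrative; the per-direction values would come from the one-dimensional feature of equation (2).

```python
def line_feature_2d(h_by_direction, rank=2):
    """Two-dimensional line feature H: the one-dimensional feature whose
    absolute value has the given rank (second largest in the described
    four-direction embodiment)."""
    ordered = sorted(h_by_direction, key=abs, reverse=True)
    return ordered[rank - 1]

# Pixel on a vertical dark line (FIG. 10): the line's own direction gives 0,
# the other three give B - A = 150, so H = 150.
H_line = line_feature_2d([0, 150, 150, 150])
# Pixel at a corner of a gradation region (FIG. 12): only one direction
# responds, so H = 0 and the corner is not mistaken for a line.
H_corner = line_feature_2d([0, 0, 0, 150])
```

Because the sort is on absolute values, a bright line (negative features) is returned with its sign intact, which is what lets the sign of H distinguish lines darker and brighter than the background.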
[0053]
Next, the concrete operation of the image processing apparatus 100 is described with reference to FIG. 5, a block diagram showing the apparatus 100 of FIGS. 1 and 2 more concretely.
[0054]
The elements within the means 1, 2, 3 and 4 outlined by dotted lines in the figure implement, respectively, the analysis means 110, judgment means 122, adaptive density conversion means 124 and binarizing means 130 of FIG. 1 (the same names as in FIG. 1 are used below). Within the analysis means 1, the circuits 8, 10, 11 and 12 implement, respectively, the distance calculation means 114c, edge sharpness calculation means 114a, strong edge detection means 114b and high-density vertical line detection means 114d constituting the first analysis means 114 of FIG. 2, and 9 is the two-dimensional line feature calculation circuit implementing the two-dimensional line feature calculation means 116. The RAM 7 and the image data buffer 6 within the analysis means 1 are the circuits implementing the data storage means 112. As described later, the input/output management circuit 5 within the analysis means 1 outputs the density signal f, carrying the density values input from the document reading means (not shown), to the RAM 7, and manages data input/output for the RAM 7 and the image data buffer 6.
[0055]
Since the image processing apparatus 100 repeats the same processing for every pixel of the input density value f (or density signal f; each feature described below is likewise called a "** signal" as appropriate), the following describes the processing at the time the features of the pixel of interest P(i,j-L) are calculated.
[0056]
FIG. 4 illustrates the spatial relation between the pixel of interest P(i,j-L) to be processed, the center pixel P(i,j) of the reference region, and the density signal f(i+N,j+N) of the latest pixel P(i+N,j+N), which is the input to the apparatus 100 when processing of P(i,j-L) begins. The symbols in parentheses denote the pixel numbers in the sub and main scanning directions. The main-scanning positions of the center pixel P(i,j) and the pixel of interest P(i,j-L) differ by L because the feature K produced by the high-density vertical line detection circuit 12 can only be obtained with a delay of L pixels relative to the pixel position used for the other features.
[0057]
(1) Processing flow in the analysis means 1
When the density signal f(i+N,j+N) of the latest pixel P(i+N,j+N) is input to the apparatus, the input/output management circuit 5 reads from the RAM 7 the edge sharpness feature S(i,j) at the center pixel P(i,j), the data needed by the image data buffer 6 (hereinafter the "latest image data for the image data buffer"), and D(i-1,j), D(i-1,j+1) and D(i,j-1), which are needed to calculate the distance D(i,j) from the nearest strong edge pixel; it sends the read data together with the density value of the latest pixel to the image data buffer 6, and the distance signals D(i-1,j) and so on to the circuit 8 for calculating the distance from the nearest strong edge pixel. The input/output management circuit 5 then overwrites the density value f(i-N,j+N) stored in the RAM 7 with the density value f(i+N,j+N) of the latest pixel.
[0058]
FIG. 6 illustrates the image data representing the density values held in the RAM 7 before the latest density value f(i+N,j+N) is stored. FIG. 7 shows the image data held in the image data buffer 6 for N = 8, S+ = {1,2,4,8}, S- = {-1,-2,-4,-8}, with the directions for the two-dimensional line feature x = 0, 45, -90 and -45 degrees. In FIG. 7, f(i+8,j+8) is the density signal of the latest pixel, and f(i-8,j+8), f(i-4,j+4), f(i-2,j+2), f(i-1,j+1), f(i,j+8), f(i+1,j+1), f(i+2,j+2) and f(i+4,j+4) are the latest image data for the buffer. The image data buffer 6 is built from a shift register for each line. The spaced values "1, 2, 4, 8" are used for S+ and S- to speed up the computation of the two-dimensional line feature; consecutive integer values may be used instead.
[0059]
Using the image data stored in the image data buffer 6 (see FIG. 7), the two-dimensional line feature calculation circuit 9 calculates the two-dimensional line feature H(i,j) defined by equation (1), in the same way as described for FIGS. 10 to 12, and then feeds H(i,j) into the delay circuit (FIFO) 15.
[0060]
To detect edges with the four orientations -90, -45, 0 and 45 degrees (see FIG. 7), the edge sharpness calculation circuit 10 performs convolutions with the four directional edge detection filters (edge detection coefficient matrices) given in the proviso of equation (5): vertical (-90 degrees), diagonally down-right (-45 degrees), horizontal (0 degrees) and diagonally up-right (45 degrees). The edge strength of largest absolute value among the four results is the edge sharpness feature S(i,j) defined by equation (5) (see JP-A-8-51538). After calculating S(i,j), the circuit 10 feeds it into the FIFO 16.
[0061]
[Expression 2]
(equation image: Figure 0003755854; per the description, S(i,j) is the largest of the absolute values of the four directional convolution results, the coefficient matrices being given in the image)
[0062]
The edge sharpness feature S(i,j) indicates that the higher (lower) the edge sharpness of the center pixel P(i,j), the more likely P(i,j) is a character contour pixel (a gradation-image-region pixel).
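The maximum-absolute-response structure of equation (5) can be sketched as below. The actual coefficient matrices appear only in the patent's equation image, so simple Prewitt-style 3x3 kernels are assumed here purely for illustration, and only two of the four directions are shown.

```python
def edge_sharpness(win, kernels):
    """Edge sharpness S: the largest absolute response among directional
    edge-detection filters applied to a 3x3 window (after equation (5);
    kernels are assumed, not the patent's)."""
    responses = []
    for k in kernels:
        responses.append(sum(win[r][c] * k[r][c] for r in range(3) for c in range(3)))
    return max(abs(r) for r in responses)

# Assumed kernels for the horizontal (0 deg) and vertical (-90 deg) orientations.
K_H = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]
K_V = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
# A horizontal step edge responds strongly to K_H and not at all to K_V.
win = [[50, 50, 50], [50, 50, 50], [200, 200, 200]]
S = edge_sharpness(win, [K_H, K_V])
```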
[0063]
Using the edge sharpness signal S(i,j) at the center pixel P(i,j), the strong edge detection circuit 11 calculates the strong edge detection feature E(i,j) defined by equation (6) and sends it to the circuit 8 for the distance from the nearest strong edge pixel and to the high-density vertical line detection circuit 12. The strong edge detection feature E(i,j) defined by equation (6) is true only when the edge sharpness feature S(i,j) exceeds a predetermined threshold.
[0064]
[Expression 3]
(equation image: Figure 0003755854; per the description, E(i,j) is true if S(i,j) exceeds the predetermined threshold and false otherwise)
[0065]
The circuit 8 for the distance from the nearest strong edge pixel calculates the distance D(i,j) from the nearest strong edge pixel at the center pixel P(i,j), defined by equation (7), using D(i-1,j), D(i-1,j+1) and D(i,j-1) read by the input/output management circuit 5 and the computed strong edge detection feature E(i,j), and sends it to the input of the FIFO 14 and to the input/output management circuit 5. The input/output management circuit 5 then overwrites the D(i-1,j) held in the RAM 7 with D(i,j).
[0066]
Equation (7) expresses the distance between the center pixel and the nearest of the pixels read in before the center pixel at which a strong edge exists, and provides an approximate calculation method by which this distance can be realized with simple hardware (see JP-A-8-51538).
[0067]
[Expression 4]
(equation image: Figure 0003755854)
[0068]
The distance D(i,j) from the nearest strong edge pixel indicates character-image-region-likeness the shorter the distance from P(i,j) to the nearest strong edge pixel is, and gradation-image-region-likeness the longer it is.
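The single-pass structure described for equation (7) can be sketched as below. The exact formula appears only in the patent's equation image, so a common raster-scan approximation is assumed here: zero at a strong edge pixel, otherwise one more than the minimum over the already-computed neighbours above, above-right and to the left, with an assumed cap.

```python
def distance_from_strong_edge(E, D_up, D_upright, D_left, d_max=15):
    """Approximate distance D(i,j) to the nearest strong edge pixel among
    already-scanned pixels (a sketch after equation (7); the recurrence
    and the cap d_max are assumptions)."""
    if E:
        return 0
    return min(D_up + 1, D_upright + 1, D_left + 1, d_max)
```

Because only previously computed values D(i-1,j), D(i-1,j+1) and D(i,j-1) are needed, the recurrence matches the description's claim that the distance can be produced by simple hardware in scan order.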
[0069]
The high-density vertical line detection circuit 12 calculates the high-density vertical line detection feature K(i,j-L) at the pixel of interest P(i,j-L), defined by equation (8), using the strong edge detection feature E(i,j) and the density value f(i,j-L) of P(i,j-L) output from the FIFO 13, which delays the density value f(i,j) of the center pixel P(i,j) by L pixels, and sends it to the judgment circuit 17. The feature K(i,j-L) represents the judgment of whether P(i,j-L) lies on a dark and reasonably thick vertical line (a line parallel to the sub scanning direction) in the original image: P(i,j-L) is a high-density vertical line pixel when three conditions are all satisfied, namely that the density of P(i,j-L) is reasonably high, that a strong edge exists at at least one pixel within a fixed distance of P(i,j-L) along the main scanning direction, and that the pixel immediately preceding P(i,j-L) is a strong edge pixel or a high-density vertical line pixel (see JP-A-8-51538). Hence, when a high-density vertical line is detected at P(i,j-L), the line is likely a dark vertical line forming part of a character.
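The three conditions above can be sketched as a simple conjunction. Equation (8) itself appears only as an image, so the function name, the argument layout and the density threshold are assumptions for illustration.

```python
def high_density_vline(f_p, dark_threshold, strong_edge_nearby, prev_is_edge_or_vline):
    """High-density vertical line feature K (a sketch after equation (8)):
    true only when the pixel is dark enough, a strong edge lies within a
    fixed main-scanning distance, and the preceding pixel was a strong
    edge or high-density vertical line pixel."""
    return (f_p >= dark_threshold) and strong_edge_nearby and prev_is_edge_or_vline
```

The dependence on the preceding pixel's result is what lets the detector track a vertical stroke downward in scan order once its top edge has been seen.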
[0070]
[Expression 5]
(equation image: Figure 0003755854)
[0071]
The FIFOs 14, 15 and 16 delay, by L pixels, the distance signal D(i,j) from the nearest strong edge pixel, the two-dimensional line feature signal H(i,j) and the edge sharpness signal S(i,j) at the center pixel P(i,j), respectively, to align them in pixel position with the feature K(i,j-L) at the pixel of interest P(i,j-L), and send the resulting signals D(i,j-L), H(i,j-L) and S(i,j-L) to the judgment circuit 17, which determines the number of an appropriate density conversion curve.
[0072]
(2) Processing flow in the judgment means 2
The judgment circuit 17 judges the state of the pixel of interest P(i,j-L) in multiple levels between binary image and gradation image. Using the signals D(i,j-L), H(i,j-L), S(i,j-L) and K(i,j-L) that characterize the state of P(i,j-L) as obtained above, it derives the selection signal B(i,j-L), defined by equation (9), which selects the number of the gamma curve (density conversion curve) suited to the state of P(i,j-L), and sends it to the adaptive density conversion circuit 19, which selects the density conversion curve corresponding to B(i,j-L) and converts the density.
[0073]
[Expression 6]
(equation image: Figure 0003755854; the thresholds T3 to T12 appear in the image)
[0074]
The thresholds T3 to T12 are set so that the adaptive density conversion circuit 19 selects the desired density conversion curve for each pixel, taking into account such characteristics of the pixel of interest as the following:
1. edge pixels exist mainly in binary image regions;
2. the stronger (larger) the edge sharpness (density difference), the more binary-image-like the pixel; the weaker (smaller), the more gradation-image-like;
3. if a line segment pinches the pixel of interest between a rising density edge pixel and a falling density edge pixel in the main or sub scanning direction, and the line is dark and thin, it is very likely a line forming a character (part of a binary image);
4. the shorter the distance between the pixel of interest and its nearest edge pixel, the more character-image-like the pixel; the longer, the more gradation-image-like;
5. if there is a line segment at which a two-dimensional line feature is detected, a line darker (or lighter) than the background exists, and the larger the value, the larger the density difference.
[0075]
Concretely, the selection signal B(i,j-L) has the following meaning:
B(i,j-L)=0: binary image
B(i,j-L)=1: line far darker than the background
B(i,j-L)=2: line darker than the background
B(i,j-L)=3: line slightly darker than the background
B(i,j-L)=4: line far lighter than the background
B(i,j-L)=5: line lighter than the background
B(i,j-L)=6: line slightly lighter than the background
B(i,j-L)=7: probably a binary image
B(i,j-L)=8: image that cannot be judged binary or gradation
B(i,j-L)=9: probably a gradation image
B(i,j-L)=10: gradation image
[0076]
(3) Processing flow in the density conversion means 3
The ROM 18 connected to the adaptive density conversion circuit 19 stores the adaptive density conversion data G(i,j-L) corresponding to the density conversion curve for each value of the selection signal B(i,j-L).
[0077]
Using the selection signal B(i,j-L) suited to the state of P(i,j-L) and the density value f(i,j-L) of P(i,j-L), the adaptive density conversion circuit 19 reads the adaptive density conversion data G(i,j-L) defined by equation (10) and stored beforehand in the ROM 18, and sends it to the binarizing circuit 20.
[0078]
[Expression 7]
(equation image: Figure 0003755854; G(i,j-L) is the value at f(i,j-L) of the density conversion curve selected by B(i,j-L))
[0079]
Concretely, as shown in FIG. 13, the adaptive density conversion circuit 19 performs density conversion according to one of eleven density conversion curves: the curve 21 for binary image regions, which binarizes the input density to either maximum or minimum; the curve 25 for gradation image regions, which preserves the gradation characteristics of the input density in the output density; three interpolating curves between these two, namely the curve 22 closer to the binary-image-region curve 21, the curve 24 closer to the gradation-image-region curve 25, and the curve 23 giving tones intermediate between curves 22 and 24; the curves 26 (large peak), 27 (medium peak) and 28 (small peak), which smoothly deform the gradation-image curve 25 to give an overall far / moderately / slightly darker output; and the curves 29 (small valley), 30 (medium valley) and 31 (large valley), which smoothly deform the gradation-image curve 25 to give an overall slightly / moderately / far lighter output.
[0080]
Accordingly, depending on the selection signal B(i,j-L) input from the judgment circuit 17, the adaptive density conversion circuit 19 selects a density conversion curve by the rule:
B(i,j-L)=0: curve 21 for binary image regions
B(i,j-L)=1: large-peak curve 26
B(i,j-L)=2: medium-peak curve 27
B(i,j-L)=3: small-peak curve 28
B(i,j-L)=4: large-valley curve 31
B(i,j-L)=5: medium-valley curve 30
B(i,j-L)=6: small-valley curve 29
B(i,j-L)=7: interpolating curve 22, closer to the binary-image curve 21
B(i,j-L)=8: curve 23, intermediate between curves 22 and 24
B(i,j-L)=9: interpolating curve 24, closer to the gradation-image curve 25
B(i,j-L)=10: curve 25 for gradation image regions
and converts the density value f(i,j-L) according to the selected curve, outputting the adaptive density conversion data G(i,j-L).
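The ROM lookup of equation (10) can be sketched as one 256-entry table per selection signal B. The curves themselves are only drawn in FIG. 13, so two assumed stand-ins are used here: a hard threshold for the binary-region curve 21 and the identity for the gradation-region curve 25.

```python
# Illustrative stand-ins for two of the eleven curves of FIG. 13.
CURVES = {
    0: [0 if v < 128 else 255 for v in range(256)],  # binary-region curve 21
    10: list(range(256)),                            # gradation-region curve 25
}

def adaptive_convert(B, f):
    """G(i,j-L) = curve_B(f(i,j-L)), after equation (10)."""
    return CURVES[B][f]
```

Precomputing each curve as a lookup table mirrors the hardware design: the ROM 18 holds the tables and the circuit 19 only performs an address lookup per pixel.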
[0081]
(4) Processing flow in the binarizing means 4
The density value G(i,j-L) of the pixel of interest P(i,j-L), adaptively density-converted in the adaptive density conversion circuit 19, is input to the binarizing circuit 20, where it is binarized by a method based on error diffusion, and binary image data suited to stencil printing is output.
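The patent states only that an error-diffusion-based method is used; a minimal one-dimensional variant is sketched below, pushing the whole quantization error to the next pixel. Practical circuits typically use two-dimensional kernels such as Floyd-Steinberg; this sketch only shows the principle.

```python
def error_diffuse(row):
    """Binarize one row of 8-bit densities by 1-D error diffusion:
    threshold at 128 and carry the quantization error to the next pixel."""
    out, err = [], 0
    for v in row:
        v = v + err
        b = 255 if v >= 128 else 0
        out.append(b)
        err = v - b
    return out
```

A mid-grey run alternates between black and white, so the average output density tracks the input density, which is what preserves gradation through binarization.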
[0082]
Thus, by selecting, according to the magnitude of a two-dimensional line feature indicating a dark (bright) line, a density conversion curve 26, 27, 28 (29, 30, 31) that gives an overall darker (brighter) output than the gradation-image curve 25, light characters composed of fine lines can be reproduced properly and the filling-in of bright lines can be avoided. Moreover, by making dark lines darker and bright lines brighter, halftone photographs composed of fine dots are prevented from becoming uniformly darker or uniformly lighter.
[0083]
As described above, by judging the image state from the two-dimensional line feature, the image processing apparatus according to the present invention can preserve the tonality of gradation image regions and, by moderately strengthening or weakening character contrast regardless of character density, can solve the problem of light characters fading or disappearing.
[0084]
Furthermore, in the analysis means 1 of FIG. 5, feature calculation circuits 8, 10, 11 and 12, similar to those of the image processing apparatus of JP-A-8-51538, are combined with the two-dimensional line feature calculation circuit 9, and the adaptive density conversion circuit 19 also uses the outputs of the circuits 8, 10, 11 and 12 in judging the image state, so that density conversion still better suited to the image state becomes possible: the closer to an edge, the more the curve 22 or 23 near the binary-image-region curve 21 is selected; the farther from an edge, the more the curve 24 or 23 near the gradation-image-region curve 25 is selected. As with the apparatus of JP-A-8-51538, thick characters and solid areas can thus be rendered deep black, and the tonality of the dark portions of gradation images can also be preserved.
[0085]
Note that, as in the embodiment above, the image processing apparatus according to the present invention can, by being combined with the density conversion processing of a conventional apparatus, add the effect of judging the image state from the two-dimensional line feature to the effects of the conventional apparatus; it is also possible, however, to judge the image state from the two-dimensional line feature alone and perform appropriate density conversion.
[Brief Description of the Drawings]
FIG. 1 is a block diagram showing the basic configuration of an image processing apparatus according to the present invention.
FIG. 2 is a block diagram showing details of the analysis means of the apparatus.
FIG. 3 explains the relation between pixel positions and the document scanning directions.
FIG. 4 explains the relation between the pixel of interest, the center pixel and the latest pixel.
FIG. 5 is a block diagram showing the apparatus in detail.
FIG. 6 shows the positions on the document of the pixels corresponding to the image density signals stored in memory.
FIG. 7 shows an example of the data stored in the image data buffer (for N = 8, S+ = {1,2,4,8}, S- = {-1,-2,-4,-8}).
FIG. 8 explains line detection by the one-dimensional line feature.
FIG. 9 explains the behavior of the one-dimensional line feature near the border of a gradation image region.
FIG. 10 explains line detection by the two-dimensional line feature.
FIG. 11 explains the behavior of the two-dimensional line feature near the border of a gradation image region.
FIG. 12 explains the behavior of the two-dimensional line feature near a corner of a gradation image region.
FIG. 13 shows the density conversion curves of the apparatus.
[Reference Numerals]
1 analysis means
2 judgment means
3 density conversion means
4 binarizing means
5 input/output management circuit
6 image data buffer
7 RAM storing image data
8 circuit for calculating the distance from the nearest strong edge pixel
9 two-dimensional line feature calculation circuit
10 edge sharpness calculation circuit
11 strong edge detection circuit
12 high-density vertical line detection circuit
13-16 delay circuits (FIFO)
17 judgment circuit
18 ROM
19 adaptive density conversion circuit
20 binarizing circuit
21 density conversion curve for binary image regions (selection signal = 0)
22 density conversion curve closer to the binary image (selection signal = 7)
23 intermediate density conversion curve (selection signal = 8)
24 density conversion curve closer to the gradation image (selection signal = 9)
25 density conversion curve for gradation image regions (selection signal = 10)
26 large-peak density conversion curve (selection signal = 1)
27 medium-peak density conversion curve (selection signal = 2)
28 small-peak density conversion curve (selection signal = 3)
29 small-valley density conversion curve (selection signal = 6)
30 medium-valley density conversion curve (selection signal = 5)
31 large-valley density conversion curve (selection signal = 4)
110 analysis means
112 data storage means
114 first analysis means
114a edge sharpness calculation means
114b strong edge detection means
114c means for calculating the distance from the pixel of interest to the nearest strong edge pixel
114d high-density vertical line detection means
116 second analysis means
BACKGROUND OF THE INVENTION
The present invention relates to an image processing apparatus that reads an image of a document, performs image processing on the read digital image data, and outputs the processed data as binary data, in particular, a plate making apparatus (digital printing machine) that punches a heat-sensitive stencil sheet, and electrophotographic technology. The present invention relates to an image processing apparatus used for an apparatus (digital copying machine) for forming a latent image on a photosensitive member and transferring it onto a sheet, or an apparatus for copying on thermal paper or the like.
[0002]
[Prior art]
In order to obtain the best results when outputting a binary document such as characters and line drawings and a gradation image such as a photograph as binary data using the image processing apparatus as described above, The image area must be binarized to either the maximum density or the minimum density by an appropriate single threshold, and the gradation image area needs to be subjected to density conversion for conversion to an appropriate density in consideration of the characteristics of the input / output device. . For this purpose, it is necessary to determine whether each part of the image is a binary image area or a gradation image area, and to separate both areas.
[0003]
Conventionally, in order to determine whether a binary image area or a gradation image area, an image is divided into blocks of n × n pixels, feature extraction is performed for each block, and block determination is performed using the feature extraction result. A method of performing (for example, refer to Japanese Patent Application Laid-Open No. 3-153167) (hereinafter referred to as method A), a method of performing feature extraction using a pixel of interest and its surrounding pixels, and performing a discrimination for each pixel using the feature extraction result (See, for example, JP-A-1-227573) (hereinafter referred to as method B).
[0004]
In the method A, since each block is discriminated, there is a problem that the shape of the block appears in the misidentified portion or the boundary portion between the binary image region and the gradation image region. When the method B is used, the influence of misclassification is less conspicuous than the method A, but there is a problem in that there is a difference in density between the misclassified part and the correctly judged part, and there is a sense of incongruity. .
[0005]
Further, it is difficult to distinguish a thick line drawing or black solid in the binary image area from a high density portion of a photograph in the gradation image area. Adjusting the discriminating parameters so that thick line drawings and black solids can be discriminated from binary images will result in a crushed part in the photo image, while adjusting the discriminating parameters so that the high density part of the photo can be discriminated as a gradation image Thick line drawing and black solid density will be light.
[0006]
Accordingly, the applicant of the present application, as means for alleviating the problems of the methods A and B, in order to determine whether the image area is a binary image area or a gradation image area, an “edge pixel” (a pixel with a strong density gradient) is used. ) And its periphery, edge pixels and their surrounding high density pixels are discriminated as binary image areas, and if there is a misjudgment that the edge pixels are contour pixels of the gradation image area, etc. One or more density conversion curves for interpolating between the gradation image density conversion curve and the binary image density conversion curve as the distance between the binary image region pixel and the edge pixel increases so that a sense of incongruity does not occur. A method of selecting an appropriate one is proposed (Japanese Patent Laid-Open No. 8-51538, hereinafter referred to as Method C).
[0007]
In this method C, in order for the pixel of interest to be determined as a binary image area pixel, two conditions are required that are close to the edge pixel and dark, but there is a handwritten character next to the newspaper clip. Or a document that contains binary image areas with different background density or character density, such as when a mark (red text) is pressed on a document printed with a laser beam printer, and also gradations such as photographs In a manuscript with mixed image areas, the density gradient of thin characters (especially characters with pencils and vermilion marks) is not so strong and the density is low. And as a result, it may disappear on the printed matter.
[0008]
[Problems to be solved by the invention]
The present invention has been made in view of the above circumstances, and the gradation image area is more suitable for an original document in which binary image areas having different character densities are mixed and an original image in which gradation image areas such as photographs are also mixed. Image processing that can preserve tone and output image density signal to moderately increase character contrast even if the character density is different, and solve the problem of blurring and disappearance of thin characters The object is to provide an apparatus.
[0009]
[Means for Solving the Problems]
The image processing apparatus of the present invention includes a first analysis unit that analyzes the binary image-like gradation image-likeness of the target pixel;
Based on the reference image data around the target pixel, at least one of the degrees of peaks and valleys of the target pixel is calculated as a one-dimensional line feature amount for a plurality of different directions passing through the target pixel, and based on the calculated results A second analysis means different from the first analysis means for calculating a two-dimensional line feature value and analyzing a binary image-like gradation image-likeness of a target pixel;
And density conversion means for converting the density of the pixel of interest based on the analysis results of the first and second analysis means.
[0010]
Here, the “first analysis unit” is a combination of the second analysis unit and improves the discrimination effect of the binary image-like gradation image-likeness. For example, analysis of the likelihood of a binary image or gradation image based on the image feature quantity in only one direction passing through the pixel of interest, or the edge sharpness of the pixel of interest as described in Japanese Patent Laid-Open No. 8-51538 Analyzing the likelihood of a binary image or a gradation image based on the image feature amount according to edge information such as the distance to the edge pixel closest to the target pixel and the thickness of the line (high density vertical line). .
[0011]
The “degree of peak and valley of the target pixel” is an index representing the degree of density difference (appearing in the density profile) between the line and the ground when the target pixel is a line pixel.
[0012]
When the peripheral pixel has a density value lower than the density of the pixel of interest, that is, when there is a portion that falls after rising in the density profile, it is called a “mountain”, and the peripheral pixel has a density value higher than the density of the pixel of interest. When there is a portion that rises after falling in the density profile, it is called “valley”.
[0013]
The second analyzing means of the image processing apparatus according to the present invention indicates the degree by the magnitude of the absolute value of the density difference between the target pixel and the reference image around the target pixel, and has a predetermined angle and a plurality of Based on the relationship with the number of different directions, it is determined in advance how many one-dimensional line feature values having the largest absolute value should be extracted as the two-dimensional line feature values, and the corresponding one-dimensional line feature values are set to 2 of the target pixel. It is desirable to output as a dimension line feature. In this case, the degree may be indicated by the magnitude of the absolute value of the density difference between the target pixel and the pixel group having the smaller density difference from the target pixel among the reference pixel groups in the reference image sandwiching the target pixel. More desirable.
[0014]
Here, the “predetermined angle” means an angle of a portion of the gradation image area that is not desired to be detected as a line.
[0015]
The two-dimensional line feature amount is calculated based on, for example, Expression (1).
[0016]
[Expression 1]
Figure 0003755854
[0017]
It is desirable that the density conversion means of the image processing apparatus according to the present invention is to perform density conversion by selecting an appropriate density conversion curve from a plurality of different density conversion curves.
[0018]
Here, “a plurality of different density conversion curves” is more preferable if it has a plurality of density conversion curves that smoothly deform the basic density conversion curve so as to give an overall lighter and / or darker output. .
[0019]
The image processing apparatus according to the present invention may be provided with binarization means for performing binarization processing on the pixel of interest whose density has been converted by an error diffusion method as necessary.
[0020]
Note that the image processing apparatus according to the present invention may analyze only the binary image-like gradation image-likeness of the target pixel by calculating only the two-dimensional line feature amount without including the first analyzing means. . The two-dimensional line feature amount calculating means according to the present invention is for this purpose, and based on reference image data around the target pixel, at least one of the degrees of the peaks and valleys of the target pixel for a plurality of different directions passing through the target pixel. Are each calculated as a one-dimensional line feature value, and a two-dimensional line feature value is calculated based on the calculated results.
[0021]
【The invention's effect】
The image processing apparatus according to the present invention is a second analysis unit that analyzes the likelihood of a binary image or a gradation image of a pixel of interest based on a two-dimensional line feature amount having the property of a one-dimensional line feature amount having density difference information. Two-dimensional line feature calculation means) and first analysis means for improving the discrimination effect of the likelihood of binary image and gradation image by a combination of the second analysis means, and the density conversion is performed according to the analysis result. Thus, in addition to the effects of the conventional image processing apparatus, more appropriate density conversion can be performed.
[0022]
For example, a dark line can be made deeper, and a thin character composed of fine lines and a thin pattern (such as a seal) in a gradation image area can be appropriately reproduced. In addition, bright lines can be brightened, and crushing of gaps between character strokes can be avoided. Further, since the dark line can be made darker and the bright line can be made brighter, it is possible to prevent the halftone dot photo consisting of fine dots from becoming unilaterally dark or unilaterally thinning.
[0023]
As a result, when an original document in which a plurality of binary image areas having different densities are mixed or an original document in which binary image areas and gradation image areas are mixed is prepress-printed, blurring or disappearance of thin characters in the binary image areas is eliminated. Alternatively, it is possible to obtain a printed matter in which characters written on a dark background are not crushed, gradation is preserved in the gradation image area, and a sense of incongruity due to a difference in density does not occur.
[0024]
Further, since the calculation of the two-dimensional line feature amount is mainly based on a comparison operation, the second analysis means (two-dimensional line feature calculation means) can be configured with simple hardware.
[0025]
DETAILED DESCRIPTION OF THE INVENTION
Embodiments of an image processing apparatus according to the present invention will be described below in detail with reference to the drawings.
[0026]
FIG. 1 is a block diagram showing the basic configuration of an image processing apparatus according to the present invention. The image processing apparatus 100 receives its input from a document reading means (not shown). As shown in FIG. 3, the document reading means sequentially reads the original by main scanning and sub-scanning, converting the image information of the pixel P(a, b) at pixel position b in the main scanning direction and pixel position a in the sub-scanning direction into a digital image signal (density signal) f(a, b).
[0027]
For the density signal f(a, b) obtained by the document reading means, the image processing apparatus 100 comprises, for each pixel: analysis means 110 for analyzing the state of the pixel of interest P, i.e., whether it belongs to a binary image or a gradation image; determination means 122 for determining, in multiple stages between binary image and gradation image, the state of the pixel of interest P based on the analysis result of the analysis means 110; and density conversion means 120 including adaptive density conversion means 124, which selects a desired density conversion curve from a set of density conversion curves according to the determination result of the determination means 122 and performs density conversion processing on the density signal f(a, b) of the pixel of interest P based on the selected curve. A binarization means 130 for binarizing the density-converted pixel of interest P, indicated by a dotted line in the drawing, may further be provided as required, for example in an image processing apparatus for a plate-making apparatus that perforates a heat-sensitive stencil sheet.
[0028]
FIG. 2 is a block diagram showing the details of the analysis means 110. The analysis means 110 comprises: data storage means 112 for storing the density signal f(a, b); first analysis means 114 for analyzing the binary-image likeness or gradation-image likeness of the pixel of interest P using the image data from the data storage means 112; and second analysis means 116, which, based on reference image data of pixels around the pixel of interest P, calculates as a one-dimensional line feature amount hx the degree of a density peak and/or valley at the pixel of interest P for each of a plurality of different directions passing through the pixel of interest P, calculates from these results the density difference between a line and its background (hereinafter referred to as the "two-dimensional line feature amount") H, and analyzes the binary-image likeness or gradation-image likeness of the pixel of interest P. The first analysis means 114 comprises: edge sharpness calculation means 114a for calculating the edge sharpness S at the pixel of interest P; strong edge detection means 114b for calculating a feature amount E representing whether the edge sharpness S at the pixel of interest P exceeds a predetermined threshold, in other words, whether a strong edge exists at the pixel of interest P; distance calculation means 114c for calculating the distance D from the pixel of interest P to the nearest strong edge pixel (a pixel at which the strong edge detection means 114b has detected a strong edge); and high-density vertical line detection means 114d for calculating a feature amount K representing whether the pixel of interest P is a pixel of a dark vertical line of a certain width or less.
In other words, the second analysis means 116 is a two-dimensional line feature amount calculating means that calculates the density difference (two-dimensional line feature amount) H between a line and its background when the pixel of interest P is a pixel constituting a line of a certain width or less. Hereinafter, the second analysis means 116 is referred to as the two-dimensional line feature amount calculation means 116.
[0029]
Next, the operation of the image processing apparatus having the above configuration will be described. First, the operation of the two-dimensional line feature amount calculation means 116, which characterizes the image processing apparatus 100, is described in detail with reference to FIGS. 8 to 12; the overall operation of the image processing apparatus 100 is then described more specifically with reference to FIG. 5.
[0030]
In order to detect a density peak (a portion that falls after rising) or valley (a portion that rises after falling) in the density profile taken in every direction except the direction of the line itself, the two-dimensional line feature amount calculation means 116 calculates the one-dimensional line feature amount hx(a, b), defined by equation (2), along each of a plurality of straight lines in different directions passing through the pixel of interest P(a, b). Next, the two-dimensional line feature amount defined by equation (1) is calculated from the results for all directions, so that pixels constituting a corner of a gradation image area are distinguished from pixels constituting a two-dimensional line, and corners of gradation image areas are not erroneously recognized as lines. If the density profile is taken along the direction of the line itself, the one-dimensional line feature amount hx(a, b) becomes zero, whereas profiles taken in the other directions yield large values of hx(a, b). The detection of the "line feature" therefore adopts, as the two-dimensional line feature amount, the one-dimensional line feature amount hx(a, b) whose absolute value is the second largest among the directions, and evaluates that value.
[0031]
[Expression 1]
Figure 0003755854
[0032]
As is clear from equation (1), a positive/negative sign of the two-dimensional line feature amount H(a, b) of the pixel of interest P(a, b) means that the pixel of interest P(a, b) is a pixel of a line darker/brighter than the background. A larger absolute value of H(a, b) indicates that the pixel of interest is more likely to belong to a binary image area, and a smaller absolute value that it is more likely to belong to a gradation image area. In a region of almost flat density, such as within a gradation image area, H(a, b) takes a value close to zero, meaning that no line is present. The two-dimensional line feature amount H(a, b) is described in more detail below.
[0033]
First, the characteristics of the one-dimensional line feature amount hx(a, b) of the pixel of interest P(a, b) are described. Equation (2) above precisely defines hx(a, b). The one-dimensional line feature amount exploits the property of a line that, along every direction other than that of the line itself, a density peak or valley appears in the density profile; if such a peak or valley can be detected within the reference area, the line can be detected regardless of its width. This is illustrated below with a simple example.
[0034]
FIG. 8 illustrates line detection by the one-dimensional line feature amount hx(a, b) when a line (density value B; B > A) exists in a reference area of a predetermined size surrounding the pixel of interest P(a, b) of a document whose background density is A, and the pixel of interest P(a, b) is a pixel constituting the line (the suffix indicating the pixel position is omitted in the figure; the same applies to FIGS. 9 to 12). The size of the reference area can be changed appropriately according to the width of the lines to be detected.
[0035]
As shown in FIG. 8(A), a density profile (FIG. 8(B)) is taken in the direction x (45°) from lower left to upper right through the pixel of interest P(a, b), and the one-dimensional line feature amount hx(a, b) is obtained as follows.
[0036]
(1) The maximum value and the minimum value are obtained on the left and right of the target pixel P (a, b), respectively.
[0037]
In this example, the maximum value of the left half surface = B
Minimum value on left half = A
Maximum value on right half = B
Minimum value on right half = A
(2) Find the maximum of the left and right minimum values obtained above. In this example, it is A.
[0038]
(3) Find the minimum of the left and right maximum values obtained above. In this example, B.
[0039]
(4) The peakx(a, b) is obtained by taking the difference between the density value B of the pixel of interest P(a, b) and the maximum of the left and right minima obtained in (2). If this value is positive, a "mountain" is present, i.e., a dark line exists on a bright background, and the value is the density of the line (its density difference from the background). In this example, it is B-A.
[0040]
(5) The valleyx(a, b) is obtained by taking the difference between the density value B of the pixel of interest P(a, b) and the minimum of the left and right maxima obtained in (3). If this value is negative, a "valley" is present, i.e., a bright line exists on a dark background, and the value is the density of the line (its density difference from the background). In this example, it is zero.
[0041]
(6) Of the peakx(a, b) and valleyx(a, b) obtained in (4) and (5), the one with the larger absolute value is taken as the one-dimensional line feature amount hx(a, b) of the pixel of interest P(a, b). In this example, it is B-A.
[0042]
The above (4) corresponds to Expression (3), (5) corresponds to Expression (4), and (6) corresponds to Expression (2).
[0043]
In this way, when the pixel of interest P(a, b) is a pixel constituting a line, a density peak or valley is detected, so that the one-dimensional line feature amount hx(a, b) has an absolute value greater than zero, and the pixel of interest P(a, b) is identified as part of the line.
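Steps (1) to (6) can be sketched in Python as follows. Equations (2) to (4) themselves appear only as images in this copy, so this is a reconstruction from the worked example: the clamping of peak and valley toward zero is inferred from the fact that the feature must vanish at a pure edge (FIG. 9), and the function name and list-based profile representation are illustrative.

```python
def one_dim_line_feature(profile, center):
    """One-dimensional line feature h for a density profile taken along one
    direction through the pixel of interest, following steps (1)-(6).
    Positive result: dark line on a bright background ("mountain");
    negative result: bright line on a dark background ("valley")."""
    left, right = profile[:center], profile[center + 1:]
    p = profile[center]
    max_of_minima = max(min(left), min(right))   # steps (1)-(2)
    min_of_maxima = min(max(left), max(right))   # steps (1) and (3)
    peak = max(0, p - max_of_minima)    # step (4): only a positive value means a mountain
    valley = min(0, p - min_of_maxima)  # step (5): only a negative value means a valley
    return peak if abs(peak) >= abs(valley) else valley  # step (6)

# FIG. 8 case: dark line (B = 5) on a bright background (A = 1) -> h = B - A = 4
print(one_dim_line_feature([1, 5, 5, 5, 1], 2))   # 4
# FIG. 9 case: a mere edge in the reference area -> h = 0
print(one_dim_line_feature([1, 1, 1, 5, 5], 1))   # 0
```

A bright line on a dark background (e.g. the profile `[5, 1, 1, 1, 5]`) yields the negative value -(B-A), matching the sign convention stated for equation (1).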
[0044]
FIG. 9 illustrates the value of the one-dimensional line feature amount hx(a, b) when the pixel of interest P(a, b) is located near the edge of a gradation image area. When the one-dimensional line feature amount hx(a, b) of the pixel of interest P(a, b) is obtained along the direction x from lower left to upper right in the same manner as in steps (1) to (6) for FIG. 8, it is zero.
[0045]
As described above, when the pixel of interest P(a, b) is located near the edge of a gradation image area, only a rise or fall of density is observed in the reference area, so the one-dimensional line feature amount hx(a, b) takes a value close to zero (zero in this example), and the pixel of interest P(a, b) is identified not as part of a line but as a pixel of the gradation image area.
[0046]
Next, the characteristics of the two-dimensional line feature amount of the pixel of interest P(a, b) are described. Equation (1) above precisely defines the two-dimensional line feature amount H(a, b): the one-dimensional line feature amount hx(a, b) is calculated along four different directions passing through the pixel of interest P(a, b), taken with respect to the main scanning direction as
direction 1: vertical (-90°)
direction 2: diagonally down to the right (-45°)
direction 3: horizontal (0°)
direction 4: diagonally up to the right (+45°)
and the one whose absolute value is the second largest of the four values is selected.
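A minimal sketch of this selection rule, assuming the four one-dimensional feature values have already been computed (equation (1) itself is only an image in this copy):

```python
def two_dim_line_feature(h_by_direction):
    """Adopt, from the one-dimensional line features taken along the four
    directions, the value whose absolute value is the second largest
    (sign preserved)."""
    ranked = sorted(h_by_direction, key=abs, reverse=True)
    return ranked[1]

# Line as in FIG. 10: zero only along the line itself -> H = B - A
print(two_dim_line_feature([0, 4, 4, 4]))   # 4
# Corner as in FIG. 12: nonzero in only one direction -> H = 0
print(two_dim_line_feature([0, 0, 0, 4]))   # 0
```

Taking the second-largest absolute value is exactly what suppresses corners: a corner excites only one direction, so the second-ranked value is near zero, while a true line excites every direction except its own.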
[0047]
FIG. 10 illustrates, as in FIG. 8, what values the one-dimensional line feature amounts hx(a, b) and the two-dimensional line feature amount H(a, b) take when a line (density value B; B > A) exists in a reference area of a predetermined size surrounding the pixel of interest P(a, b) of a document whose background density is A, and the pixel of interest P(a, b) is a pixel constituting the line. In this example, the one-dimensional line feature amount along the direction of the line itself is zero, but the other one-dimensional line feature amounts hx(a, b) are B-A, so the two-dimensional line feature amount H(a, b) is B-A.
[0048]
FIG. 11 illustrates, as in FIG. 9, what values the one-dimensional line feature amounts hx(a, b) and the two-dimensional line feature amount H(a, b) take when the pixel of interest P(a, b) is located near the edge of a gradation image area. In this example, the one-dimensional line feature amount hx(a, b) is zero in every direction, so the two-dimensional line feature amount H(a, b) is also zero.
[0049]
Thus, since the two-dimensional line feature amount H(a, b) exhibits the same characteristics as the one-dimensional line feature amount hx(a, b), line detection can also be performed using H(a, b); in addition, H(a, b) has the characteristic that the neighborhood of the edge of a gradation image area is not erroneously judged to be a line.
[0050]
FIG. 12 illustrates what values the one-dimensional line feature amounts hx(a, b) and the two-dimensional line feature amount H(a, b) take when the pixel of interest P(a, b) is located near a corner of a gradation image area. In this example, the one-dimensional line feature amount in one direction (direction 4) is B-A, but the other one-dimensional line feature amounts hx(a, b) are zero, so the two-dimensional line feature amount H(a, b) is zero. That is, according to the two-dimensional line feature amount H(a, b), the neighborhood of a corner of a gradation image area is correctly recognized as a gradation image area, not as a line.
[0051]
As described above, the two-dimensional line feature amount H(a, b) has, in addition to the above characteristic, the characteristic that the neighborhood of a corner of a gradation image area is not erroneously judged to be a line: when the pixel of interest P(a, b) is located near a corner of a gradation image area, the one-dimensional line feature amount hx(a, b) is large in only one direction (direction 4 in the above example), so that by equation (1) the two-dimensional line feature amount H(a, b) takes a value close to zero (zero in the above example), and the pixel of interest P(a, b) is correctly identified as a pixel of the gradation image area.
[0052]
In FIGS. 10 to 12 above, the four directions were used in consideration of the fact that in ordinary documents the corners of gradation image areas are often aligned approximately with the scanning directions. However, the number of directions in which the one-dimensional line feature amount is calculated is not necessarily limited to the above four and may be, for example, five or more, and the absolute value selected is likewise not limited to the second largest; the third largest or a subsequent one may be selected. The selection should be made according to the relationship between the size and direction of the corners and the directions and number of the one-dimensional line feature amounts to be calculated; in short, any method may be used as long as corners of gradation image areas are not erroneously recognized as lines.
[0053]
Next, a specific operation of the image processing apparatus 100 will be described with reference to FIG. FIG. 5 is a block diagram showing the image processing apparatus 100 shown in FIGS. 1 and 2 more specifically.
[0054]
The elements included in each of the means 1, 2, 3 and 4 indicated by dotted lines in the figure are the parts that realize the analysis means 110, the determination means 122, the adaptive density conversion means 124 and the binarization means 130 shown in FIG. 1 (hereinafter the same names as in FIG. 1 are used). The circuits 8, 10, 11 and 12 in the analysis means 1 realize, respectively, the distance calculation means 114c, the edge sharpness calculation means 114a, the strong edge detection means 114b and the high-density vertical line detection means 114d constituting the first analysis means 114 shown in FIG. 2, and the circuit 9 is a two-dimensional line feature amount calculation circuit realizing the two-dimensional line feature amount calculation means 116. The RAM 7 and the image data buffer 6 in the analysis means 1 are circuits realizing the data storage means 112. As described later, the input/output management circuit 5 in the analysis means 1 outputs the density signal f, carrying the density values input from the document reading means (not shown), to the RAM 7, and also manages data input and output in the RAM 7 and the image data buffer 6.
[0055]
Since the image processing apparatus 100 repeats the same processing for each pixel of the input density value f (or density signal f; each feature amount described later is likewise referred to as a "** signal" where appropriate), the following description covers the processing at the time the feature amounts of the pixel of interest P(i, j-L) are calculated.
[0056]
FIG. 4 illustrates the spatial relationship between the pixel of interest P(i, j-L) to be processed, the central pixel P(i, j) at the center of the reference area, and the latest pixel P(i+N, j+N), whose density signal f(i+N, j+N) is input to the image processing apparatus 100 when processing of the pixel of interest P(i, j-L) starts. Here, the symbols in parentheses denote pixel numbers in the sub-scanning and main scanning directions. The pixel position in the main scanning direction of the pixel of interest P(i, j-L) lags that of the central pixel P(i, j) by L because the feature amount K obtained by the high-density vertical line detection circuit 12 becomes available only with a delay of L pixels relative to the position at which the other feature amounts are obtained.
[0057]
(1) Process flow in the analysis means 1
When the density signal f(i+N, j+N) of the latest pixel P(i+N, j+N) is input to this apparatus, the input/output management circuit 5 reads from the RAM 7 the edge sharpness feature amount S(i, j) at the central pixel P(i, j), the data required by the image data buffer 6 (hereinafter "latest image data for the image data buffer"), and the distances D(i-1, j), D(i-1, j+1) and D(i, j-1) required to calculate the distance D(i, j) from the nearest strong edge pixel; it sends the latest image data and the density value of the latest pixel to the image data buffer 6, and sends the read distance signals D(i-1, j), etc. to the distance calculation circuit 8. The input/output management circuit 5 then overwrites the density value f(i-N, j+N) stored in the RAM 7 with the density value f(i+N, j+N) of the latest pixel.
[0058]
FIG. 6 illustrates the image data representing density values stored in the RAM 7 before the density value f(i+N, j+N) of the latest pixel is stored. FIG. 7 shows the image data stored in the image data buffer 6 for N = 8, S+ = {1, 2, 4, 8}, S- = {-1, -2, -4, -8}, and directions x = 0°, 45°, -90°, -45° for obtaining the two-dimensional line feature amount. In FIG. 7, f(i+8, j+8) is the density signal of the latest pixel, and f(i-8, j+8), f(i-4, j+4), f(i-2, j+2), f(i-1, j+1), f(i, j+8), f(i+1, j+1), f(i+2, j+2) and f(i+4, j+4) are the latest image data for the image data buffer. The image data buffer 6 is configured with a shift register for each line. The jump offsets "1, 2, 4, 8" are used as S+ and S- here to increase the speed of the two-dimensional line feature amount calculation; consecutive integer values may be used instead.
[0059]
Using the image data stored in the image data buffer 6 (see FIG. 7), the two-dimensional line feature amount calculation circuit 9 calculates the two-dimensional line feature amount H(i, j) defined by equation (1), in the same manner as described above with reference to FIGS. 8 to 12, and inputs H(i, j) to the delay circuit (FIFO) 15.
[0060]
To detect edges in the four orientations (see FIG. 7) of -90°, -45°, 0° and 45°, the edge sharpness calculation circuit 10 computes convolutions with edge detection filters (edge detection coefficient matrices) for the four directions shown in the proviso of equation (5): vertical (-90°), diagonally down to the right (-45°), horizontal (0°) and diagonally up to the right (45°). The maximum edge strength among the absolute values of the four results is the edge sharpness feature amount S(i, j) defined by equation (5) (see Japanese Patent Laid-Open No. 8-51538). After the edge sharpness calculation circuit 10 calculates S(i, j), it inputs S(i, j) to the FIFO 16.
[0061]
[Expression 2]
Figure 0003755854
[0062]
The edge sharpness feature amount S(i, j) indicates that the higher the edge sharpness at the central pixel P(i, j), the more likely the central pixel P(i, j) is a contour pixel of a character, and the lower it is, the more likely the pixel is a gradation image area pixel.
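The four coefficient matrices of equation (5) are not reproduced in this copy; the kernels below are stand-in Prewitt-style operators for the four orientations, so only the structure (four directional convolutions, maximum absolute response) matches the text:

```python
# Hypothetical 3x3 edge detection kernels for the four orientations.
KERNELS = [
    [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]],       # vertical edge
    [[-1, -1, -1], [0, 0, 0], [1, 1, 1]],       # horizontal edge
    [[0, 1, 1], [-1, 0, 1], [-1, -1, 0]],       # one diagonal
    [[1, 1, 0], [1, 0, -1], [0, -1, -1]],       # the other diagonal
]

def edge_sharpness(img, i, j):
    """S(i, j): maximum absolute response of the four directional filters
    centered on pixel (i, j)."""
    best = 0
    for k in KERNELS:
        v = sum(k[a][b] * img[i - 1 + a][j - 1 + b]
                for a in range(3) for b in range(3))
        best = max(best, abs(v))
    return best

# A horizontal density step excites the horizontal-edge kernel most strongly:
step = [[0, 0, 0], [0, 0, 0], [10, 10, 10]]
print(edge_sharpness(step, 1, 1))   # 30
```

Because each kernel's coefficients sum to zero, flat regions give S = 0, which is what lets the threshold in equation (6) separate strong edges from smooth gradation.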
[0063]
The strong edge detection circuit 11 uses the edge sharpness signal S(i, j) at the central pixel P(i, j) to calculate the strong edge detection feature amount E(i, j) defined by equation (6), and sends it to the distance calculation circuit 8 and the high-density vertical line detection circuit 12. The strong edge detection feature amount E(i, j) defined by equation (6) is true only when the edge sharpness feature amount S(i, j) exceeds a predetermined threshold.
[0064]
[Equation 3]
Figure 0003755854
[0065]
The distance calculation circuit 8 calculates, from the distances D(i-1, j), D(i-1, j+1) and D(i, j-1) read by the input/output management circuit 5 and the calculated strong edge detection feature amount E(i, j), the distance D(i, j) from the nearest strong edge pixel at the central pixel P(i, j) defined by equation (7), and sends it to the input of the FIFO 14 and to the input/output management circuit 5. The input/output management circuit 5 then overwrites D(i-1, j) stored in the RAM 7 with D(i, j).
[0066]
Equation (7) gives the distance from the central pixel to the nearest of the strong edge pixels read before it, and provides an approximate calculation that can realize this with simple hardware (see Japanese Patent Laid-Open No. 8-51538).
[0067]
[Expression 4]
Figure 0003755854
[0068]
The distance D(i, j) from the nearest strong edge pixel indicates that the shorter the distance between the central pixel P(i, j) and the nearest strong edge pixel, the more likely the pixel belongs to a character image area, and the longer it is, the more likely the pixel belongs to a gradation image area.
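Equation (7) is only an image in this copy, but the neighbours the circuit reads, D(i-1, j), D(i-1, j+1) and D(i, j-1), suggest a standard single-raster-pass approximation, sketched here under that assumption:

```python
def causal_distance_map(strong_edge, cap=255):
    """Approximate distance from each pixel to the nearest strong-edge pixel
    among those already read, using only the three previously computed
    neighbours D(i-1, j), D(i-1, j+1), D(i, j-1)."""
    h, w = len(strong_edge), len(strong_edge[0])
    D = [[cap] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if strong_edge[i][j]:
                D[i][j] = 0
                continue
            prev = [D[i - 1][j]] if i > 0 else []
            if i > 0 and j + 1 < w:
                prev.append(D[i - 1][j + 1])
            if j > 0:
                prev.append(D[i][j - 1])
            if prev:
                D[i][j] = min(cap, min(prev) + 1)
    return D

edges = [[True, False, False],
         [False, False, False]]
print(causal_distance_map(edges, cap=9))   # [[0, 1, 2], [1, 2, 3]]
```

Because only already-visited neighbours are consulted, one pass in raster order suffices; this is what makes the equation realizable "with simple hardware", at the cost of measuring distance only to edges that precede the pixel in scan order.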
[0069]
The high-density vertical line detection circuit 12 calculates the high-density vertical line detection feature amount K(i, j-L) at the pixel of interest P(i, j-L), defined by equation (8), from the strong edge detection feature amount E(i, j), the output of the FIFO 13 that delays the density value f(i, j) of the central pixel P(i, j) by L pixels, and the density value f(i, j-L) of the pixel of interest P(i, j-L), and sends it to the determination circuit 17. The high-density vertical line detection feature amount K(i, j-L) indicates whether the pixel of interest P(i, j-L) lies on a dark vertical line (a line parallel to the sub-scanning direction) of limited thickness in the original image: the pixel of interest P(i, j-L) is a high-density vertical line pixel when three conditions are satisfied, namely that the density of the pixel of interest P(i, j-L) is above a certain level, that a strong edge exists in at least one of the pixels located within a certain distance of the pixel of interest P(i, j-L) along the main scanning direction, and that the pixel immediately before the pixel of interest P(i, j-L) is a strong edge pixel or a high-density vertical line pixel (see JP-A-8-51538). Therefore, when a high-density vertical line is detected at the pixel of interest P(i, j-L), this line is highly likely to be a dark vertical line constituting a character.
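The three conditions can be paraphrased in code as follows; equation (8) itself is not reproduced here, so the threshold name, the symmetric look-ahead window, and the per-scan-line formulation are assumptions:

```python
def high_density_vline_flags(density, strong_edge, t_dense, L):
    """K(j) over one main-scan line: True when the pixel is dark enough,
    a strong edge lies within L pixels along the main scan, and the
    previous pixel was a strong-edge or high-density-vertical-line pixel."""
    K = [False] * len(density)
    for j, d in enumerate(density):
        edge_near = any(strong_edge[max(0, j - L):j + L + 1])
        prev_ok = j > 0 and (strong_edge[j - 1] or K[j - 1])
        K[j] = d >= t_dense and edge_near and prev_ok
    return K

density = [200, 200, 200, 10]
edges   = [True, False, False, False]
print(high_density_vline_flags(density, edges, t_dense=100, L=2))
# [False, True, True, False]
```

The recursive third condition ("previous pixel is a strong edge or already flagged") is what propagates the flag across the interior of a line once its leading edge is found, while the distance window L bounds how thick a line may be.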
[0070]
[Equation 5]
Figure 0003755854
[0071]
The FIFOs 14, 15 and 16 delay by L pixels the distance signal D(i, j) from the nearest strong edge pixel, the two-dimensional line feature signal H(i, j) and the edge sharpness signal S(i, j) at the central pixel P(i, j), respectively, so as to align them in pixel position with the feature amount K(i, j-L) at the pixel of interest P(i, j-L), and send the resulting signals D(i, j-L), H(i, j-L) and S(i, j-L) to the determination circuit 17, which determines the number of an appropriate density conversion curve.
[0072]
(2) Process flow in determination means 2
The determination circuit 17 determines the state of the pixel of interest P(i, j-L) in multiple stages between binary image and gradation image. Using the D(i, j-L), H(i, j-L), S(i, j-L) and K(i, j-L) signals characterizing the state of the pixel of interest P(i, j-L) obtained as described above, the determination circuit 17 obtains the selection signal B(i, j-L), defined by equation (9), which selects the number of the gamma curve (density conversion curve) suited to the state of the pixel of interest P(i, j-L), and sends it to the adaptive density conversion circuit 19, which selects the density conversion curve corresponding to B(i, j-L) and performs the density conversion.
[0073]
[Formula 6]
Figure 0003755854
[0074]
The value of this selection signal B(i, j-L) reflects the following observations:
1. Edge pixels exist mainly in binary image areas.
2. Higher (larger) edge sharpness (density difference) indicates a binary image; weaker (smaller) edge sharpness indicates a gradation image.
3. When a line segment sandwiching the pixel of interest between a rising edge pixel and a falling edge pixel of density exists in the main or sub-scanning direction, and the line is dark and thin, it is likely a line constituting a character (part of a binary image).
4. The shorter the distance between the pixel of interest and the edge pixel nearest to it, the more likely it belongs to a character image; the longer the distance, the more likely it belongs to a gradation image.
5. When a line segment for which a two-dimensional line feature is detected exists, a line darker (or brighter) than the background is present, and the larger the value, the greater the density difference from the background.
The thresholds T3 to T12 are set so that the adaptive density conversion circuit 19 selects a desired density conversion curve for each pixel in consideration of these characteristics of the pixel of interest.
[0075]
Specifically, the selection signal B(i, j-L) means:
If B(i, j-L) = 0, a binary image
If B(i, j-L) = 1, a line much darker than the background
If B(i, j-L) = 2, a line darker than the background
If B(i, j-L) = 3, a line slightly darker than the background
If B(i, j-L) = 4, a line much brighter than the background
If B(i, j-L) = 5, a line brighter than the background
If B(i, j-L) = 6, a line slightly brighter than the background
If B(i, j-L) = 7, probably a binary image
If B(i, j-L) = 8, an image that cannot be judged to be either a binary image or a gradation image
If B(i, j-L) = 9, probably a gradation image
If B(i, j-L) = 10, a gradation image
[0076]
(3) Flow of processing in density conversion means 3
The ROM 18 connected to the adaptive density conversion circuit 19 stores the adaptive density conversion data G(i, j-L) of the density conversion curves corresponding to the selection signals B(i, j-L).
[0077]
Using the selection signal B(i, j-L) suited to the state of the pixel of interest P(i, j-L) and the density value f(i, j-L) of the pixel of interest P(i, j-L), the adaptive density conversion circuit 19 reads the adaptive density conversion data G(i, j-L), defined by equation (10) and stored in the ROM 18 in advance, and sends it to the binarization circuit 20.
[0078]
[Expression 7]
Figure 0003755854
[0079]
Specifically, as shown in FIG. 13, the adaptive density conversion circuit 19 performs density conversion according to one of a total of eleven density conversion curves: a density conversion curve 21 for binary image areas, which binarizes the input density to either the maximum or the minimum; a density conversion curve 25 for gradation image areas, which preserves the gradation characteristics of the input density; a density conversion curve 22 close to the binary image area curve 21 and a density conversion curve 24 close to the gradation image area curve 25, which interpolate between those two curves, and a density conversion curve 23 with a gradation intermediate between curves 22 and 24; density conversion curves 26 (large peak), 27 (medium peak) and 28 (small peak), which smoothly deform the gradation image curve 25 to give an output that is very / moderately / slightly darker overall; and density conversion curves 29 (small valley), 30 (medium valley) and 31 (large valley), which smoothly deform the gradation image curve 25 to give an output that is slightly / moderately / very much brighter overall.
[0080]
Therefore, according to the selection signal B(i, j-L) input from the determination circuit 17, the adaptive density conversion circuit 19 selects:
If B(i, j-L) = 0, the density conversion curve 21 for binary image areas
If B(i, j-L) = 1, the large-peak density conversion curve 26
If B(i, j-L) = 2, the medium-peak density conversion curve 27
If B(i, j-L) = 3, the small-peak density conversion curve 28
If B(i, j-L) = 4, the large-valley density conversion curve 31
If B(i, j-L) = 5, the medium-valley density conversion curve 30
If B(i, j-L) = 6, the small-valley density conversion curve 29
If B(i, j-L) = 7, the density conversion curve 22, which interpolates toward the binary image area curve 21
If B(i, j-L) = 8, the density conversion curve 23, with a gradation intermediate between curves 22 and 24
If B(i, j-L) = 9, the density conversion curve 24, which interpolates toward the gradation image area curve 25
If B(i, j-L) = 10, the density conversion curve 25 for gradation image areas
and performs density conversion on the density value f(i, j-L) according to the selected density conversion curve, outputting the adaptive density conversion data G(i, j-L).
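The selection rule above amounts to a lookup table. A sketch with the ROM modelled as a dict of 256-entry curves (the dict-of-lists representation and the toy curve shapes are illustrative; the actual ROM layout and curve shapes are given by equation (10) and FIG. 13):

```python
# Curve number (FIG. 13) selected for each value of the selection signal B.
CURVE_FOR_B = {0: 21, 1: 26, 2: 27, 3: 28, 4: 31, 5: 30, 6: 29,
               7: 22, 8: 23, 9: 24, 10: 25}

def adaptive_density_conversion(B, f, curves):
    """G(i, j-L): value of the curve selected by B at input density f,
    as the adaptive density conversion circuit 19 reads it from the ROM 18."""
    return curves[CURVE_FOR_B[B]][f]

# Toy ROM: curve 21 hard-thresholds; the others are left as the identity here.
curves = {n: list(range(256)) for n in CURVE_FOR_B.values()}
curves[21] = [0 if v < 128 else 255 for v in range(256)]
print(adaptive_density_conversion(0, 200, curves))   # 255 (binary image area)
print(adaptive_density_conversion(10, 200, curves))  # 200 (gradation preserved)
```

Storing the curves as ROM tables indexed by (B, f) is what lets the circuit perform per-pixel adaptive conversion with a single memory read.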
[0081]
(4) Process flow in the binarization means 4
The density value G (i, jL) of the pixel of interest P (i, jL) adaptively converted by the adaptive density conversion circuit 19 is input to the binarization circuit 20, and the binarization circuit 20 applies the error diffusion method. Binary image data suitable for stencil printing is output after being binarized by a method based on the above.
[0082]
As a result, by selecting the density conversion curves 26, 27, and 28 (29, 30, and 31), which give an overall darker (lighter) output than the gradation image curve 25, according to the magnitude of the two-dimensional line feature amount indicating a dark (bright) line, thin characters composed of fine lines can be reproduced appropriately, and the filling-in of bright lines can be avoided. Further, by making dark lines darker and bright lines lighter, a halftone-dot photograph consisting of fine dots is prevented from becoming one-sidedly dark or one-sidedly light.
[0083]
As described above, according to the image processing apparatus of the present invention, by judging the image state based on the two-dimensional line feature amount, the gradation of a gradation image region can be preserved, and even when characters differ in density, the problem of thin characters filling in or disappearing can be solved by appropriately increasing or decreasing their contrast.
[0084]
Further, in the analysis means 1 of FIG. 5, the feature amount calculation circuits 8, 10, 11, and 12, together with the two-dimensional line feature amount calculation circuit 9, correspond to the feature amount calculation circuits in the image processing apparatus disclosed in Japanese Patent Application Laid-Open No. 8-51538, and the adaptive density conversion circuit 19 uses the outputs of the circuits 8, 10, 11, and 12 as additional factors for judging the image state, so that density conversion better suited to the image state can be performed: the closer a pixel is to an edge, the closer the selected curve (22 or 23) is to the binary image area curve 21, and the farther it is from an edge, the closer the selected curve (24 or 23) is to the gradation image area curve 25. Thus, as in the image processing apparatus disclosed in Japanese Patent Application Laid-Open No. 8-51538, thick characters and solid portions can be reproduced in solid black, and the gradation of the dark portions of a gradation image can be preserved.
[0085]
Note that, by combining the image processing apparatus according to the present invention with the density conversion processing of a conventional image processing apparatus as in the embodiment described above, the judgment of the image state based on the two-dimensional line feature amount is added on top of the effects of the conventional apparatus; however, it is also possible to judge the image state based on the two-dimensional line feature amount alone and perform appropriate density conversion.
[Brief description of the drawings]
FIG. 1 is a block diagram showing the basic configuration of an image processing apparatus according to the present invention.
FIG. 2 is a block diagram showing details of the analysis means of the image processing apparatus.
FIG. 3 is a diagram for explaining the relationship between pixel positions and the scanning direction of the original.
FIG. 4 is a diagram for explaining the relationship among the target pixel, the center pixel, and the latest pixel.
FIG. 5 is a block diagram showing details of the image processing apparatus.
FIG. 6 is a diagram showing the positions of pixels on the original corresponding to the image density signals stored in the memory.
FIG. 7 is a diagram showing an example of data stored in the image data buffer (in the case of N = 8, S+ = {1, 2, 4, 8}, S- = {-1, -2, -4, -8}).
FIG. 8 is a diagram for explaining line detection based on a one-dimensional line feature amount.
FIG. 9 is a diagram for explaining the behavior of the one-dimensional line feature amount around an edge of a gradation image region.
FIG. 10 is a diagram for explaining line detection based on a two-dimensional line feature amount.
FIG. 11 is a diagram for explaining the behavior of the two-dimensional line feature amount around an edge of a gradation image region.
FIG. 12 is a diagram for explaining the behavior of the two-dimensional line feature amount around a corner of a gradation image region.
FIG. 13 is a diagram showing the density conversion curves in the image processing apparatus.
[Explanation of symbols]
1 Analysis means
2 Judgment means
3 Density conversion means
4 Binarization means
5 Input/output management circuit
6 Image data buffer
7 RAM for storing image data
8 Circuit for calculating the distance from the nearest strong edge pixel
9 Two-dimensional line feature amount calculation circuit
10 Edge sharpness calculation circuit
11 Strong edge detection circuit
12 High-density vertical line detection circuit
13-16 Delay circuits (FIFO)
17 Judgment circuit
18 ROM
19 Adaptive density conversion circuit
20 Binarization circuit
21 Density conversion curve for binary image areas (selection signal = 0)
22 Density conversion curve close to the binary image curve (selection signal = 7)
23 Density conversion curve of intermediate gradation (selection signal = 8)
24 Density conversion curve close to the gradation image curve (selection signal = 9)
25 Density conversion curve for gradation image areas (selection signal = 10)
26 Density conversion curve for large peaks (selection signal = 1)
27 Density conversion curve for medium peaks (selection signal = 2)
28 Density conversion curve for small peaks (selection signal = 3)
29 Density conversion curve for small valleys (selection signal = 6)
30 Density conversion curve for medium valleys (selection signal = 5)
31 Density conversion curve for large valleys (selection signal = 4)
110 Analysis means
112 Data storage means
114 First analysis means
114a Edge sharpness calculation means
114b Strong edge detection means
114c Means for calculating the distance from the strong edge pixel nearest to the target pixel
114d High-density vertical line detection means
116 Second analysis means

Claims (6)

1. An image processing apparatus comprising:
first analysis means for analyzing the binary-image-likeness and the gradation-image-likeness of a pixel of interest according to the features of the edges around the pixel of interest;
second analysis means for calculating, based on reference image data around the pixel of interest, at least one of the degree of peak and the degree of valley of the pixel of interest for each of a plurality of different directions passing through the pixel of interest, as one-dimensional line feature amounts indicated by the magnitude of the absolute value of the density difference between the pixel of interest and the reference image around it; determining in advance, from the relation between a predetermined angle and the number of the plurality of different directions, which largest of the one-dimensional line feature amounts is to be extracted as a two-dimensional line feature amount; and calculating the corresponding one-dimensional line feature amount as the two-dimensional line feature amount of the pixel of interest, thereby analyzing the binary-image-likeness and the gradation-image-likeness of the pixel of interest; and
density conversion means for converting the density of the pixel of interest based on the analysis results of the first and second analysis means.

2. The image processing apparatus according to claim 1, wherein the degree is indicated by the magnitude of the absolute value of the density difference between the pixel of interest and, of the two reference pixel groups in the reference image sandwiching the pixel of interest, the pixel group whose density difference from the pixel of interest is smaller.

3. The image processing apparatus according to claim 2, wherein the two-dimensional line feature amount is calculated based on Expression (1).
Figure 0003755854

4. The image processing apparatus according to any one of claims 1 to 3, wherein the density conversion means selects an appropriate density conversion curve from a plurality of different density conversion curves and performs density conversion with it.

5. The image processing apparatus according to any one of claims 1 to 4, further comprising binarization means for binarizing the density-converted pixel of interest by an error diffusion method.

6. A two-dimensional line feature amount calculation apparatus comprising:
a one-dimensional line feature amount calculation unit for calculating, based on reference image data around a pixel of interest, at least one of the degree of peak and the degree of valley of the pixel of interest for each of a plurality of different directions passing through the pixel of interest, as one-dimensional line feature amounts indicated by the magnitude of the absolute value of the density difference between the pixel of interest and the reference image around it; and
a two-dimensional line feature amount calculation unit for determining in advance, from the relation between a predetermined angle and the number of the plurality of different directions, which largest of the one-dimensional line feature amounts is to be extracted as a two-dimensional line feature amount, and calculating the corresponding one-dimensional line feature amount as the two-dimensional line feature amount of the pixel of interest.
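The calculation described in claims 1, 2, and 6 can be sketched as follows. The sketch assumes four sampled directions at 45-degree spacing, single-pixel reference groups at distance d on each side of the pixel of interest, and rank = 2 (second-largest) as the extracted feature; the actual number of directions, reference-group shape, and extracted rank are design parameters fixed in advance in the apparatus and are assumptions here:

```python
def one_dim_line_features(img, i, j, d=1):
    """Degree of peak or valley of pixel (i, j) along four directions through it.

    For each direction, the degree is the absolute density difference between
    the pixel of interest and whichever of the two sandwiching reference
    pixels differs from it less (claim 2); it is nonzero only when the pixel
    sticks out on the same side of both neighbors.
    """
    p = img[i][j]
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # 0, 90, 45, 135 degrees
    feats = []
    for di, dj in directions:
        a = img[i - d * di][j - d * dj]
        b = img[i + d * di][j + d * dj]
        da, db = p - a, p - b
        if da * db > 0:                       # peak (both +) or valley (both -)
            feats.append(min(abs(da), abs(db)))
        else:
            feats.append(0)                   # not a line point in this direction
    return feats

def two_dim_line_feature(img, i, j, rank=2, d=1):
    """Extract the rank-th largest 1-D feature as the 2-D line feature.

    rank=2 is an illustrative choice: a genuine line passing through the pixel
    responds in more than one sampled direction, while an isolated corner or
    edge tends to respond strongly in only one, and is thus suppressed.
    """
    feats = sorted(one_dim_line_features(img, i, j, d), reverse=True)
    return feats[rank - 1]
```

For a horizontal bright line, every direction except the one along the line yields a large response, so the second-largest value is large; for a corner of a bright region, at most one direction responds, so the second-largest value is zero.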
JP06273798A 1997-04-07 1998-03-13 Image processing apparatus and two-dimensional line feature amount calculation apparatus Expired - Lifetime JP3755854B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP06273798A JP3755854B2 (en) 1997-04-07 1998-03-13 Image processing apparatus and two-dimensional line feature amount calculation apparatus
US09/055,468 US6167154A (en) 1997-04-07 1998-04-06 Image processing apparatus and secondary line feature indicative value calculating device
EP98106374A EP0871323B1 (en) 1997-04-07 1998-04-07 Image processing apparatus and secondary line feature indicative value calculating device
DE69834976T DE69834976T2 (en) 1997-04-07 1998-04-07 An image processing apparatus and apparatus for calculating a secondary line feature indicative value

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP8804197 1997-04-07
JP9-88041 1997-04-07
JP06273798A JP3755854B2 (en) 1997-04-07 1998-03-13 Image processing apparatus and two-dimensional line feature amount calculation apparatus

Publications (2)

Publication Number Publication Date
JPH10341336A JPH10341336A (en) 1998-12-22
JP3755854B2 true JP3755854B2 (en) 2006-03-15

Family

ID=26403787

Family Applications (1)

Application Number Title Priority Date Filing Date
JP06273798A Expired - Lifetime JP3755854B2 (en) 1997-04-07 1998-03-13 Image processing apparatus and two-dimensional line feature amount calculation apparatus

Country Status (4)

Country Link
US (1) US6167154A (en)
EP (1) EP0871323B1 (en)
JP (1) JP3755854B2 (en)
DE (1) DE69834976T2 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6636633B2 (en) 1999-05-03 2003-10-21 Intel Corporation Rendering of photorealistic computer graphics images
JP3392798B2 (en) * 2000-02-22 2003-03-31 理想科学工業株式会社 Image attribute determination method and apparatus
JP3426189B2 (en) * 2000-04-26 2003-07-14 インターナショナル・ビジネス・マシーンズ・コーポレーション Image processing method, relative density detection method, and image processing apparatus
US7336396B2 (en) * 2003-03-20 2008-02-26 Kabushiki Kaisha Toshiba Image processing apparatus and image processing method
TWI225622B (en) * 2003-10-24 2004-12-21 Sunplus Technology Co Ltd Method for detecting the sub-pixel motion for optic navigation device
US7463774B2 (en) * 2004-01-07 2008-12-09 Microsoft Corporation Global localization by fast image matching
US7751642B1 (en) * 2005-05-18 2010-07-06 Arm Limited Methods and devices for image processing, image capturing and image downscaling
TWI338514B (en) * 2006-01-20 2011-03-01 Au Optronics Corp Image processing method for enhancing contrast
JP4890974B2 (en) 2006-06-29 2012-03-07 キヤノン株式会社 Image processing apparatus and image processing method
JP4890973B2 (en) 2006-06-29 2012-03-07 キヤノン株式会社 Image processing apparatus, image processing method, image processing program, and storage medium
JP4926568B2 (en) 2006-06-29 2012-05-09 キヤノン株式会社 Image processing apparatus, image processing method, and image processing program
JP4238902B2 (en) * 2006-09-04 2009-03-18 日本電気株式会社 Character noise elimination device, character noise elimination method, character noise elimination program
CN101388951A (en) * 2007-09-14 2009-03-18 株式会社东芝 Image forming apparatus, image processing apparatus and wire narrowing method
JP5365817B2 (en) * 2011-11-24 2013-12-11 富士ゼロックス株式会社 Image processing apparatus and image processing program
CN110796139B (en) * 2019-10-17 2023-06-23 中国测试技术研究院辐射研究所 Method for positioning and dividing pattern of indication value in test/detection/calibration/verification

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4554593A (en) * 1981-01-02 1985-11-19 International Business Machines Corporation Universal thresholder/discriminator
JPS62200976A (en) * 1986-02-28 1987-09-04 Dainippon Screen Mfg Co Ltd Forming method and device for high resolution binarization picture data to straight line edge
US4903316A (en) * 1986-05-16 1990-02-20 Fuji Electric Co., Ltd. Binarizing apparatus
JP2702928B2 (en) * 1987-06-19 1998-01-26 株式会社日立製作所 Image input device
JP2859268B2 (en) * 1988-03-07 1999-02-17 キヤノン株式会社 Image processing method
EP0415648B1 (en) * 1989-08-31 1998-05-20 Canon Kabushiki Kaisha Image processing apparatus
JP2505889B2 (en) * 1989-08-31 1996-06-12 キヤノン株式会社 Image processing device
JPH03153167A (en) * 1989-11-10 1991-07-01 Ricoh Co Ltd Character area separation system
US5396584A (en) * 1992-05-29 1995-03-07 Destiny Technology Corporation Multi-bit image edge enhancement method and apparatus
JP3479161B2 (en) * 1994-06-03 2003-12-15 理想科学工業株式会社 Image processing device
JP3207690B2 (en) * 1994-10-27 2001-09-10 シャープ株式会社 Image processing device
JP2730665B2 (en) * 1994-12-15 1998-03-25 北陸先端科学技術大学院大学長 Character recognition apparatus and method
US5987221A (en) * 1997-01-24 1999-11-16 Hewlett-Packard Company Encoded orphan pixels for discriminating halftone data from text and line art data

Also Published As

Publication number Publication date
JPH10341336A (en) 1998-12-22
DE69834976T2 (en) 2007-02-08
EP0871323A3 (en) 2000-07-05
DE69834976D1 (en) 2006-08-03
EP0871323B1 (en) 2006-06-21
EP0871323A2 (en) 1998-10-14
US6167154A (en) 2000-12-26

Similar Documents

Publication Publication Date Title
JP3755854B2 (en) Image processing apparatus and two-dimensional line feature amount calculation apparatus
JP3392798B2 (en) Image attribute determination method and apparatus
US7773776B2 (en) Image processing apparatus, image forming apparatus, image reading process apparatus, image processing method, image processing program, and computer-readable storage medium
US7298896B2 (en) Image processing device for optically decoding an image recorded on a medium for digital transmission of the image information
US6839151B1 (en) System and method for color copy image processing
JP2005318593A (en) Reformatting of image data for further compressed image data size
US20030198398A1 (en) Image correcting apparatus and method, program, storage medium, image reading apparatus, and image forming apparatus
JP2005094740A (en) Image processing apparatus, image forming apparatus and image processing method
EP0685961B1 (en) Image processing apparatus
US20060152765A1 (en) Image processing apparatus, image forming apparatus, image reading process apparatus, image processing method, image processing program, and computer-readable storage medium
US8306335B2 (en) Method of analyzing digital document images
US6958828B2 (en) Method and apparatus for detecting photocopier tracking signatures
JP4502001B2 (en) Image processing apparatus and image processing method
JP3807891B2 (en) Image information area discrimination method and apparatus
JP3479161B2 (en) Image processing device
JP3983721B2 (en) Image distortion correction apparatus, image reading apparatus, image forming apparatus, and program
JP4396710B2 (en) Image processing apparatus, image processing apparatus control method, and image processing apparatus control program
JP3423665B2 (en) Area determining method and device
KR100537829B1 (en) Method for segmenting Scan Image
JP3966448B2 (en) Image processing apparatus, image processing method, program for executing the method, and recording medium storing the program
JP3605773B2 (en) Image area discriminating device
JP4005243B2 (en) Image processing device
JPH02199588A (en) Image area identifying device
JP2004048130A (en) Image processing method, image processing apparatus, and image processing program
JP2003051948A (en) Image processing apparatus and program

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20050325

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20050426

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20050621

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20051129

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20051219

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090106

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100106

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110106

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120106

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130106

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140106

Year of fee payment: 8

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

EXPY Cancellation because of completion of term