CN108269233B - Text dithering method based on shading halftone - Google Patents

Text dithering method based on shading halftone

Info

Publication number
CN108269233B
CN108269233B CN201810211748.4A
Authority
CN
China
Prior art keywords
picture
shading
data
identifying
format
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810211748.4A
Other languages
Chinese (zh)
Other versions
CN108269233A (en)
Inventor
柯逍
柯力
陈羽中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201810211748.4A priority Critical patent/CN108269233B/en
Publication of CN108269233A publication Critical patent/CN108269233A/en
Application granted granted Critical
Publication of CN108269233B publication Critical patent/CN108269233B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06T3/04
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation

Abstract

The invention discloses a text dithering method based on shading halftone. First, the format and compression mode of the input picture are identified; second, the specific coordinate positions that need shading treatment are identified; then the shading is dotted and dithered; finally, the picture is normalized and output by combining several libraries, including LibTiff, OpenCV, and GDAL. The method remedies the shortcomings of traditional algorithms in shading processing, can batch-process pictures that need shading conversion, offers a variety of selectable shading patterns, and achieves a high degree of automation, making it well suited to the printing industry's requirements for shading halftone processing.

Description

Text dithering method based on shading halftone
Technical Field
The invention relates to the technical field of pattern recognition, in particular to a text dithering method based on shading halftone.
Background
In recent years, the global packaging industry has been developing rapidly toward high quality and full color, and printing equipment and materials have broadened greatly; large-format printing technology has therefore become a new technical system that serves the market for large-format printed products and strengthens printing competitiveness.
With a common gray continuous shading, the characters may be emphasized, but they easily blend into the background and become hard to recognize. If the gray level of the shading is lowered, the shading no longer provides emphasis. A method is therefore needed to resolve this trade-off, and this is where dithering algorithms become very important.
Dithering algorithms fall into random dithering algorithms and ordered dithering algorithms. A random dithering algorithm randomly generates a square template array of numbers, with the random values drawn between the minimum and maximum gray levels of the image. An ordered dithering algorithm instead matches against manually set template values, and mainly comprises dispersed-dot dithering and clustered-dot dithering. The dispersed-dot type is represented by the Bayer ordered dithering algorithm. Ulichney later proposed, on the basis of these two algorithms, a dithering algorithm that is clustered globally but dispersed locally. The above dithering algorithms produce horizontal and vertical artificial textures; since human vision is relatively insensitive to textures in the 45-degree direction, Victor Distomoklov rotated the Bayer ordered dithering matrix by 45 degrees to obtain the rotated ordered dithering algorithm. Both methods process image data independently: all errors are fixed, and no compensation measures are taken.
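The dispersed-dot ordered dithering described above can be sketched as follows. The 4x4 Bayer index matrix and the threshold scaling below are standard textbook values used for illustration, not values taken from the patent itself:

```python
# Classic 4x4 Bayer index matrix (dispersed-dot ordered dithering).
BAYER_4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def bayer_dither(gray, levels=16):
    """Threshold each pixel against the tiled Bayer matrix.

    gray: 2D list of 8-bit grayscale values (0..255).
    Returns a 2D list of 0/255 values.
    """
    out = []
    for i, row in enumerate(gray):
        out_row = []
        for j, p in enumerate(row):
            # Scale the matrix entry (0..15) into the 0..255 range.
            threshold = (BAYER_4[i % 4][j % 4] + 0.5) * 256 / levels
            out_row.append(255 if p > threshold else 0)
        out.append(out_row)
    return out
```

A flat mid-gray input produces the characteristic regular dot texture: roughly half the matrix cells fire, in a fixed periodic pattern.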
The error diffusion method uses a neighborhood processing mode: each output value depends not only on the current pixel but also on its neighbors. It combines the pixel's value with the error from the previous point and the template values to determine the output at that point. The method introduced the concept of error for the first time, greatly improving output image quality. To avoid the resulting artifacts, most error diffusion algorithms use an asymmetric, serpentine ("s"-shaped) scan: the first row is scanned from left to right, the second row from right to left, and so on. The Floyd-Steinberg algorithm is the typical representative of the error diffusion method.
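A minimal sketch of Floyd-Steinberg error diffusion, using the standard 7/16, 3/16, 5/16, 1/16 kernel; the plain left-to-right scan is a simplification (the serpentine variant mentioned above is omitted for brevity):

```python
def floyd_steinberg(gray, threshold=128):
    """Floyd-Steinberg error diffusion on a 2D list of 8-bit values.

    Returns a 2D list of 0/255 values; the quantization error of each
    pixel is diffused to its right and lower neighbours.
    """
    h, w = len(gray), len(gray[0])
    work = [[float(v) for v in row] for row in gray]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = work[y][x]
            new = 255 if old >= threshold else 0
            out[y][x] = new
            err = old - new
            # Diffuse the quantization error to unprocessed neighbours.
            if x + 1 < w:
                work[y][x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                work[y + 1][x - 1] += err * 3 / 16
            if y + 1 < h:
                work[y + 1][x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                work[y + 1][x + 1] += err * 1 / 16
    return out
```

Unlike ordered dithering, the dot placement here is irregular, which is exactly the dense, hard-to-tune texture the next paragraph criticizes.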
Dots printed by the Bayer ordered dithering algorithm are too sparse, so the shading picture is not emphasized and the overall appearance suffers greatly; the disordered dithering of the Floyd-Steinberg algorithm, by contrast, is too blurred and dense, which also harms the overall appearance, and because it is hard to tune it interferes strongly with the characters.
Disclosure of Invention
Aiming at the problem in the printing industry that characters are made inconspicuous and unobtrusive by interference from the shading, the invention provides a text dithering method based on shading halftone, which batch-processes pictures that need shading conversion and provides a variety of shading patterns.
In order to achieve the purpose, the technical scheme of the invention is as follows: a text dithering method based on shading halftone comprises the following steps:
step S1: identifying the format and compression mode of the input picture, and converting the input picture into a non-true color single-page picture;
step S2: converting the converted picture into a gray-scale image, identifying a block position containing the shading in the picture, converting the block position into a coordinate, and storing the coordinate by using a set structure;
step S3: dotting and dithering the shading within the block position, performing no treatment where there is no shading;
step S4: binarizing the dithered picture and storing it as a BMP picture with a bit depth of 1;
step S5: and outputting the BMP picture according to the specified specification.
Further, the step S1 specifically includes:
step S11: identifying the format of the input picture; if it is the TIFF format, entering step S12, otherwise going to step S2;
step S12: identifying the compression mode of an input picture and judging whether the input picture is a true color picture;
step S13: identifying whether the input picture is single-page or multi-page, counting the pages with the TIFFNumberOfDirectories function provided by the LibTiff library, where PNUM represents the total number of pages; if PNUM > 1, the picture is a multi-page picture;
step S14: and uniformly converting the identified true-color multi-page TIFF format picture, the true-color single-page TIFF format picture and the non-true-color multi-page TIFF format picture into the non-true-color single-page TIFF format picture by using a GDAL library.
Further, the step S2 specifically includes:
step S21: carrying out graying processing on an input picture;
step S22: traversing all pixels of the whole picture, using the relations Length = CNUM × CHNUM and Wide = RNUM, where Length denotes the length of the picture data, CNUM the number of columns of the picture, CHNUM the total number of channels of the picture, Wide the width of the picture, and RNUM the number of rows of the picture;
step S23: if data[i, j−1] < δ || data[i, j−1] > ε, and δ < data[i, a] < ε for a = j, j+1, ..., j+49, then the upper-left coordinate of the block position containing the shading is (i, j). The lower-right coordinate is then calculated: δ < data[u, j] < ε for u = i+1, i+2, ..., n with data[n+1, j] > 250 || data[n+1, j] < 200, and δ < data[i, v] < ε for v = j+1, j+2, ..., m with data[i, m+1] > 250 || data[i, m+1] < 200; the lower-right coordinate of the block position containing the shading is then (n, m). Here data[i, j] represents a picture pixel value, δ and ε respectively represent set pixel thresholds, and || represents the OR operation;
step S24: storing the block positions of the shading in the specified set structure, using a vector to store the data; if the identified picture has no shading at any position, the vector holds no data and is marked as empty.
Further, the step S3 specifically includes:
step S31: judging with κ < data[r, s] < λ; a pixel meeting this condition is treated as shading, with κ and λ respectively representing set pixel thresholds;
step S32: traversing the shading and dotting it using the formula Pdata[r, s] = (unsigned char) ζ, where Pdata[r, s] represents the modified pixel value; the value of the parameter ζ and the values of the coordinates (r, s) yield different shading patterns;
step S33: and taking out the stored position information of the shading, and selecting the pattern for processing.
Further, the step S4 specifically includes:
step S41: carrying out picture binarization using the formula
g(x, y) = 255 if p(x, y) > Threshold, otherwise g(x, y) = 0,
where p(x, y) represents the input gray-scale image, g(x, y) the binarized picture, and Threshold the set threshold;
step S42: using the formula line_byte = (width × biBitCount / 8 + 3) / 4 × 4 (integer division), where line_byte is the number of bytes stored per row of image data, width represents the width of the picture, and biBitCount the bit depth of the picture; the number of bytes stored per row is thus rounded to a multiple of 4;
step S43: performing bit-depth conversion on the picture, where temp << (8 − offset − 1) forms the bit mask for the current pixel, offset = col % 8 is the remainder of the column index divided by 8, << denotes a left-shift operation, p denotes the pixel pointer, and ~ denotes bitwise negation; the picture with shading bit depth 8 is thus converted into a BMP picture with bit depth 1.
Compared with the prior art, the invention has the following beneficial effects: (1) the shading clearly highlights the character content without obscuring the characters;
(2) the method can carry out normalized conversion and processing on the pictures in batches, and is simple, flexible to implement and high in practicability.
Drawings
FIG. 1 is a flow chart of a text dithering method based on shading halftone according to the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
As shown in fig. 1, a text dithering method based on shading halftone includes the following steps:
step S1: automatically identifying the compression mode of pictures, single-page or multi-page TIFF format pictures, JPG, PNG and other format pictures of the input images, and then uniformly converting the pictures into non-true color single-page pictures;
in this embodiment, the step S1 specifically includes:
step S11: identifying the format of the input picture; if it is the TIFF format, entering step S12, otherwise going to step S2 (in general, only TIFF-format pictures can have multiple pages);
step S12: identifying the compression mode of the input picture, extracting its compression number and comparing it with the JPEG compression number, and judging whether it is a true-color picture; the JPEG compression number is 3435921415;
step S13: identifying whether the input picture is single-page or multi-page, counting the pages with the TIFFNumberOfDirectories function provided by the LibTiff library, where PNUM represents the total number of pages; if PNUM > 1, the picture is a multi-page picture;
step S14: and uniformly converting the identified true-color multi-page TIFF format picture, the true-color single-page TIFF format picture and the non-true-color multi-page TIFF format picture into the non-true-color single-page TIFF format picture by using a GDAL library.
Step S2: converting the converted picture into a gray-scale image, identifying a block position containing the shading in the picture, converting the block position into a coordinate, and storing the coordinate by using a set structure;
in this embodiment, the step S2 specifically includes:
step S21: carrying out graying processing on the input picture using the formula f(i, j) = αR(i, j) + βG(i, j) + χB(i, j), where α = 0.30, β = 0.59, χ = 0.11;
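The graying formula of step S21 is a standard luminance weighting; a minimal sketch, with the embodiment's weights as defaults:

```python
def to_gray(r, g, b, alpha=0.30, beta=0.59, chi=0.11):
    """Grayscale per f(i,j) = alpha*R(i,j) + beta*G(i,j) + chi*B(i,j).

    The default weights 0.30 / 0.59 / 0.11 are those given in the
    embodiment (step S21).
    """
    return int(round(alpha * r + beta * g + chi * b))
```

Since the weights sum to 1.00, pure white maps to 255 and pure black to 0.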
step S22: traversing all pixels of the whole picture, using the relations Length = CNUM × CHNUM and Wide = RNUM, where Length denotes the length of the picture data, CNUM the number of columns of the picture, CHNUM the total number of channels of the picture, Wide the width of the picture, and RNUM the number of rows of the picture;
step S23: if data[i, j−1] < δ || data[i, j−1] > ε, and δ < data[i, a] < ε for a = j, j+1, ..., j+49, then the upper-left coordinate of the block position containing the shading is (i, j). The lower-right coordinate is then calculated: δ < data[u, j] < ε for u = i+1, i+2, ..., n with data[n+1, j] > 250 || data[n+1, j] < 200, and δ < data[i, v] < ε for v = j+1, j+2, ..., m with data[i, m+1] > 250 || data[i, m+1] < 200; the lower-right coordinate of the block position containing the shading is then (n, m). Here data[i, j] represents a picture pixel value, and δ and ε respectively represent set pixel thresholds; in the present embodiment, δ = 200 and ε = 230, and || represents the OR operation. This step identifies the block-like positions of the shading over the entire picture, as shown in fig. 1;
step S24: finding the position coordinates of the first shading block according to step S23, then finding the next shading block's information, and traversing until all picture pixels have been visited and the last coordinates are found; the shading block positions are stored in the specified set structure, using a vector to hold the data; if the identified picture contains no shading at any position, the vector holds no data and is marked as empty.
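The upper-left-corner test of steps S23-S24 can be sketched as follows. This is a simplification: only the row-run condition for the upper-left corner is shown, the run length and thresholds are parameters (defaulting to the embodiment's 50-pixel run and δ = 200, ε = 230), and one candidate per row is kept for brevity:

```python
def find_shading_blocks(data, delta=200, eps=230, run=50):
    """Scan rows for `run` consecutive pixels strictly inside the
    (delta, eps) band, preceded by a pixel outside that band; each hit
    (i, j) is an upper-left corner candidate of a shading block.
    """
    hits = []
    for i, row in enumerate(data):
        for j in range(1, len(row) - run + 1):
            prev_outside = row[j - 1] <= delta or row[j - 1] >= eps
            run_inside = all(delta < row[j + k] < eps for k in range(run))
            if prev_outside and run_inside:
                hits.append((i, j))
                break  # one candidate per row in this sketch
    return hits
```

The full method then extends each candidate down and right until the band condition fails, yielding the lower-right corner (n, m).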
Step S3: dotting and dithering the shading within the block positions, performing no treatment where there is no shading. In this embodiment, the step S3 specifically includes:
step S31: judging with κ < data[r, s] < λ; a pixel meeting this condition is treated as shading, with κ and λ respectively representing set pixel thresholds. In this embodiment, κ = 200 and λ = 230. The specific shading to be dotted is thus found, and changing its pixel values forms shadings of different styles;
step S32: traversing the shading and dotting wherever a pixel falls within the shading pixel range, using the formula Pdata[r, s] = (unsigned char) ζ, where Pdata[r, s] represents the modified pixel value; modifying the value of the parameter ζ and the values of the coordinates (r, s) yields the different shading patterns A, B, and C;
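The dotting of step S32 can be sketched as below. The patent only specifies the assignment Pdata[r, s] = (unsigned char) ζ; the periodic choice of coordinates here (every step-th pixel of every step-th row) is one illustrative pattern, and varying ζ, the step, or the visited coordinates would yield the different patterns A, B, C:

```python
def dot_shading(data, zeta=255, step=4, kappa=200, lam=230):
    """Overwrite selected shading pixels with zeta (step S32 sketch).

    Only pixels recognized as shading (kappa < value < lam, per step
    S31) are modified; zeta is truncated to unsigned-char range.
    """
    out = [row[:] for row in data]
    for r in range(0, len(out), step):
        for s in range(0, len(out[r]), step):
            if kappa < out[r][s] < lam:
                out[r][s] = zeta & 0xFF  # (unsigned char) zeta
    return out
```

Non-shading pixels (outside the κ..λ band) pass through unchanged, matching the "no treatment on shading-free positions" rule of step S3.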
step S33: and taking out the stored position information of the shading, and selecting the pattern for processing.
Step S4: binarizing the dithered picture and storing it as a BMP picture with a bit depth of 1;
in this embodiment, the step S4 specifically includes:
step S41: carrying out picture binarization using the formula
g(x, y) = 255 if p(x, y) > Threshold, otherwise g(x, y) = 0,
where p(x, y) represents the input gray-scale image, g(x, y) the binarized picture, and Threshold the set threshold, taken here as 200;
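The global thresholding of step S41 is a one-liner; the direction of the inequality (white above the threshold) is our reading of the piecewise formula, which appears only as an image in the source:

```python
def binarize(p, threshold=200):
    """Step S41 sketch: g(x,y) = 255 if p(x,y) > Threshold else 0.

    p: 2D list of gray values; threshold defaults to the embodiment's 200.
    """
    return [[255 if v > threshold else 0 for v in row] for row in p]
```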
step S42: using the formula line_byte = (width × biBitCount / 8 + 3) / 4 × 4 (integer division), where line_byte is the number of bytes stored per row of image data, width represents the width of the picture, and biBitCount the bit depth of the picture; the number of bytes stored per row is thus rounded to a multiple of 4;
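The row-stride computation of step S42 directly in code; note that for sub-byte depths (e.g. 1 bit) the common BMP convention rounds the bit count up before dividing, whereas the patent's formula as written uses plain integer division:

```python
def line_byte(width, bit_count):
    """BMP row stride per step S42:
    line_byte = (width * biBitCount / 8 + 3) / 4 * 4  (integer division),
    i.e. each stored row is padded to a multiple of 4 bytes.
    """
    return (width * bit_count // 8 + 3) // 4 * 4
```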
step S43: performing bit-depth conversion on the picture, where temp << (8 − offset − 1) forms the bit mask for the current pixel, offset = col % 8 is the remainder of the column index divided by 8, << denotes a left-shift operation, p denotes the pixel pointer, and ~ denotes bitwise negation; the picture with shading bit depth 8 is thus converted into a BMP picture with bit depth 1.
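The 8-bit-to-1-bit packing of step S43 can be sketched as follows. The patent gives the mask construction (left shift by 8 − offset − 1, with offset = col % 8) and mentions the negation operator; how the mask is combined with the pixel byte is shown only as an image, so the set-bit/clear-bit logic below is our assumption:

```python
def pack_1bit(gray_row, threshold=128):
    """Pack one 8-bit row into 1-bit-per-pixel bytes (step S43 sketch).

    For each column: offset = col % 8, mask = 1 << (8 - offset - 1).
    White pixels OR the mask in; black pixels AND with the negated
    mask (~), clearing the bit.
    """
    packed = bytearray((len(gray_row) + 7) // 8)
    for col, v in enumerate(gray_row):
        offset = col % 8
        temp = 1 << (8 - offset - 1)  # single-bit mask, MSB first
        if v > threshold:             # white -> set the bit
            packed[col // 8] |= temp
        else:                         # black -> clear the bit via ~
            packed[col // 8] &= ~temp & 0xFF
    return bytes(packed)
```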
Step S5: and outputting the BMP picture according to the specified specification.
In this embodiment, the specified specifications are: output at a resolution of 300 dpi, compressed with CCITT Group 4, giving a file size of about 150 KB, with the largest TIFF picture not exceeding 200 KB;
step S5 specifically includes:
step S51: the output picture requires a resolution (DPI) of 300 and CCITT Group 4 compression; the picture's specifications can be adjusted using the LibTiff library;
step S52: converting the BMP-format picture into a TIFF-format picture directly by formula, where size represents the size of the picture and pInfo->bmiHeader denotes the BMP header information.
The above are preferred embodiments of the present invention; all changes made according to the technical scheme of the present invention that produce equivalent functional effects, without exceeding the scope of the technical scheme, belong to the protection scope of the present invention.

Claims (3)

1. A text dithering method based on shading halftone is characterized by comprising the following steps:
step S1: identifying the format and compression mode of the input picture, and converting the input picture into a non-true color single-page picture;
step S2: converting the converted picture into a gray-scale image, identifying a block position containing the shading in the picture, converting the block position into a coordinate, and storing the coordinate by using a set structure;
step S3: dotting and dithering the positions with shading, and performing no treatment on the positions without shading;
step S4: carrying out binarization processing on the image subjected to the dithering processing, and storing the image as a BMP image with the bit depth of 1;
step S5: outputting the BMP picture according to a specified specification;
the step S2 specifically includes:
step S21: carrying out graying processing on an input picture;
step S22: traversing all pixels of the whole picture, using the relations Length = CNUM × CHNUM and Wide = RNUM, where Length denotes the length of the picture data, CNUM the number of columns of the picture, CHNUM the total number of channels of the picture, Wide the width of the picture, and RNUM the number of rows of the picture;
step S23: if data[i, j−1] < δ || data[i, j−1] > ε, and δ < data[i, a] < ε for a = j, j+1, ..., j+49, then the upper-left coordinate of the block position containing the shading is (i, j). The lower-right coordinate is then calculated: δ < data[u, j] < ε for u = i+1, i+2, ..., n with data[n+1, j] > 250 || data[n+1, j] < 200, and δ < data[i, v] < ε for v = j+1, j+2, ..., m with data[i, m+1] > 250 || data[i, m+1] < 200; the lower-right coordinate of the block position containing the shading is then (n, m). Here data[i, j] represents a picture pixel value, δ and ε respectively represent set pixel thresholds, and || represents the OR operation;
step S24: storing the block positions of the shading in the specified set structure, using a vector to store the data; if the identified picture has no shading at any position, the vector holds no data and is marked as empty;
the step S3 specifically includes:
step S31: judging with κ < data[r, s] < λ; a pixel meeting this condition is treated as shading, with κ and λ respectively representing set pixel thresholds;
step S32: traversing the shading and dotting it using the formula
Pdata[r, s] = (unsigned char) ζ,
where Pdata[r, s] indicates the modified pixel value; modifying the parameter ζ and the values of the coordinates (r, s) yields different shading patterns;
step S33: and taking out the stored position information of the shading, and selecting the pattern for processing.
2. The method for dithering text based on shading halftone according to claim 1, wherein the step S1 specifically includes:
step S11: identifying the format of the input picture; if it is the TIFF format, entering step S12, otherwise going to step S2;
step S12: identifying the compression mode of an input picture and judging whether the input picture is a true color picture;
step S13: identifying whether the input picture is single-page or multi-page, counting the pages with the TIFFNumberOfDirectories function provided by the LibTiff library, where PNUM represents the total number of pages; if PNUM > 1, the picture is a multi-page picture;
step S14: and uniformly converting the identified true-color multi-page TIFF format picture, the true-color single-page TIFF format picture and the non-true-color multi-page TIFF format picture into the non-true-color single-page TIFF format picture by using a GDAL library.
3. The method for dithering text based on shading halftone according to claim 1, wherein the step S4 specifically includes:
step S41: carrying out picture binarization using the formula
g(x, y) = 255 if p(x, y) > Threshold, otherwise g(x, y) = 0,
where p(x, y) represents the input gray-scale image, g(x, y) the binarized picture, and Threshold the set threshold;
step S42: using the formula line_byte = (width × biBitCount / 8 + 3) / 4 × 4 (integer division), where line_byte is the number of bytes stored per row of image data, width represents the width of the picture, and biBitCount the bit depth of the picture; the number of bytes stored per row is thus rounded to a multiple of 4;
step S43: performing bit-depth conversion on the picture, where temp << (8 − offset − 1) forms the bit mask for the current pixel, offset = col % 8 is the remainder of the column index divided by 8, << denotes a left-shift operation, p denotes the pixel pointer, and ~ denotes bitwise negation; the picture with shading bit depth 8 is thus converted into a BMP picture with bit depth 1.
CN201810211748.4A 2018-03-15 2018-03-15 Text dithering method based on shading halftone Active CN108269233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810211748.4A CN108269233B (en) 2018-03-15 2018-03-15 Text dithering method based on shading halftone


Publications (2)

Publication Number Publication Date
CN108269233A CN108269233A (en) 2018-07-10
CN108269233B true CN108269233B (en) 2021-07-27

Family

ID=62774919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810211748.4A Active CN108269233B (en) 2018-03-15 2018-03-15 Text dithering method based on shading halftone

Country Status (1)

Country Link
CN (1) CN108269233B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109831597A (en) * 2019-02-28 2019-05-31 江苏实达迪美数据处理有限公司 A kind of shading halftoning method based on deep learning
CN112395837A (en) * 2019-08-01 2021-02-23 北京字节跳动网络技术有限公司 Method and device for processing special effects of characters

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01252066A (en) * 1988-03-31 1989-10-06 Toshiba Corp Halftone picture separation processor
CN1790190A (en) * 2005-12-01 2006-06-21 北京北大方正电子有限公司 Printer capable of preventing document from copy
CN101727839A (en) * 2008-10-10 2010-06-09 华映视讯(吴江)有限公司 Device and method for compressing/decompressing image
CN102903085A (en) * 2012-09-25 2013-01-30 福州大学 Rapid image mosaic method based on corner matching
CN103488711A (en) * 2013-09-09 2014-01-01 北京大学 Method and system for fast making vector font library
CN105141842A (en) * 2015-08-31 2015-12-09 广州市幸福网络技术有限公司 Tamper-proof license camera system and method
CN105139334A (en) * 2015-10-10 2015-12-09 上海中信信息发展股份有限公司 Multiline text watermark production device
CN105654072A (en) * 2016-03-24 2016-06-08 哈尔滨工业大学 Automatic character extraction and recognition system and method for low-resolution medical bill image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Frame-to-Frame Coherent Halftoning in Image Space; Mike Eissele et al.; Proceedings of the Theory and Practice of Computer Graphics 2004 (TPCG'04); 2004-12-31; pp. 1-8 *
Hierarchical Self-Organizing Background Differencing Algorithm with Memory Storage; Ke Xiao et al.; Pattern Recognition and Artificial Intelligence; 2016-10-31; pp. 881-893 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant