WO2013108757A1 - Image processing device, image forming device, program and storage medium - Google Patents

Image processing device, image forming device, program and storage medium

Info

Publication number
WO2013108757A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
translation
file
image file
Prior art date
Application number
PCT/JP2013/050584
Other languages
English (en)
Japanese (ja)
Inventor
淳寿 森本
小西 陽介
仁志 廣畑
章人 吉田
Original Assignee
シャープ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by シャープ株式会社 filed Critical シャープ株式会社
Priority to CN201380004659.8A priority Critical patent/CN104054047B/zh
Priority to US14/370,170 priority patent/US20140337008A1/en
Publication of WO2013108757A1 publication Critical patent/WO2013108757A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/58Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/103Formatting, i.e. changing of presentation of documents
    • G06F40/109Font handling; Temporal or kinetic typography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K15/00Arrangements for producing a permanent visual presentation of the output data, e.g. computer output printers
    • G06K15/02Arrangements for producing a permanent visual presentation of the output data, e.g. computer output printers using printers
    • G06K15/18Conditioning data for presenting it to the physical printing elements
    • G06K15/1801Input data handling means
    • G06K15/181Receiving print data characterized by its formatting, e.g. particular page description languages
    • G06K15/1811Receiving print data characterized by its formatting, e.g. particular page description languages including high level document description only
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/146Aligning or centring of the image pick-up or image-field
    • G06V30/1475Inclination or skew detection or correction of characters or of image to be recognised
    • G06V30/1478Inclination or skew detection or correction of characters or of image to be recognised of characters or characters lines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/24Character recognition characterised by the processing or recognition method
    • G06V30/242Division of the character sequences into groups prior to recognition; Selection of dictionaries
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/0035User-machine interface; Control console
    • H04N1/00405Output means
    • H04N1/00408Display of information to the user, e.g. menus
    • H04N1/0044Display of information to the user, e.g. menus for image preview or review, e.g. to help the user position a sheet
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32128Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title attached to the image data, e.g. file header, transmitted message header, information on the same page or in the same computer file as the image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/0077Types of the still picture apparatus
    • H04N2201/0094Multifunctional device, i.e. a device capable of all of reading, reproducing, copying, facsimile transception, file transception
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3201Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N2201/3225Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document
    • H04N2201/3242Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document of processing required or performed, e.g. for reproduction or before recording

Definitions

  • The present invention relates to an image processing apparatus, an image forming apparatus, a program, and a recording medium on which the program is recorded, each having a function of translating the language shown in image data.
  • There is known a multifunction peripheral having a function of performing character recognition on input image data (a digitized document), translating the recognized characters (the original text), and creating a PDF file that shows an image (a translated image) in which the translated text is added to the original text.
  • Patent Document 1 discloses acquiring a translation (translation information) corresponding to character information included in image data, acquiring region information indicating a region into which the translation is to be inserted based on the configuration of the character line containing that character information, and determining the insertion position of the translation based on the acquired region information.
  • In the technique of Patent Document 1, when the space between character lines in the image data is a predetermined width or less, only a reference index is inserted between the character lines, and the translated word itself is inserted in the lower margin.
  • However, the user who browses the PDF file (hereinafter referred to simply as a “file”) is not necessarily the user who created it, and for a user different from the creator, the translated words written alongside the original text are not always needed.
  • Yet creating both a file with translated words and a file without them means creating two files for a document with the same content, which makes file management cumbersome.
  • The present invention has been made in view of the above problems, and an object thereof is to provide an image processing apparatus, an image forming apparatus, a program, and a recording medium that reduce the trouble of creating image files and the burden of file management.
  • In order to solve the above problems, an image processing apparatus of the present invention includes: a translation unit that specifies translated words corresponding to a language contained in image data by performing translation processing on that language; and a formatting processing unit that generates an image file formatted into data of a predetermined format based on the image data and the result of the translation processing. The formatting processing unit adds to the image file a command for causing a computer to switch, when a switching instruction is given by the user, between a first display state in which the language and the translated words are displayed together and a second display state in which the language is displayed without the translated words.
  • With this configuration, an image file can be generated in which the first display state, in which the language and the translated words are displayed together, and the second display state, in which the language is displayed without the translated words, can be switched as necessary. Therefore, compared with the conventional case of generating two files, the trouble at the time of file generation and the burden of file management can be reduced.
  • FIG. 1 is a block diagram illustrating a schematic configuration of an image forming apparatus including an image processing apparatus according to an embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating the internal configuration of the document detection unit shown in FIG. 1.
  • FIG. 3 is a block diagram illustrating the internal configuration of the file generation unit shown in FIG. 1.
  • FIG. 4(a) is a diagram showing an image displayed in the translated-word-absent state, and FIG. 4(b) is a diagram showing an image displayed in the translated-word-present state.
  • FIG. 5 is a flowchart showing the flow of processing of the image forming apparatus in the image transmission mode.
  • FIG. 6(a) is a diagram schematically showing the translucent switching button.
  • FIG. 6(b) is a diagram schematically showing the state in which the switching button shown in FIG. 6(a) has been rolled over and become non-transparent.
  • FIG. 7(a) is a diagram showing a document catalog described in an image file.
  • FIG. 7(b) is a diagram showing an optional content group dictionary described in an image file.
  • FIG. 7(c) shows information described in an image file, namely a description relating to the specification of an optional content range.
  • FIG. 8(a) is a diagram showing a page object described in an image file.
  • FIG. 8(b) is a diagram showing a Widget annotation described in an image file.
  • FIG. 8(c) is a diagram showing a form XObject described in an image file.
  • Also shown is information, described in an image file, for printing the switching button at the time of printing.
  • Also shown is information, described in an image file, for designating the translated-word-absent state of FIG. 4(a) as the initial display state when the image file is opened by a user.
  • FIG. 11(a) is a diagram showing a Widget annotation described in an image file configured to display a translucent switching button.
  • FIG. 11(b) is a diagram showing a graphics state parameter dictionary described in an image file configured to display a translucent switching button.
  • FIG. 11(c) is a diagram showing a form XObject described in an image file configured to display a translucent switching button.
  • FIG. 12 is a diagram for explaining the structure of the image file of the present embodiment.
  • FIG. 13 is a diagram for explaining the structure of an image file that consists of a plurality of pages and in which the translated-word-present state and the translated-word-absent state can be switched for each page.
  • Further diagrams show an image displayed without translated words in an image file containing both simple-level and detailed-level translations, an image displayed with the simple-level translations selected, and an image displayed with both the simple-level and detailed-level translations selected.
  • A further block diagram illustrates a configuration example in which the present invention is applied to a color image reading apparatus.
  • Another block diagram illustrates the internal configuration of the file generation unit of an image forming apparatus having a function of processing an image file input from an external apparatus.
  • A flowchart shows a method of recognizing the format of an image file saved in the storage unit.
  • FIG. 1 is a block diagram illustrating a schematic configuration of an image forming apparatus 1 including an image processing apparatus 3 according to the present embodiment.
  • The image forming apparatus 1 of the present embodiment is a digital color multifunction peripheral having a copy function, a printer function, a facsimile transmission function, a scan-to-e-mail function, and the like, but it may instead be a digital color copier.
  • The image forming apparatus 1 includes an image input device 2, an image processing device 3, an image output device 4, a transmission/reception unit 5, a storage unit 6, a control unit 7, and an encoding/decoding unit 8.
  • The image processing device 3 includes an A/D conversion unit 11, a shading correction unit 12, an input processing unit 13, a document detection unit 14, a document correction unit 15, a color correction unit 16, a black generation/under color removal unit 17, a spatial filter unit 18, an output tone correction unit 19, a halftone generation unit 20, a region separation unit 21, and a file generation unit 30.
  • The image forming apparatus 1 can execute a print mode, in which an image according to the image data read by the image input device 2 is printed on a recording material by the image output device 4, and a transmission mode, in which the image data read by the image input device 2 is transmitted by the transmission/reception unit 5 to devices communicably connected via a network or the like.
  • The image input device 2 is a scanner equipped with a CCD (Charge-Coupled Device) line sensor, and separates the light reflected from the original into R, G, and B (R: red, G: green, B: blue) components and converts them into electrical signals (image data).
  • The configuration of the image input device 2 is not particularly limited.
  • For example, the image input device 2 may read a document placed on a document placing table, or may read a document conveyed by a document conveying unit.
  • In the print mode (printing operation), the image processing device 3 outputs to the image output device 4 CMYK image data obtained by performing various kinds of image processing on the image data input from the image input device 2.
  • In the transmission mode, the image processing device 3 performs various kinds of image processing on the image data input from the image input device 2, performs character recognition processing and translation processing based on the image data, generates an image file using the results of the character recognition processing and the translation processing, and transmits the image file to a storage destination or transmission destination designated by the user. Details of each block included in the image processing device 3 will be described later.
  • The image output device 4 outputs (prints) the image of the image data input from the image processing device 3 on a recording material (for example, paper).
  • The configuration of the image output device 4 is not particularly limited; for example, an image output device using an electrophotographic method or an inkjet method can be used.
  • The transmission/reception unit 5 is composed of, for example, a modem or a network card.
  • The transmission/reception unit 5 connects the image forming apparatus 1 to a network via a network card, a LAN cable, or the like, and performs data communication with external devices connected to the network (for example, personal computers, server devices, display devices, other digital multifunction peripherals, facsimile machines, and the like).
  • The storage unit 6 stores the various kinds of data (image data and the like) handled by the image forming apparatus 1.
  • The configuration of the storage unit 6 is not particularly limited; for example, a data storage device such as a hard disk can be used.
  • The encoding/decoding unit 8 encodes the image data handled by the image processing device 3 when the image data is stored in the storage unit 6. That is, when the encoding mode is selected, the image data is encoded and then stored in the storage unit 6; when the encoding mode is not selected, the image data is stored in the storage unit 6 through the encoding/decoding unit 8 without being encoded. The encoding mode is selected by the user through an operation panel (not shown). When image data read from the storage unit 6 has been encoded, the encoding/decoding unit 8 also decodes it.
  • The control unit 7 is a processing control device (control means) that controls the operation of each unit provided in the image processing device 3.
  • The control unit 7 may be provided in a main control unit (not shown) that controls the operation of each unit of the image forming apparatus 1, or it may be provided separately from the main control unit and perform processing in cooperation with it.
  • The main control unit includes, for example, a CPU (Central Processing Unit), and controls the operation of each part of the image forming apparatus 1 based on information input from the UI of an operation panel (not shown) and on programs and various data stored in a ROM (not shown).
  • The main control unit also controls the flow of data in the image forming apparatus 1 and the reading and writing of data to and from the storage unit 6.
  • The A/D conversion unit 11 converts the RGB analog signals input from the image input device 2 into digital signals and outputs them to the shading correction unit 12.
  • The shading correction unit 12 performs, on the digital RGB signals sent from the A/D conversion unit 11, processing for removing various distortions generated in the illumination system, the image focusing system, and the image sensing system of the image input device 2, and outputs the result to the input processing unit 13.
  • The input processing unit (input gradation correction unit) 13 performs various kinds of processing, such as gamma correction, on the RGB signals from which the various distortions have been removed by the shading correction unit 12, and stores the processed image data in the storage unit 6.
  • The document detection unit 14 reads the image data stored in the storage unit 6 by the input processing unit 13, detects the tilt angle of the document image indicated by the image data, and outputs the detected tilt angle (detection result) to the document correction unit 15.
  • The document correction unit 15 reads out the image data stored in the storage unit 6, corrects the tilt of the document based on the tilt angle transmitted from the document detection unit 14, and stores the tilt-corrected image data in the storage unit 6.
  • Thereafter, the document detection unit 14 reads out the image data (the image data after tilt correction) stored in the storage unit 6, determines the top/bottom direction of the document based on this image data, and outputs the determination result to the document correction unit 15.
  • The document correction unit 15 then reads out the image data stored in the storage unit 6 and performs direction correction processing according to the determination result of the top/bottom direction of the document.
  • FIG. 2 is a block diagram illustrating a schematic configuration of the document detection unit 14.
  • The document detection unit 14 includes a signal conversion unit 51, a resolution conversion unit 52, a binarization processing unit 53, a document tilt detection unit 54, and a top/bottom direction detection unit 55.
  • The signal conversion unit 51 achromatizes the image data input from the storage unit 6 and converts it into a lightness signal or a luminance signal. For example, the luminance signal can be obtained by equation (1), reconstructed here with the standard luminance weighting coefficients:

        Y_i = 0.30 R_i + 0.59 G_i + 0.11 B_i    (1)

    Here, Y_i is the luminance signal of each pixel, R_i, G_i, and B_i are the color components of the RGB signal of each pixel, and the subscript i is a value assigned to each pixel (i is an integer of 1 or more).
  • Alternatively, the RGB signals may be converted into CIE 1976 L*a*b* signals (CIE: Commission Internationale de l'Eclairage; L*: lightness; a*, b*: chromaticity).
  • The resolution conversion unit 52 converts the image data (the luminance value (luminance signal) or lightness value (lightness signal)) achromatized by the signal conversion unit 51 to a low resolution. For example, image data read at 1200 dpi, 750 dpi, or 600 dpi is converted to 300 dpi.
  • The resolution conversion method is not particularly limited; for example, a known nearest-neighbor method, bilinear method, bicubic method, or the like can be used.
  • The binarization processing unit 53 binarizes the image data by comparing the image data converted to the low resolution with a preset threshold value. For example, when the image data is 8-bit, the threshold value may be set to 128, or the average density value of several lines of pixels may be used as the threshold value.
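  • As a minimal formal sketch of this thresholding (one possible convention, with T = 128 corresponding to the 8-bit example above and Y_i the achromatized, resolution-reduced pixel value):

        B_i = \begin{cases} 1\ (\text{black}) & \text{if } Y_i < T \\ 0\ (\text{white}) & \text{if } Y_i \ge T \end{cases}, \qquad T = 128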
  • Based on the image data binarized by the binarization processing unit 53, the document tilt detection unit 54 detects the tilt angle of the document with respect to the scan range (the regular document position) at the time of image reading, and outputs the detection result to the document correction unit 15.
  • The method for detecting the tilt angle is not particularly limited, and various conventionally known methods can be used; for example, the method described in Patent Document 2 may be used.
  • In this method, a plurality of boundary points between black pixels and white pixels are extracted from the binarized image data, and the coordinate data of the point sequence of these boundary points is obtained. A regression line is then fitted to the point sequence, and its regression coefficient b is calculated. Here, S_x and S_y are the residual sums of squares of the variables x and y, respectively, and S_{xy} is the sum of the products of the residual of x and the residual of y; that is, S_x, S_y, and S_{xy} are expressed by equations (2) to (4), reconstructed here from those definitions:

        S_x = \sum_i (x_i - \bar{x})^2    (2)
        S_y = \sum_i (y_i - \bar{y})^2    (3)
        S_{xy} = \sum_i (x_i - \bar{x})(y_i - \bar{y})    (4)

    The regression coefficient is b = S_{xy} / S_x, and the inclination angle θ is calculated from b based on equation (5):

        \tan\theta = b    (5)
  • The top/bottom direction detection unit 55 determines the top/bottom direction of the document image indicated in the image data stored in the storage unit 6, and outputs the determination result to the document correction unit 15.
  • The method for determining the top/bottom direction is not particularly limited, and various conventionally known methods can be used; for example, the method described in Patent Document 3 may be used.
  • In this method, character recognition is performed by matching the features of character patterns cut out from the image data against character pattern information stored in a database in advance.
  • Specifically, a character pattern cut out from the image data is superimposed on a character pattern registered in the database, and the black/white value of each pixel is compared. The character in the image data is determined to be the character whose database pattern matches at every pixel. When there is no character pattern that matches at every pixel, the character in the image data is determined to be the character pattern with the largest number of matching pixels; however, if the ratio of matching pixels does not reach a predetermined matching ratio, it is determined that discrimination is impossible.
  • Such character recognition processing is performed on the image data as it is and on the image data rotated by 90°, 180°, and 270°. Thereafter, the number of characters that could be discriminated is calculated for each of 0°, 90°, 180°, and 270°, and the rotation angle with the largest number of discriminated characters is determined to be the character direction, that is, the top/bottom direction of the document. Then, the rotation angle for matching the top/bottom direction of the document image in the image data with the normal top/bottom direction is determined. Specifically, with the clockwise angle relative to the normal top/bottom direction taken as positive, the rotation angle is set to 0° when the top/bottom direction (reference direction) of the document image in the image data matches the normal top/bottom direction, to 90° when the top/bottom direction of the document image differs from the normal top/bottom direction by -90°, to 180° when it differs by -180°, and to 270° when it differs by -270°.
  • The document detection unit 14 outputs this rotation angle to the document correction unit 15 (see FIG. 1) as the determination result of the top/bottom direction, and the document correction unit 15 performs rotation processing by this rotation angle on the image data stored in the storage unit 6.
  • Image data processed by the input processing unit 13 is read from the storage unit 6 and input to the signal conversion unit 51; after the processing of the signal conversion unit 51, the resolution conversion unit 52, and the binarization processing unit 53, the document tilt detection unit 54 detects the tilt angle. Thereafter, the document correction unit 15 reads the image data stored in the storage unit 6, performs tilt correction on the image data based on the result detected by the document tilt detection unit 54, and stores the tilt-corrected image data in the storage unit 6.
  • Subsequently, the tilt-corrected image data is read from the storage unit 6 and input to the signal conversion unit 51, and after the processing of the signal conversion unit 51, the resolution conversion unit 52, and the binarization processing unit 53, the top/bottom direction is determined by the top/bottom direction detection unit 55.
  • The document correction unit 15 then reads out the image data (the image data after tilt correction) stored in the storage unit 6 and, based on the determination result of the top/bottom direction detection unit 55, performs direction correction on the image data as necessary.
  • When the encoding mode is selected, the image data output from the input processing unit 13 or the document correction unit 15 is encoded by the encoding/decoding unit 8 before being stored in the storage unit 6.
  • Likewise, encoded image data read from the storage unit 6 is decoded by the encoding/decoding unit 8 before being input to the document detection unit 14 or the document correction unit 15.
  • The color correction unit 16 receives, from the document correction unit 15, the image data for which the processing by the document detection unit 14 and the document correction unit 15 has been completed, converts the image data into CMY (C: cyan, M: magenta, Y: yellow) image data, and performs processing for improving color reproducibility.
  • The region separation unit 21 receives, from the document correction unit 15, the image data processed by the document detection unit 14 and the document correction unit 15, and separates each pixel in the image of the image data into one of a black character region, a color character region, a halftone dot region, and a photographic-paper photograph (continuous tone) region.
  • Based on the separation result, the region separation unit 21 generates region separation data (a region separation signal) indicating the region to which each pixel belongs, and outputs it to the black generation/under color removal unit 17, the spatial filter unit 18, and the halftone generation unit 20.
  • The method of the region separation processing is not particularly limited, and a conventionally known method can be used.
  • The black generation/under color removal unit 17, the spatial filter unit 18, and the halftone generation unit 20 perform processing suitable for each region based on the input region separation signal.
  • The black generation/under color removal unit 17 performs black generation for generating a black (K) signal from the color-corrected CMY three-color signal, and generates a new CMY signal by subtracting the K signal obtained by the black generation from the original CMY signal.
  • In this way, the CMY three-color signal is converted into a CMYK four-color signal.
  • The spatial filter unit 18 performs spatial filter processing (enhancement processing and/or smoothing processing) using a digital filter, based on the region separation data, on the CMYK image data input from the black generation/under color removal unit 17, thereby correcting its spatial frequency characteristics. As a result, blurring and graininess deterioration of the output image can be reduced.
  • The output tone correction unit 19 performs output γ correction processing for output to a recording material such as paper, and outputs the image data after the output γ correction processing to the halftone generation unit 20.
  • The halftone generation unit 20 performs gradation reproduction processing (halftone generation) on the image data so that the image can finally be separated into pixels and each gradation can be reproduced.
  • The image data that has undergone the above-described processes and been output from the halftone generation unit 20 is temporarily stored in a memory (not shown), read out at a predetermined timing, and input to the image output device 4, and the image output device 4 performs printing based on the image data.
  • In the image transmission mode, as in the print mode, the document detection unit 14 and the document correction unit 15 perform tilt angle detection, tilt correction, top/bottom direction determination, and direction correction on the image data stored in the storage unit 6.
  • In the simple mode described later, however, the document detection unit 14 detects the tilt angle and determines the top/bottom direction, but the document correction unit 15 does not perform correction processing.
  • In the former case, the image data is transmitted from the document correction unit 15 to the file generation unit 30 after being processed by the document detection unit 14 and the document correction unit 15.
  • In the latter case, the document correction unit 15 reads the image data from the storage unit 6 and transmits it as it is to the file generation unit 30 without performing the various correction processes.
  • The file generation unit 30 includes a character recognition unit 31, a translation unit 32, a layer generation unit 33, and a formatting processing unit 34.
  • In the image transmission mode, the file generation unit 30 executes the character recognition processing and the translation processing, and generates an image file to be transmitted to the transmission destination or storage destination designated by the user.
  • The character recognition unit 31 reduces the resolution of the input image data (for example, to 300 dpi), generates binarized image data by binarizing the reduced-resolution image data, and performs character recognition processing using the binarized image data. Further, the character recognition unit 31 generates, based on the result of the character recognition processing, text data of the document contained in the original corresponding to the image data, and outputs the text data to each of the translation unit 32 and the layer generation unit 33.
  • The text data includes the character code of each character and the position information of each character.
  • The character recognition processing method is not particularly limited, and a conventionally known method can be used. For example, the feature amounts of the characters in the binarized image data are extracted, and character recognition is performed by comparing these feature amounts with dictionary data (a character database).
  • The dictionary data used by the character recognition unit 31 is stored in the storage unit 6.
  • The character recognition unit 31 transmits to the layer generation unit 33 not only the above text data but also the input image data as it is. That is, the layer generation unit 33 receives, from the character recognition unit 31, the image data representing the document and the text data.
  • The translation unit 32 performs translation processing on the language indicated in the text data sent from the character recognition unit 31. Specifically, the translation unit 32 obtains translated words corresponding to the language (original text) of the document by comparing the text data with dictionary data (a word meaning database) containing meaning information.
  • The dictionary data used in the translation unit 32 is also stored in the storage unit 6.
  • A plurality of word meaning databases are stored in the storage unit 6 so that the processing content can be switched according to the translation mode.
  • For example, the storage unit 6 stores a plurality of types of databases, such as an English-Japanese translation database for translating from English to Japanese and an English-Chinese translation database for translating from English to Chinese. When the user selects the English-Japanese mode for translating English into Japanese, the translation unit 32 performs the translation processing by referring to the English-Japanese translation database in the storage unit 6; when the user selects the English-Chinese mode for translating English into Chinese, it performs the translation processing by referring to the English-Chinese translation database in the storage unit 6 (that is, the database to be referred to is switched according to the mode).
  • Further, for the same translation mode, a plurality of word meaning databases corresponding to the translation levels (simple, standard, detailed) are stored in the storage unit 6.
  • For example, the storage unit 6 stores a simple-level English-Japanese translation database, a standard-level English-Japanese translation database, and a detailed-level English-Japanese translation database, and the translation unit 32 performs the translation processing with reference to the database of the level selected by the user.
  • Here, the simple level means a level at which only difficult words are translated, the standard level means a level at which words up to a high-school level are translated, and the detailed level means a level at which even simple words (junior-high-school level) are translated.
  • The layer generation unit 33 generates each layer constituting the image file (PDF file) generated by the subsequent formatting processing unit 34. Specifically, the layer generation unit 33 generates a layer indicating the document image (hereinafter simply referred to as the “document image”) based on the document image data sent from the character recognition unit 31, generates a layer indicating transparent text (hereinafter simply referred to as the “transparent text”) based on the text data sent from the character recognition unit 31, and generates a layer indicating the translated words (hereinafter simply referred to as the “translated word image”) based on the translation result of the translation unit 32.
  • The transparent text is data for superimposing (or embedding), in an invisible form, the recognized characters and words as text information on the document image data.
  • As the PDF file, an image file in which transparent text is added to document image data is generally used.
  • The translated word image is text data in which the translated text for the original text shown in the document image is visible and the portions other than the translated text are transparent; the translated word image is superimposed on the document image.
  • The position of the translated text (for example, a blank area between the lines of the original text, adjacent to the corresponding original text) is determined so that the user can compare the translated text with the original text to which it corresponds.
  • In other words, the translated word image is visible text data that is superimposed on the document image data so that the translated words are visible to the user when overlaid on the document image.
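  • As an illustrative sketch of how these three layers can coexist on one PDF page (this is not the literal content of the embodiment's file; the object names, font names, and coordinates are hypothetical), the page content stream can draw the scanned page as an image, the character recognition result as invisible text, and the translated words as visible text:

        % document image layer: draw the scanned page as an image XObject
        q 612 0 0 792 0 0 cm /Im0 Do Q
        % transparent text layer: OCR text in text rendering mode 3 (invisible),
        % so that it can be searched and copied but not seen
        BT /F0 10 Tf 3 Tr 72 700 Td (This is the original English text.) Tj ET
        % translated word image layer: visible text (mode 0) placed in the blank
        % area between the lines of the original text; a real file would use a
        % CID-keyed font to hold the Japanese translated words
        BT /F1 8 Tf 0 Tr 72 690 Td (...translated words...) Tj ET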
  • For determining the insertion position, for example, the information insertion control unit and the region calculation method described in paragraphs [0063] to [0067] of Patent Document 1 can be used.
  • The layer generation unit 33 also functions as a drawing command generation unit that generates drawing commands to be embedded in the image file generated by the subsequent formatting processing unit 34.
  • A drawing command is a command that instructs the computer about the display conditions when the image file is opened and its image is displayed on the computer, and about the printing conditions when the image of the image file is printed.
  • The formatting processing unit 34 is a block that generates an image file formatted into data of a predetermined format based on the image data input to the file generation unit 30 and the result of the translation processing.
  • An example of the image file generated by the formatting processing unit 34 is a PDF file.
  • The formatting processing unit 34 performs processing for generating an image file in which each layer generated by the layer generation unit 33 and the drawing commands are embedded. That is, the image file generated by the formatting processing unit 34 is data including the document image, the transparent text, and the translated word image.
  • The drawing commands include the initial display command, the button display command, the switching command, the print prohibition command, and the batch switching command described below.
  • Initial display command: a command for causing the computer, when a user's display instruction is input for the image file (when the image file is opened by the user), to display the document image with the transparent text placed on it. That is, the initial display command instructs the computer to shift, when the display instruction is input, to the translated-word-absent state in which only the document image is displayed without the translated word image.
  • Button display command: a command for instructing the computer to display a switching button together with the document image while the image file is open.
  • Switching command: a command for instructing the computer to switch between the translated-word-absent state and the translated-word-present state when the user gives a switching instruction by clicking the switching button (a button operation).
  • Here, the translated-word-present state is a state in which the translated word image and the transparent text are placed on the document image so that the document image and the translated word image are displayed together.
  • Print prohibition command: a command for instructing the computer not to print the switching button when the user gives a print instruction for the image file.
  • Batch switching command: a command for instructing the computer, when the document image consists of a plurality of pages and the switching button displayed on any page is clicked, to switch between the translated-word-absent state and the translated-word-present state for all pages. A sketch of how these commands can be realized with standard PDF constructs follows.
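  • As a hedged illustration of how the commands above can map onto standard PDF constructs (the embodiment's concrete dictionaries appear in FIG. 7, FIG. 8, and FIG. 11, discussed later; the object numbers below are hypothetical):

        % initial display command: the translation layer is an optional content
        % group (OCG) that the default configuration turns OFF when the file opens
        10 0 obj << /Type /OCG /Name (translation) >> endobj
        1 0 obj << /Type /Catalog /Pages 2 0 R
              /OCProperties << /OCGs [10 0 R] /D << /OFF [10 0 R] >> >> >> endobj
        % button display command and switching command: a button-type Widget
        % annotation (registered under /AcroForm in a complete file) whose action
        % toggles the OCG; the batch switching command follows when the content
        % of every page references this same OCG
        20 0 obj << /Type /Annot /Subtype /Widget /Rect [500 20 580 45]
              /FT /Btn /T (switch)
              /A << /S /SetOCGState /State [/Toggle 10 0 R] >>
              % print prohibition command: leaving the Print flag (/F 4) unset
              % keeps the switching button off the printed sheet
              /F 0 >> endobj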
  • When each command described above is embedded in the image file generated by the formatting processing unit 34, the image file behaves as described below.
  • In the following, a case is described in which the original text is English and the translated text is Japanese.
  • When the image file is opened by the user, the document image is first displayed in the translated-word-absent state shown in FIG. 4(a), together with a switching button.
  • When the user clicks the switching button, the translated-word-absent state shown in FIG. 4(a) is switched to the translated-word-present state shown in FIG. 4(b).
  • In the translated-word-present state, the original text (English) of the document image of the image file and the translated text (Japanese) corresponding to it in the translated word image are displayed side by side.
  • The switching button is also displayed in the translated-word-present state shown in FIG. 4(b); when the user clicks the switching button shown in FIG. 4(b), the translated-word-present state shown in FIG. 4(b) is switched back to the translated-word-absent state shown in FIG. 4(a).
  • When the document consists of a plurality of pages, switching from the translated-word-present state to the translated-word-absent state, or from the translated-word-absent state to the translated-word-present state, is performed for all pages.
  • For example, when the switching button on the first page is clicked to switch from the translated-word-absent state to the translated-word-present state, the translated words are also present when the second and subsequent pages are displayed.
  • Further, even while the switching button is displayed, the switching button is not printed when the image of the image file is printed.
  • the formatting processing unit 34 stores the image file generated as described above in the storage unit 6.
  • the transmission / reception unit 5 transmits the image file stored in the storage unit 6 to a transmission destination or a storage destination designated by the user.
  • FIG. 5 is a flowchart showing the flow of processing in the image transmission mode of the image forming apparatus 1.
  • In the image transmission mode, the image forming apparatus 1 first sets the processing conditions in accordance with instructions input by the user via an operation panel (not shown) (S1).
  • In S1, the user sets the presence/absence of translation processing.
  • The flowchart of FIG. 5 assumes that translation processing is set to “present”, and the following description is likewise based on the assumption that translation processing is set in S1.
  • When translation processing is set to “present”, a screen for setting the translation mode, the translation level, and the display color of the translated words is displayed to prompt the user to make these settings.
  • As the translation mode, a desired mode is selected by the user from among, for example, an English-Japanese mode for translating from English to Japanese, an English-Chinese mode for translating from English to Chinese, a Japanese-English mode for translating from Japanese to English, a Japanese-Chinese mode for translating from Japanese to Chinese, and the like.
  • As the translation level, a desired level is selected by the user from among the simple level, the standard level, and the detailed level.
  • Alternatively, the display color may be predetermined for each translation level, so that the user selects only the translation level without selecting a display color, and the display color corresponding to the selected translation level is set automatically.
  • In S1, the user also selects either the normal mode, in which the correction processing (tilt correction and direction correction) by the document correction unit 15 is performed, or the simple mode, in which the correction is not performed.
  • When the simple mode is selected, the detection result of the document detection unit 14 is embedded in the header of the image file (PDF file) generated by the formatting processing unit 34.
  • Next, the document is read and image data is generated (S2).
  • At this time, a document placed on the document placing table may be read, or a document conveyed by the document conveying unit may be read.
  • After S2, the image forming apparatus 1 performs character recognition processing on the image data read from the document (S3), and performs translation processing based on the result of the character recognition processing (S4). After S4, the image forming apparatus 1 generates each layer constituting the image file to be generated later (S5). Specifically, a document image (layer) is generated based on the image data read in S2, a transparent text (layer) is generated based on the result of the character recognition processing performed in S3, and a translated word image (layer) is generated based on the result of the translation processing performed in S4.
  • After S5, the image forming apparatus 1 generates the drawing commands to be embedded in the image file to be generated later (S6).
  • The drawing commands generated here are the initial display command, the button display command, the switching command, the print prohibition command, and the batch switching command described above.
  • After S6, the image forming apparatus 1 generates an image file in which each layer generated in S5 is embedded (S7), and embeds the drawing commands generated in S6 in the image file (S8).
  • The image forming apparatus 1 then temporarily stores the image file generated in this way in the storage unit 6, and transmits it to the transmission destination or storage destination designated by the user.
  • As described above, in the present embodiment, the viewer of the image file can switch, as necessary, between the translated-word-present state, in which the original text (language) shown in the document and the translated words corresponding to it are displayed together, and the translated-word-absent state, in which the translation result is hidden.
  • Therefore, for a viewer who wants to view the document without the translation, time and effort can be saved compared with the conventional technique, which required creating a file without translated words in addition to the image file with translated words.
  • Moreover, simply by clicking the switching button shown in FIG. 4(a) or FIG. 4(b), the user can switch between the translated-word-present state shown in FIG. 4(b) and the translated-word-absent state shown in FIG. 4(a); switching the translation display on and off is therefore easy for the user.
  • Further, when the image of the image file is printed, the switching button is not printed, so that an unnecessary image (the switching button) can be prevented from appearing on the sheet.
  • In addition, the switching button is shown on each page, and when the switching button on one page is clicked, the translated-word-present state and the translated-word-absent state are switched for all pages. The user is therefore saved the trouble of clicking the switching button on every page.
  • In the image file of this embodiment, an initial display command is embedded that instructs the computer to shift to the translated-word-absent state of FIG. 4(a) when a user's display instruction is input for the image file; therefore, when the image file is opened, the translated-word-absent state of FIG. 4(a) is displayed first. However, the initial display command is not limited to one that instructs the computer to shift to the translated-word-absent state of FIG. 4(a); it may instead instruct the computer to shift to the translated-word-present state of FIG. 4(b) when the user's display instruction is input. In that case, the image file can be set so that the translated-word-present state shown in FIG. 4(b) is displayed first when the file is opened.
  • Which of the translated-word-absent state of FIG. 4(a) and the translated-word-present state of FIG. 4(b) is used as the initial display state may be set (specified) via the operation panel.
  • In this case, the file generation unit 30 includes an initial state designating unit (not shown) that designates either the translated-word-absent state or the translated-word-present state as the initial state according to an instruction input by the user via the operation panel. The formatting processing unit 34 then embeds in the image file an initial display command for causing the computer to shift, when a display instruction is given by the user for the image file, from the non-display state, in which the image of the image file is not displayed, to the state designated as the initial state.
  • That is, when the translated-word-absent state is set as the initial state by the user, the formatting processing unit 34 embeds in the image file an initial display command instructing the computer to shift from the non-display state to the translated-word-absent state when the user's display instruction is input; when the translated-word-present state is set as the initial state by the user, it embeds in the image file an initial display command instructing the computer to shift from the non-display state to the translated-word-present state when the user's display instruction is input.
  • For example, when the main user of the image file is assumed to be a person who does not need the translation, the translated-word-absent state shown in FIG. 4(a) may be designated as the initial display state (the state displayed first when the file is opened); when the main user of the image file is assumed to be a person who needs the translation (for example, a person who is not proficient in the language of the original text), the translated-word-present state shown in FIG. 4(b) may be designated as the initial display state.
  • In the above description, when translation processing is set to be performed in S1, the user is allowed to set the conditions for the translation mode, the translation level, and the display color of the translated words.
  • However, when only one dictionary used for translation processing is stored in the storage unit 6, it is not necessary for the user to set the translation mode and the translation level; the display color of the translated words, too, may be set automatically on the apparatus side instead of being set by the user.
  • Further, in S1, the user is allowed to set the presence/absence of translation processing and to select between the normal mode, in which tilt correction is performed, and the simple mode, in which tilt correction is not performed.
  • FIG. 6(a) shows the switching button, which is displayed in both the translated-word-absent state and the translated-word-present state, in a state where it is not selected by the user with the cursor 800.
  • In this state, the button area, which is at least a part of the switching button, is displayed translucently. Thereby, while the switching button is not selected by the user, the user can visually recognize the object image (not shown) overlapping the button area.
  • When the cursor 800 is placed over the switching button shown in FIG. 6(a), the switching button is rolled over, and its state becomes as shown in FIG. 6(b). In the state shown in FIG. 6(b) (the rollover appearance), the density of the button area of the switching button is higher than in the state of FIG. 6(a), and the button area becomes non-transparent. This makes the object image overlapping the button area (the object image positioned below the button area) invisible to the user, and makes the button area easier for the user to see.
  • In FIG. 6(b), a balloon 900 is displayed together with the switching button.
  • The balloon 900 is displayed only when the cursor 800 is placed over the switching button, and is an image (explanatory image) showing an explanation (message) of the function of the switching button.
  • When the switching button is clicked in the state shown in FIG. 6(b), the above-described translated-word-absent state and translated-word-present state are switched.
  • In this way, when the user does not need the switching button (when the cursor 800 is not placed over it), the switching button can be prevented from interfering with the display of the object image; when the user needs the switching button (when the cursor 800 is placed over it), the switching button can be displayed prominently.
  • The balloon 900 is not displayed when the cursor 800 is not placed over the switching button, and is displayed when the cursor 800 is placed over it. Therefore, the balloon 900 can explain the function of the switching button to the user, and because it is displayed only when necessary, the viewability of the image in the image file is not impaired.
  • To realize this display form, the formatting processing unit 34 embeds the following rollover display command and balloon display command in the image file as drawing commands.
  • Rollover display command: a command for instructing the computer to display the button area, which is at least a part of the switching button, translucently while the cursor is not over the switching button, so that the user can visually recognize the object image overlapping the button area, and, while the cursor is over the switching button, to increase the density of the button area and make it non-transparent compared with when the cursor is not over it, so that the object image is hidden and the button area is easily visible to the user.
  • Balloon display command: a command for causing the computer to display the balloon 900, which explains the function of the switching button to the user, only while the cursor is over the switching button.
  • In the above description, when the cursor 800 is placed over the switching button, the button area becomes non-transparent and the object image overlapping the button area becomes invisible; however, the present invention is not limited to a form in which the object image becomes completely invisible. That is, when the cursor 800 is placed over the switching button, it suffices that the density of the button area is higher than when the cursor 800 is not placed over it, making the object image less visible to the user and the button area easily visible; the button area need not become so non-transparent that the object image cannot be seen at all.
  • FIG. 7 shows information that is described in the image file and that is used for switching between the translated-word-present state and the translated-word-absent state.
  • FIG. 7(b) shows an optional content group dictionary described in the image file. In it, the name and type of the object “39 0” are defined so that this object can be used as the switching label (see FIG. 12) that organizes the mutual relations when the action of switching between the translated-word-present state and the translated-word-absent state is performed.
  • FIG. 7(a) shows a document catalog described in the image file, which represents information on the entire document (document image). Settings are made here for the objects whose display is to be switched, for each page and for each object. The example of FIG. 7(a) shows that display switching is performed for one object, “39 0”.
  • FIG. 7(c) shows a description relating to the range specification of the optional content, concerning the object that indicates the content information of the translation result for each page.
  • Here, the object “15 0” is included in the range of objects whose display is switched by the object “39 0” serving as the switching label.
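  • The following is a hedged sketch consistent with this description of FIG. 7 (the exact entries are in the figures themselves, so the dictionary bodies below are assumptions): the optional content group dictionary defines the switching label “39 0”, and the per-page translation content “15 0” is placed in its switching range through an /OC entry:

        % FIG. 7(b): optional content group dictionary defining the name and type
        % of the object "39 0", used as the switching label
        39 0 obj << /Type /OCG /Name (translation 1) >> endobj
        % FIG. 7(c): the per-page translation content "15 0" is a form XObject
        % whose /OC entry places it in the range controlled by "39 0"
        % (/Length and /Resources omitted in this sketch)
        15 0 obj << /Type /XObject /Subtype /Form /BBox [0 0 612 792]
              /OC 39 0 R >>
        stream
        ...drawing operators for the translated word image...
        endstream
        endobj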
  • FIG. 8 shows information that is described in the image file and that is used for displaying the switching button.
  • FIG. 8(a) shows a page object, which represents information for each page of the document.
  • The page object also includes reference information for performing actions (display/non-display switching, moving to a link destination, and the like).
  • The page object shown in FIG. 8(a) includes reference information for the objects “43 0”, “45 0”, and “47 0”, which include the Widget annotation shown in FIG. 8(b).
  • FIG. 8(b) shows a Widget annotation, which describes an object that causes an action.
  • The instruction denoted by reference numeral 500 indicates that display/non-display of the object “39 0” is switched by the switching button.
  • the switching button is set to not print (default setting).
  • “/ N / 44 0 R” designates reference information to the image of the switching button, and is linked to the form XObject (object “44 0”) of FIG. 8C.
  • (C) in FIG. 8 is a form XObject, which defines the appearance of the switching button (drawing image of the switching button).
  • As shown in FIG. 12, a label ("translation 1" in the figure) is associated with the layers constituting the image file. This label is defined in the optional content group dictionary of FIG. 7(B).
  • The "switching operation" shown in FIG. 12 is defined by the Widget annotation in FIG. 8(B), and the "button image" shown in FIG. 12 is defined in the form XObject in FIG. 8(C).
  • With the image file configured as above, it is possible to switch between the translation-absent state and the translation-present state. Further, when printing is performed in the translation-absent state (with the translation result not displayed), only the document image is printed; when printing is performed in the translation-present state (with the translation result displayed), the document image and the translated words are printed.
  • Note that the switching button itself is not printed as long as the command "/F 4" (which sets the print flag) is not inserted in the Widget annotation; if this command is inserted, the switching button is also printed at printing time.
  • The display form of the switching button may be the same on every page or may differ from page to page. When the translated image of each page is defined as a separate object and the same label is associated with all of them, the display form of the switching button is the same on each page.
  • FIG. 11 shows the information described in the image file that is used for the rollover display of the switching button.
  • (A) in FIG. 11 is a Widget annotation and explains an object that triggers an action. That is, when the switching button is to be rolled over, the Widget annotation of FIG. 11(A) is embedded in the image file instead of the Widget annotation of FIG. 8(B).
  • The description with reference numeral 510 indicates that the object "39 0" is switched between display and non-display by the switching button.
  • Here, too, the switching button is set not to be printed (the default setting).
  • "/N 45 0 R" in FIG. 11(A) designates the reference to the semi-transparent drawing image (normal appearance) of the switching button and links to the form XObject (object "45 0") of FIG. 11(C).
  • "/R 44 0 R" in FIG. 11(A) designates the reference to the non-transparent drawing image (rollover appearance) of the switching button and links to the form XObject (object "44 0") of FIG. 8(C), which defines the non-transparent button drawing image.
  • As shown in FIG. 6(A), the semi-transparent drawing image is the one shown while the cursor is not overlaid on the switching button; as shown in FIG. 6(B), the non-transparent drawing image is the one shown while the cursor is overlaid on the switching button.
  • (B) in FIG. 11 is a graphics state parameter dictionary (semi-transparent drawing state) that defines the display rate used when the switching button is drawn semi-transparently. In this example, a display rate of 30% is set, giving a semi-transparent state with a transmittance of 70%.
  • (C) in FIG. 11 is a form XObject that defines the appearance of the switching button (button drawing image) when it is displayed semi-transparently. It differs from the form XObject of FIG. 8(C) in that it contains the definitions for making the switching button semi-transparent (reference marks 900 and 905).
  • A translation mode and a translation level are set in S1, and a translated-word image (layer) indicating the translation according to the set mode and level is generated.
  • A plurality of translation modes or translation levels may be selected at the same time; in that case, a plurality of translated-word images (layers) are generated.
  • For example, suppose that a setting screen for setting the translation mode and the translation level is displayed, the user selects the English-Japanese translation mode, and two translation levels, a simple level and a detailed level, are selected. In this case, a translated-word image (layer) produced with a simple-level English-Japanese dictionary and a translated-word image (layer) produced with a detailed-level English-Japanese dictionary are generated, and the generated translated-word information is embedded in the image file.
  • In this case, two switching buttons are provided: the first button is a button for displaying the simple-level translated image, and the second button is a button for displaying the detailed-level translated image.
  • When the user clicks the first button in the state of FIG. 16(a), the original text and the simple-level translated words are displayed as shown in FIG. 16(b). Here, the simple-level translated words are displayed in blue.
  • When the first button is clicked again in the state shown in FIG. 16(b), the simple-level translated words are erased and the display returns to the state of FIG. 16(a).
  • When the second button is clicked in the state shown in FIG. 16(b), the original text, the simple-level translated words, and the detailed-level translated words are all displayed, as shown in FIG. 16(c).
  • A translated word that exists in common at both the simple level and the detailed level ("target" in FIG. 16) is displayed in an overlapping manner, as shown in FIG. 16(c). In this example, the detailed-level translated words are displayed in green, while translated words common to both the simple and detailed levels are displayed in blue.
  • With the image file described above, the translation result can be displayed and browsed according to the language level of the viewer, saving the trouble of changing the translation-level setting and rescanning and reprocessing the document. In addition, images showing a plurality of levels of translated words can be saved in a single file.
  • Alternatively, a button may be provided for each translation mode, and clicking a button displays the translation corresponding to it. For example, when the image file is first opened, the original text (English), button A, and button B are displayed; selecting button A displays a Japanese translation, and selecting button B displays a Chinese translation.
  • S11 to S15 in FIG. 17 are the same as S1 to S5 in FIG. 5, and S16 to S18 in FIG. 17 are the same as S6 to S8 in FIG. 5.
  • However, S13, S14, and S15 are repeated until translated-word images have been generated for every translation mode and translation level set in S11 (YES in S20), after which the processing from S16 onward is executed.
  • In the description so far, the image forming apparatus 1 performs printing or transmission based on the image data input from the image input apparatus 2; however, it may also have a function of executing the image transmission mode or the print mode based on an image file input from an external device.
  • an image transmission mode of the image forming apparatus 1 having this function will be described.
  • the external device means a USB memory (removable medium) inserted into the image forming apparatus 1 or a terminal device connected to the image forming apparatus 1 via a network.
  • The overall configuration of the image forming apparatus 1 is as described above.
  • However, the file generation unit 30 of this example is configured as shown in FIG. 19 instead of the configuration shown in FIG. 3.
  • The file generation unit 30 of FIG. 19 includes a character recognition unit 31, a translation unit 32, a layer generation unit 33, a formatting processing unit 34, and a character extraction unit 39.
  • The processing contents of the character recognition unit 31, the translation unit 32, the layer generation unit 33, and the formatting processing unit 34 are the same as those shown in FIG. 3. That is, in the image forming apparatus 1 of this example, when the image transmission mode is selected and a document read by the image input apparatus 2 is selected as the processing target, the character recognition unit 31, translation unit 32, layer generation unit 33, and formatting processing unit 34 shown in FIG. 19 execute the same processing as described for FIG. 3.
  • On the other hand, when the image transmission mode is selected and an image file stored in the storage unit 6 is selected as the processing target, the control unit 7 determines whether text data (character data) is embedded in the processing-target image file stored in the storage unit 6.
  • Here, the image file to be processed means a file received via the network and the transmission/reception unit 5 and stored in the storage unit 6, or a file read from a removable medium (memory device) such as a USB memory inserted into the image forming apparatus 1 and stored in the storage unit 6.
  • When the control unit 7 determines that text data is not embedded in the processing-target image file, it extracts the image data contained in the image file and transmits the image data, via the encoding/decoding unit 8 and the document correction unit 15, to the character recognition unit 31 shown in FIG. 19.
  • the character recognition unit 31 and the subsequent blocks in FIG. 19 perform the same processing as the character recognition unit 31 and the subsequent blocks shown in FIG. 3, and an image file with a translation is generated.
  • When the control unit 7 determines that text data is embedded in the processing-target image file, it transmits the image file from the storage unit 6 to the character extraction unit 39.
  • the character extraction unit 39 is a block that, when an image file is input from the storage unit 6, performs processing for extracting image data indicating a document image and text data from the image file.
  • the character extraction unit 39 transmits the extracted text data to the translation unit 32 and the layer generation unit 33, and transmits the extracted image data to the layer generation unit 33.
  • Thereafter, the translation unit 32, the layer generation unit 33, and the formatting processing unit 34 in FIG. 19 perform the same processing as the corresponding units shown in FIG. 3, and an image file with translations is generated.
  • To determine the presence or absence of text data, the control unit 7 first executes the format-recognition processing shown in FIG. 20.
  • The processing shown in FIG. 20 exploits the fact that image files of various formats usually have a characteristic byte sequence at the head of the file (header); by checking the byte sequence at the top of the file, the file type (format) can be recognized simply.
  • First, the control unit 7 acquires the byte string at the beginning of the processing-target image file (S21).
  • When the byte string matches the TIFF header, the control unit 7 determines that the format of the processing-target image file is TIFF (S26).
  • When the byte string matches the JPEG header, the control unit 7 determines that the format of the processing-target image file is JPEG (S27).
  • When the byte string matches the PDF header, the control unit 7 determines that the format of the processing-target image file is PDF (S28).
  • When the byte string matches none of these, the control unit 7 determines that the processing-target image file is an unprocessable file (S29); in this case, the image transmission mode is stopped.
  • When the control unit 7 has identified the format of the image file by the processing of FIG. 20, it determines the presence or absence of text data as follows and switches the input destination of the image file according to the result.
  • When the format is PDF, the control unit 7 determines the presence or absence of text data in the PDF file by examining its text commands. For example, in a file format in which character data is embedded in the PDF, such as a searchable PDF, a description such as "stream BT 84 Tz ..." appears inside the PDF file, so it can be determined that text data (character data) is embedded. On the other hand, when character information is stored in the PDF file only as a bitmap image (when there is no text data), this description is absent, so it can be determined that no text data is embedded.
  • When text data is embedded, the control unit 7 reads the PDF file from the storage unit 6 and inputs it to the character extraction unit 39 in FIG. 19.
  • When no text data is embedded, the control unit 7 extracts the image data included in the PDF file and inputs it, via the encoding/decoding unit 8 and the document correction unit 15, to the character recognition unit 31 in FIG. 19.
  • When the format is JPEG, the control unit 7 treats the image file as having no text data. That is, the control unit 7 extracts the image data included in the JPEG file, has the encoding/decoding unit 8 convert it into RGB image data, and inputs the image data, via the document correction unit 15, to the character recognition unit 31 in FIG. 19.
  • When the format is TIFF, the control unit 7 likewise treats the image file as having no text data. In this case, however, the control unit 7 first determines whether the TIFF file is a binary image or a multi-valued image by examining the TIFF tags. If the TIFF file is a multi-valued image, the control unit 7 extracts the image data included in the TIFF file, has the encoding/decoding unit 8 convert it into RGB image data, and inputs the image data, via the document correction unit 15, to the character recognition unit 31 shown in FIG. 19.
  • If the TIFF file is a binary image, the control unit 7 extracts the binary image included in the TIFF file and converts the binary image into multi-valued RGB image data (for example, an 8-bit image) before it is input to the character recognition unit 31.
  • In each of the above cases, an image file with translations is finally generated by the formatting processing unit 34, and the image file is sent to the transmission destination or storage destination designated by the user.
  • The control unit 7 reads out the electronic data from the storage unit 6 and inputs it to the character extraction unit 39 in FIG. 19.
  • As described above, the display color of the translated image can be set in S1 of FIG. 5; the settable display colors may include a transparent color.
  • When the transparent color is selected, the layer generation unit 33 generates the translated image as a character image (layer) in the form of transparent text instead of a visible character image (layer).
  • In this case, the formatting processing unit 34 does not embed in the image file the various commands (initial display command, button display command, switching command, etc.) for switching between the translation-present state and the translation-absent state (therefore, the switching button is not displayed).
  • Making the translation transparent is useful when the translation information is embedded in the PDF file for the purpose of searching rather than browsing. As an example of such a search purpose, by assigning a Japanese translation to the document image of an English document, the text of the digitized English document can be searched using Japanese keywords.
  • In the above description, image data processed by the document correction unit 15 (in the simple mode, image data processed by the input processing unit 13) is input to the file generation unit 30, and the image file is generated based on that image data.
  • However, the configuration is not limited to this: the color correction unit 16 may convert the RGB image data processed by the document correction unit 15 into R′G′B′ image data (for example, sRGB) suited to the characteristics of the display device, the spatial filter unit 18 may execute spatial filter processing (enhancement processing and/or smoothing processing) on the R′G′B′ image data, the output tone correction unit 19 may perform gradation correction on the R′G′B′ image data after the spatial filter processing, and the gradation-corrected R′G′B′ image data may then be input to the file generation unit 30.
  • Further, instead of transferring the image data processed by the document correction unit 15 directly from the document correction unit 15 to the color correction unit 16, the processed image data may be temporarily stored in the storage unit 6 as filing data.
  • In this case, the image data processed by the document correction unit 15 is compressed into a JPEG code based on, for example, the JPEG compression algorithm and stored in the storage unit 6.
  • When the stored image data is read out for output, the JPEG code is extracted from the storage unit (hard disk) 6, decoded by the encoding/decoding unit 8, and converted into RGB image data.
  • The image data converted into RGB passes through the document correction unit 15 and is sent to the color correction unit 16 and the region separation unit 21.
  • At transmission time, the JPEG code is extracted from the storage unit 6 and transmitted to an externally connected device via a network or communication line.
  • the control unit 7 performs filing data management and data transfer operation control.
  • Image Reading Apparatus: In the present embodiment, the case where the present invention is applied to a color image forming apparatus has been described; however, the present invention is not limited to this and may be applied to a monochrome image forming apparatus. Moreover, the present invention is not limited to an image forming apparatus and may be applied to, for example, a stand-alone color image reading apparatus.
  • FIG. 18 is a block diagram showing a configuration example when the present invention is applied to a color image reading apparatus (hereinafter referred to as “image reading apparatus”).
  • the image reading apparatus 100 includes an image input device 2, an image processing device 3 b, a transmission / reception unit 5, a storage unit 6, a control unit 7, and an encoding / decoding unit 8. Since the configurations and functions of the image input device 2, the transmission / reception unit 5, the control unit 7, and the encoding / decoding unit 8 are substantially the same as those of the image forming apparatus 1 described above, description thereof is omitted here.
  • the image processing apparatus 3b includes an A / D conversion unit 11, a shading correction unit 12, an input processing unit 13, a document detection unit 14, a document correction unit 15, and a file generation unit 30.
  • The internal configuration of the file generation unit 30 is as shown in FIG. 3 or FIG. 19.
  • The processing content of each unit included in the image input device 2 and the image processing apparatus 3b is the same as in the image forming apparatus 1 described above.
  • the image file after the above processing is performed in the image processing apparatus 3b is output to a computer, a hard disk, a network, or the like.
  • The image processing apparatus may also be applied to a system including a digital camera or a portable terminal device having a camera function, a computer, and an electronic blackboard. In such a system, at least A/D conversion is performed on an image captured by the portable terminal device and the image is transmitted to the computer; the computer may then perform the input processing, document detection processing, document correction processing, and file generation processing, and display the result on the electronic blackboard.
  • In this case, a document or poster as the object to be imaged may be captured from an oblique direction, so geometric distortion may occur in the captured image; such geometric distortion may be detected and corrected.
  • As a method for correcting geometric distortion and lens distortion, for example, the method described in JP 2010-245787 can be used.
  • In this method, edge points are detected from the captured image, each edge point is classified into one of four groups corresponding to the four sides of the object to be imaged, and quadratic-curve approximation is performed on the edge points belonging to each group.
  • The quadratic curves obtained for the four groups in this way correspond to the four sides of the imaging object, and the intersections of the four quadratic curves, which correspond to the corners of the region surrounded by the curves, are obtained.
  • Next, a circumscribed rectangle is obtained that circumscribes the quadratic curves obtained for the respective sides and is congruent with the quadrilateral connecting the four intersections; this circumscribed rectangle represents the edge of the corrected object.
  • The pixel positions within the region of the imaging target in the captured image are then converted so as to match the corrected pixel positions. This conversion is performed based on vectors from a reference point (for example, the center of gravity of the region of the imaging target). Thereby, lens distortion can be corrected.
  • Geometric distortion is corrected in the same manner by geometrically converting the circumscribed rectangle obtained as described above in accordance with the aspect ratio of the target object (for example, 7:10 for the A/B paper sizes used in business documents).
  • a known technique may be used for the mapping conversion.
  • the file generation unit 30 of the present embodiment may be realized by software using a processor such as a CPU.
  • That is, the image forming apparatus 1 of the present embodiment includes a CPU (central processing unit) that executes the instructions of a control program realizing the functions of the file generation unit 30, a ROM (read-only memory) that stores the program, a RAM (random access memory) into which the program is expanded, and a storage device (recording medium) such as a memory that stores the program and various data.
  • The object of the present invention can also be achieved by supplying to the image forming apparatus 1 a recording medium on which the program code (an executable-format program, an intermediate-code program, or a source program) of the control program of the image forming apparatus 1, which is software realizing the above-described functions, is recorded in a computer-readable manner, and by having the computer (or a CPU or MPU) read and execute the program code recorded on the recording medium.
  • In this case, since processing is performed by a microcomputer, a memory such as a ROM may itself serve as the program medium. Alternatively, a program reading device may be provided as an external storage device (not shown), and a program medium readable by inserting a recording medium into that device may be used. In either case, the stored program may be configured to be accessed and executed directly by the microprocessor, or the program code may be read out, downloaded into a program storage area (not shown) of the microcomputer, and then executed; a program for this downloading is assumed to be stored in the main apparatus in advance.
  • The program medium may be a recording medium configured to be separable from the main body, and may be a medium that carries the program code in a fixed manner, including: tape systems such as magnetic tape and cassette tape; disk systems including magnetic disks such as flexible disks and hard disks and optical discs such as CD-ROM/MO/MD/DVD/CD-R; card systems such as IC cards (including memory cards) and optical cards; and semiconductor memories such as mask ROM/EPROM/EEPROM/flash ROM.
  • Further, since the image forming apparatus 1 according to the present embodiment has a system configuration connectable to a communication network including the Internet, the program medium may be a medium that carries the program code fluidly, such that the program code is downloaded from the communication network. When the program code is downloaded from the communication network in this way, the program for downloading may be stored in the main apparatus in advance or may be installed from another recording medium.
  • As described above, an image processing apparatus according to one aspect of the present invention includes: a translation unit that identifies the translated words corresponding to a language contained in image data by performing translation processing on that language; and a formatting processing unit that generates an image file formatted into data of a predetermined format based on the image data and the result of the translation processing. The formatting processing unit adds to the image file a command for causing a computer to switch, when a switching instruction for the image file is issued by the user, between a first display state in which the language and the translated words are displayed together and a second display state in which the language is displayed without displaying the translated words.
  • With this configuration, a single image file can be generated in which the first display state (the language and the translated words displayed together) and the second display state (the language displayed without the translated words) can be switched as necessary; therefore, compared with generating two files as in the prior art, the trouble of file generation and the burden of file management can be reduced.
  • Further, the formatting processing unit in the image processing apparatus of one aspect of the present invention may add to the image file a command for causing the computer to display, in the first display state and the second display state, a switching button for inputting the switching instruction. With this configuration, the switching instruction can be input via the switching button, so the user can easily switch between the two display states. The operation of the button is realized by, for example, clicking it.
  • Further, the formatting processing unit in the image processing apparatus may add to the image file a command instructing the computer not to print the switching button when a print instruction for the image file is given by the user.
  • Further, the image processing apparatus may include an initial-state designating unit that designates either the first display state or the second display state as the initial state according to an instruction from the user, and the formatting processing unit may add to the image file a command for causing the computer, when a display instruction for the image file is given by the user (when the image file is opened by the user), to transition from the non-display state in which the image file is not displayed to the display state designated as the initial state.
  • This configuration allows the user to specify whether the initial state when the image file is opened is the first display state or the second display state. Therefore, for example, if the main users of the image file are assumed not to need the translation, the second display state can be set as the initial state; if the main users of the image file need the translation (for example, persons not proficient in the language), the first display state can be set as the initial state. Switching between the first and second display states can thus be omitted as far as possible.
  • Further, the formatting processing unit may add to the image file a command for causing the computer to display the switching button on each page of the image and to switch between the first display state and the second display state for all pages together.
  • Further, the formatting processing unit in the image processing apparatus may add to the image file a command for causing the computer, when the switching button is not selected by the user, to display the button area (at least a part of the switching button) in such a way that the object image superimposed on the button area remains visible to the user, and, when the switching button is selected by the user, to increase the density of the button area relative to the unselected case so that the object image becomes difficult to see and the button area itself becomes easily visible to the user.
  • With this configuration, when the switching button is not selected by the user, the switching button can be kept from interfering with the display of the object image, and when necessary (when the switching button is selected by the user), the switching button can be displayed more conspicuously than otherwise. The selection of the switching button can be realized by, for example, placing the cursor on the switching button; the button becomes unselected when the cursor is moved off it.
  • Further, the formatting processing unit in the image processing apparatus of one aspect of the present invention may add to the image file a command instructing the computer to display an explanation image, which explains the function of the switching button to the user, only while the switching button is selected by the user. With this configuration, the explanation image is not displayed when the switching button is not selected but is displayed when it is selected; the function of the switching button can thus be explained to the user by the explanation image, while displaying it only when necessary prevents the viewability of the image in the image file from being impaired.
  • An image forming apparatus according to one aspect of the present invention includes the above-described image processing apparatus; therefore, compared with generating two files as in the prior art, the trouble of file generation and the burden of file management can be reduced.
  • The image processing apparatus may be realized by a computer. In that case, a program that causes the computer to operate as each of the above-described units, thereby realizing the image processing apparatus on the computer, and a computer-readable recording medium on which the program is recorded also fall within the scope of the present invention.
  • the present invention can be used for an image processing apparatus and an image forming apparatus that generate an image file based on image data.

Abstract

The invention relates to an image processing device (3) comprising a translation unit (32) that performs translation processing on a language contained in image data, and a formatting processing unit (34) that generates an image file based on the image data and the results of the translation processing. When the user issues a switching instruction for the image file, the formatting processing unit (34) writes into the image file a command for enabling the computer to switch between a first display state in which the aforementioned language and the aforementioned translation are displayed together and a second display state in which the aforementioned language is displayed without displaying the aforementioned translation.