CN1992764A - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
CN1992764A
CN1992764A (application CN200610063642A)
Authority
CN
China
Prior art keywords
information
image
additional information
unit
image processing
Prior art date
Legal status
Pending
Application number
CN 200610063642
Other languages
Chinese (zh)
Inventor
阿部悌
石井真树
山本阳平
松野阳一郎
堀川裕文
津田道彦
远藤早苗
安部佳奈子
野崎健太
Current Assignee
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date
Application filed by Ricoh Co Ltd
Publication of CN1992764A


Landscapes

  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

An image processing device and an image processing method are provided. The device can obtain job content, job conditions, or information to be appended based on additional information in an image, can remove specialized information that is useless to general users from the document image, and makes it easy for the user to set job conditions. The image processing device includes an image reading unit that reads a document containing image information and additional information related to the image information; an extraction unit that extracts the image information and the additional information from document image data obtained by reading the document; and a processing unit that processes the image information based on the additional information.

Description

Image processing apparatus and image processing method
Technical Field
The present invention relates to an image processing apparatus and an image processing method, and more particularly to an image processing apparatus and an image processing method that perform processing based on additional information extracted from a document image.
Background
Conventionally, there is a known technique in which, when an image formed on a sheet is read, a barcode formed on the same sheet is read at the same time, so that a user can easily set copying conditions, a transmission destination, and the like according to the contents of the barcode. For example, Japanese Laid-Open Patent Publication No. 11-119597 (hereinafter referred to as Document 1) discloses a copying apparatus that recognizes code information added to a predetermined position of a read document image, calls up the corresponding copying conditions from a memory in which various copying conditions are stored, and sets them as the copying conditions for copying the read document image.
Further, Japanese Laid-Open Patent Publication No. 2004-343564 (hereinafter referred to as Document 2) discloses a facsimile machine including a two-dimensional barcode printing device that records, in the header portion of an original, various information input at the time of facsimile transmission, such as the destination, polling, group sending, and the number of pages of the original, and a scanner that reads the original on which the two-dimensional barcode is printed.
However, in the techniques disclosed in Documents 1 and 2, the codes added to the image are barcodes or two-dimensional codes, whose information capacity is limited. The prior art also discloses a technique of embedding the information to be added in a dot pattern, so that a large amount of information can be added to an image by overlaying the dot pattern on it. The techniques of Documents 1 and 2 do not take such varied information-adding methods into account.
The techniques disclosed in Documents 1 and 2, like other devices whose processing contents are fixed (for example, copying or facsimile), can only set those fixed processes. They cannot satisfy the requirements of devices that can execute a plurality of jobs, such as a multifunction peripheral (MFP) or a personal computer, because they cannot set the contents and conditions of each job.
In addition, in the techniques disclosed in Documents 1 and 2, since the input document or the document image itself is processed directly, information that is included in the document image but is highly specialized and useless to a general user cannot be removed, which is inconvenient for the general user. Although the techniques of Documents 1 and 2 save the user the trouble of setting the processing conditions, the two-dimensional barcode used for this purpose appears in the user's field of view and spoils the appearance of the image.
Disclosure of Invention
The object of the present invention is to provide an image processing apparatus and an image processing method that support multiple jobs: information can be added to an image by various methods; job contents, job conditions, or information to be appended can be obtained from the added information; and one job among a plurality of jobs can be specified and its conditions set. The invention is also user-friendly: specialized information that is useless to a general user can be removed from the document image, and the user can easily set job conditions.
The present invention provides an image processing apparatus including a reading unit configured to read an original containing image information and additional information related to the image information; an extracting unit configured to extract the image information and the additional information from original image data obtained by reading the original; and a processing unit configured to process the image information based on the additional information.
The present invention also provides an image processing method including a reading step of reading an original containing image information and additional information related to the image information; an extraction step of extracting the image information and the additional information from original image data obtained by reading the original; and a processing step of processing the image information based on the additional information.
According to the above invention, processing of image information can be set based on additional information extracted from document image data, and various processing can be performed on image information extracted from document image data.
Drawings
The objects, features and advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings.
Fig. 1 is a block diagram of an image processing system including an image processing apparatus according to a first embodiment of the present invention;
fig. 2 is a block diagram showing the configuration of an image processing apparatus 2000 according to the first embodiment of the present invention;
fig. 3 is a flowchart showing an example of the operation content of the information processing unit 2040;
fig. 4A and 4B are flowcharts showing an example of the operation content of the image processing apparatus 2000;
fig. 5 is a diagram showing a correspondence relationship between additional information and an attribute of the additional information;
FIG. 6A and FIG. 6B are schematic views of isolated pixels;
FIG. 7A and FIG. 7B are schematic views of patterns composed of dot combinations;
fig. 8A, 8B, and 8C are diagrams showing the data structure of the additional information 2;
fig. 9 is a diagram showing area information ("data 1" variable) of a target area subjected to a mask at the time of mask processing, and a mask method ("data 2" variable);
FIG. 10 is a block diagram showing the configuration of an image processing system according to a second embodiment of the present invention;
fig. 11 is a block diagram showing the configuration of an image processing apparatus 200 according to the second embodiment of the present invention;
fig. 12 is a flowchart showing that the image processing apparatus 200 reads the recording information 90 and then performs certain processing;
fig. 13A and 13B are flowcharts showing an operation of the transmission information generating unit 260 generating the transmission file from the image additional information 92;
fig. 14A and 14B are operation flowcharts showing the storage information generating unit 270 storing the image information 91 to a predetermined place based on the image additional information 92;
fig. 15A and 15B are flowcharts showing an operation of the additional information extracting unit 280 to acquire additional information from the image additional information 92;
fig. 16A, 16B, 16C are diagrams showing the data structure of the image additional information 92; and
fig. 17 shows an example of image additional information 92 expressed in a description language.
Detailed Description
Specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
First embodiment
The outline of the present invention will be described first. In the present embodiment, an original is read, and then various kinds of processing are performed on the image information in the read document image data according to additional information. Here, "additional information" refers to information added to the image information. The additional information takes a form readable by a reading device, such as a code or pattern in which the information is embedded. It may or may not be visually perceivable by a human; specific examples include a two-dimensional code such as a barcode or QR code, a dot pattern, a change in character edges, and a change in background color. Of course, the form of the additional information is not limited to these examples.
The "image information" extracted from the document image refers to the character and image information represented on the document image, for example, images created with software such as "WORD" (registered trademark) or "POWER POINT" (registered trademark), or handwritten character images; it covers a wide range of information other than the additional information.
Fig. 1 is a block diagram of an image processing system including an image processing apparatus according to a first embodiment of the present invention.
As shown in fig. 1, the information adding apparatus 1000 and the image processing apparatus 2000 are connected to each other through a network 70a. The image processing apparatus 2000 is also connected to the reading apparatus 2100, and is connected to the MFP 501 and the storage apparatus 502 via the network 70b.
The information adding apparatus 1000 reads image information 1 and additional information 2, adds the additional information 2 to the image information 1 to obtain a document 900, prints out the document 900, and outputs document image data 910a obtained by adding the additional information 2 to the image information 1. That is, both document 900 and document image data 910a include image information 1 and additional information 2.
Document image data 910a is input to the image processing apparatus 2000 via the network 70a, and document image data 910b obtained by reading the document 900 with the reading apparatus 2100 (e.g., a scanner) is also input to the image processing apparatus 2000, and the image processing apparatus 2000 executes predetermined processing. Note that the document image data 910a and the document image data 910b may be input to the image processing apparatus 2000 not via the network 70a but via a storage medium such as a floppy disk or an SD card, not shown.
The networks 70a and 70b may be the Internet or an intranet. The image processing apparatus 2000 obtains the image information 1 and the additional information 2 from the document image data 910a or 910b, and executes processing corresponding to the additional information 2.
Configuration of image processing apparatus 2000
Fig. 2 is a block diagram showing the configuration of an image processing apparatus 2000 according to the first embodiment of the present invention.
For example, the image processing apparatus 2000 is constituted by a general-purpose computer, or a printer or MFP in which application software is installed. As shown in fig. 2, the image processing apparatus 2000 includes a data acquisition unit 2010, an additional information type determination unit 2020, an information extraction unit 2030, an information processing unit 2040, an information processing control unit 2050, and an information output unit 2060.
The data acquiring unit 2010 reads document image data 910a or 910b including the image information 1 and the additional information 2 as the processing target. The data acquiring unit 2010 may be integrated with the reading apparatus 2100, may be an interface (I/F) for inputting image data, or may be a means for reading document image data stored in a memory of the reading apparatus 2100.
When the additional information 2 is readable by a reading device, the additional information type determination section 2020 determines the type and attribute of the additional information 2 and outputs the determination result to the information extraction section 2030 and the information processing section 2040. The type of the additional information is a two-dimensional bar code such as a bar code or a QR code, a dot pattern, a shape change of a character edge, a frequency conversion of a high frequency region of a character edge, or the like, but is not limited to the above examples.
The attribute of the additional information indicates whether its presence is perceivable by human vision. A form that can be perceived only with special attention, i.e., one that does not normally attract a person's visual attention, is treated as a form that cannot be visually perceived. Generally, a two-dimensional barcode such as a barcode or QR code, or a dot pattern, is a form that can be visually perceived by a human and read by a reading device, while a change in the shape of character edges or information embedded in the frequency domain is a form that cannot be visually perceived by a human.
The information extraction unit 2030 extracts the additional information 2 based on the type and attribute of the additional information 2 determined by the additional information type determination unit 2020. The information processing unit 2040 performs image processing on the image information 1 in accordance with the type and attribute of the additional information 2 determined by the additional information type determination unit 2020. This can improve the extraction accuracy of the image information 1 and the additional information 2 and can improve the processing speed.
The information processing unit 2040 includes an additional information separating unit 2041 and an image information processing unit 2042. The additional information separating unit 2041 separates the image information 1 and the additional information 2 in the document image data 910a or 910b based on the type and attribute of the additional information 2 determined by the additional information type determining unit 2020.
Fig. 3 is a flowchart showing an example of the operation content of the information processing unit 2040.
In step S301, the additional information separating unit 2041 determines whether the additional information 2 is visually perceptible by a human. For example, this determination may be made using the determination result of the additional information type determination unit 2020. If the additional information 2 is visually perceivable by a human, step S302 is entered, otherwise, step S304 is entered.
In step S302, the additional information separation unit 2041 separates the image information 1 and the additional information 2 from the document image data, and extracts the image information 1. Since this step is omitted when the additional information 2 cannot be perceived by human vision, it is possible to improve the processing speed.
Further, if the image information 1 and the additional information 2 are not separated, the subsequent processing is performed on the entire document image data, including the specialized additional information 2; since that information is not useful to a general user, such processing is inconvenient for the general user. In the present embodiment, the image information 1 is separated from the additional information 2, which makes setting and operation easier for all users. Details of the processing of step S302 will be described later.
In step S303, the additional information separating unit 2041 determines whether all the additional information 2 in the document image data has been separated and removed. If so, the process proceeds to step S304; otherwise, the process returns to step S301.
In step S304, the image information processing unit 2042 performs the corresponding image processing on the image information 1 according to the content of the additional information 2. When it is determined in step S301 that the additional information 2 cannot be visually perceived by a human, the document image data is processed according to the additional information 2 extracted by the information extraction section 2030.
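The S301–S304 flow above can be sketched as follows. This is a minimal illustration, assuming a grayscale image stored as a 2D list with a white (255) background; the helper names (`remove_region`, `separate_and_process`, the `handlers` table) and the data layout are illustrative assumptions, not names from the patent.

```python
WHITE = 255  # assumed background pixel value

def remove_region(image, region):
    """S302: overwrite an additional-information region with the background color."""
    (r0, c0), (r1, c1) = region
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            image[r][c] = WHITE
    return image

def separate_and_process(image, additional_infos, handlers):
    """Fig. 3 flow: S301 test perceptibility, S302 separate, S304 process."""
    for info in additional_infos:
        if info["perceptible"]:                   # S301
            remove_region(image, info["region"])  # S302
    # S304: run the processing named by each piece of additional information
    actions = [handlers[info["instruction"]](image, info)
               for info in additional_infos]
    return image, actions

# Usage: a 4x4 image whose bottom-right 2x2 block is a visible dot pattern
# carrying a (hypothetical) fax instruction.
img = [[0, 0, WHITE, WHITE],
       [0, 0, WHITE, WHITE],
       [WHITE, WHITE, 10, 10],
       [WHITE, WHITE, 10, 10]]
infos = [{"perceptible": True, "region": ((2, 2), (3, 3)), "instruction": "fax"}]
handlers = {"fax": lambda image, info: "fax-sent"}
out, acts = separate_and_process(img, infos, handlers)
```

After the call, the dot-pattern block has been replaced by background white and the fax handler has run, mirroring the separation-then-processing order of Fig. 3.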
Returning to fig. 2, the information processing control unit 2050 controls the information processing unit 2040 (specifically, the image information processing unit 2042 thereof) to perform image processing on the image information 1 in accordance with the additional information 2 extracted by the information extraction unit 2030. Details of the image processing corresponding to the additional information 2 will be described later.
The information output unit 2060 outputs the image information 1 processed in accordance with the additional information 2.
Each component of the image processing apparatus 2000 described above may be configured by hardware or software. When the components of the image processing apparatus 2000 are implemented by software, for example, the CPU of the computer reads out and executes the corresponding computer program, so that the above components can be implemented, and the operations of fig. 4A and 4B are completed. The computer program may be stored in advance in a memory such as a ROM or a hard disk of the image processing apparatus 2000, or downloaded via a network. Alternatively, information stored in a storage medium such as a CD-ROM or an SD card may be read out and stored in the memory of the image processing apparatus 2000.
Fig. 4A and 4B are flowcharts showing an example of the operation content of the image processing apparatus 2000, in which fig. 4A shows an operation of extracting the additional information 2 after obtaining the attribute of the additional information 2 in the document image data, and fig. 4B shows an operation of obtaining the attribute of the additional information 2 after extracting the additional information 2 by some method.
As shown in fig. 4A, in step S401, the data acquiring unit 2010 reads in document image data and expands it on the memory of the image processing apparatus 2000.
In step S402, the additional information type determination unit 2020 determines whether or not the document image data read in step S401 contains additional information 2. If so, the process proceeds to step S403, otherwise the operation of FIG. 4A ends.
In step S403, the additional information type determination unit 2020 determines the type and attribute of the additional information 2 added to the document image data.
For example, a screen for inputting the type of additional information may be displayed on the display, and the operator may input the type of additional information. If it has been set in advance that the image processing apparatus 2000 processes only one type of additional information, the one type of additional information may be directly used as the kind of additional information 2 required in step S403.
Fig. 5 is a table showing the correspondence between additional information types and their attributes.
For example, the table may be stored in the RAM, and the additional information type determination unit 2020 may determine the attribute of the additional information 2 by referring to the table in the RAM.
In step S404, the information extraction section 2030 extracts the additional information 2 from the input document image data. For example, the information extraction unit 2030 extracts the additional information 2 according to the type of the additional information 2 determined in step S403. Many techniques for extracting additional information are disclosed in the prior art and will not be described in detail here.
In step S405, the information extraction section 2030 determines whether all the additional information 2 has been extracted from the input document image data. If so, the flow proceeds to step S406; otherwise, the flow returns to step S403. Step S405 may be omitted if the document image data contains only one piece of additional information 2.
In step S406, the information processing portion 2040 extracts image information 1 from the document image data.
In step S407, the information processing unit 2040 performs corresponding image processing on the image information 1 according to the content of the additional information 2.
Through the above processing of steps S401 to S407, image processing corresponding to the additional information 2 can be performed on the image information 1 extracted from the document image data, based on the additional information 2 extracted from the document image data.
The processing contents of fig. 4B are substantially the same as those of fig. 4A except for step S503 and step S504. Only these two steps are described below.
In step S503, the information extraction unit 2030 attempts to extract the additional information 2 from the input document image data by various methods provided thereto, and obtains the additional information 2.
In step S504, the additional information type determination unit 2020 determines the type and attribute of the additional information 2 extracted in step S503.
In fig. 4A and 4B, if the additional information 2 is set or limited in advance, various operations of the additional information type determination section 2020 may be omitted and processing may be performed using the type and attribute of the additional information 2 that is set or limited in advance.
Removing additional information in original image data
The following describes the operation of removing the additional information in the document image data mentioned in step S302 of fig. 3, step S406 of fig. 4A, and step S506 of fig. 4B.
(1) Additional information of bar code type, QR code type or other two-dimensional bar code
If the form of the additional information is a barcode, a QR code, or another two-dimensional code, template matching (masking) is first performed on the input document image data using barcode, QR code, or other two-dimensional code templates stored in advance in a memory such as the RAM of the image processing apparatus 2000, and a similarity is obtained. An area whose similarity exceeds a certain threshold is determined to be a barcode, QR code, or other two-dimensional code. Then, the pixel values of the pixels in the region recognized as a barcode or the like are changed to the pixel value of the background region of the document image data, whereby the image of the additional information 2 such as a barcode can be removed from the document image data and the image information 1 obtained.
In general, since the background area of document image data is mostly white, it is usually sufficient to change the pixel values of the pixels in the target area to white. Alternatively, although not shown in the figure, a background color extraction unit may be provided to extract the background color, and the pixel values of the pixels in the target region may be changed to the extracted background color; in this way too, the image of the additional information 2 can be removed from the document image data and the image information 1 obtained.
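The template-matching and background-replacement steps described above might look like the following minimal sketch. It assumes a grayscale image stored as a 2D list; the function names, the exact-pixel-match similarity measure, and the "most frequent value = background color" heuristic are simplifications standing in for the unspecified matching and extraction methods.

```python
from collections import Counter

def background_color(image):
    """Stand-in for the background color extraction unit: take the most
    frequent pixel value as the background color."""
    return Counter(p for row in image for p in row).most_common(1)[0][0]

def match_score(image, template, r0, c0):
    """Fraction of pixels in the patch at (r0, c0) equal to the template."""
    th, tw = len(template), len(template[0])
    hits = sum(image[r0 + i][c0 + j] == template[i][j]
               for i in range(th) for j in range(tw))
    return hits / (th * tw)

def erase_template_matches(image, template, threshold=0.9):
    """Erase every region whose similarity to the stored template reaches
    the threshold, replacing it with the extracted background color."""
    bg = background_color(image)
    th, tw = len(template), len(template[0])
    for r0 in range(len(image) - th + 1):
        for c0 in range(len(image[0]) - tw + 1):
            if match_score(image, template, r0, c0) >= threshold:
                for i in range(th):
                    for j in range(tw):
                        image[r0 + i][c0 + j] = bg
    return image
```

A real implementation would use a normalized correlation measure rather than exact pixel equality, but the flow (match against stored templates, then overwrite matched regions with the background color) is the same.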
(2) Case of dot pattern type additional information
If the form of the additional information is a pattern expressed by dots, or a combination of dots, the processing can be performed as shown in fig. 6A, 6B, 7A, and 7B.
For example, first, isolated pixels are detected from input document image data.
Fig. 6A and 6B are schematic views of isolated pixels.
For example, in fig. 6A, it is determined whether the target pixel is an isolated pixel in units of one pixel, and in fig. 6B, it is determined whether the target pixel is an isolated pixel in units of 9 pixels. For example, as shown in fig. 6B, it can be determined whether 8 pixels around the target pixel have the same pixel value to detect the isolated pixel.
Of course, the number of pixels used to detect isolated pixels is not limited to the above values. For example, it may be previously set and stored in a memory such as a RAM, or only pixels having a certain pixel value may be used when the original image is a color image and the dot pattern is constituted by pixels having a certain pixel value.
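A minimal sketch of isolated-pixel detection over an 8-neighborhood (the Fig. 6B style of check), assuming a binary image with 0 for black and 255 for white; the function name and the border handling are illustrative assumptions.

```python
WHITE, BLACK = 255, 0  # assumed binary pixel values

def isolated_pixels(image):
    """Return coordinates of black pixels whose 8 surrounding neighbors are
    all background, i.e. candidate dot-pattern pixels."""
    h, w = len(image), len(image[0])
    found = []
    for r in range(1, h - 1):          # skip the border for simplicity
        for c in range(1, w - 1):
            if image[r][c] != BLACK:
                continue
            neighbors = [image[r + dr][c + dc]
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0)]
            if all(n == WHITE for n in neighbors):
                found.append((r, c))
    return found
```

The detected coordinates can then be erased to the background color as described above.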
Other methods may also be used. For example, a connected pixel component is extracted from the document image data, and when the area or diameter of the connected component is smaller than a certain threshold, it is recognized as an image of additional information expressed by a dot (or dot combination) pattern.
Then, the pixel value of the detected isolated pixel region is changed to the pixel value of the background region of the document image data, whereby the image of the additional information 2 composed of a dot (or dot combination) pattern can be removed from the document image data, and the image information 1 can be obtained.
Fig. 7A and 7B are schematic views of a pixel including a dot combination.
When the shape of the pattern formed by the dot combination is not a square as shown in fig. 6A but a shape as shown in fig. 7A, 7B, the following process is possible.
First, dot-combination patterns stored in a memory such as the RAM of the image processing apparatus 2000 are read out, and it is determined whether the input document image data contains a pattern that matches or is similar to any of the read-out patterns. If a matching or similar pattern is found, the pixel values of the pixels belonging to that pattern are changed to the pixel value of the background area of the document image data, whereby the image of the additional information 2 in the dot-combination pattern can be removed from the document image data and the image information 1 obtained. Further, as described above, a background color extraction means may be provided to extract the background color and change the pixel values of the pixels in the target region to the extracted background color, which likewise removes the image of the additional information 2 from the document image data and yields the image information 1.
Other methods may also be used. For example, a connected pixel component is extracted from the document image data, and when the area or diameter of the connected component is smaller than a certain threshold, it is recognized as an image of additional information expressed by a dot (or dot combination) pattern.
Then, the pixel value of the region of the image identified as the additional information 2 is changed to the pixel value of the background region of the document image data, whereby the image of the additional information 2 composed of a dot (or a combination of dots) pattern can be removed from the document image data, and the image information 1 can be obtained. When the original image is a color image and the dot pattern is formed of pixels having a predetermined pixel value, only these pixels having a predetermined pixel value may be used.
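The connected-component approach just described can be sketched as follows, assuming a binary image, 4-connectivity, and an area threshold; the threshold value and function name are illustrative assumptions.

```python
WHITE, BLACK = 255, 0  # assumed binary pixel values

def remove_small_components(image, max_area=4):
    """Treat any black connected component (4-connectivity) whose area is at
    most max_area as dot-pattern additional information and erase it to the
    background color."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if image[r][c] != BLACK or seen[r][c]:
                continue
            stack, comp = [(r, c)], []      # flood fill one component
            seen[r][c] = True
            while stack:
                y, x = stack.pop()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx] and image[ny][nx] == BLACK):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            if len(comp) <= max_area:       # small => dot pattern, erase it
                for y, x in comp:
                    image[y][x] = WHITE
    return image
```

Components larger than the threshold (ordinary characters and drawings) are left untouched, so only the dot-pattern additional information is removed.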
Additional information and processing corresponding to the additional information
Fig. 8A, 8B, and 8C are diagrams showing the data structure of the additional information 2.
As shown in fig. 8A, the additional information 2 is a sequence of variables, each of a certain byte length. The first variable in fig. 8A is the "instruction" variable, and the second and subsequent variables carry the information needed to execute that instruction.
Fig. 8B shows the values of the "instruction" variable, in binary code or ASCII symbols, and their meanings.
For example, when the "instruction" variable is "1", it indicates that the image information 1 accompanying it is to be transmitted by facsimile.
FIG. 8C shows the meaning of each data variable that follows the "instruction" variable.
For example, when facsimile transmission is instructed, the value of the "instruction" variable is "1", and the facsimile number of the destination is assigned to the "data 1" variable. When mask processing of the image information 1 is instructed, the value of the "instruction" variable is "5", the area information of the target area is assigned to the "data 1" variable, and the mask method is assigned to the "data 2" variable.
Fig. 9 is a table showing the area information ("data 1" variable) of the target areas to be masked and the mask methods ("data 2" variable) used in mask processing.
In fig. 9, each area ID corresponds to one value of "start point and end point (area information)" and one value of "processing method". The "start point and end point (area information)" field gives the coordinates of the area with that area ID. For example, in fig. 9 all the areas are rectangles, and the upper-left and lower-right coordinates of each rectangle serve as the "start point and end point (area information)" that determine the target area to be masked.
In addition, the "processing method" field specifies the fill pattern and the like used in mask processing. For example, in fig. 9 the "processing method" for each region is either "dots" or "solid fill".
For each area ID in fig. 9, the value of "start point and end point (area information)" is assigned to the "data 1" variable of figs. 8A and 8C, and the value of "processing method" is assigned to the "data 2" variable. Together these constitute additional information defining the mask process.
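A hypothetical decoder for a record laid out as in Fig. 8A, handling only the two instruction codes named in the text (1 = facsimile transmission, 5 = mask processing); representing the record as a Python list and the region as a coordinate pair are assumptions for illustration.

```python
def parse_additional_info(fields):
    """Decode one additional-information record laid out as in Fig. 8A:
    fields[0] is the "instruction" variable and the remaining entries are
    its data variables. Only the codes named in the text are handled."""
    code = fields[0]
    if code == 1:    # facsimile transmission
        return {"op": "fax", "number": fields[1]}      # data 1 = fax number
    if code == 5:    # mask processing
        return {"op": "mask",
                "region": fields[1],                   # data 1 = area info
                "method": fields[2]}                   # data 2 = mask method
    raise ValueError(f"unknown instruction code: {code}")

# Usage: a Fig. 9 style rectangle given by upper-left and lower-right corners.
rec = parse_additional_info([5, ((10, 10), (50, 30)), "solid fill"])
```

Adding further instruction codes from Fig. 8B would only require new branches returning the corresponding data variables.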
It is to be noted that the present embodiment is not limited to the above example, and may have various variations.
For example, even if the additional information consists only of the "data 1" variable of fig. 8C, the image processing apparatus 2000 may infer the instruction content from the additional information extracted by the information extraction unit 2030. For instance, if the additional information is a string of digits usable as a telephone/fax number, it is judged that the corresponding processing is to transmit the image information 1 by facsimile to that fax number, and the image information 1 is transmitted accordingly.
If the additional information is a character string containing "@", it is judged that the processing content corresponding to the additional information is email transmission of the image information 1 to the mailbox address represented by the character string, and the image information 1 is then transmitted by email to that mailbox address.
If the additional information contains a character string with "@" together with other character strings, the character string containing "@" may be used as the mailbox address, and the other character strings may be used as the header of the email.
If the additional information includes a path-like character string containing "/", it is determined that the character string indicates a storage address of the image information 1, and the image information 1 can be stored at that address.
Note that the number of processes corresponding to one piece of additional information may be plural, and for example, the process corresponding to one piece of additional information may include both facsimile transmission of the image information 1 and output after the mask process. Of course, the facsimile transmission and the email transmission may be performed simultaneously.
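The content-based inference described above can be sketched as a small classifier. The regular expression and the exact precedence of the checks are assumptions for illustration; the description only states that a digit string suggests facsimile transmission, "@" suggests mail, and "/" suggests a storage address.

```python
import re

def infer_processing(additional_info: str) -> str:
    """Guess the processing content from the form of the additional
    information, mirroring the heuristics in the text. Illustrative only."""
    if re.fullmatch(r"[0-9+\-]+", additional_info):
        return "fax"        # a string of digits usable as a telephone/fax number
    if "@" in additional_info:
        return "email"      # treat the string as a mailbox address
    if "/" in additional_info:
        return "store"      # treat the string as a storage address
    return "unknown"
```

A real implementation would also handle the case noted above where one piece of additional information maps to several processes (e.g. facsimile transmission plus masked output).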
Second embodiment
Constitution of image processing system
Fig. 10 is a block diagram showing the configuration of an image processing system according to a second embodiment of the present invention.
Fig. 10 shows an image processing system including an information embedding apparatus 100 and an image processing apparatus 200.
The information embedding apparatus 100 outputs the recording information 90, and the image processing apparatus 200 reads in the recording information 90 and performs the corresponding processing.
As shown in fig. 10, the information embedding apparatus 100 and the image processing apparatus 200 are connected to each other via a network 70 and a local bus 80. However, the information embedding apparatus 100 and the image processing apparatus 200 may be connected through only one of the network 70 and the local bus 80, or may be directly connected without the network 70 and the local bus 80. The information embedding apparatus 100 and the image processing apparatus 200 may be integrated to constitute a single information embedding and image processing apparatus.
The information embedding device 100 reads the data of the image 1a and the additional information 2, and either attaches the additional information 2 to the image 1a or converts the additional information 2 into a code and embeds it in the data of the image 1a, thereby generating the recording information 90. The recording information 90 includes the data of the image 1a (hereinafter referred to as image information 91) and the data or code of the additional information 2 (hereinafter referred to as image additional information 92).
In order to facilitate a comparative understanding of the first embodiment and the second embodiment, the correspondence relationship of the respective configurations of the image processing systems of the first embodiment and the second embodiment is explained below.
The information embedding apparatus 100 corresponds to the information adding apparatus 1000, the image processing apparatus 200 corresponds to the image processing apparatus 2000, the recording information 90 corresponds to the original image data 910a, or the original image data 910b, or the original 900, and the image 1a corresponds to the image information 1. The image additional information extraction unit 230 described below corresponds to the information extraction unit 2030, and the information processing unit 240 described below corresponds to the information processing unit 2040.
There are various methods for embedding the additional information 2. For example, the additional information 2 may be embedded after being converted into a barcode or a QR code, or it may be converted into a code and embedded in a form that is not visually perceptible. There are also various methods of converting the information into a form that is not visually perceptible; for example, the additional information 2 may be embedded as a dot pattern attached to the background of the image 1a, or embedded as a digital watermark in the image 1a.
Because the information is converted into a visually imperceptible form, it can be ensured that the additional information 2 is not easily perceived when the image processing apparatus 200 handles the additional information 2 (image additional information 92), for example when reading in the recording information 90 or during other processing.
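One way a dot-pattern embedding of this kind might work is sketched below: each bit of the additional information perturbs the position of one background dot by a single pixel. This merely illustrates the idea of embedding information "using a dot pattern and a certain rule"; none of the names or parameters come from the patent, and real systems use far more robust encodings.

```python
def to_bits(text: str):
    """Serialize the additional information into a bit sequence (MSB first)."""
    return [(byte >> i) & 1 for byte in text.encode("utf-8")
            for i in range(7, -1, -1)]

def embed_as_dots(bits, grid=16, pitch=4):
    """Place one background dot per bit on a regular grid; a bit value of 1
    shifts the dot one pixel to the right, which a scanner can measure but a
    viewer barely notices."""
    return [((n % grid) * pitch + bit, (n // grid) * pitch)
            for n, bit in enumerate(bits)]

def extract_from_dots(positions, grid=16, pitch=4):
    """Recover the bit sequence from the dot offsets (the inverse rule)."""
    return [x - (n % grid) * pitch for n, (x, _) in enumerate(positions)]

bits = to_bits("mail:user@example.com")
dots = embed_as_dots(bits)
assert extract_from_dots(dots) == bits  # round trip
```

The round trip shows why such marks survive the read-back path: the rule that places the dots is the same rule the extraction side inverts.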
The image processing apparatus 200 reads the recording information 90 and executes certain processing. The image processing apparatus 200 is connected to a Facsimile (FAX) 40, a network 50, and a storage apparatus 60. The recording information 90 may be inputted to the image processing apparatus 200 as an image read from a medium such as paper, or may be inputted to the image processing apparatus 200 from the information embedding apparatus 100 via the network 70 and the local bus 80.
The image processing apparatus 200 obtains the image information 91 and the image additional information 92 from the recording information 90, and performs various processes based on the image additional information 92.
Configuration of image processing apparatus 200
Fig. 11 is a block diagram showing the configuration of an image processing apparatus 200 according to the second embodiment of the present invention.
As shown in fig. 11, the image processing apparatus 200 is connected to a Facsimile (FAX) 40 or the like, reads the recording information 90, and outputs data to the Facsimile (FAX) 40 or the like.
The image processing apparatus 200 is connected to the Facsimile (FAX) 40, the network 50, and the storage apparatus 60 via a FAX interface (FAX I/F) 49, a network interface (network I/F) 59, and a storage interface (HDD I/F) 69, respectively.
The image processing apparatus 200 includes a processing unit 240, an image reading unit 210, an image additional information extraction unit 230, and an image information extraction unit 220.
The processing unit 240 selects and executes one job from a plurality of jobs executed by the image processing apparatus 200, and includes a multi-job execution unit 250 and a job decision unit 290.
The multi-job execution unit 250, which is a main processing component that processes a plurality of jobs executed by the image processing apparatus 200, includes a transmission information generation unit 260, a storage information generation unit 270, and an additional information extraction unit 280.
Facsimile transmission and mail transmission
The transmission information generating unit 260 generates transmission information for transmitting the image information 91, and includes an image transmission information extracting section 261, an address acquiring section 262, an image transmission information acquiring section 263, and a transmission file generating section 264. The transmission information generated by the transmission information generation unit 260 corresponds to the transmission unit designated by the image additional information 92, and may be mail (email), facsimile, or any other transmission information used by the transmission unit. The transmission information may include an address of a destination to which the image information 91 is transmitted, a title of the image information 91, or an address of a sender, etc.
When an image to be transmitted is specified in the image additional information 92 and there is transmission information in the image additional information 92, the image transmission information extraction section 261 obtains the transmission information from the image additional information 92.
When an image to be transmitted is designated in the image additional information 92 and an address storing transmission information (this address is referred to as a transmission information address) is present in the image additional information 92, the address acquisition section 262 obtains the transmission information address from the image additional information 92. Here, the transmission information address may be a path name or a file name in a memory, not shown, of the image processing apparatus 200, a path name or a file name in the memory 60 connected to the image processing apparatus 200, or a URI (Uniform Resource Identifier) on the network 50 connected to the image processing apparatus 200.
The image transmission information acquiring section 263 accesses the address acquired by the address acquiring section 262 and acquires transmission information.
When the image transmission information extracting section 261 or the image transmission information acquiring section 263 obtains the transmission information, the transmission file generating section 264 generates the format of the file to be transmitted based on the transmission information. For example, when the transmission unit designated by the image additional information 92 is facsimile, the transmission file generating section 264 generates the header part of the facsimile; when the designated transmission unit is mail, it generates the header part of the mail. After these parts are attached to the image information 91, the transmission file is generated.
The transmission file generated by the transmission file generating section 264 is transmitted through the facsimile interface (FAX I/F) 49, the network interface (network I/F) 59, or the like, in accordance with the method designated by the image additional information 92.
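For the mail case, the header-plus-image assembly attributed to the transmission file generating section 264 can be sketched with Python's standard email library. The field values, body text, and filename are illustrative assumptions, not details from the patent.

```python
from email.message import EmailMessage

def build_mail_transmission_file(destination: str, title: str,
                                 image_bytes: bytes) -> EmailMessage:
    """Generate the header part of a mail and attach the image information,
    yielding a transmission file ready to hand to the network interface."""
    msg = EmailMessage()
    msg["To"] = destination   # destination address from the transmission information
    msg["Subject"] = title    # title of the image information
    msg.set_content("Scanned image attached.")
    msg.add_attachment(image_bytes, maintype="image", subtype="png",
                       filename="image.png")
    return msg
```

For facsimile the analogous step would build a cover header instead of MIME headers; the structure (header generated from transmission information, then image attached) is the same.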
File storage
The storage information generating unit 270 stores the image information 91 in a predetermined place, and includes an image storage information extracting section 271, an address acquiring section 272, and an image storage information acquiring section 273. The storage information used by the storage information generating unit 270 may be the address, path name, or file name where the image information 91 is stored, a server name, a file name given to the image information 91, a manager name for the file of the image information 91, or the like. The address where the image information 91 is stored may be a URI on the network.
When an image is designated to be stored in the image additional information 92 and there is storage information in the image additional information 92, the image storage information extracting section 271 obtains the storage information from the image additional information 92.
When an image is designated to be stored in the image additional information 92 and an address of the stored information (this address is referred to as a stored information address) is present in the image additional information 92, the address acquisition section 272 obtains the stored information address from the image additional information 92. Here, the storage information address may be a path name or a file name in a memory, not shown, of the image processing apparatus 200, a path name or a file name in the memory 60 connected to the image processing apparatus 200, or a URI (Uniform Resource Identifier) on the network 50 connected to the image processing apparatus 200.
The image stored information acquiring unit 273 accesses the stored information address obtained by the address acquiring unit 272 to obtain stored information.
The image information 91 is stored to the address designated by the image additional information 92 through the network interface (network I/F)59 or the memory interface (HDD I/F) 69.
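A minimal sketch of this storage step: decide from the designated address whether the destination is a networked URI (which would go out over the network interface 59) or a local path (which would go to the storage device through the HDD interface 69). The scheme list and return convention are assumptions for illustration.

```python
from pathlib import Path
from urllib.parse import urlparse

def store_image_information(address: str, image_bytes: bytes) -> str:
    """Store the image information at the address designated by the image
    additional information; URI vs. path selects the interface (sketched)."""
    if urlparse(address).scheme in ("http", "https", "ftp"):
        return f"network:{address}"   # would hand off to the network I/F 59
    path = Path(address)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(image_bytes)     # would hand off to the HDD I/F 69
    return f"local:{path}"
```

The returned tag is only for demonstration; a real unit would report success or failure of the underlying transfer.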
Obtaining information over a network
The additional information extraction unit 280 acquires the additional information specified by the image additional information 92 and generates a composite image by combining that additional information with the image information 91. It includes an additional information extracting section 281, an additional information position acquiring section 282, an additional information acquiring section 283, and an information adding section 284. The additional information obtained by the additional information extraction unit 280 may be text data, an image, or a URI on the network at which such text data or images are stored.
When additional information is specified to be obtained in the image additional information 92 and the additional information is present in the image additional information 92, the additional information extracting unit 281 obtains the additional information from the image additional information 92.
When the image additional information 92 specifies that additional information is to be obtained and an address storing the additional information (referred to as an additional information address) is present in the image additional information 92, the additional information position acquisition unit 282 acquires the additional information address from the image additional information 92. Here, the additional information address may be a path name or a file name in a memory, not shown, of the image processing apparatus 200, a path name or a file name in the memory 60 connected to the image processing apparatus 200, or a URI (Uniform Resource Identifier) on the network 50 connected to the image processing apparatus 200.
The additional information acquiring unit 283 accesses the additional information address acquired by the additional information position acquiring unit 282 to acquire additional information.
The information adding unit 284 synthesizes the additional information obtained by the additional information extracting unit 281 or the additional information acquiring unit 283 with the image information 91 to generate a synthesized image.
The composite image generated by the information adding unit 284 is displayed, printed, or transmitted in accordance with the processing method specified by the image additional information 92.
The job decision unit 290 determines which of the plurality of processing components of the multi-job execution unit 250 is to run, in accordance with the processing method specified by the image additional information 92.
The image additional information extraction unit 230 extracts the image additional information 92 from the image read by the image reading unit 210, and includes a pattern-embedding extraction unit 231. For example, the image additional information extraction unit 230 acquires text data generated by OCR, reads a barcode or a QR code, or extracts a digital watermark embedded in the image information 91.
The pattern-embedding extracting unit 231 extracts information embedded in a dot pattern manner.
Here, the "pattern" is, for example, a dot pattern. "Pattern embedding", for example "dot pattern embedding", means that information is embedded by arranging the pattern (here, dots) according to a certain rule.
The image reading unit 210 reads an image of the recording information.
The image information extraction unit 220 reads the image information acquired by the image reading unit 210 and performs OCR processing on the read image to obtain text data.
The facsimile interface (FAX I/F) 49, the network interface (network I/F) 59, and the storage interface (HDD I/F) 69 are the interfaces of the Facsimile (FAX) 40, the network 50, and the storage device 60, respectively.
Operation process
Fig. 12 to 15 are flowcharts showing the operation of the image processing apparatus 200.
Operation flow of the image processing apparatus 200
Fig. 12 is a flowchart showing that the image processing apparatus 200 reads the recording information 90 and then performs certain processing.
In step S1000, the image reading unit 210 reads the image of the input recording information.
In step S2000, the image information extraction unit 220 reads in the image information 91 acquired by the image reading unit 210.
In step S3000, the image additional information extraction unit 230 extracts the image additional information 92 from the image read by the image reading unit 210. When the image additional information 92 is attached to the recorded information 90 in the form of a visual symbol such as a character or symbol, a barcode, a QR code, or a dot pattern, the visual symbol may be deleted from the image information 91.
In step S4000, the job decision unit 290 selects and decides the process executed by the multi-job execution unit 250 based on the image additional information 92.
In step S5000, the multi-job execution unit 250 executes processing based on the output of the job determination unit 290 and the image additional information 92.
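The five steps of fig. 12 form a simple pipeline; the sketch below wires them together with placeholder callables. Every name here is illustrative — the patent specifies only the order of the steps, not an API.

```python
def run_pipeline(recorded_information, units):
    """Execute the fig. 12 flow: read the image (S1000), extract the image
    information (S2000) and the image additional information (S3000), decide
    the job (S4000), then run the chosen job (S5000)."""
    image = units["read"](recorded_information)           # S1000
    info = units["extract_image"](image)                  # S2000
    extra = units["extract_additional"](image)            # S3000
    job = units["decide"](extra)                          # S4000
    return units["jobs"][job](info, extra)                # S5000

# Toy wiring: the "scan" is a dict and the only implemented job is facsimile.
units = {
    "read": lambda rec: rec,
    "extract_image": lambda img: img["image"],
    "extract_additional": lambda img: img["additional"],
    "decide": lambda extra: "fax" if extra.isdigit() else "none",
    "jobs": {"fax": lambda info, extra: f"fax {info} to {extra}",
             "none": lambda info, extra: "no job"},
}
result = run_pipeline({"image": "page1", "additional": "0312345678"}, units)
```

Swapping entries in `units` corresponds to the job decision unit 290 selecting a different processing component of the multi-job execution unit 250.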
Operation flow of the transmission information generation unit 260
Fig. 13A and 13B are flowcharts showing an operation of the transmission information generating unit 260 generating the transmission file from the image additional information 92.
Fig. 13A is a flowchart showing a case where there is transmission information in the image additional information 92.
Fig. 13B is a flowchart showing a case where the image additional information 92 includes a transmission information address.
As shown in fig. 13A, in step S5100, the image transmission information extraction unit 261 extracts the transmission information from the image additional information 92.
In step S5200, the transmission file generation section 264 generates a transmission file in the format corresponding to the transmission unit (facsimile, mail, or the like), based on the extracted transmission information and the image information 91. The transmission unit may be designated by the image additional information 92 or by an input unit not shown.
In step S5300, the transmission file generated in step S5200 is transmitted to the destination address specified by the transmission information through the facsimile interface (FAX I/F)49 or the network interface (network I/F) 59.
As shown in fig. 13B, in step S5110, the address acquisition section 262 obtains an address (transmission information address) instructing to store transmission information from the image additional information 92.
In step S5111, the image transmission information acquiring section 263 accesses the address acquired by the address acquiring section 262 and acquires transmission information such as the address of the transmission destination.
In step S5210, the transmission file generation section 264 generates a transmission file in the format corresponding to the transmission unit (facsimile, mail, or the like), based on the acquired transmission information and the image information 91. The transmission unit may be designated by the image additional information 92 or by an input unit not shown.
In step S5310, the transmission file generated in step S5210 is transmitted to the destination address specified by the transmission information via the facsimile interface (FAX I/F) 49 or the network interface (network I/F) 59.
Operation flow of the storage information generating unit 270
Fig. 14A and 14B are operation flowcharts showing the storage information generating unit 270 storing the image information 91 to a predetermined place based on the image additional information 92.
Fig. 14A is a flowchart showing a case where there is stored information in the image additional information 92.
Fig. 14B is a flowchart showing a case where the image additional information 92 includes a storage information address.
As shown in fig. 14A, in step S5120, the image stored information extracting unit 271 obtains stored information from the image additional information 92.
In step S5320, the image information 91 is stored to a predetermined place in the memory 60 connected to the image processing apparatus 200 through the memory interface (HDD I/F)69, based on the storage information obtained in step S5120. Alternatively, the image information 91 is stored in a predetermined place in a memory on the network 50 connected to the image processing apparatus 200 via a network interface (network I/F) 59.
As shown in fig. 14B, in step S5130, the address acquisition section 272 obtains the storage information address from the image additional information 92.
In step S5131, the image stored information acquiring unit 273 accesses the stored information address obtained by the address acquiring unit 272 and obtains the stored information.
In step S5321, the image information 91 is stored in a predetermined place in the memory 60 connected to the image processing apparatus 200 through the memory interface (HDD I/F)69, based on the storage information obtained in step S5131. Alternatively, the image information 91 is stored in a predetermined place in a memory on the network 50 connected to the image processing apparatus 200 via a network interface (network I/F) 59.
Operation flow of additional information extraction unit 280
Fig. 15A and 15B are flowcharts showing an operation of the additional information extracting unit 280 to acquire additional information from the image additional information 92.
Fig. 15A is a flowchart showing a case where additional information is included in the image additional information 92.
Fig. 15B is a flowchart showing a case where the additional information address is present in the image additional information 92.
As shown in fig. 15A, in step S5140, the additional information extraction unit 281 obtains additional information from the image additional information 92.
In step S5340, the information adding unit 284 adds the obtained additional information to the image information 91 to generate a composite image.
As shown in fig. 15B, in step S5150, the additional information position acquisition unit 282 obtains an additional information address from the image additional information 92.
In step S5151, the additional information acquisition unit 283 accesses the obtained additional information address to obtain additional information.
In step S5350, the information adding unit 284 adds the obtained additional information to the image information 91 to generate a composite image.
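Figs. 13 to 15 all repeat one pattern: use the value when it sits inline in the image additional information, otherwise follow the stored address and fetch it. A compact sketch of that shared pattern (all names assumed):

```python
def obtain(entry: dict, fetch):
    """Inline case (figs. 13A/14A/15A): the value itself is present in the
    image additional information. Indirection case (figs. 13B/14B/15B): only
    an address is present, so an acquiring section (263/273/283) must fetch
    the value from that address via `fetch`."""
    if "value" in entry:
        return entry["value"]
    return fetch(entry["address"])
```

The `fetch` callable stands in for an access over the storage or network interface; each of the three units would supply its own.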
Data structure of image additional information
Fig. 16A, 16B, and 16C are diagrams showing the data structure of the image additional information 92.
As shown in fig. 16A, the image additional information 92 is a sequence of variables, each having a certain byte length. The first variable in fig. 16A is "instruction", and the second and subsequent variables hold the information necessary for executing the instruction given by the "instruction" variable.
Fig. 16B shows the values of the "instruction" variable and their meanings, expressed in binary code or ASCII symbols.
For example, when the "instruction" variable is "1", it indicates that the image information 1 is to be transmitted by facsimile.
Fig. 16C shows the meaning of each data variable following the "instruction" variable.
For example, when facsimile transmission is instructed, the value of the "instruction" variable is "1", and the "data 1" variable is assigned to the facsimile number of the transmission destination.
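Assuming, purely for illustration, that each fig. 16A variable is a fixed 32-byte ASCII field, the record can be packed and parsed as below. Only the "instruction-first" layout and the code "1" = facsimile transmission come from the figures; the field width and padding are invented.

```python
FIELD = 32  # assumed fixed byte length of each variable

def pack_additional_information(*values: str) -> bytes:
    """Lay the variables out back to back, each padded to FIELD bytes."""
    return b"".join(v.encode("ascii").ljust(FIELD, b"\x00") for v in values)

def parse_additional_information(blob: bytes):
    """Split the record into the leading "instruction" variable and the
    data variables that follow it."""
    fields = [blob[i:i + FIELD].rstrip(b"\x00").decode("ascii")
              for i in range(0, len(blob), FIELD)]
    return fields[0], fields[1:]

# "1" = facsimile transmission; "data 1" = destination fax number (fig. 16C).
blob = pack_additional_information("1", "0312345678")
instruction, data = parse_additional_information(blob)
```

The same pack/parse pair extends to the other instruction codes by appending further data variables after the fax-number slot.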
Description language of image additional information
Fig. 17 shows an example of image additional information 92 expressed in a description language.
In fig. 17, a character string 921 indicates a mailbox address when sending a mail.
In figs. 16 and 17, the operation contents of the image processing apparatus 200 are explicitly stored in the image additional information 92, but the present embodiment is not limited thereto. As with the character string 921 in fig. 17, if the image additional information 92 contains a character string with "@", it may be determined that the character string is the destination address of a mail, and image transmission information for mail transmission may be generated.
If the image additional information 92 contains a path-like character string containing "/", it may be determined that the character string indicates a storage address of the image information 91, and image storage information for storing the image information 91 at that address can be generated.
Computer system for implementing image processing method
Fig. 18 shows a configuration diagram of a computer system for implementing the image processing method of the present embodiment.
As shown in fig. 18, the main processing section 3 of the computer system is connected to a Facsimile (FAX) 40, a network 50, and a storage device 60 through a facsimile interface (FAX I/F) 49, a network interface (network I/F) 59, and a storage interface (HDD I/F) 69, respectively. The main processing section 3 includes a CPU 4, a ROM 5, and a RAM 6. The CPU 4 reads out a program stored in the ROM 5 and executes it, thereby implementing the image processing method of the present embodiment. The CPU 4 also controls the RAM 6 and other external devices.
The ROM 5 stores the program that realizes the image processing method of the present embodiment, and the CPU 4 temporarily uses the RAM 6 when executing it. The program implementing the image processing method of the present embodiment may also be stored in other readable storage media, such as a hard disk, a CD-ROM, or a DVD.
Other constitution of the image processing apparatus of the present invention
The image processing apparatus of the present invention may have the following configuration.
1. An image processing apparatus, comprising:
an image reading unit which reads image information, an
And a processing unit that performs processing based on the read image information.
2. The image processing apparatus according to 1, wherein the processing unit includes
a multi-job execution unit, and
a job decision unit.
3. The image processing apparatus according to 1, further comprising
an image information extraction unit that reads image information from an input image, and
an image additional information extraction unit.
4. An image processing apparatus according to 3, wherein
The image additional information extraction unit includes a pattern-embedded information extraction unit that extracts pattern-embedded information from the image information.
5. The image processing apparatus according to 3 or 4, wherein
The multi-job execution unit includes an image transmission information extraction unit.
6. The image processing apparatus according to 3 or 4, further comprising
An identification information extraction unit for extracting the identification information,
wherein,
the multi-job execution unit includes an image transmission information extraction unit that operates in accordance with the identification information.
7. The image processing apparatus according to 5 or 6, wherein,
the multi-job execution unit includes a transmission file generation unit.
8. The image processing apparatus according to 3 or 4, wherein,
the multi-job execution unit includes an image storage information extraction unit.
13. The image processing method of the image processing apparatus according to 1.
25. An image processing system including the image processing apparatus according to any one of 1 to 7.
The above description is only for the preferred embodiment of the present invention, and the present invention is not limited to the above embodiment, and those skilled in the relevant art can make various modifications without departing from the scope of the present invention.

Claims (16)

1. An image processing apparatus comprising:
a reading unit configured to read an original containing image information and additional information related to the image information;
an extracting unit configured to extract the image information and the additional information from original image data obtained by reading the original; and
a processing unit configured to process the image information based on the additional information.
2. The image processing apparatus according to claim 1,
the additional information is constituted by a predetermined pattern,
the extraction unit separates the predetermined pattern from the original image data to extract the additional information.
3. The image processing apparatus according to claim 1 or 2,
the additional information is in a visible form, and
the processing unit deletes the additional information from the document image data.
4. The image processing apparatus according to claim 1 or 2,
the additional information contains transmission destination information, and
the processing unit transmits the image information extracted by the extracting unit or the image information processed by the processing unit to a transmission destination specified by the transmission destination information.
5. The image processing apparatus according to claim 1 or 2, further comprising:
a storage unit that stores the additional information and transmission destination information in association with each other, and
a transmission destination information extracting unit that extracts the transmission destination information from the storage unit, wherein
the processing unit transmits the image information or data related to the image information to a transmission destination specified by the transmission destination information.
6. The image processing apparatus according to claim 1 or 2,
the additional information contains a title of the image information.
7. The image processing apparatus according to claim 1 or 2,
the additional information contains information for masking the image information,
the processing unit masks the image information according to the additional information.
8. The image processing apparatus according to claim 1 or 2,
the additional information contains identification information for identifying a storage address of the related information related to the image information,
the image processing apparatus further includes a related information acquisition unit that acquires the related information based on the identification information,
the processing unit appends the related information to the image information and outputs it.
9. An image processing method comprising:
a reading step of reading an original containing image information and additional information related to the image information;
an extraction step of extracting the image information and the additional information from original image data obtained by reading the original; and
a processing step of processing the image information based on the additional information.
10. The image processing method according to claim 9,
the additional information is constituted by a predetermined pattern,
in the extracting, the predetermined pattern is separated from the original image data to extract the additional information.
11. The image processing method according to claim 9 or 10,
the additional information is in a visible form, and
in the processing step, the additional information is deleted from the original image data.
12. The image processing method according to claim 9 or 10,
the additional information contains transmission destination information, and
in the processing step, the extracted image information or the image information processed in the processing step is transmitted to a transmission destination specified by the transmission destination information.
13. The image processing method according to claim 9 or 10, further comprising:
a transmission destination information extracting step of extracting transmission destination information from a storage unit that stores the additional information and the transmission destination information in association with each other, wherein
in the processing step, the image information or the data related to the image information is transmitted to a transmission destination specified by the transmission destination information.
14. The image processing method according to claim 9 or 10, wherein
the additional information contains a title of the image information.
15. The image processing method according to claim 9 or 10, wherein
the additional information contains information for masking the image information, and
in the processing step, the image information is subjected to mask processing in accordance with the additional information.
16. The image processing method according to claim 9 or 10, wherein
the additional information contains identification information identifying a storage address of related information related to the image information,
the image processing method further includes a related information acquisition step of acquiring the related information based on the identification information, and
in the processing step, the related information is appended to the image information and output.
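The claimed method (claims 9–16) amounts to three stages: a reading step, an extraction step that separates the additional information from the scanned original image data, and a processing step that acts on the image according to that additional information (e.g. masking per claim 15 or routing to a transmission destination per claim 12). The following sketch is purely illustrative and is not part of the patent; every name, data structure, and the string-based stand-in for image data are assumptions introduced here, since the patent does not specify an implementation.

```python
# Hypothetical sketch of the claimed reading / extraction / processing steps.
# Image data is modeled as a string for illustration only; a real device
# would operate on scanned pixel data and a decoded dot pattern.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OriginalImageData:
    image_info: str                   # stand-in for the scanned image content
    additional_info: Optional[dict]   # stand-in for the decoded predetermined pattern

def reading_step(original: OriginalImageData) -> OriginalImageData:
    # Claim 9: read an original containing image information and
    # additional information related to that image information.
    return original

def extraction_step(data: OriginalImageData):
    # Claim 10: separate the predetermined pattern from the original
    # image data, yielding the image information and the additional information.
    return data.image_info, data.additional_info or {}

def processing_step(image_info: str, additional_info: dict):
    # Claim 15: mask the image information per the additional information.
    mask = additional_info.get("mask")
    if mask:
        image_info = image_info.replace(mask, "*" * len(mask))
    # Claim 12: the additional information may name a transmission destination.
    destination = additional_info.get("destination")
    return image_info, destination

original = OriginalImageData("confidential report",
                             {"mask": "confidential", "destination": "fax:1234"})
data = reading_step(original)
image, extra = extraction_step(data)
processed, dest = processing_step(image, extra)
print(processed)  # masked image information
print(dest)       # transmission destination from the additional information
```

The keys `"mask"` and `"destination"` are hypothetical labels chosen for this sketch; the patent only requires that the additional information be machine-separable from the image and that the processing step interpret it.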
CN 200610063642 2005-12-28 2006-12-28 Image processing device and image processing method Pending CN1992764A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2005380492 2005-12-28
JP2005380492 2005-12-28
JP2006348446 2006-12-25

Publications (1)

Publication Number Publication Date
CN1992764A true CN1992764A (en) 2007-07-04

Family

ID=38214694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200610063642 Pending CN1992764A (en) 2005-12-28 2006-12-28 Image processing device and image processing method

Country Status (1)

Country Link
CN (1) CN1992764A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635511A (en) * 2014-11-26 2016-06-01 株式会社东芝 Image forming apparatus and image forming method
CN112019940A (en) * 2020-09-08 2020-12-01 南京云照乐摄影有限公司 Image transmission system, image uploading device, cloud server and image receiving device


Similar Documents

Publication Publication Date Title
CN1184796C (en) Image processing method and equipment, image processing system and storage medium
CN1282068C (en) Device and its operation method for enabling printer to print page of desired design
CN1292381C (en) Image processing system
CN1114888C (en) Image processing method and device, image processing system,and memory medium
CN1918624A (en) Image transmission system and image transmission method
CN101030251A (en) Information processing apparatus, information processing method and computer readable medium
CN1885899A (en) Image combining apparatus, and control method and program therefor
CN1945523A (en) Image processing apparatus and method for controlling the same
CN1849813A (en) Printed matter processing system, watermark-containing document printing device, watermark-containing document read device, printed matter processing method, information read device, and information r
CN1719862A (en) Image processing system and image processing method
CN1612122A (en) Service provision device, service provision program, recording medium, and service provision method
CN1620094A (en) Image processing apparatus and method for converting image data to predetermined format
CN1859541A (en) Image processing apparatus and its control method
CN101038534A (en) Information processing apparatus and control method therefor
CN1893527A (en) Image data processing system
CN1893535A (en) Density determination method, image forming apparatus, and image processing system
CN1913573A (en) Image processing apparatus for image retrieval and control method therefor
CN1719864A (en) Image processing system and image processing method
CN1878232A (en) Image processing apparatus and method
CN101079940A (en) Multi-function peripheral and information acquisition system including a plurality of the multi-function peripherals
CN1684492A (en) Image dictionary creating apparatus, coding apparatus, image dictionary creating method
CN1812468A (en) Method of generating protected document image and apparatus of generating the same
CN1525733A (en) Boundary detection method between areas having different features in image data
CN1878222A (en) Image processing device and control method thereof
CN101039366A (en) Scan solution system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20070704