JPH07123244A - Picture processor - Google Patents

Picture processor

Info

Publication number
JPH07123244A
JPH07123244A JP6232460A JP23246094A
Authority
JP
Japan
Prior art keywords
color
image
means
information
color image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP6232460A
Other languages
Japanese (ja)
Other versions
JP3599795B2 (en)
Inventor
Haruko Kawakami
Hidekazu Sekizawa
Tadashi Yamamoto
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to JP5-219294
Priority to JP21929493
Application filed by Toshiba Corp
Priority to JP23246094A
Publication of JPH07123244A
Application granted
Publication of JP3599795B2
Anticipated expiration
Status: Expired - Lifetime

Abstract

(57) [Summary] (Correction) [Purpose] To provide an image processing device that embeds other information in image information without producing a visually uncomfortable impression or image degradation when the image information is printed. [Configuration] In a device for processing a color image, a data signal representing information different from the color image is generated by generating means 103. The image processing means 104 and 105 embed the other information in the color image by changing either the color difference or the saturation with the data signal, in such a way that the sum of the three primary color components of the color image does not change.

Description

Detailed Description of the Invention

[0001]

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to an image processing apparatus that exploits the redundancy contained in image information to embed other information in the image information without producing a visually uncomfortable impression, and to an image processing apparatus that extracts such embedded information from the image information.

[0002]

2. Description of the Related Art As a technique for superimposing text data on a color image for recording, the method of Nakamura, Matsui et al. is known ("Combining Encoding Method of Text Data into Image by Color Density Pattern", Proceedings of the 17th Annual Meeting of the Institute of Image Electronics Engineers of Japan, Vol. 4 (1988), pp. 194-198). This technique notes that image data contains a great deal of redundancy, and combines and superimposes other data, for example text data, onto the redundant portion of the image data using the color density pattern method. However, the color density pattern method generally has the drawback that the effective resolution becomes coarse, so a high-definition image cannot be expressed. In addition, it suffers from image-quality degradation such as color unevenness caused by variations in pixel arrangement due to the superimposed information.

On the other hand, as an application of dither image recording, which is capable of higher-definition display than the density pattern method, the method of Tanaka, Nakamura, and Matsui is known ("Embedding Character Information into a Compositional Dither Image by 2k-Element Vectors", Journal of the Institute of Image Electronics Engineers of Japan, Vol. 19, No. 5 (1990), pp. 337-343). This technique, too, has the drawback that image quality deteriorates when character information or the like is embedded. Furthermore, it cannot be applied to high-definition recording techniques such as the error diffusion recording method.

Further, with each of the above techniques, although it is possible in principle to extract character information or the like from an actually printed image, in practical general-purpose recording the dither pattern is not reproduced accurately on paper, and reading such information is difficult. It is therefore extremely difficult to read the embedded specific information from the print; all that is possible is to extract it from the image data that served as the basis for printing (transmitted information or data on a floppy disk). To read coded specific information such as character information from an actual recorded image using the above techniques, recording would have to be performed with an extremely high-precision printer capable of recording beyond the limit of human visual acuity, and reading with a correspondingly high-precision reader.

Further, in the above methods, because noise arises both during recording and during reading, it becomes difficult to read coded information such as character information separately from the image information. Furthermore, in color-recorded image information, even when the image is recorded by a high-precision device, the image dots of the respective colors overlap one another, so an accurate pixel shape is not formed. In this case, it is extremely difficult to read the image data of each color separately from the image information.

Further, Japanese Patent Laid-Open No. 4-294682 describes a technique for adding information to yellow ink. With this technique there is no problem when the original image consists only of pixels containing a yellow component. However, if other colors are included, there is no guarantee that simply adding yellow will keep the recorded information visually unnoticeable. Moreover, when no yellow component is present, for example in areas of pure cyan or pure magenta, the specific information cannot be added at all.

[0007]

In view of the above-mentioned problems, an object of the present invention is to provide an image processing device that embeds other information in image information in such a way that the output of the image information gives no visually uncomfortable impression and suffers no degradation in image quality.

Another object of the present invention is to provide an image processing apparatus that can easily extract the other information from image information in which it has been embedded, and that requires neither a recording device nor a reading device whose precision exceeds the limit of visual acuity.

[0009]

The first invention of the present invention is an image processing apparatus comprising: means for generating a data signal representing information different from a color image; and image processing means for embedding the other information in the color image by changing either the color difference or the saturation of the color image according to the data signal. This change in either the color difference or the saturation is performed so that the sum of the three primary color components of the color image is not altered by the processing.

Here, the change in the color difference direction is realized by the image processing means including means for converting the three primary color component signals of the color image into a luminance signal and first and second color difference signals, and means for embedding the other information in the first color difference signal. Preferably, the second color difference signal is a red-green color difference signal and the first color difference signal is a yellow-blue color difference signal.

Further, the change in the saturation direction can be obtained, for example, by the image processing means including means for converting the three primary color component signals of the color image into a luminance signal and first and second color difference signals, and means for embedding the other information in the saturation represented by the first and second color difference signals.

Further, the image processing means may embed the other information in the color image by changing the three primary color signals of the subtractive or additive color mixture of the color image according to the data signal. Preferably, this embedding is performed by means for converting the generated data signal into an amount of change in the color difference or saturation of the color image, and means for adding this amount of change to the color image.

Further, there may be provided means for recording on a recording medium the second color image, processed by the image processing means, in which the other information is embedded.

Further, the image processing means preferably includes means for detecting a high-frequency component of luminance from the color image, and means for adjusting the amount of the other information embedded according to the detected high-frequency component.

A second aspect of the present invention is an image processing device characterized by comprising: input means for inputting a second color image in which other information has been embedded by changing either the color difference or the saturation of a first color image with a data signal representing information different from the first color image; and extraction means for extracting the other information from the second color image input by the input means.

When the change is in the color difference direction, the extraction means preferably includes reading means for reading the input second color image, conversion means for converting the second color image read by the reading means into a luminance signal and first and second color difference signals, and separation means for separating and extracting the data signal from the first color difference signal converted by the conversion means. When the change is in the saturation direction, the extraction means preferably includes reading means for reading the input second color image, means for converting the second color image read by the reading means into a luminance signal and first and second color difference signals, and separation means for separating and extracting the data signal from the saturation represented by the first and second color difference signals converted by the conversion means.

Further, the extraction means may in some cases preferably include means for detecting duplicated copies of the second color image in the input second color image signal, and means for averaging the duplicated copies so detected. In some cases, the extraction means may also include means for applying band-pass processing of a predetermined frequency band to the input second color image.

A third aspect of the present invention is an image processing apparatus comprising: means for generating a data signal representing information different from a color image; and image processing means for embedding the other information in the color image by adding to the color image a striped pattern having a plurality of frequency components according to the data signal generated by the generating means. The resulting change in either the color difference or the saturation is performed so that the sum of the three primary color components is not altered by the processing.

The image processing means preferably includes means for arranging the plurality of frequency components forming the striped pattern on a plane, and means for adding the striped pattern to the color image based on the plurality of frequency components arranged on the plane.

Here, it is preferable that the arranging means includes means for arranging the frequency components so that their frequency increases with distance from a predetermined point on the plane. Further, the arranging means may include means for arranging corresponding dummy frequency components together with the plurality of frequency components on the plane.

The arranging means may arrange some of the plurality of frequency components concentrically on the plane, in circles or ellipses, or may arrange some of them in a grid pattern. When arranging them in a grid, a phase difference is given to some of the frequency components.

The arranging means may arrange on the plane a start bit indicating the start position of the plurality of frequency components, and may arrange some of the frequency components irregularly. Further, components of higher frequency may be given larger amplitudes.

Further, a band elimination filter may be provided to remove from the color image, before the striped pattern is added by the adding means, the band corresponding to the frequency band of the plurality of frequency components.

Further, the image processing means of the third invention makes the change in the color difference direction by, for example, including means for converting the three primary color component signals of the color image into a luminance signal and first and second color difference signals, and means for embedding the other information by adding the striped pattern to the first color difference signal. The change in the saturation direction is made by, for example, including means for converting the three primary color component signals of the color image into a luminance signal and first and second color difference signals, and means for embedding the other information in the saturation represented by the first and second color difference signals.

In an image processing apparatus that records on a recording medium a second color image processed by the image processing means of the third invention and having the other information embedded therein, and then extracts the other information from the second color image recorded on the recording medium, the extraction means preferably includes means for performing a Fourier transform on the second color image.

To read a change in the color difference direction, the extraction means preferably includes reading means for reading the input second color image, means for converting the second color image read by the reading means into a luminance signal and first and second color difference signals, and separation means for separating and extracting the data signal from the first color difference signal converted by the conversion means. To read a change in the saturation direction, the extraction means preferably includes reading means for reading the input second color image, means for converting the second color image read by the reading means into a luminance signal and first and second color difference signals, and separation means for separating and extracting the data signal from the saturation represented by the first and second color difference signals converted by the conversion means.

A fourth aspect of the present invention is an image processing apparatus comprising: means for generating a data signal representing information different from a black-and-white image; and image processing means for embedding the other information in the black-and-white image by changing the brightness of the black-and-white image according to the data signal.

A fifth aspect of the present invention is an image processing apparatus comprising: means for generating a data signal representing information different from character information; and information processing means for embedding the other information in an image of the character information by changing, according to the data signal, the arrangement intervals used when the character information is developed as an image.

[0029]

In general, the visual acuity limit for color difference and saturation information is lower than that for luminance information. In other words, vision is duller to fine, subtle changes in color difference and saturation than to changes in luminance. On the other hand, in color recording, a printer that records the density (a signal including luminance) of each color up to the visual acuity limit of luminance yields the highest image quality. (Recording beyond the visual acuity limit is unnecessary, since humans cannot see it.) When recording close to the luminance acuity limit, the color difference and saturation information at those frequencies is invisible to humans. The present invention was made by noting that invisible code information can be recorded by encoding the information and superimposing it on this invisible portion of the recording, that is, on the high-frequency color difference and saturation components. This enables recording without degradation in image quality.

That is, the present invention embeds information other than the image information in the color difference or saturation direction rather than in the luminance. At this time, it is also effective, in order to further reduce image-quality degradation, to vary the intensity of the embedded information according to the chromaticity and the rate of change of the input pixels.

Further, after an image obtained by such processing is recorded on a recording medium, the recorded image is read, the read signals are averaged and band-pass processed, and the result is converted into color difference and saturation information, from which the embedded other information is detected.

That is, since ordinary image information contains almost no energy in the frequency band where the color difference and saturation exceed the visual acuity limit, the other information embedded in the image can be separated and read with extremely high accuracy by converting to color difference and saturation information and applying band-pass processing.
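Because the embedded pattern lives in a narrow high-frequency band of the color difference signal while ordinary image content is concentrated at low frequencies, a band-pass step isolates it. The sketch below illustrates this idea with a crude difference-of-moving-averages filter; the window sizes and function names are illustrative assumptions, not taken from the patent.

```python
def moving_average(signal, window):
    """Boxcar smoothing; edge pixels use a truncated window."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - window // 2), min(len(signal), i + window // 2 + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def bandpass(signal, short_window=3, long_window=15):
    """Crude band-pass: the short-window smoothing suppresses noise above
    the band, and subtracting a long-window smoothing removes the slowly
    varying image content below it."""
    short = moving_average(signal, short_window)
    long_ = moving_average(signal, long_window)
    return [s - l for s, l in zip(short, long_)]
```

A flat region of the image (a constant signal) maps to all zeros, while a stripe whose period falls between the two windows passes through largely intact.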

[0033]

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS An embodiment of the present invention will be described below with reference to the drawings. FIG. 1 is a block diagram of an image processing apparatus according to a first embodiment of the present invention.

Color signals Y, M, and C, representing the densities of yellow, magenta, and cyan when a color image is recorded or printed, are supplied to an input system 101 comprising input terminals and the like. The first color signals Y, M, and C input to the input system 101 are supplied to the first conversion circuit 102.

The color signals representing the densities of Y, M, and C become Y, M, and C ink amount signals when the apparatus is used as a printer. The first conversion circuit 102 performs a first conversion process on the supplied first color signals Y, M, and C to generate a luminance signal I and two color difference signals C1 and C2. The luminance signal I is supplied directly to the second conversion circuit 106. Of the two color difference signals C1 and C2, the color difference signal C1 is supplied directly to the second conversion circuit 106, while the color difference signal C2 is supplied to the second conversion circuit 106 via the adder 105.

This embodiment also has a code generator 103. The code generator 103 holds information different from the image information to be embedded in the color image (hereinafter referred to as "specific information"), compresses and encodes it, and supplies the resulting code to the pattern generation circuit 104. Based on this code, the pattern generation circuit 104 supplies to the adder 105 a rectangular-wave pattern signal, as shown in FIG. 2A, corresponding to the "0"s and "1"s of the bit data forming the code. When this pattern signal is generated repeatedly over a plurality of lines, a striped pattern as shown in FIG. 2B results. When the width of the pattern signal is less than the length of one scanning line, the same pattern signal may be generated repeatedly in the main scanning direction.
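As a sketch of what the pattern generation circuit 104 produces, the following maps each code bit to a rectangular wave of amplitude ±α/2 and repeats the same row over many lines, yielding the vertical stripes of FIG. 2B. Function and parameter names are illustrative assumptions.

```python
def stripe_pattern(code_bits, period, width, height, alpha):
    """Rectangular-wave pattern: each code bit occupies `period` pixels
    along the main scanning direction, at level +alpha/2 for a '1' bit
    and -alpha/2 for a '0' bit. The same row is repeated over `height`
    lines, so the bits appear as vertical stripes."""
    row = [alpha / 2 if code_bits[(x // period) % len(code_bits)] else -alpha / 2
           for x in range(width)]
    return [list(row) for _ in range(height)]
```

With `code_bits=[1, 0]`, `period=2`, and `alpha=0.2`, each line reads `[0.1, 0.1, -0.1, -0.1, ...]`, i.e. two-pixel-wide stripes alternating about the zero level of the color difference signal.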

The adder 105 adds the pattern signal from the pattern generation circuit 104 to the color difference signal C2 from the first conversion circuit 102. The resulting signal CC2 is supplied to the second conversion circuit 106. From the luminance signal I and color difference signal C1 supplied by the first conversion circuit 102 and the signal CC2 from the adder 105, the second conversion circuit 106 performs a second conversion process, the inverse of the first, to generate second color signals Y', M', C' representing the densities of yellow, magenta, and cyan for recording or printing the color image in which the specific information is embedded. The second color signals Y', M', C' are supplied to the error diffusion processing circuit 107, which performs error diffusion processing on them to generate an error diffusion pattern. The generated error diffusion pattern is supplied to the output system 108. The output system 108 is, for example, a printer, color copier, or facsimile machine, and outputs, according to the supplied error diffusion pattern, the color image in which the pattern of the specific information has been embedded by the adder 105. The error diffusion processing circuit 107 need not necessarily be provided; in that case, the second color signals Y', M', C' output from the second conversion circuit 106 are supplied directly to the output system 108, which outputs a color image based on them.

Next, the operation of the first embodiment will be described.

The first color signals Y, M, C, corresponding to the ink amounts when printing a color image, are supplied from the input system 101 to the first conversion circuit 102. The values of the first color signals are defined so that Y = M = C = 0 when the color image is white and Y = M = C = 1 when it is black. The first color signals supplied from the input system 101 are converted by the first conversion circuit 102 into the luminance signal I and the color difference signals C1 and C2 according to the following equations.

I = 1 - (Y + M + C) / 3   (1)
C1 = M - C   (2)
C2 = Y - M   (3)
Here, I represents an amount corresponding to luminance, C1 represents a color difference in the cyan-red direction, and C2 represents a color difference in the blue-yellow direction. When the six solid primary colors are arranged in the C1-C2 coordinate system, the result is as shown in FIG. 21; from this figure, (Y - M) represents the yellow-blue direction and (M - C) the red-cyan direction.
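Equations (1) to (3) can be written directly as code. A minimal sketch (the function name is an assumption):

```python
def ymc_to_icc(y, m, c):
    """Equations (1)-(3): YMC ink densities (0 = white, 1 = solid) to a
    luminance signal I and color difference signals C1 (cyan-red) and
    C2 (blue-yellow)."""
    i = 1 - (y + m + c) / 3   # equation (1)
    c1 = m - c                # equation (2)
    c2 = y - m                # equation (3)
    return i, c1, c2
```

White (Y = M = C = 0) maps to I = 1 with zero color difference, and black (Y = M = C = 1) to I = 0, matching the sign convention above.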

The luminance signal I and the color difference signal C1 thus generated are supplied to the second conversion circuit 106, and the color difference signal C2 is supplied to the adder 105.

On the other hand, the specific information to be embedded in the image information is, for example, information about the output system 108, such as the date and time of printing and the maker name, model name, and machine number of the printer constituting the output system 108. If a printed matter is forged, the forger can be traced because information indicating which machine printed the material is hidden in the print, and as a result the deterrent against forgery is improved. The code generator 103 has a built-in clock generator for generating the printing date and time, and also has a memory in which the maker name, model name, and machine number are set in advance. The specific information is generated by the code generator 103 in the form of a code.

The specific information is configured, for example, by assigning, in order from the most significant bits, 17 bits to the date (displayed as 6 decimal digits), 11 bits to the time, 10 bits to the maker name, and 34 bits to the model name and machine number, for a total of 72 bits of data (corresponding to 9 bytes). The code generator 103 converts the specific information into code data of 9 bytes or less by compressing and encrypting it.
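The 72-bit layout above can be sketched as a simple bit-packing routine. The patent specifies only the field widths; the field order (most significant first) and all names below are assumptions for illustration, and the compression/encryption step is omitted.

```python
def pack_specific_info(date17, time11, maker10, model34):
    """Pack the specific-information fields into 72 bits (9 bytes):
    17-bit date, 11-bit time, 10-bit maker name, and 34-bit model name
    plus machine number, most significant field first."""
    assert 0 <= date17 < 2**17 and 0 <= time11 < 2**11
    assert 0 <= maker10 < 2**10 and 0 <= model34 < 2**34
    code = (date17 << 55) | (time11 << 44) | (maker10 << 34) | model34
    return code.to_bytes(9, "big")   # 17 + 11 + 10 + 34 = 72 bits
```

The shift amounts follow from the widths: the model/machine field occupies bits 0-33, the maker bits 34-43, the time bits 44-54, and the date bits 55-71.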

Based on the above code, the pattern generation circuit 104 supplies to the adder 105 a pattern signal consisting of a simple ON/OFF rectangular wave as shown in FIG. 2A.

The adder 105 superimposes this pattern signal on the blue-yellow color difference signal C2 from the first conversion circuit 102. The pattern signal is generated over a plurality of scanning lines, so that, as shown in FIG. 2B, a striped pattern in the Y - M color difference is superimposed on the color image. The pattern signal is superimposed so that the midpoint of its amplitude coincides with the 0 level of the color difference signal. Therefore, when the amplitude in FIG. 2A is ±α/2, the color difference signal CC2 on which the pattern signal is superimposed is expressed by the following equation.

CC2 = C2 ± α/2   (4)
Here the sign is + when the code bit is "1" and - when it is "0".

The pattern shown in FIG. 2B must not produce a visually uncomfortable impression when superimposed on a color image. The settings of the amplitude α and the pattern period τ (see FIG. 2A) must therefore be studied in consideration of the limits of human vision. In general, the smaller the amplitude of the pattern and the shorter its period, the less noticeable the pattern is to the human eye.

FIG. 3 is a graph showing the results of a survey of human gray-level discrimination ability when subjects observed sample output from a printer capable of high-resolution printing at 300 dpi, with gradation changes given in the luminance direction, the color difference (blue-yellow) direction, and the saturation direction. Frequency is plotted on the horizontal axis and gray-level discrimination ability on the vertical axis. As the graph makes clear, human gray-level discrimination is much lower for changes in the color difference (blue-yellow) direction than for changes in the luminance direction, and significantly lower still for changes in saturation than for changes in the color difference (blue-yellow) direction.

Further, as is clear from FIG. 3, sensitivity drops sharply in every case once the frequency exceeds 2 cycles/mm. That is, if the frequency of the above pattern is set above 2 cycles/mm, the number of visually distinguishable gradations is about 60 in the luminance direction and 20 or fewer in the color difference and saturation directions. For this reason, even if the amplitude α is made considerably large, there is little risk that the pattern will look unnatural to the human eye. Moreover, the larger the amplitude of the pattern, the less easily it is buried in noise, so the pattern can be extracted easily without using a sensor with a high S/N ratio. If the frequency of the embedded pattern is set to 3 cycles/mm or more, the pattern can be made even less visually perceptible. In this case, a printer capable of reproducing a frequency of 3 cycles/mm, that is, one capable of reproducing image dots at a resolution of 6 dots/mm (= about 150 dpi) or more, is sufficient. The printer thus need not be a high-precision printer: if it can reproduce an ordinary color image, no further high-definition recording capability is required.

The signal CC2 generated by the adder 105 is supplied to the second conversion circuit 106. Next, the luminance signal I, the color difference signal C1, and the signal CC2 are converted into the second color signals Y ′, M ′, C ′ by the second conversion circuit 106. Here, the conversion into the second color signal is performed according to the following equation.

Y' = 1 - I + (C1 + 2·CC2) / 3   (5)
M' = 1 - I + (C1 - CC2) / 3   (6)
C' = 1 - I - (2·C1 + CC2) / 3   (7)
In this way, the second conversion circuit 106 obtains the color signals Y', M', and C'.

Substituting I, C1, C2, and CC2 from equations (1) to (4) into equations (5) to (7) yields the following relationship.

Y + M + C = Y' + M' + C'   (8)
This shows that in the present invention the sum of the color signals, that is, the total ink amount, does not change between before and after the specific information is embedded.
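Equation (8) can be checked numerically: converting with equations (1)-(3), shifting C2 by α/2 as in equation (4), and inverting with equations (5)-(7) leaves the ink-amount sum unchanged. A sketch (function names assumed):

```python
def ymc_to_icc(y, m, c):
    # Forward conversion, equations (1)-(3)
    return 1 - (y + m + c) / 3, m - c, y - m

def icc_to_ymc(i, c1, cc2):
    # Inverse conversion, equations (5)-(7)
    y = 1 - i + (c1 + 2 * cc2) / 3
    m = 1 - i + (c1 - cc2) / 3
    c = 1 - i - (2 * c1 + cc2) / 3
    return y, m, c

y, m, c = 0.6, 0.3, 0.5
i, c1, c2 = ymc_to_icc(y, m, c)
y2, m2, c2_ = icc_to_ymc(i, c1, c2 + 0.1)   # embed a '1' bit, alpha/2 = 0.1
# Equation (8): the total ink amount is preserved despite the C2 shift.
assert abs((y + m + c) - (y2 + m2 + c2_)) < 1e-12
```

The preservation is exact algebraically: the CC2 terms of equations (5)-(7) sum to (2·CC2 - CC2 - CC2)/3 = 0, so any shift of C2 cancels in the total.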

When the output system 108 is a printer with a limited number of expressible gradations, pseudo-gradation expression using a multilevel error diffusion method is necessary. Accordingly, after the pattern corresponding to the specific information has been embedded in the image information and the color signals for printing have been obtained by the second conversion circuit 106, the error diffusion pattern is generated by the error diffusion processing circuit 107. When gradation is expressed using the error diffusion method in this way, the embedded specific information becomes even less visually distinguishable.

The output system 108 then outputs (prints) the image information in which the pattern corresponding to the specific information has been embedded.

Next, the process of reading the specific information output in the above procedure will be described.

The reading section of the image processing apparatus is provided with a scanner (not shown) for extracting the specific information from an image printed by the embedding processing section. This scanner is equipped with RGB (red, green, blue) color separation filters.

First, in order to stably separate the pattern of the embedded specific information from the image pattern and extract the specific information reliably, the image information read over a plurality of scanning lines is averaged. Here, 128 lines are read and averaged to obtain pixel data for one line. In this way, the complicated pattern appearing in the image, which is not uniform in the main scanning direction, is averaged out, while content repeated identically in the sub-scanning direction is reinforced, so the specific information can be detected at a high S/N ratio. In practice, however, it is almost impossible to make the scanning direction when reading the document 401 (FIG. 4) coincide exactly with the scanning direction used for recording; in most cases the reading direction is slightly inclined, and if the scanning-line direction deviates between recording and reading, the above averaging effect is lost. Therefore, as shown in FIG. 4, the document 401 is superposed on an auxiliary sheet 402 slightly larger than the document 401 and then read. In the case of a reflection-type scanner, the document 401 is placed on the document table and the auxiliary sheet 402 is stacked on it. When the document 401 is white, the auxiliary sheet 402 is black, and when the document 401 is black, the auxiliary sheet 402 is white. The auxiliary sheet 402 is arranged so that it is always read before the document in the main scanning direction. Thus, during scanning, the edge of the document 401 is identified by the black/white difference; since the position of the document edge is identified on every scan, the effect of the averaging process can be enhanced.
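The 128-line averaging step can be sketched as follows: stripe content that repeats identically on every line survives the average, while uncorrelated image detail is smoothed toward its mean. Names are illustrative.

```python
def average_lines(lines):
    """Average corresponding pixels across scan lines (the sub-scanning
    direction) to obtain one line of pixel data, raising the S/N of an
    embedded pattern that repeats identically on every line."""
    n = len(lines)
    width = len(lines[0])
    return [sum(line[x] for line in lines) / n for x in range(width)]
```

For the patent's case, `lines` would hold 128 read scan lines, already aligned to the document edge found via the auxiliary sheet.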

Next, the reading process of the specific information embedded in the above procedure will be described with reference to the flowchart of FIG.

First, the number of pixel samples WIDTH in the main scanning direction and the number of lines HIGHT in the sub-scanning direction are set (step A01). At this time, the number of samples WIDTH is set so that the reading range in the main scanning direction is smaller than the width of the document; the number of lines is set to, for example, HIGHT = 128. The count in the main scanning direction is n, and the count in the sub-scanning direction is m. First, m is set to "0" (step A02), and n is set to "0" (step A03). The sum Dn of the pixel values of the n-th pixel, described later, is set to "0" (step A04). It is determined whether n is equal to "WIDTH-1" (step A05). If NO, "1" is added to the current n (step A06) and step A04 is repeated. If YES, the process advances to step A07.

In step A07, the RGB signal for one pixel is taken in, and in step A08 the sum of R, G, and B is divided by 3 to obtain the average of the RGB signals, which gives the brightness data I0 of the pixel at n = 0 (that is, the 0th pixel). Next, n is set to "1" (step A09). As above, the RGB signal for one pixel is fetched (step A10), and the sum of R, G, and B is divided by 3 to obtain the average of the RGB signals, giving the brightness data In of the nth pixel (step A11).

Next, the difference ΔIn between the brightness data In of the nth pixel and the brightness data In-1 of the (n-1)th pixel is calculated (step A12). It is determined whether this ΔIn is larger than a preset threshold TH (step A13). If NO, "1" is added to the current n (step A14), and steps A10 to A12 are repeated. If YES, the process proceeds to step A15. Here, when the difference ΔIn = In − In-1 is regarded as a differential value, the nth pixel at the point where the differential value changes greatly, that is, where ΔIn exceeds the threshold TH, can be determined to be the left end of the document, and this pixel becomes the starting target actually used in the averaging. Note that until ΔIn exceeds the threshold TH, the starting target remains the first pixel.

In step A15, the RGB signal of the starting-target pixel is fetched. Next, the color difference DD between G and B (the color difference component in the G−B direction) is obtained (step A16). The obtained color difference DD is added to the running total Dn (initially Dn = 0) for that pixel, thereby updating the total Dn (step A17). It is determined whether n is equal to "WIDTH-1" (step A18). If NO, "1" is added to the current n (step A19) and steps A15 to A17 are repeated. If YES, the process proceeds to step A20. In step A20, it is determined whether m is equal to "HIGHT-1". If NO, "1" is added to the current m (step A21), and steps A03 to A19 are repeated. If YES, the process proceeds to step A22. As a result, the sum over all lines of the color differences DD for the nth pixel is obtained.

In step A22, the current n is set to "0". The current total Dn is divided by the number of lines "HIGHT" to obtain an average, and this average is redefined as Dn (step A23). It is determined whether n is equal to "WIDTH-1" (step A24). If NO, "1" is added to the current n (step A25), and step A23 is repeated. If YES, the process ends.
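
Steps A01 to A25 can be sketched in code as follows (a hedged reconstruction; the function name, the vectorized edge search, and the synthetic threshold are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

# Sketch of steps A01-A25: for each of HIGHT lines, locate the document's
# left edge at the first brightness jump above TH, then accumulate the G-B
# color difference per pixel column; finally divide the totals by HIGHT.
def average_color_difference(rgb_lines, width, hight=128, th=40.0):
    totals = np.zeros(width)
    for m in range(hight):
        line = np.asarray(rgb_lines[m], dtype=float)   # shape (pixels, 3)
        brightness = line.mean(axis=1)                 # (R + G + B) / 3
        jumps = np.diff(brightness)                    # delta-In = In - In-1
        above = np.flatnonzero(jumps > th)
        start = above[0] + 1 if above.size else 0      # left end of document
        window = line[start:start + width]
        totals += window[:, 1] - window[:, 2]          # color difference G - B
    return totals / hight                              # average per pixel
```

Re-detecting the edge on every line is what makes the averaging robust against the slight skew between recording and reading described above.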

In this way, the average color difference for each pixel is obtained.

After that, in order to extract the frequency component of the specific-information pattern, the obtained average color differences (the average value data) are filtered by a bandpass filter. When the image information is averaged it collapses toward a frequency component centered on the DC component, while the specific-information pattern remains a high-frequency component. Therefore, by removing the DC component, that is, the averaged image information, with the bandpass filter, only the specific information embedded in the image information can be extracted. If the added frequency can be extracted, a high-pass filter that removes the DC component may be used instead.
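
A minimal sketch of the DC removal (an FFT-based realization, which is one possible filter; the patent does not specify the filter implementation):

```python
import numpy as np

# Remove the DC (averaged base-image) component from the averaged data;
# the high-frequency embedded pattern is what survives the filter.
def remove_dc(avg_data, cutoff_bins=1):
    spec = np.fft.rfft(np.asarray(avg_data, dtype=float))
    spec[:cutoff_bins] = 0          # zero the DC (and optionally near-DC) bins
    return np.fft.irfft(spec, n=len(avg_data))
```

Raising `cutoff_bins` turns this high-pass sketch into a crude bandpass front edge, matching the text's remark that either filter type can serve once the added frequency is known.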

The resolution of the scanner is sufficient as long as it can read the printed document in units of one image point. Therefore, if the scanner can reproduce a normal image, the specific information can be easily extracted by the above procedure.

Next, an example of the embedding location when the first embodiment of the present invention is applied to a photo ID card or the like will be described. When the specific information is embedded in an ID card, it is desirable that a part of the embedded specific information overlaps the photograph, as shown in FIG. 20(a). This is so that if a third party replaces the photograph to forge the ID card, the forgery can be discovered. Note that the range in which the specific information is embedded is not limited to that shown; variations such as FIGS. 20(b) to 20(d) are also included.

This restriction of the embedding location on an ID card or the like is not limited to the first embodiment, and can also be applied to the second to fifth embodiments described later.

The specific information to be embedded has at most about 20 digits (for example, a general credit card number, which has the largest number of digits among identification cards, is 16 digits, and a PIN is 4 digits); that is, a data capacity of about 65 bits is required, so the 72 bits of this example are a sufficient capacity. Further, if the embedding position of the pattern is itself treated as part of the specific information, still more information can be recorded.

As described above, according to the first embodiment of the present invention, more specific information can be embedded in a smaller area without causing a visually uncomfortable feeling. Also, the specific information can be easily extracted.

In the first embodiment, the specific information may also be embedded directly in the color signals without using the first and second conversion circuits. That is, by substituting equations (1) to (4) into I, C1, C2 and CC2 in equations (5) to (7), the following relationships are obtained, so the second color signals Y′, M′, C′ may be obtained from the first color signals Y, M, C so as to satisfy these relationships:

Y′ = Y + α/3 … (9)
M′ = M − α/6 … (10)
C′ = C − α/6 … (11)

The signs +, −, − in equations (9) to (11) represent the case where the data is "0"; when the data is "1", the signs are −, +, + respectively.
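
Equations (9) to (11) can be sketched as follows (the function name is illustrative; the sign convention follows the text: +, −, − for data "0" and −, +, + for data "1"):

```python
# Equations (9)-(11): embed one data bit directly in the Y, M, C signals.
# The Y change (+a/3) is balanced by the M and C changes (-a/6 each), so
# Y + M + C, and hence the luminance, is unchanged by the embedding.
def embed_bit(y, m, c, alpha, bit):
    s = -1.0 if bit == 1 else 1.0   # data "0": +, -, -; data "1": -, +, +
    return (y + s * alpha / 3.0,
            m - s * alpha / 6.0,
            c - s * alpha / 6.0)
```

This is why the direct form needs no conversion circuits: the luminance constraint of the first embodiment is built into the coefficients.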

FIG. 22 shows the configuration of an apparatus that adds the information directly to the color signals. First, as in the first embodiment, the specific information is generated by the code generator 2202, and the pattern generation circuit 2203 generates a rectangular pattern. At this time, the amplitude given in the color difference direction is ±α/2. This is input to the signal conversion circuit 2204 and converted into a form that can be superimposed directly on the color signals. For example, if the fluctuation amounts given to Y, M, and C when the color difference direction is modulated are DY, DM, and DC, each can be expressed by the following equations:

DY = +(−) α/3
DM = −(+) α/6
DC = −(+) α/6

The values obtained by the above equations are supplied to the adder 2205, and the color signals Y′, M′, C′ containing the additional information are obtained. It is also possible to give the function of the present invention to a general printer, copying machine, or the like by collecting the code generation/addition unit 2207, which performs the above steps 2202 to 2205, in an external ROM or board and inserting it into such a machine.

Since 128 lines are averaged during reading, a different pattern can be embedded for every 128 lines.

Another embodiment of the present invention will be described below. In other embodiments, the same parts are designated by the same reference numerals and detailed description thereof will be omitted.

In the first embodiment, when the specific information was embedded, a gradation change was given in the color difference direction. As described with reference to FIG. 3, the advantage of this was that human visual sensitivity is lower for gradation changes in the color difference (blue-yellow) direction than for gradation changes in the luminance direction, so the specific information could be embedded without causing a sense of discomfort. It has been found, however, that the sensitivity is lower still for gradation changes in the saturation direction than for gradation changes in the color difference direction. Therefore, a second embodiment will be described next in which the specific information is embedded by giving a gradation change in the saturation direction instead of the color difference direction.

FIG. 6 is a block diagram showing an embedding processing unit in the image processing apparatus according to the second embodiment of the present invention.

As shown in FIG. 6, the embedding processing section is provided with an input system 601. From the input system 601, the first color signals Y, M, C corresponding to a color image are supplied to the first conversion circuit 602. The first conversion circuit 602 converts the first color signals Y, M, C supplied from the input system 601 and generates a luminance signal I and two color difference signals C1 and C2. The configuration up to this point is the same as that of the first embodiment. The luminance signal I is supplied to the second conversion circuit 607 and the pattern generation circuit 606. The color difference signal C1 is supplied to the first adder 603 and the pattern generation circuit 606. The color difference signal C2 is supplied to the second adder 604 and the pattern generation circuit 606.

A code generator 605 is also provided in this embedding section, as in the first embodiment. The code generator 605 stores the specific information to be embedded in the color image, generates the specific information in the form of a code, and supplies it to the pattern generation circuit 606. The pattern generation circuit 606 generates two rectangular wave pattern signals based on the code supplied from the code generator 605 and the luminance signal I and color difference signals C1 and C2 supplied from the first conversion circuit 602, and supplies them to the first adder 603 and the second adder 604. In the process of generating the pattern signals, the saturation of the image is calculated.

The first adder 603 adds (or subtracts) the pattern signal from the pattern generation circuit 606 to the color difference signal C1 from the first conversion circuit 602. The signal CC1 obtained as the result is supplied to the second conversion circuit 607. The second adder 604 adds (or subtracts) the pattern signal from the pattern generation circuit 606 to the color difference signal C2 from the first conversion circuit 602. The signal CC2 obtained as the result is supplied to the second conversion circuit 607. The second conversion circuit 607 performs conversion based on the luminance signal I from the first conversion circuit 602, the signal CC1 from the adder 603, and the signal CC2 from the adder 604 to generate the second color signals Y′, M′, C′. The second color signals Y′, M′, C′ are supplied to the error diffusion processing circuit 608. The error diffusion processing circuit 608 performs error diffusion processing on the supplied second color signals Y′, M′, C′ to generate an error diffusion pattern. The generated error diffusion pattern is supplied to the output system 609. The output system 609 is, for example, a printer, and outputs an image according to the supplied error diffusion pattern. It is also possible to configure the system without the error diffusion processing circuit 608; in this case, the second color signals Y′, M′, C′ from the second conversion circuit 607 are supplied directly to the output system 609, which then outputs an image corresponding to the second color signals Y′, M′, C′.

Next, the operation of the second embodiment will be described.

First, as in the first embodiment, the first color signals Y, M, C corresponding to the color image are supplied from the input system 601 to the first conversion circuit 602. In the first conversion circuit 602, the first color signals Y, M, C supplied from the input system 601 are converted into the luminance signal I and the color difference signals C1, C2 according to equations (1) to (3) described in the first embodiment. The luminance signal I and the color difference signals C1 and C2 are supplied from the first conversion circuit 602 to the pattern generation circuit 606.

On the other hand, in the code generator 605, the specific information is generated in the form of a code and supplied to the pattern generation circuit 606. Next, the pattern generation circuit 606 generates pattern signals in the two color difference directions based on the code. The generated pattern signals are added to the color difference signal C1 in the first adder 603 and to the color difference signal C2 in the second adder 604. In this case, the pattern generation circuit 606 embeds a certain amount of specific information having the same direction as the vector formed by the color difference signals C1 and C2. That is, assuming that the amount (amplitude) of the specific information to be embedded is ±α/2, the signals CC1 and CC2 after the pattern signals are added to the color difference signals C1 and C2 are expressed by the following equations.

CC1 = C1 ± α·C1/(2Cc) … (12)
CC2 = C2 ± α·C2/(2Cc) … (13)

where Cc represents the saturation of the input image, calculated by the following equation:

Cc = SQRT{(C1)² + (C2)²} … (14)

The procedure for obtaining the color signals Y′, M′, C′ to be supplied to the output system is then the same as in the first embodiment.
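
Equations (12) to (14) can be sketched as follows (an illustrative function of my own naming; it assumes Cc is nonzero, i.e. the monochrome case handled below is excluded):

```python
import math

# Equations (12)-(14): modulate the color difference vector (C1, C2) along
# its own direction, i.e. in saturation, by +/- alpha/2. The hue (the
# direction of the vector) is unchanged; only its length Cc changes.
def embed_saturation(c1, c2, alpha, bit):
    cc = math.hypot(c1, c2)          # saturation Cc = SQRT{(C1)^2 + (C2)^2}
    s = -1.0 if bit == 1 else 1.0
    return (c1 + s * alpha * c1 / (2.0 * cc),
            c2 + s * alpha * c2 / (2.0 * cc))
```

Because both components are scaled by the same factor, the saturation changes by exactly ±α/2 regardless of the hue of the input pixel.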

When the input color image is a solid monochrome image, the color difference signals C1 and C2 are both 0, so the saturation Cc is also 0, and for most image points in the screen the color difference direction cannot be determined. It therefore becomes difficult to embed the specific information. Accordingly, when both color difference signals C1 and C2 stay within a certain range and the input image can be regarded as a monochrome image, the process is switched to embedding the specific information in the Y−M color difference direction. That is, the distribution of the saturation Cc over the screen is obtained, and if the range covered by the distribution falls within a preset value, the color difference signal C1 is left unchanged and only the color difference signal C2 is changed. The signal CC2 after adding the pattern signal to the color difference signal C2 is then expressed by the following equation.

CC2 = C2 ± α/2 … (15)

This is the same as the processing in the first embodiment.

Alternatively, if both C1 and C2 stay within a certain range and the input image is regarded as a monochrome image, it is also possible simply not to embed the specific information.

Further, in the vicinity of achromatic colors the human eye is sometimes more sensitive. Therefore, if the specific information is not embedded in the vicinity of achromatic colors in particular, it becomes harder for human eyes to discern it.

The setting of the amplitude and the period of the specific information to be embedded needs to be examined in consideration of the human visual limit. In this case, the smaller the amplitude of the appearing pattern and the shorter its cycle, the less noticeable the pattern is to the human eye.

Further, as is clear from FIG. 3 described in the first embodiment, if the period is shortened, there is no danger of being discerned by human eyes even if the amplitude is considerably increased. Further, since the amplitude of the pattern itself is large, there is little possibility that it will be buried in noise. Therefore, the pattern can be easily extracted without using a sensor having a high SN ratio.

The signal CC1 generated by the adder 603 is supplied to the second conversion circuit 607, and the signal CC2 generated by the adder 604 is likewise supplied to the second conversion circuit 607. Next, the luminance signal I and the signals CC1 and CC2 are converted into the second color signals Y′, M′, C′ by the second conversion circuit 607. The conversion in this case is performed according to equations (5) to (7) described in the first embodiment, with C1 in equations (5) to (7) replaced by CC1.

In this way, an image in which the specific information is embedded in the image information is obtained.

The second color signals Y′, M′, C′ obtained above are supplied to the error diffusion processing circuit 608, where an error diffusion pattern is generated.

In the output system 609, as shown in FIG. 2B, 9-byte data corresponding to the specific information is repeatedly embedded in the main scanning direction, and exactly the same pattern is repeatedly embedded in the sub-scanning direction. In this way, the specific information is embedded in the image information and printed.

Here, a technique capable of embedding more specific information will be described. In this technique, the amount of specific information to be embedded is controlled so as to change according to the chromaticity of the input image.

FIG. 7 is a schematic diagram showing the distribution of the results of an examination, using test subjects, of sensitivity by chromaticity to patterns of the same period. In FIG. 7, the horizontal axis represents color difference and the vertical axis represents luminance; the lighter an area, the higher the sensitivity. From the figure it can be seen that when a pattern is embedded in a color portion having a small color difference and intermediate luminance, the pattern is easily discerned by human eyes. Therefore, the control must be such that for colors in the highly sensitive chromaticity region (corresponding to the unfilled area in the figure) the pattern is not embedded, or its amplitude is kept small, and the amplitude of the embedded pattern is increased as the sensitivity decreases.

To realize this, in the block diagram of FIG. 6, a memory (not shown) storing the amplitude coefficient that determines the addition amount of the pattern signal is provided inside the pattern generator 606. The pattern generator 606 fetches an appropriate amplitude coefficient from the memory according to the luminance signal I and the color difference signals C1 and C2 supplied from the first conversion circuit 602; in this case, for example, an LUT (Look-Up Table) is referred to. The pattern generator 606 then changes the amplitude of the pattern signal to be added to each of the color difference signals C1 and C2 according to the fetched amplitude coefficient. That is, the pattern signal is generated in the pattern generator 606 so that in a highly sensitive region, such as the vicinity of an achromatic color, the pattern signal is not added or its amplitude is suppressed. The generated pattern signals are then added to the color difference signals C1 and C2 in the adders 603 and 604, respectively. When the amplitude coefficient is β, the color difference signals CC1 and CC2 are expressed as follows:

CC1 = C1 ± αβC1/(2Cc) … (16)
CC2 = C2 ± αβC2/(2Cc) … (17)

In this way, it becomes still more difficult to identify the pattern visually, and it becomes possible to embed more specific information.

Next, the process of reading the specific information output in the above procedure will be described.

The reading section of the present system is provided with a scanner (not shown) for reading the specific information from the image printed by the processing of the embedding processing section. The scanner is equipped with RGB (Red, Green, Blue) color separation filters.

The procedure for reading the specific information is basically the same as in the first embodiment, with the following differences (see FIG. 5). In the first embodiment, in step A16 the color difference DD between G and B (the color difference component in the G−B direction) is calculated. In the second embodiment, in step A16 the saturation DD is instead obtained by calculating SQRT{(G−B)² + (R−G)²}.

In the first embodiment, the color difference DD is added to the total Dn in step A17. In the second embodiment, in step A17 the saturation DD is added to the total Dn. The procedure other than the above is the same as in the first embodiment. As a result, the average saturation of each pixel is obtained.
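
The modified step A16 is a one-line computation (the function name is illustrative):

```python
import math

# Step A16 of the second embodiment: instead of the G-B color difference,
# the saturation DD = SQRT{(G-B)^2 + (R-G)^2} is accumulated per pixel.
def saturation_dd(r, g, b):
    return math.sqrt((g - b) ** 2 + (r - g) ** 2)
```

An achromatic pixel (R = G = B) yields DD = 0, which is consistent with the saturation-direction modulation leaving gray areas untouched.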

Thereafter, in order to extract the frequency component of the pattern, the obtained averages (the average value data) are filtered by a bandpass filter. As a result, the DC component, that is, the averaged image information of the base image, is removed, and only the embedded specific information is extracted.

The resolution of the scanner is sufficient as long as it can read the printed document in units of one image point. Therefore, if there is a scanner capable of reproducing a normal image, the specific information can be easily extracted by the above procedure.

As described above, according to the second embodiment, it is possible to make it more difficult to visually identify and embed more specific information as compared with the case of the first embodiment. Also, the specific information can be easily extracted.

In the second embodiment, the specific information may also be embedded directly in the color signals without using the first and second conversion circuits. That is, since the following relationships are obtained from equations (5) to (7) and (1) to (4), the second color signals Y′, M′, C′ may be obtained directly from the first color signals Y, M, C. In this case, C1 and C2 in equations (5) to (7) are replaced by CC1 and CC2 in the calculation:

Y′ = Y ± α(2Y−M−C) / (6·SQRT{(M−C)² + (Y−M)²}) … (18)
M′ = M ± α(2M−C−Y) / (6·SQRT{(M−C)² + (Y−M)²}) … (19)
C′ = C ± α(2C−Y−M) / (6·SQRT{(M−C)² + (Y−M)²}) … (20)

Described using the configuration of FIG. 22 above, this corresponds to the signal conversion circuit 2204 obtaining the following fluctuation amounts:

DY = ±α(2Y−M−C) / (6·SQRT{(M−C)² + (Y−M)²})
DM = ±α(2M−C−Y) / (6·SQRT{(M−C)² + (Y−M)²})
DC = ±α(2C−Y−M) / (6·SQRT{(M−C)² + (Y−M)²})

Next, a third embodiment will be described.
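
Equations (18) to (20) can be sketched as follows (an illustrative function; it assumes the pixel is not achromatic, so the denominator is nonzero):

```python
import math

# Equations (18)-(20): direct Y, M, C embedding for the saturation-direction
# modulation. The numerators 2Y-M-C, 2M-C-Y and 2C-Y-M sum to zero, so the
# three changes DY, DM, DC cancel and Y + M + C (the luminance term) is kept.
def embed_direct_saturation(y, m, c, alpha, bit):
    cc = math.sqrt((m - c) ** 2 + (y - m) ** 2)   # SQRT{(M-C)^2 + (Y-M)^2}
    s = -1.0 if bit == 1 else 1.0
    dy = s * alpha * (2 * y - m - c) / (6 * cc)
    dm = s * alpha * (2 * m - c - y) / (6 * cc)
    dc = s * alpha * (2 * c - y - m) / (6 * cc)
    return y + dy, m + dm, c + dc
```

This is exactly the result of pushing the ±α·Ci/(2Cc) modulation of equations (12) and (13) through equations (5) to (7), which is why the two embedding paths are interchangeable.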

Generally, in a portion of an image where the density change is flat, even a slight change is conspicuous, whereas in a portion where the density change is severe, a small change is not visually conspicuous. This embodiment exploits this characteristic: the embedding of the specific information is strengthened in portions where the density change is large, and weakened in flat portions.

FIG. 8 is a block diagram showing an embedding processing unit in the image processing apparatus according to the third embodiment of the present invention.

As shown in FIG. 8, an input system 801 is provided in the embedding processing section. The first color signals Y, M, C, K (black) corresponding to a color image are supplied from the input system 801 to the first conversion circuit 802. The first conversion circuit 802 performs conversion based on the first color signals Y, M, C, K supplied from the input system 801 to generate a luminance signal I and two color difference signals C1 and C2. The luminance signal I is supplied to the second conversion circuit 809, the high-frequency extraction circuit 807, and the pattern generation circuit 806. The color difference signal C1 is supplied to the first adder 803 and the pattern generation circuit 806. The color difference signal C2 is supplied to the second adder 804 and the pattern generation circuit 806.

A code generator 805 is provided in the embedding processor. The code generator 805 stores the specific information to be embedded in the color image, generates the specific information in the form of a code, and supplies it to the pattern generation circuit 806. The pattern generation circuit 806 generates a rectangular wave pattern signal, as shown in (a), based on the code supplied from the code generator 805 and the luminance signal I and color difference signals C1 and C2 supplied from the first conversion circuit 802, and supplies it to the multipliers 808a and 808b. The high-frequency extraction circuit 807 performs well-known high-frequency component extraction processing on the luminance signal I supplied from the first conversion circuit 802, obtains, using an LUT or the like, a coefficient k that determines the amplitude of the pattern signal according to the strength of the high-frequency component, and supplies k to the multipliers 808a and 808b. The multipliers 808a and 808b multiply the pattern signals from the pattern generation circuit 806 by the coefficient k from the high-frequency extraction circuit 807, and supply the outputs to the first adder 803 and the second adder 804, respectively.

The first adder 803 adds (or subtracts) the signal from the multiplier 808a to the color difference signal C1 from the first conversion circuit 802. The signal CC1 obtained as the result is supplied to the second conversion circuit 809. The second adder 804 adds (or subtracts) the signal from the multiplier 808b to the color difference signal C2 from the first conversion circuit 802. The signal CC2 obtained as the result is supplied to the second conversion circuit 809. The second conversion circuit 809 performs conversion based on the luminance signal I from the first conversion circuit 802, the signal CC1 from the adder 803, and the signal CC2 from the adder 804 to generate the second color signals Y′, M′, C′, K′. The second color signals Y′, M′, C′, K′ are supplied to the error diffusion processing circuit 810. The error diffusion processing circuit 810 performs error diffusion processing on the supplied second color signals Y′, M′, C′, K′ to generate an error diffusion pattern. The generated error diffusion pattern is supplied to the output system 811. The output system 811 is, for example, a printer, and outputs an image according to the supplied error diffusion pattern.

Next, the operation of the third embodiment will be described.

First, the first color signals Y, M, C, K corresponding to a color image are supplied from the input system 801 to the first conversion circuit 802. In the first conversion circuit 802, the first color signals Y, M, C, K supplied from the input system 801 are converted into a luminance signal I and color difference signals C1 and C2. The luminance signal I and the color difference signals C1, C2 are supplied from the first conversion circuit 802 to the pattern generation circuit 806.

The conversion equations corresponding to equations (1) to (3) in this embodiment are:

I = 1 − ((Y + M + C)/3 + K)
C1 = M − C
C2 = Y − M

and the conversion equations corresponding to equations (5) to (7) are:

Y′ = 1 − (I + K) + (CC1 + 2CC2)/3
M′ = 1 − (I + K) + (CC1 − CC2)/3
C′ = 1 − (I + K) − (2CC1 + CC2)/3
K′ = K

That is, although the amount of black (K) affects the luminance signal, it is not directly related to the color difference signals C1 and C2, and the above equations are used so that the luminance is not changed before and after the conversion, as in the present invention.
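
The forward and inverse conversions above can be sketched and round-trip checked as follows (illustrative function names; with CC1 = C1 and CC2 = C2, i.e. no pattern added, the inverse must reproduce the original signals):

```python
# Conversion equations of the third (Y, M, C, K) embodiment. K affects the
# luminance I but not the color differences, and it passes through unchanged.
def ymck_to_icc(y, m, c, k):
    i = 1 - ((y + m + c) / 3 + k)
    return i, m - c, y - m            # I, C1, C2

def icc_to_ymck(i, cc1, cc2, k):
    base = 1 - (i + k)                # equals (Y + M + C) / 3
    y = base + (cc1 + 2 * cc2) / 3
    m = base + (cc1 - cc2) / 3
    c = base - (2 * cc1 + cc2) / 3
    return y, m, c, k
```

The round trip confirms the pair is a true inverse, so any change injected into CC1, CC2 between the two calls alters only the color differences, never I or K.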

On the other hand, the code generator 805 generates the specific information in the form of a code and supplies it to the pattern generation circuit 806. Next, the pattern generation circuit 806 generates two pattern signals based on the code. In this case, in the pattern generation circuit 806, a certain amount of specific information having the same direction as the vector formed by the color difference signals C1 and C2 is embedded. The conversion relationship of the color difference signals before and after embedding the specific information is the same as in equations (12) to (14) described in the second embodiment. As in the second embodiment, the pattern generation circuit 806 is provided with a memory (not shown) storing the amplitude coefficient that determines the addition amount of the pattern signal. The pattern generator 806 fetches an appropriate amplitude coefficient from the memory according to the luminance signal I and the color difference signals C1 and C2 supplied from the first conversion circuit 802; in this case, for example, the LUT is referred to. The pattern generator 806 then changes the amplitude of the pattern signal to be added to the color difference signals C1 and C2 according to the fetched amplitude coefficient. That is, the pattern signal is generated by the pattern generator 806 so that in a highly sensitive region, such as near an achromatic color, the pattern signal is not added or its amplitude is suppressed.

The amplitude of the generated pattern signal is further controlled in the multipliers 808a and 808b by the coefficient k from the high-frequency extraction circuit 807, and the results are supplied to the first adder 803 and the second adder 804. In this case, for example, where only a small high-frequency component is extracted, the coefficient k suppresses the amplitude. The pattern signal after multiplication is added to the color difference signal C1 in the first adder 803 and to the color difference signal C2 in the second adder 804. Then, in the second conversion circuit 809, the color signals Y′, M′, C′, K′ to be supplied to the output system are obtained. After that, the error diffusion processing circuit 810 performs pseudo-halftone processing and outputs the result to the output system 811.

The procedure for reading the specific information is the same as in the case of the first embodiment.

As described above, in the third embodiment, the amplitude of the embedded pattern is increased in portions where the high-frequency component is large and the image changes frequently, and reduced in portions where the high-frequency component is small and the change is small. As a result, compared with the second embodiment, it is still more difficult to identify the pattern visually, and more specific information can be embedded. Also, the specific information can be easily extracted.

It is not always necessary to change the amplitude according to the visual sensitivity in this embodiment.

In the third embodiment, to embed the specific information directly in the color signals without using the first and second conversion circuits, the second color signals Y′, M′, C′, K′ may be obtained from the first color signals Y, M, C, K so as to satisfy the following expressions. As in the first and second embodiments described with reference to FIG. 22, the signal conversion circuit 2204 obtains the change amounts in the following equations, and these change amounts are added to the color signals from the input system 2201.

Y′ = Y + (−)α/3
M′ = M − (+)α/6
C′ = C − (+)α/6
K′ = K

In the first to third embodiments, exactly the same information is embedded along the sub-scanning direction; since 128 lines are averaged, the amount of information may be increased by embedding different information every 128 lines. Furthermore, the unit of one piece of specific information is not limited to 7 bytes; any unit length may be used.
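As a rough sketch (not from the patent itself), the change amounts above can be applied per pixel as follows; the helper name and the amplitude value are illustrative assumptions. Note that the changes cancel in the sum Y + M + C, so the push is confined to the color-difference direction:

```python
def embed_pixel(y, m, c, k, alpha, positive):
    """Apply the change amounts Y' = Y +/- alpha/3, M' = M -/+ alpha/6,
    C' = C -/+ alpha/6, K' = K for one pixel. Signal values are assumed
    to lie in the 0..1 range used elsewhere in the description.
    `positive` selects the sign of the embedded pattern at this pixel."""
    s = 1.0 if positive else -1.0
    return (y + s * alpha / 3.0,
            m - s * alpha / 6.0,
            c - s * alpha / 6.0,
            k)  # K is left unchanged
```

Since alpha/3 − alpha/6 − alpha/6 = 0, the average ink density of the Y, M, C channels is preserved, which is why the pattern stays out of the luminance direction.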

Further, the pattern-signal amplitude control according to the visual sensitivity in the second embodiment and the amplitude control according to the amount of high frequency components in the third embodiment can be applied in all the embodiments.

FIG. 22 is a block diagram showing a case in which the specific information is directly embedded in the color signal in the first to third embodiments. Here, the band removing circuit has the same function as the band removing circuit 903 shown in FIG. 9 and may be omitted. The information processing unit is introduced in the fourth embodiment and is not necessary here.

Next, a fourth embodiment will be described.

In the above-described embodiments, a pattern obtained by amplitude-modulating the embedded data at a constant period is superimposed on the image. In this embodiment, by contrast, the bits of the specific information are assigned to a large number of frequency components on the two-dimensional Fourier transform plane, and a two-dimensional striped pattern having multiple frequency components is added to the color image signal accordingly.

FIG. 9 is a block diagram showing an embedding processing section in the image processing device according to the fourth embodiment of the present invention.

As shown in FIG. 9, the embedding processing section is provided with an input system 901. From the input system 901, the first color signals Y, M, C corresponding to a color image are supplied to the first conversion circuit 902. Based on the first color signals Y, M, C supplied from the input system 901, the first conversion circuit 902 performs the first conversion to generate a luminance signal I and two color difference signals C1 and C2. The first conversion is the same as in the first embodiment. The luminance signal I is supplied to the second conversion circuit 908. Of the two color difference signals C1 and C2, the color difference signal C1 is supplied to the second conversion circuit 908, and the color difference signal C2 is supplied to the second conversion circuit 908 via the band removing circuit 903 and the adder 907. The band removing circuit 903 performs, for example, an 8 × 8 moving average on the color difference signal C2 from the first conversion circuit 902 and removes information other than the image information; that is, the band removing operation is a low pass filter operation. This is because the image signal supplied from the input system 901 may already have specific information (high frequency components) embedded in it by this method, so only the image information composed of components near DC is extracted.
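The band removing operation described for circuit 903 can be sketched as a plain moving-average low-pass filter; a minimal illustration, with the window size and edge handling as assumptions:

```python
def band_remove(c2, win=8):
    """Low-pass a colour-difference plane (list of lists) with a win x win
    moving average, as the band removing circuit 903 is described to do.
    Pixels near the border average over the in-bounds part of the window."""
    h, w = len(c2), len(c2[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = n = 0
            for dy in range(win):
                for dx in range(win):
                    yy, xx = y + dy - win // 2, x + dx - win // 2
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += c2[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out
```

A flat plane passes through unchanged, while a pixel-frequency pattern averages away, which is exactly the behaviour needed to strip previously embedded high-frequency components.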

A code generator 904 is provided in the embedding processing section. The code generator 904 stores the specific information to be embedded in the color image, generates the specific information in the form of a code, and supplies it to the information processing unit 905. The information processing unit 905 performs processing such as encryption and compression on the code supplied from the code generator 904, and supplies the result to the pattern generation circuit 906. The pattern generation circuit 906 generates a pattern signal having multiple frequency components based on the code supplied from the information processing unit 905, and supplies it to the adder 907.

The adder 907 adds (or subtracts) the pattern signal from the pattern generation circuit 906 to the color difference signal C2 from the band removing circuit 903. The signal CC2 resulting from the addition is supplied to the second conversion circuit 908. The second conversion circuit 908 performs the second conversion based on the luminance signal I and the color difference signal C1 from the first conversion circuit 902 and the signal CC2 from the adder 907 to generate the second color signals Y′, M′ and C′. The second conversion is similar to that of the first embodiment. The second color signals Y′, M′, C′ are supplied to the output system 909. The output system 909 is, for example, a printer, a facsimile, or a color copying machine, and outputs an image according to the supplied second color signals Y′, M′, C′.

Next, the operation of the fourth embodiment will be described.

First, the input first color signals Y, M and C are converted into a luminance signal I and color difference signals C1 and C2. The conversion at this time follows the equations (1) to (3) described above. Each color signal is represented by a value of 0 to 1; Y = M = C = 0 represents white and Y = M = C = 1 represents black.

Here, suppose the input document or image data already has specific information recorded in advance by the technique of this embodiment. In this case, the old information must be removed from the printed document or image data so that only the original image data is extracted. For example, the band removing circuit 903 takes an 8 × 8 moving average of the color difference C2, and the obtained value is used again as the C2 image data, extracting only the image signal. The number of pixels to be averaged depends on the resolution of the printer. Alternatively, the color difference signal may be Fourier-transformed to extract the embedded specific information, and only the extracted periodic components may be removed to obtain the image data.

The adder 907 embeds the specific information into this image data, which is then supplied to the output system 909 as the color signals Y′, M′, C′ through the second conversion circuit 908. Here, the conversion from I, C1, C2 into the color signals Y′, M′, C′ is performed according to the equations (5) to (7) described in the first embodiment.

Next, the procedure for embedding the specific information will be described in detail. The specific information is represented by a numerical value such as a code, as in the first embodiment. This value is processed in advance in the information processing unit 905, for example by encryption or compression. As is clear from FIG. 2 referred to in the first embodiment, the human gradation discrimination ability (expressed as the number of distinguishable gradations) is high for changes in the luminance direction and lower for changes in the color difference (Y-B) direction. In this embodiment too, the specific information is embedded by exploiting this characteristic.

The pattern generation circuit 906 generates a striped pattern signal having multiple frequency components. As shown in FIG. 10(a), a Fourier transform plane is defined by an axis in the main scanning direction and an axis in the sub scanning direction, and a large number of points are arranged on the plane according to a predetermined rule. The bits forming the code of the embedded information are assigned to these points according to a predetermined rule. Each point has a period and an amplitude. The embedding pattern is generated by summing the periodic components of the bit data according to the embedding position in the image data.

The code processed by the information processing unit 905, for example by encryption and compression, is supplied to the pattern generation circuit 906. When the code is supplied to the pattern generation circuit 906, the bits forming the code are sequentially arranged at predetermined positions on the Fourier transform plane. The location and order of the bits can be determined arbitrarily. Here, the bit positions are placed at regular intervals on a plurality of lines extending radially; that is, the bit positions lie on concentric circles centered on the origin. If the angle between such a line and the main scanning direction axis is θ, the value of θ is given in the range 0 ≤ θ < π; dividing the whole range into n equal parts gives θ = k/n · π (k = 0 to n − 1). The number of divisions n can be made larger as the period WL becomes shorter.

The period WL corresponds to the distance between each bit position and the origin; the closer the position is to the Nyquist frequency (a period of 2 dots), the larger the number of divisions n can be made. The bits are arranged evenly on each ray from the visual limit frequency to the Nyquist frequency. The distance from the origin of the Fourier transform plane represents the period: the closer to the origin, the longer the period, and the farther from the origin, the shorter the period. The Nyquist frequency is the upper limit of the high frequency components that the printer can represent.

To specify the start position of the bit-data arrangement, as shown in FIG. 10(a), the positions whose period corresponds to the visual limit are kept always off (or always on) regardless of the specific information, except for one position; at that one exceptional position a dot that is always on (always off: 0 in the example of FIG. 10(a)) is placed (the white circle marked S in FIG. 10(a)). This dot, distinguished from the other visual-limit dots, is used as the start bit (the starting point for arranging the bits of the code data).

Bits are arranged sequentially outward from the start bit along the ray, and when the bit position corresponding to the Nyquist frequency is reached, θ is decreased and the bits are arranged in the same way on the next ray. The numbers in the circles in FIG. 10(a) indicate the bit arrangement order. The bits may be arranged in order from the most significant bit of the specific-information code or, conversely, from the least significant bit.
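The slot ordering just described can be enumerated in a few lines; a sketch under stated assumptions (the function name, the evenly spaced radii, and the default 8-pixel/2-pixel periods for a 400 dpi printer are illustrative, not patent text):

```python
import math

def bit_slots(n_angles, slots_per_ray, wl_visual=8.0, wl_nyquist=2.0):
    """Enumerate (theta, WL) positions in the order described: walk outward
    along one ray from the visual-limit period toward the Nyquist period,
    then move to the next ray (theta = k/n * pi). Periods are in pixels
    (about 8 px at the visual limit and 2 px at Nyquist for 400 dpi)."""
    slots = []
    for k in range(n_angles):
        theta = k * math.pi / n_angles          # 0 <= theta < pi
        for j in range(slots_per_ray):          # visual limit -> Nyquist
            frac = j / (slots_per_ray - 1) if slots_per_ray > 1 else 0.0
            wl = wl_visual + (wl_nyquist - wl_visual) * frac
            slots.append((theta, wl))
    return slots
```

Each code bit is then assigned to one slot in this enumeration order, starting from the slot after the start bit S.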

In this way, a dummy bit for confirming the arrangement start position on the Fourier transform plane, that is, a start bit S that does not depend on the specific information, is always turned on (or, for the low-frequency pattern that is relatively unlikely to deteriorate, always turned off). Other examples of the start bit S are shown in FIGS. 10(b) and 10(c). Both show the bit arrangement only near the visual limit on the Fourier transform plane and are otherwise similar to FIG. 10(a). FIG. 10(b), contrary to FIG. 10(a), shows the case where the start bit is always off and the bits at the other visual-limit positions are always on. In FIG. 10(c), all the bits at the visual-limit positions are always on, but the start bit (the white double circle in the figure) is distinguished from the other visual-limit bits by making its amplitude WI larger than the others, for example twice as large.

The pattern generation circuit 906 sums the periodic components of all the bit data arranged on the Fourier transform plane in this way, according to the position (x, y) of each pixel of the color image, to generate the specific-information pattern ΣΣβ(θ, WL). The double sum runs over θ (0 ≤ θ < 180°) and WL (from the visual limit to the Nyquist frequency).

ΣΣβ(θ, WL) = (WI/2) · cos(cos θ · x · 2π/WL + sin θ · y · 2π/WL)  (21)

where WI/2 is the amplitude of each bit. If a bit is 0, WI/2 = 0, so only the frequency components of bits that are 1 are added.

Therefore, the output CC2 of the adder 907 is expressed as follows.

CC2 = C2 + ΣΣβ(θ, WL)  (22)

Next, the setting of the period WL, the angle θ, and the amplitude WI will be described. Roughly speaking, the range of the period WL runs from the "visual limit" in the color difference direction in which the specific information is embedded to the Nyquist frequency of the printer. The "visual limit" here is a convenient expression; it actually denotes the frequency at which the sensitivity to a change in density becomes extremely low. The "visual limit" is a printer-independent value; the visual limit in the color difference (Y-B) direction is about 2 cycles/mm. Converting this value into a printer control amount: for example, with a 400 dpi printer, one cycle at the visual limit corresponds to about 8 pixels. Therefore, with that resolution, the range of the period WL corresponds to 2 to 8 pixels. The Nyquist frequency, the maximum frequency the printer can express, corresponds to 2 pixels.
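Equations (21) and (22) amount to summing one cosine per active bit and adding the result to C2. A minimal sketch (function names and the default amplitude are illustrative assumptions; `on_bits` is the list of (θ, WL) slots whose bit is 1):

```python
import math

def pattern_value(x, y, on_bits, wi=1.0 / 16):
    """Equation (21): sum (WI/2) * cos((cos(theta)*x + sin(theta)*y) * 2*pi/WL)
    over every (theta, WL) slot whose bit is 1; zero bits contribute nothing."""
    total = 0.0
    for theta, wl in on_bits:
        total += (wi / 2) * math.cos(
            (math.cos(theta) * x + math.sin(theta) * y) * 2 * math.pi / wl)
    return total

def embed_c2(c2_plane, on_bits, wi=1.0 / 16):
    """Equation (22): CC2 = C2 + the summed pattern, evaluated per pixel."""
    return [[v + pattern_value(x, y, on_bits, wi)
             for x, v in enumerate(row)]
            for y, row in enumerate(c2_plane)]
```

With no active bits the plane is unchanged, matching the statement that a 0 bit simply omits its frequency component.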

The amplitude WI is determined in consideration of the MTF (modulation transfer function) of the output system and the visual sensitivity to the periodic structure. Assuming the discrimination ability shown in FIG. 2, the amplitude WI is set larger for higher-frequency components, for example WI = 1/64 for a period WL of 8 pixels and WI = 1/4 for 2 pixels, to improve data efficiency. This takes into account that high-frequency components are particularly prone to degradation from the MTF characteristic of the output system.

In addition, the period range and the number of divisions of the pattern to be embedded depend on the number of gray levels that can be expressed by the output system, the SN ratio of the reading system, the number of sampling pixels at the time of reading, and the like. The angle θ (or the number of divisions) also depends on the SN ratio of the reading system, the number of sampling pixels, and the like.

In this embodiment, the periods and angles have been described as being arranged at equal intervals on the Fourier transform plane, but as long as no problem arises, such as difficulty in matching the data when reading the embedded specific information, the arrangement does not have to be evenly spaced. Likewise, although the example of FIG. 10(a) uses concentric circles, the bits may be arranged on concentric ellipses instead of perfect circles.

Most general color originals contain no high-frequency periodic components in the color difference components, but a few line drawings and halftone images do contain such components. When the specific information is embedded in these images, a component that was not actually embedded may be mistakenly detected during reading. To prevent this, it is effective to treat a plurality of periodic components as one bit; that is, at least one dummy bit with the same content is provided for each bit. In this case, however, the amount of specific information that can be embedded decreases by a factor of (number of dummy bits + 1).

FIGS. 11(a) and 11(b) show bit arrangements on the Fourier transform plane based on this idea. In FIG. 11, bits that are always off are omitted for simplicity of description. Bits with the same number are regarded as the same bit, and bits whose number carries a dash are dummy bits. FIG. 11(a) shows an example in which the same bit is arranged on a pair of adjacent radial lines (two components treated as one unit): the bits are arranged on one line as usual, and the dummy bits are arranged on the adjacent line in the reverse order. FIG. 11(b) shows an example in which three lines form one block and the same bit is arranged in block units (two dummy blocks for one block). In either case, it is preferable that a normal bit and its dummy bit do not lie on the same ray or the same circumference. When two components are treated as the same bit, it is preferable to confirm the presence or absence of the bit by averaging during reading and then thresholding. If there are two or more dummy bits (three or more identical bits), a majority vote may be taken.
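The two resolution rules just mentioned (average-then-threshold for a two-component bit, majority vote for three or more copies) can be sketched like this; the function names and threshold value are illustrative assumptions:

```python
def resolve_bit_average(amplitudes, threshold):
    """Average the spectral amplitudes measured at a bit's normal position
    and its dummy position(s), then threshold, as suggested for bits carried
    by two components."""
    return 1 if sum(amplitudes) / len(amplitudes) >= threshold else 0

def resolve_bit_vote(readings):
    """With two or more dummy bits (three or more copies of the bit), take a
    majority vote over the individual 0/1 readings instead."""
    return 1 if sum(readings) * 2 > len(readings) else 0
```

Averaging tolerates one weak component; voting tolerates one outright misread, which is the point of adding the redundancy.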

Handling the bits in this way prevents errors during reading. For example, if the original is a halftone image or a line drawing, it may contain high-frequency components in the color difference direction that cause erroneous determination; treating multiple components as one unit alleviates this.

In this embodiment, as in the embodiments described above, the specific information can also be embedded directly in the second color signals Y′, M′, C′ without using the first conversion circuit 902 and the second conversion circuit 908 (FIG. 23). In this case, the amount β of the periodic components to be added is calculated by the procedure described above.

Y′ = Y + (ΣΣβ) · 2/3  (23)
M′ = M − (ΣΣβ)/3  (24)
C′ = C − (ΣΣβ)/3  (25)

Next, the process of reading the specific information printed by the above procedure will be described.

To read the above specific information, a scanner equipped with RGB (Red, Green, Blue) color separation filters is used.

The extraction process of the specific information will be described below with reference to the flowchart. The size of the region to be extracted may be, for example, about 64 × 64 pixels; at 400 dpi this corresponds to about 4 mm square, which is only a small part of the image. In other words, in this embodiment the specific-information pattern need not be superimposed on the entire image; it may be superimposed on only a partial area, provided that area is known. First, the RGB signals are input (step B01). The number of divisions n for averaging is set (step B02). m is set to 1 (step B03). The reading start position and the reading size are set (step B04). The area to be read is cut out (step B05). Of the input RGB signals, only the color difference G − B is extracted, so DD = G − B is set (step B06). The component added to C2 is applied through the extracted Y, M, C inks; in the RGB system, however, a significant difference appears between G, the complementary color of M, and B, the complementary color of Y, so the component added to C2 can be computed from G − B. A two-dimensional Fourier transform is performed on the color difference signal (step B07), and the start bit position is confirmed from the components at the visual-limit frequency (2 cycles/mm) (step B08). Using the start position as a clue, the presence or absence of a frequency component is confirmed for each bit: if there is no component, the bit is "0"; if the component can be confirmed, the bit is "1". Each bit is checked in this way to confirm the input data (step B09). It is determined whether m equals the number of divisions n (step B10). If NO, 1 is added to m (step B11) and steps B04 to B09 are repeated. If YES, the process proceeds to step B12.

In step B12, to improve the reliability of the data, a plurality of regions are sampled and their values averaged for each periodic component on the Fourier transform plane. If necessary, a larger sampling area is set at this time. Threshold processing is then performed to confirm the presence or absence of each bit (step B13). The specific information is then calculated (step B14). If the read data is encrypted, it is decrypted; if it is compressed, it is decompressed (step B15). Note that steps B02, B03, B10, B11 and B12 are omitted when no dummy bits are provided on the Fourier transform plane (when different information is assigned to every bit, as in FIG. 10).
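The core of steps B07 to B09 is measuring the amplitude of each (θ, WL) component of the color-difference plane and thresholding it. As a stand-in for reading single points of the two-dimensional Fourier transform (a sketch, not the patent's implementation; names and the threshold are assumptions), each component can be measured by direct correlation with its cosine and sine:

```python
import math

def component_amplitude(plane, theta, wl):
    """Correlate the colour-difference plane (e.g. G - B) with the cosine and
    sine at one (theta, WL) slot and return the magnitude -- equivalent to
    sampling one point of the 2-D Fourier transform in step B07."""
    h, w = len(plane), len(plane[0])
    re = im = 0.0
    for y in range(h):
        for x in range(w):
            phase = (math.cos(theta) * x + math.sin(theta) * y) * 2 * math.pi / wl
            re += plane[y][x] * math.cos(phase)
            im += plane[y][x] * math.sin(phase)
    n = h * w
    return 2 * math.hypot(re, im) / n

def read_bits(plane, slots, threshold):
    """Steps B08-B09: a slot whose amplitude clears the threshold reads as 1."""
    return [1 if component_amplitude(plane, t, wl) >= threshold else 0
            for t, wl in slots]
```

Using the magnitude of the complex coefficient rather than the cosine part alone makes the measurement insensitive to the phase of the pattern within the cut-out region.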

As described above, according to the fourth embodiment, even when the amount of specific information to be embedded is large, a visually uncomfortable impression can be prevented. Moreover, even if the image is slightly tilted at the time of reading, the periodic components can be detected reliably, so reading errors are few.

In the fourth embodiment, the case of embedding the specific information in the color difference direction has been described. However, the embodiment can be modified to embed in the saturation direction as in the second embodiment, or to adjust the amplitude of the embedding pattern according to the visual sensitivity or the high-frequency component of the luminance, as in the third embodiment.

The first to fourth embodiments have been described for the case of processing subtractive color (Y, M, C) signals, but they can also be applied to the additive color (R, G, B) system. That is, the specific information is added directly to the RGB signals read by the scanner.

First, consider adding to the color difference (Y-B). The Y, M, C color signals from the input system 101 of FIG. 1 become the R, G, B color signals, and the Y′, M′, C′ signals to the error diffusion processing circuit 107 or the output system 108 become R′, G′, B′. The conversion in the first conversion circuit 102 is

I = (R + G + B)/3
C1 = R − G
C2 = G − B

and, on the premise of equation (4), the conversion in the second conversion circuit 106 is

R′ = I + (2C1 + CC2)/3
G′ = I + (−C1 + CC2)/3
B′ = I + (−C1 − 2CC2)/3.

When embedding directly as shown in FIG. 22, with R, G, B as the signals from the input system and R′, G′, B′ as the signals to the output system, the change amounts DR, DG, DB generated by the signal conversion circuit 2204 are

DR = +(−)α/6
DG = +(−)α/6
DB = −(+)α/3.
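As a sanity check (a sketch, not patent text; function names are assumptions), the forward and inverse conversions above invert each other, and a change applied to CC2 leaves the sum R + G + B, and hence the luminance I, untouched:

```python
def rgb_to_icc(r, g, b):
    """First conversion: I = (R+G+B)/3, C1 = R-G, C2 = G-B."""
    return (r + g + b) / 3.0, r - g, g - b

def icc_to_rgb(i, c1, cc2):
    """Second conversion (the inverse), as in the equations above."""
    return (i + (2 * c1 + cc2) / 3.0,
            i + (-c1 + cc2) / 3.0,
            i + (-c1 - 2 * cc2) / 3.0)
```

Since the CC2 coefficients (1, 1, −2)/3 sum to zero, any pattern added in the C2 direction cancels out of I, which is why the embedding stays in the low-sensitivity color-difference direction.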

When adding to the saturation, the same replacement as above is made, and the conversion, on the premise of equations (12) to (14), is

R′ = I + (2CC1 + CC2)/3
G′ = I + (−CC1 + CC2)/3
B′ = I + (−CC1 − 2CC2)/3.

Further, the change amounts added by the adder 2205 when embedding directly as shown in FIG. 22 are

DR = ±α · (2R − G − B) / (6 · SQRT{(R − G)² + (G − B)²})
DG = ±α · (2G − B − R) / (6 · SQRT{(R − G)² + (G − B)²})
DB = ±α · (2B − R − G) / (6 · SQRT{(R − G)² + (G − B)²}).

Next, a fifth embodiment will be described.

In the fourth embodiment, the periodic components were arranged on concentric circles or concentric ellipses on the Fourier transform plane. In the fifth embodiment, by contrast, a case where they are arranged in a lattice will be described.

The general procedure for embedding the specific information is the same as in the case of the fourth embodiment. Further, the embedding processing unit in this embodiment has the same configuration as that of FIG. 9 used in the fourth embodiment. However, the processing inside the information processing unit 905 is different. Also, the specific information can be directly embedded in the color signal without using the first conversion circuit and the second conversion circuit, as in the case of the fourth embodiment.

The operation of the fifth embodiment will be described. First, as shown in FIG. 13, the bit data are arranged in a grid on the Fourier transform plane. With WL1 as the period of each arrangement position in the main scanning direction and WL2 as the period in the sub scanning direction, the amount of the added periodic components is expressed as follows, where the double sum runs over WL1 and WL2.

ΣΣβ(WL1, WL2) = (WI/2) · cos(x · 2π/WL1 + y · 2π/WL2 + φ(WL1, WL2))

Here φ, which varies in the range 0 ≤ φ < 2π, represents a phase difference; its value is changed for each frequency component to reduce the effect of superposed periodic structures. However, if either WL1 or WL2 corresponds to the Nyquist frequency, φ is set so as not to be close to π/2 or 3π/2, to prevent the periodic component from vanishing.
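The lattice arrangement can be sketched as follows; the slot grid, the evenly spread phase assignment, and the function name are illustrative assumptions rather than the patent's exact rule:

```python
import math

def grid_pattern_value(x, y, bits, slots, wi=1.0 / 16):
    """Fifth-embodiment sketch: bits sit on a grid of period pairs
    (WL1, WL2); each active component gets its own phase offset so the
    cosines do not all peak at the same pixels."""
    total = 0.0
    n = len(slots)
    for i, (bit, (wl1, wl2)) in enumerate(zip(bits, slots)):
        if bit:
            phase = 2 * math.pi * i / n      # per-component phase difference
            total += (wi / 2) * math.cos(x * 2 * math.pi / wl1 +
                                         y * 2 * math.pi / wl2 + phase)
    return total
```

Spreading the phases keeps the superposed components from reinforcing each other into a visible low-frequency beat, which is the concern the text raises for the lattice layout.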

Note that, as shown in FIG. 13, positions where both WL1 and WL2 are close to the Nyquist frequency deteriorate easily, so periodic components are preferably not assigned there.

The process of extracting the specific information is the same as in the case of the fourth embodiment.

As described above, according to the fifth embodiment, a relatively small amount of specific information can be handled easily. Note that with a lattice arrangement the periodic components easily fall into phase with one another, and relatively obtrusive low-frequency components tend to occur. To prevent this, each periodic component is given a phase difference in the range 0 to 2π to suppress such superposition, which prevents deterioration of the image quality.

Next, a sixth embodiment will be described.

FIG. 14 is a block diagram showing an embedding processing section in an image processing apparatus according to the sixth embodiment of the present invention. In the sixth embodiment, a case where the present invention is applied to a color printer will be described.

As shown in FIG. 14, the embedding processing section, that is, the color printer, is provided with an input system 1401. Graphic data or text data is supplied from the input system 1401 to the bitmap development unit 1402 as the first color signals Y, M, C. The bitmap development unit 1402 performs bitmap development based on the first color signals Y, M, C supplied from the input system 1401 and supplies the result to the adder 1407. A code generator 1403 is provided in the embedding processing section. The code generator 1403 stores the specific information to be embedded in the graphic data or the like supplied from the input system 1401, generates the specific information in the form of a code, and supplies it to the information processing unit 1404. The information processing unit 1404 performs processing such as encryption and compression on the code supplied from the code generator 1403, and supplies the result to the pattern generation circuit 1406. The embedding processing section is also provided with a mode selector 1405 by which a high-definition mode or normal mode can be selected. A signal indicating one of the modes is supplied from the mode selector 1405 to the pattern generation circuit 1406. The pattern generation circuit 1406 generates a pattern signal based on the code supplied from the information processing unit 1404 and the mode designation signal from the mode selector 1405, and supplies it to the adder 1407.

The adder 1407 adds (or subtracts) the pattern signal from the pattern generation circuit 1406 to the first color signals Y, M, C from the bitmap development unit 1402. The color signals Y′, M′, C′ to which the pattern signal has been added are supplied to the error diffusion processing circuit 1408, whose output is supplied to the output system 1409. The output system 1409 prints out the figures and text according to the second color signals Y′, M′, C′.

Next, the operation of the sixth embodiment will be described.

In this embodiment, when graphic data such as graphics or text data is developed into a bitmap or the like to obtain a pattern, a pattern having predetermined periodic components is superimposed on it. The pattern added at this time is based on, for example, coded data representing the confidentiality level of the document. The pattern is generated using the Fourier transform plane described in the above embodiments.

When the data to which the pattern is to be added is binary data such as characters or binary graphics, the non-printed portion of the document is very likely to be completely blank and the printed portion solid. In this case, whether the pattern is added to the non-printed portion or the printed portion, its amplitude is clipped to half, making the added pattern difficult to extract. To solve this problem, a small amount of ink is applied to the background (the non-printed portion of the document) at the same time as the pattern is added; that is, predetermined ink amounts Y0, M0, C0 are applied to the non-printed portion when the pattern is added. An appropriate amount for each ink is about 1/6 of the amplitude WI of the periodic component at the Nyquist frequency position described in the fourth embodiment. The amounts of the inks other than Y0 may be halved, though in that case the background may become yellowish. When the color balance of the background matters more than its luminance balance, the ink amounts are set to Y0 = M0 = C0. The ink amount conversion formulas are as follows.

Y′ = Y0 + (ΣΣβ) · 2/3  (27)
M′ = M0 − (ΣΣβ)/3  (28)
C′ = C0 − (ΣΣβ)/3  (29)

If the printer to be used is a binary printer or a printer with a small number of expressible gradations, the error diffusion processing circuit 1408 performs error diffusion processing on the data to which the pattern has been added.
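Equations (27) to (29) can be sketched for one background pixel as follows; the bias values are illustrative (the text only suggests about WI/6 at the Nyquist position), and the function name is an assumption:

```python
def embed_background_pixel(beta_sum, y0=0.02, m0=0.02, c0=0.02):
    """Equations (27)-(29): bias the blank background with small ink amounts
    Y0, M0, C0 so that the added pattern beta_sum is not clipped at zero ink."""
    y = y0 + beta_sum * 2.0 / 3.0
    m = m0 - beta_sum / 3.0
    c = c0 - beta_sum / 3.0
    return y, m, c
```

Without the Y0, M0, C0 bias, the negative half-cycles of the pattern would be clipped at zero ink on a blank background, which is exactly the half-amplitude loss the text describes.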

Further, in this embodiment, a mode selector 1405 capable of selecting a high definition mode / normal mode is provided on a control panel (not shown). Here, it may be set so that the pattern generation / addition processing is performed only when the high-definition mode is selected.

Especially when a pattern is added to a graphic image, many line drawings are present. In this case, performing band removal at input would degrade the information itself. Therefore, as shown in FIG. 15, no bits are arranged on the Fourier transform plane at the periodic components peculiar to line images (components along the main scanning direction and sub scanning direction axes).

The process of extracting the specific information is the same as in the case of the fourth embodiment.

As described above, according to the sixth embodiment, it is possible to embed and extract specific information without difficulty even when handling binary data such as characters and binary graphics.

Next, a seventh embodiment will be described.

FIG. 16 is a block diagram showing an embedding processing section in the image processing apparatus according to the seventh embodiment of the present invention. In the seventh embodiment, a case where the present invention is applied to a color facsimile will be described.

As shown in FIG. 16, the embedding processing section comprises two color facsimiles, that is, a transmitting unit 161 and a receiving unit 162. An input system 1601 is provided in the transmitting unit 161. Data is supplied from the input system 1601 to the compression/encoding unit 1602 as the first color signals Y, M, C. The compression/encoding unit 1602 compresses or encodes the data and supplies it to the adder 1605. Meanwhile, the code generator 1603 stores the specific information A, generates it in the form of a code, and supplies it to the information processing unit 1604. The information processing unit 1604 performs processing such as encryption and compression on the code supplied from the code generator 1603, and supplies the result to the adder 1605. The adder 1605 adds (or subtracts) the code (specific information A) from the information processing unit 1604 to the data from the compression/encoding unit 1602. The data to which the code (specific information A) has been added is transferred to the information separation unit 1606 of the receiving unit 162.

The information separation unit 1606 separates the specific information A from the transferred data, supplying the data body to the decompression/decoding unit 1607 and the specific information A to the information synthesis unit 1610. The decompression/decoding unit 1607 decompresses or decodes the data body and supplies it to the adder 1612. Meanwhile, the code generator 1608 generates a code indicating the machine number of the receiving unit 162 or a code indicating a department number (specific information B) and supplies it to the information processing unit 1609. The information processing unit 1609 performs processing such as encryption and compression on the code (specific information B) supplied from the code generator 1608, and supplies it to the information synthesis unit 1610. The information synthesis unit 1610 combines the specific information A from the information separation unit 1606 with the specific information B from the information processing unit 1609 and supplies the result to the pattern generation circuit 1611. The pattern generation circuit 1611 generates a pattern based on the combined code and supplies it to the adder 1612. The adder 1612 adds the pattern from the pattern generation circuit 1611 to the data from the decompression/decoding unit 1607 and supplies the result to the error diffusion processing circuit 1613. The error diffusion processing circuit 1613 supplies its output to the output system 1614, which outputs the data.

Next, the operation of the seventh embodiment will be described.

For example, when transferring data (color information) between two facsimiles, there are cases where the transmitting side wants to add the specific information and cases where the receiving side wants to add it. The simplest approach, as described in the fourth embodiment, is to superimpose the pattern on the data, transmit it from the transmitting section, and have the receiving section receive it as it is. However, since color information itself is very large, the data may well be compressed before transfer, and it may also be encoded before transfer. FIG. 16 shows an example constructed to handle these various conditions.

In the transmitting section 161, the data is first encoded or compressed in the compression/encoding unit 1602. Next, the encoded specific information is added by the adder 1605: as shown in FIGS. 17(a) and 17(b), it is concatenated to the data body to be transferred as a header or a trailer. A start bit or an end bit is provided as a mark at the boundary between the data body and the specific information. The specific information added here may be, for example, a number identifying the machine of the transmitting section, an attribute of the data (for example, a confidentiality classification), or a code indicating or encoding the transmitting department. In the receiving section 162, the received data including the specific information is first separated into the data body and the specific information, and, if necessary, specific information (a code) representing the machine number or the department number of the receiving section is combined with the transferred specific information. The combined specific information is generated as a pattern in the pattern generation circuit, and this pattern is added to the data after the same processing as the bitmap expansion described above. The data is then output after processing such as error diffusion. In the above procedure, it is also possible that the receiving section adds no specific information and only the transmitting section does.
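The header/trailer framing just described can be sketched as follows. This is a minimal illustration of the idea only: the start/end byte values, function names, and payloads are our assumptions, not values given in the patent.

```python
# Hypothetical sketch of the FIG. 17 framing: the encoded specific
# information is joined to the data body as a header, with start/end
# marks delimiting the boundary.  START and END are illustrative.

START = b"\x02"  # mark placed before the specific information
END = b"\x03"    # mark placed after it

def attach_header(body: bytes, specific_info: bytes) -> bytes:
    """Transmitting side: prepend specific information A as a header."""
    return START + specific_info + END + body

def separate(frame: bytes) -> tuple[bytes, bytes]:
    """Receiving side: split the frame back into (specific_info, body)."""
    assert frame[:1] == START, "missing start mark"
    end = frame.index(END)
    return frame[1:end], frame[end + 1:]

# The receiving section may then combine its own specific information B
# with the separated specific information A before pattern generation.
frame = attach_header(b"image-data", b"machine-07")
info_a, body = separate(frame)
```

A trailer variant (FIG. 17(b)) would append the marked specific information after the body instead; the separation logic is symmetric.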

Note that the specific information extraction processing is the same as in the case of the fourth embodiment.

As described above, according to the seventh embodiment, both the specific information on the transmitting side and the specific information on the receiving side can be added to the transfer data between the color facsimiles. Further, only the specific information on the transmitting side can be added to the transfer data.

Next, the eighth embodiment will be described.

In the sixth embodiment, the case of application to a color printer has been described, but in the eighth embodiment, the case of application to a monochrome printer will be described. In addition, this embodiment will be described with reference to FIG. 14 which is also referred to in the sixth embodiment.

FIG. 14 is a block diagram showing an embedding processing unit in the image processing apparatus according to the eighth embodiment of the present invention.

As shown in FIG. 14, the embedding processing section, that is, the monochrome printer, is provided with the input system 1401. From the input system 1401, text data is supplied as a first color signal K (shown as Y, M, and C in the figure, but regarded here as K) to the bitmap expansion unit 1402. The bitmap expansion unit 1402 develops a bitmap based on the first color signal K supplied from the input system 1401 and supplies it to the adder 1407. The configurations of the code generator 1403, the information processing unit 1404, and the mode selector 1405 are the same as in the sixth embodiment. However, as described later, the Fourier transform plane in the pattern generation circuit 1406 differs from that of the sixth embodiment. The configurations of the adder 1407 and the error diffusion processing 1408 are also the same as in the sixth embodiment. The output system 1409 prints monochrome characters and the like according to the supplied second color signal. The monochrome printer in this embodiment has a higher resolution than the color printer in the sixth embodiment.

Next, the operation of the eighth embodiment will be described.

In a monochrome printer, it is difficult to apply modulation in the color difference or saturation direction. On the other hand, a monochrome printer requires a higher resolution than a color printer, for example 600 dpi or more. A Fourier transform plane is used when generating a pattern in the pattern generation circuit 1406, but on this plane the sensitivity to changes in the luminance direction is high, so the visual-limit frequency is relatively high. That is, it is necessary to add components whose frequency exceeds the visual-limit frequency of 8 [cycles/mm]. Therefore, as shown in FIG. 18, the range in which the periodic components can be arranged is limited. As in the case of the color printer, the pattern is added to the data after the bitmap expansion processing.
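The restricted arrangement range can be made concrete with a small calculation. The sketch below is our own illustration: the 8 cycles/mm limit and 600 dpi come from the text above, while the tile size N and the bin enumeration are assumptions.

```python
# For an N x N pattern tile printed at a given dpi, enumerate the
# Fourier-plane bins whose spatial frequency lies above the 8 cycles/mm
# visual limit but at or below the printer's Nyquist frequency.
import math

def usable_bins(n: int = 32, dpi: int = 600, f_limit: float = 8.0):
    """Return (u, v) bin indices whose radial frequency exceeds f_limit."""
    dots_per_mm = dpi / 25.4        # ~23.6 dots/mm at 600 dpi
    tile_mm = n / dots_per_mm       # physical size of the tile
    nyquist = dots_per_mm / 2.0     # ~11.8 cycles/mm at 600 dpi
    bins = []
    for u in range(n // 2 + 1):     # non-negative frequencies only
        for v in range(n // 2 + 1):
            f = math.hypot(u, v) / tile_mm   # radial frequency, cycles/mm
            if f_limit < f <= nyquist:
                bins.append((u, v))
    return bins
```

At 600 dpi the band between 8 cycles/mm and the roughly 11.8 cycles/mm Nyquist limit is narrow, which is consistent with FIG. 18 showing a more restricted arrangement range than in the color case.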

In the above method, if it is undesirable that originally unprinted portions become grayish, a method can instead be considered in which the specific information is embedded by changing the line spacing or the character spacing of the printed character string by a very small amount. When the printer used has a high resolution, a shift of about one dot is almost inconspicuous unless characters are shifted both vertically and horizontally. Utilizing this, the line spacing and the character spacing are changed line by line or column by column to embed the specific information: for example, as shown in FIG. 19, the line spacings L0, L1 and the character spacings m0, m1 are changed. In the case of a typical A4-size document, about 40 lines of 36 characters of roughly 10-point size fill the page. In this case, if all line spacings and character spacings are used, 39 + 35 = 74 bits of data can be embedded. The higher the resolution of the printer, the more specific information can be embedded.
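The gap-modulation scheme above can be sketched in a few lines. This is an illustrative construction of ours: the one-dot modulation depth, the nominal gap widths, and the function names are assumptions; only the 39 line gaps + 35 character gaps = 74 bits figure comes from the text.

```python
# Embed one bit per gap by widening a line gap or character gap by a
# single dot where the corresponding bit is 1; extraction compares the
# measured gaps against the nominal layout.

def embed_bits(base_gaps: list[int], bits: list[int]) -> list[int]:
    """Widen each gap by 1 dot where the corresponding bit is 1."""
    assert len(bits) <= len(base_gaps)
    return [g + b for g, b in zip(base_gaps, bits)] + base_gaps[len(bits):]

def extract_bits(base_gaps: list[int], gaps: list[int]) -> list[int]:
    """Recover the bits by subtracting the nominal gaps."""
    return [g - b for b, g in zip(base_gaps, gaps)]

line_gaps = [20] * 39          # nominal line spacing L0, in dots (assumed)
char_gaps = [12] * 35          # nominal character spacing m0, in dots (assumed)
payload = ([1, 0, 1, 1] * 19)[:74]
coded = embed_bits(line_gaps + char_gaps, payload)
```

With a higher-resolution printer, one dot corresponds to a smaller physical shift, so deeper or denser modulation becomes possible, matching the remark that more specific information can then be embedded.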

The method of embedding the specific information by changing the line spacing and the character spacing line by line or column by column can be applied to a printer that does not expand text data into a bitmap, such as a thermal printer. That is, the same effect can be obtained by mechanically modulating the head feed pitch (character pitch) and the recording paper feed pitch (row pitch).

As described in detail above, the present invention has the following effects through the first to eighth embodiments.

The present invention exploits the fact that color difference and saturation information generally have a lower visual acuity limit than luminance information; in other words, human vision is duller to subtle changes in color difference and saturation than to changes in luminance. On the other hand, in color recording, the highest image quality is achieved by a printer that records the density (a signal including brightness) of each color near the visual acuity limit for brightness. (Recording beyond that limit is unnecessary, since humans cannot perceive it.)
When brightness is recorded close to the visual acuity limit in this way, the color difference and saturation information there cannot be discerned by humans. According to the present invention, if information is coded and embedded in this indiscernible region, that is, in high-frequency color-difference or saturation components, recording can be performed without giving a visually uncomfortable feeling. That is, recording is possible without deterioration of image quality.
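The core idea can be sketched per pixel. This is our own minimal construction, not the patent's exact formula: it modulates the yellow-blue color-difference direction while keeping the sum of the three subtractive primaries Y + M + C unchanged, so density (and hence perceived brightness) is preserved.

```python
# Shift a pixel along a color-difference axis with zero density change:
# push yellow by +delta and pull magenta and cyan by -delta/2 each, so
# the total of the three primary components stays constant.

def embed_pixel(y: float, m: float, c: float, delta: float):
    """Return a chroma-shifted pixel with y2 + m2 + c2 == y + m + c."""
    return y + delta, m - delta / 2.0, c - delta / 2.0

def embed_row(row, bit, amplitude=2.0):
    """Alternate the sign of delta pixel by pixel, creating a
    high-frequency chroma stripe (above the visual limit) carrying one bit."""
    sign = 1.0 if bit else -1.0
    return [embed_pixel(y, m, c, sign * amplitude * (-1) ** i)
            for i, (y, m, c) in enumerate(row)]
```

The amplitude and the one-pixel alternation period are assumptions for illustration; in practice the period must put the stripe above the chroma visual-limit frequency at the printer's resolution.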

Further, since general image information hardly exists in the frequency band where the color difference and saturation exceed the visual acuity limit, by converting the image into color difference or saturation information and performing band-pass processing, the embedded specific information (code information) can be separated from the image information and read with extremely high accuracy.
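The read-out side can be illustrated with a simplified stand-in for the band-pass processing described above; the color-difference formula and the first-difference filter here are our assumptions, chosen only to show why the embedded component separates cleanly from ordinary image content.

```python
# Convert RGB to a color-difference signal and high-pass filter it:
# smooth image regions yield ~0, while a high-frequency embedded chroma
# stripe survives with large amplitude.

def color_difference(rgb_row):
    """A simple R-G color-difference signal per pixel (illustrative)."""
    return [r - g for r, g, b in rgb_row]

def high_pass(signal):
    """First-difference filter: keeps only rapid pixel-to-pixel changes."""
    return [b - a for a, b in zip(signal, signal[1:])]

smooth = [(120, 100, 90)] * 8                                   # flat region
striped = [(120 + (-1) ** i * 4, 100 - (-1) ** i * 4, 90)       # embedded stripe
           for i in range(8)]
flat = high_pass(color_difference(smooth))
code = high_pass(color_difference(striped))
```

In the filtered output, the flat region is zero everywhere while the stripe produces a strong alternating response, which is the property that lets the specific information be separated with high accuracy.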

As described above, by applying the present invention, it is possible to record specific information without giving a visually uncomfortable feeling when outputting to an ordinary printer or the like. Further, even if the scanner used at the time of reading does not have a high-precision resolution exceeding the visual limit, the recorded pattern can be sufficiently read. For example, a scanner having a resolution used in a general copying machine can be used.

Further, since general image information contains almost no color difference or saturation components in the frequency band exceeding the visual acuity limit, by converting the image information into a color difference or saturation signal and performing band-pass processing, the recorded specific information can be separated and extracted with extremely high accuracy. As a result, confusion between the image information and the specific information during reading can be prevented.

Further, according to the present invention, it becomes possible to record a barcode that cannot be discerned by the human eye. This makes it possible to attach a barcode to, for example, an extremely small product to which a barcode cannot normally be attached, or a product whose design would be spoiled by a visible barcode.

According to the present invention, the specific information can be recorded in color image information without degrading the image quality of the color image, and the specific information recorded in the color image information can also be separated and read with high accuracy.

Further, according to the present invention, specific information can easily be embedded in color character originals and graphic images, and it is still possible to embed specific information even in monochrome images and monochrome character originals rather than color images. Therefore, the present invention can be applied not only to color printers but also to color facsimiles and monochrome printers.

The present invention is not limited to the above embodiments.
For example, the band-removal circuit described in the fourth embodiment for removing information already superimposed on an image can also be used in common with the first to third embodiments. Further, although details of the output system were used as an example of the information to be embedded, this too is merely an example, and any information may be superimposed. In addition, in the fourth to eighth embodiments, in which multiple-frequency information is embedded using a Fourier transform plane, a one-dimensional Fourier transform plane may be used instead of a two-dimensional one.

[0210]

As described above, according to the present invention, it is possible to embed other information in a color image or the like without giving a visually uncomfortable feeling and without causing image deterioration. Further, the embedded information can be separated and read easily and with high accuracy.

[Brief description of drawings]

FIG. 1 is a block diagram showing an embedding processing unit in an image processing apparatus according to a first embodiment of the present invention.

FIG. 2 is a diagram showing a pattern generated by the pattern generating circuit shown in FIG.

FIG. 3 is a graph showing human gradation discrimination ability with respect to changes in a luminance direction, a color difference direction, and a saturation direction.

FIG. 4 is a diagram showing a document on which image information in which specific information is embedded is printed and a sheet used for reading.

FIG. 5 is a flowchart showing processing of a reading processing unit in the image processing apparatus according to the first embodiment of the present invention.

FIG. 6 is a block diagram showing an embedding processing unit in the image processing apparatus according to the second embodiment of the present invention.

FIG. 7 is a graph showing a sensitivity distribution of human chromaticity for a pattern having the same period.

FIG. 8 is a block diagram showing an embedding processing unit in an image processing apparatus according to a third embodiment of the present invention.

FIG. 9 is a block diagram showing an embedding processing unit in an image processing apparatus according to fourth and fifth embodiments of the present invention.

FIG. 10 is a diagram showing a bit arrangement on the Fourier transform plane used in the fourth embodiment shown in FIG. 9;

FIG. 11 is a diagram showing a bit arrangement for preventing erroneous determination on the Fourier transform plane used in the fourth embodiment shown in FIG. 9;

FIG. 12 is a flowchart showing the processing of the reading processing unit in the image processing apparatus according to the fourth and fifth embodiments of the present invention.

FIG. 13 is a diagram showing a bit arrangement on a Fourier transform plane used in the fifth embodiment shown in FIG. 9;

FIG. 14 is a block diagram showing an embedding processing unit in an image processing apparatus according to sixth and eighth embodiments of the present invention.

FIG. 15 is a diagram showing a bit arrangement on the Fourier transform plane used in the sixth embodiment shown in FIG. 14;

FIG. 16 is a block diagram showing an embedding processing unit in an image processing apparatus according to a seventh embodiment of the present invention.

FIG. 17 is a diagram showing a data format transferred in the seventh embodiment shown in FIG. 16;

FIG. 18 is a diagram showing a bit arrangement on the Fourier transform plane used in the eighth embodiment shown in FIG. 14;

FIG. 19 is a diagram showing a character string on a character original which is output in the eighth embodiment shown in FIG. 14;

FIG. 20 is a diagram showing an example in which the present invention is applied to an ID card with a photograph in the first to eighth embodiments.

FIG. 21 is a diagram showing a relationship between colors in a color difference coordinate system.

FIG. 22 is a block diagram showing a modification of the first to third embodiments.

FIG. 23 is a block diagram showing a modification of the fourth embodiment.

[Explanation of symbols]

101, 601, 801, 901, 2201, 2301 ... input system
102, 602, 802, 902 ... first conversion circuit
103, 605, 805, 904, 2202, 2303 ... code generator
104, 606, 806, 906, 2203, 2305 ... pattern generation circuit
106, 607, 809, 908 ... second conversion circuit
107, 608, 810 ... error diffusion processing circuit
108, 609, 811, 909, 2206, 2308 ... output system
807 ... high-frequency extraction circuit
903 ... band-removal circuit
905 ... information processing unit
2204, 2306 ... signal conversion circuit
2205, 2307 ... adder

Continuation of front page (51) Int.Cl. 6 Identification number Office reference number FI technical display location H04N 1/46 4226-5C H04N 1/46 Z

Claims (19)

[Claims]
1. An image processing apparatus comprising: means for generating a data signal representing information different from a color image; and image processing means for embedding the other information in the color image by changing either the color difference or the saturation of the color image according to the data signal.
2. A means for generating a data signal representing information different from that of the color image, and one of the color difference and the saturation of the color image so that the total of the three primary color components of the color image does not change by processing. An image processing apparatus comprising: an image processing unit that embeds the other information in the color image by changing the color image according to a signal.
3. The image processing means converts the three primary color component signals of the color image into a luminance signal, first and second color difference signals, and embeds the other information in the first color difference signal. The image processing apparatus according to claim 1, further comprising means.
4. The second color difference signal is a red-green color difference signal,
The image processing apparatus according to claim 3, wherein the first color difference signal is a yellow-blue color difference signal.
5. The image processing means converts the three primary color component signals of a color image into a luminance signal and first and second color difference signals, and a saturation represented by the first and second color difference signals. The image processing apparatus according to claim 1, further comprising means for embedding the other information.
6. The image processing means embeds the other information in the color image by changing three primary color signals of subtractive color mixture or additive color mixture of the color image according to the data signal. The image processing apparatus according to claim 1.
7. The image processing means includes means for converting the data signal into a change amount of any one of color difference and saturation of the color image, and means for adding the change amount to the color image. The image processing apparatus according to claim 1, further comprising:
8. The image processing apparatus according to claim 1, further comprising means for recording on a recording medium a second color image which is processed by the image processing means and in which other information is embedded.
9. The other information is embedded in the color image by changing either the color difference or the saturation of the first color image by a data signal representing the information different from the first color image. An image processing apparatus comprising: an input unit for inputting the second color image, and an extraction unit for extracting the other information from the second color image input by the input unit.
10. The extracting means reads the input second color image, and the second color image read by the reading means as a luminance signal,
10. The image processing according to claim 9, further comprising: means for converting into a second color difference signal, and separation means for separating and extracting the data signal from the first color difference signal converted by the conversion means. apparatus.
11. The extracting means reads the input second color image, and the second color image read by the reading means is a luminance signal, first,
A second color difference signal conversion means, and a separation means for separating and extracting the data signal from the saturation represented by the first and second color difference signals converted by the conversion means. The image processing apparatus according to claim 9.
12. The extracting means includes means for detecting a duplicated second color image from the input second color image signal, and the duplicated second color image detected by the detecting means. The image processing apparatus according to claim 9, further comprising means for averaging.
13. The image according to claim 9, wherein the extracting means includes means for performing band-pass processing of a predetermined frequency band on the input second color image. Processing equipment.
14. The image processing means comprises means for detecting a high frequency component of luminance based on a color image, and means for adjusting an amount of embedding the other information according to the detected high frequency component. The image processing apparatus according to claim 1, wherein
15. A means for generating a data signal representing information different from the color image, and a striped pattern having a plurality of frequency components according to the data signal generated by the generating means are added to the color image. And an image processing unit for embedding the other information in the color image.
16. A means for generating a data signal representing information different from that of a color image, and a plurality of means in accordance with the data signal generated by said generating means so that the total of the three primary color components of the color image does not change by processing. Image processing means for embedding the other information in the color image by adding a striped pattern having the frequency component of 1. to the color image.
17. The image processing means includes means for arranging the plurality of frequency components forming the striped pattern on a plane,
17. The image processing apparatus according to claim 16, further comprising means for adding the striped pattern to the color image based on a plurality of frequency components arranged on a plane.
18. A means for generating a data signal representing information different from that of the monochrome image, and an image processing means for embedding the other information in the monochrome image by changing the brightness of the monochrome image by the data signal. An image processing apparatus comprising:
19. An image processing apparatus comprising: means for generating a data signal representing information different from character information; and image processing means for embedding the other information by changing, according to the data signal, arrangement intervals used when the character information is developed as an image.
JP23246094A 1993-09-03 1994-09-02 Image processing device Expired - Lifetime JP3599795B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP5-219294 1993-09-03
JP21929493 1993-09-03
JP23246094A JP3599795B2 (en) 1993-09-03 1994-09-02 Image processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP23246094A JP3599795B2 (en) 1993-09-03 1994-09-02 Image processing device

Publications (2)

Publication Number Publication Date
JPH07123244A true JPH07123244A (en) 1995-05-12
JP3599795B2 JP3599795B2 (en) 2004-12-08

Family

ID=26523038

Family Applications (1)

Application Number Title Priority Date Filing Date
JP23246094A Expired - Lifetime JP3599795B2 (en) 1993-09-03 1994-09-02 Image processing device

Country Status (1)

Country Link
JP (1) JP3599795B2 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997049235A1 (en) * 1996-06-20 1997-12-24 Ibm Japan Ltd. Data hiding method and data extracting method
WO1998016928A1 (en) * 1996-10-16 1998-04-23 International Business Machines Corporation Method and system for managing access to data
SG79951A1 (en) * 1996-11-27 2001-04-17 Ibm Data hiding method and data extracting method
US6753979B2 (en) 2001-01-16 2004-06-22 Canon Kabushiki Kaisha Data processing apparatus and method, and storage medium
US7058232B1 (en) 1999-11-19 2006-06-06 Canon Kabushiki Kaisha Image processing apparatus, method and memory medium therefor
US7072522B2 (en) 2001-09-26 2006-07-04 Canon Kabushiki Kaisha Image processing apparatus and method
US7079267B2 (en) 2001-09-25 2006-07-18 Canon Kabushiki Kaisha Image processing apparatus, method, computer program and recording medium
US7187476B2 (en) 2001-10-01 2007-03-06 Canon Kabushiki Kaisha Image processing apparatus and method, computer program, and recording medium
JP2007104176A (en) * 2005-10-03 2007-04-19 Matsushita Electric Ind Co Ltd Image compositing apparatus and image collation apparatus, image compositing method, and image compositing program
JP2007312383A (en) * 1995-05-08 2007-11-29 Digimarc Corp Steganographic system
US7400727B2 (en) 1997-07-03 2008-07-15 Matsushita Electric Industrial Co., Ltd. Information embedding method, information extracting method, information embedding apparatus, information extracting apparatus, and recording media
US7408680B2 (en) 2001-09-26 2008-08-05 Canon Kabushiki Kaisha Image processing apparatus and method
US7489800B2 (en) 2002-07-23 2009-02-10 Kabushiki Kaisha Toshiba Image processing method
JP2009100232A (en) * 2007-10-16 2009-05-07 Canon Inc Image processor
WO2009087764A1 (en) * 2008-01-09 2009-07-16 Monami Software, Lp Information encoding method for two-dimensional bar code subjected to wavelet transformation
JP2010141518A (en) * 2008-12-10 2010-06-24 Riso Kagaku Corp Image processor and image processing method
JP2013520874A (en) * 2010-02-22 2013-06-06 ドルビー ラボラトリーズ ライセンシング コーポレイション Video distribution and control by overwriting video data

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007312383A (en) * 1995-05-08 2007-11-29 Digimarc Corp Steganographic system
US6512835B1 (en) 1996-06-20 2003-01-28 International Business Machines Corporation Data hiding and extraction methods
WO1997049235A1 (en) * 1996-06-20 1997-12-24 Ibm Japan Ltd. Data hiding method and data extracting method
WO1998016928A1 (en) * 1996-10-16 1998-04-23 International Business Machines Corporation Method and system for managing access to data
SG79951A1 (en) * 1996-11-27 2001-04-17 Ibm Data hiding method and data extracting method
US6286100B1 (en) 1996-11-27 2001-09-04 International Business Machines Corporation Method for hiding message data into media data and a method for extracting that hidden data
US7400727B2 (en) 1997-07-03 2008-07-15 Matsushita Electric Industrial Co., Ltd. Information embedding method, information extracting method, information embedding apparatus, information extracting apparatus, and recording media
EP2866191A1 (en) 1999-11-19 2015-04-29 Canon Kabushiki Kaisha Image processing apparatus, method and computer program for extracting additional information added images by error diffusion watermarking
US7058232B1 (en) 1999-11-19 2006-06-06 Canon Kabushiki Kaisha Image processing apparatus, method and memory medium therefor
US6753979B2 (en) 2001-01-16 2004-06-22 Canon Kabushiki Kaisha Data processing apparatus and method, and storage medium
US7079267B2 (en) 2001-09-25 2006-07-18 Canon Kabushiki Kaisha Image processing apparatus, method, computer program and recording medium
US7408680B2 (en) 2001-09-26 2008-08-05 Canon Kabushiki Kaisha Image processing apparatus and method
US7072522B2 (en) 2001-09-26 2006-07-04 Canon Kabushiki Kaisha Image processing apparatus and method
US7187476B2 (en) 2001-10-01 2007-03-06 Canon Kabushiki Kaisha Image processing apparatus and method, computer program, and recording medium
US7773266B2 (en) 2001-10-01 2010-08-10 Canon Kabushiki Kaisha Image processing apparatus, method, and computer product for adding reference frame information used to detect position of image embedded information
US7489800B2 (en) 2002-07-23 2009-02-10 Kabushiki Kaisha Toshiba Image processing method
JP2007104176A (en) * 2005-10-03 2007-04-19 Matsushita Electric Ind Co Ltd Image compositing apparatus and image collation apparatus, image compositing method, and image compositing program
JP2009100232A (en) * 2007-10-16 2009-05-07 Canon Inc Image processor
WO2009087764A1 (en) * 2008-01-09 2009-07-16 Monami Software, Lp Information encoding method for two-dimensional bar code subjected to wavelet transformation
JPWO2009087764A1 (en) * 2008-01-09 2011-05-26 Zak株式会社 Information encoding method of wavelet transformed 2D barcode
JP2010141518A (en) * 2008-12-10 2010-06-24 Riso Kagaku Corp Image processor and image processing method
JP2013520874A (en) * 2010-02-22 2013-06-06 ドルビー ラボラトリーズ ライセンシング コーポレイション Video distribution and control by overwriting video data

Also Published As

Publication number Publication date
JP3599795B2 (en) 2004-12-08

Similar Documents

Publication Publication Date Title
JP3997720B2 (en) Image processing apparatus and image forming apparatus
US6421145B1 (en) Image processing apparatus and method using image information and additional information or an additional pattern added thereto or superposed thereon
US5581376A (en) System for correcting color images using tetrahedral interpolation over a hexagonal lattice
US7280249B2 (en) Image processing device having functions for detecting specified images
US6268939B1 (en) Method and apparatus for correcting luminance and chrominance data in digital color images
JP2720924B2 (en) Image signal encoding device
EP1739947B1 (en) Density determination method, image forming apparatus, and image processing system
US6239886B1 (en) Method and apparatus for correcting luminance and chrominance data in digital color images
US4974171A (en) Page buffer system for an electronic gray-scale color printer
JP4137084B2 (en) Method for processing documents with fraud revealing function and method for validating documents with fraud revealing function
US7310167B2 (en) Color converting device emphasizing a contrast of output color data corresponding to a black character
US6959385B2 (en) Image processor and image processing method
JP4362528B2 (en) Image collation apparatus, image collation method, image data output processing apparatus, program, and recording medium
US7236641B2 (en) Page background detection and neutrality on scanned documents
DE69937044T2 (en) Technology for multiple watermark
US7227661B2 (en) Image generating method, device and program, and illicit copying prevention system
de Queiroz et al. Color to gray and back: color embedding into textured gray images
US6697498B2 (en) Method and computer program product for hiding information in an indexed color image
CN100508547C (en) Image processing apparatus and image processing method
US5668636A (en) Embedded data controlled digital highlight color copier
US7200263B2 (en) Background suppression and color adjustment method
CA2306068C (en) Digital imaging method and apparatus for detection of document security marks
JP4167590B2 (en) Image processing method
DE69935120T2 (en) Automatic improvement of print quality based on size, shape, orientation and color of textures
US7133559B2 (en) Image processing device, image processing method, image processing program, and computer readable recording medium on which image processing program is recorded

Legal Events

Date Code Title Description
TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20040914

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20040915

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20070924

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080924

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090924

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100924

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110924

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120924

Year of fee payment: 8